\documentclass[a4paper,10pt]{article}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{mdwlist}


%opening
\title{NNGridracer \\ \small{a general $Sarsa(\lambda)$ track racing agent}}
\author{Achim Meyer \\ Tobias Poll \\ Thomas Markus}

\begin{document}

\maketitle

\section{Introduction}

Many racing games use some kind of artificial opponent to make the racing more fun, but this opponent is usually hard-coded, which limits the ways in which it can learn from other players and means it cannot handle unknown tracks well. The book by Sutton and Barto\cite{sutton} contains a similar problem of a racing agent learning the best way to drive on a gridworld. In that setting the whole track, including the agent's location, is passed to the agent as a state, allowing it to learn the optimal route on that track, but not on any other. It would be more interesting if the agent learned a general policy for staying on the road and driving as fast as possible while avoiding any obstacles, instead of a very specific course of action. The trick is in giving the agent a kind of limited perception of the road ahead and mapping this to the correct steering parameters. This is an interesting problem where reinforcement learning can be used to advantage because:

\begin{itemize*}
\item multi-agent scenarios where cars can block each other become very complex very quickly
\item the agent can deal with noisy data
\item the environment model can be unknown
\item supervised learning becomes hard relatively quickly, because the optimal action is hard to determine in complex environments
\item the state space (that of the state-action pairs) is greatly reduced by specifying the problem as a POMDP with a fairly limited observation size compared to the full environment state
\end{itemize*}

We started very ambitiously, aiming for behaviour as described in \cite{cool}, but to keep the project manageable we scaled down the physics simulation, though the kind of perception fed to the agent is more complex.

\section{Environment}

The problem we are trying to solve is essentially a Partially Observable Markov Decision Process (POMDP), where the state space consists of the track and the possible locations of the agent(s). The action space consists of the four actions the agent can perform. The observation space consists of the possible relative views of the agent (described in the Input section below), and there is a reward function over the states.


\subsection{Track}

The main environment is called a track. The track is discrete and can be of any size. Each cell has a value in the range $[0..1]$ indicating the slowdown experienced when driving over it (e.g., street has a deceleration value of 0.01, whereas a wall has a high value such as 1.0). In addition to a deceleration value for each cell, the track also contains zero or more checkpoints. The checkpoints allow for more guidance on which direction the agent should follow. The track also contains a finish, which does not need to be the same as the starting position.

In the multi-agent case another car is perceived by the agent exactly like a wall. The collision detection works exactly the same, and no further punishment for the agent is implemented (e.g., a lower maximum speed after repeatedly colliding with an opponent).

\subsection{Input}

The agent has a variable number of inputs, derived from the view. The view is a subset of the track (deceleration values). This representation can be computed very efficiently and is quite accurate as well\footnote{In this setting the agent doesn't learn deceleration values for new terrain, but one can extend this approach to deal with that case.}. Where the view extends beyond the track, the agent perceives everything outside the track as walls (a no-go area).

The agent also receives its current speed (in cells per time step) and the direction in which it is turned (in degrees). The speed is normalized to the $[0..1]$ range (the cars have no reverse). The direction is normalized to the $[-1..1]$ range.
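As a minimal sketch of this normalization, the following could be used; note that the maximum speed cap and the $[-180,180]$ degree convention are our assumptions, not values stated in this report:

```java
// Hypothetical sketch of the input normalization described above.
public class InputNormalizer {
    static final double MAX_SPEED = 10.0; // assumed cap; not stated in the report

    // speed in cells per time step mapped to [0, 1]; the cars have no reverse
    static double normalizeSpeed(double speedCellsPerStep) {
        return Math.min(speedCellsPerStep, MAX_SPEED) / MAX_SPEED;
    }

    // heading in degrees, assumed in [-180, 180], mapped to [-1, 1]
    static double normalizeDirection(double degrees) {
        return degrees / 180.0;
    }
}
```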

The view is always relative to the agent's speed and the direction it is facing. This is done for two reasons:

\begin{enumerate*}
 \item The agent should look further ahead when moving faster, similar to what human drivers seem to do.
 \item A rotating view decreases the number of situations the agent has to learn. Take for example the course circular\_complex. With a non-rotating view, taking the corner in the top right is different from taking the corner in the bottom right. By making the view relative to the direction, the agent only has to learn a single situation instead of four separate ones.
\end{enumerate*}
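A rotated, speed-shifted view could be extracted roughly as follows. This is an illustrative sketch only: the nearest-cell rounding, the forward look-ahead shift, and the use of 1.0 as the wall value for off-track cells are our assumptions.

```java
public class RelativeView {
    // Sample a viewSize x viewSize window of deceleration values,
    // rotated to the agent's heading and shifted forward by lookAhead
    // cells (e.g. proportional to speed). Off-track cells read as
    // walls (deceleration 1.0), as described in the Input section.
    static double[] extract(double[][] track, int row, int col,
                            double headingDeg, int lookAhead, int viewSize) {
        double theta = Math.toRadians(headingDeg);
        double cos = Math.cos(theta), sin = Math.sin(theta);
        int half = viewSize / 2;
        double[] view = new double[viewSize * viewSize];
        int k = 0;
        for (int dr = -half; dr <= half; dr++) {
            for (int dc = -half; dc <= half; dc++) {
                // shift the window forward along the heading, then rotate
                double fr = dr - lookAhead;
                int r = row + (int) Math.round(fr * cos - dc * sin);
                int c = col + (int) Math.round(fr * sin + dc * cos);
                boolean inside = r >= 0 && r < track.length
                              && c >= 0 && c < track[0].length;
                view[k++] = inside ? track[r][c] : 1.0; // off-track = wall
            }
        }
        return view;
    }
}
```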






\subsection{Actions}

The agent has just 4 actions:

\begin{itemize*}
\item increase its speed by 1 cell per time step
\item decrease its speed by 1 cell per time step
\item rotate 30 degrees to the right
\item rotate 30 degrees to the left
\end{itemize*}

Actions which cannot be carried out (e.g., trying to move forward when the agent is against a wall) are not removed from the action set for that time step.
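A minimal sketch of this action set follows; the clamping of an invalid deceleration at speed 0 is our assumption, since the report only states that invalid actions remain selectable:

```java
public class AgentState {
    double speed;      // cells per time step, >= 0 (no reverse)
    double headingDeg; // direction the agent is facing

    // The four actions; an ineffective action simply clamps
    // (e.g., decelerating at speed 0 leaves the speed at 0).
    void apply(int action) {
        switch (action) {
            case 0: speed += 1; break;                     // accelerate
            case 1: speed = Math.max(0, speed - 1); break; // decelerate
            case 2: headingDeg += 30; break;               // rotate right
            case 3: headingDeg -= 30; break;               // rotate left
        }
    }
}
```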


\subsection{Rewards}

To motivate the agent to find the shortest route to the finish, a reward of $-1$ is given each time step. For passing one of the checkpoints a reward of 0 is given instead. This allows the agent to have intermediate goals, facilitating learning: a reward only at the finish line would take a long time to propagate back. The agent also receives a reward of 0 when passing the finish.

The agent also receives a reward of $0.3$ per newly explored cell. Because the agent only gets a small part of the track as input, it has no absolute sense of progression along the track. The reward for newly explored territory indirectly gives it a measure of how well it is progressing.

The reward for exploring new parts of the environment is sufficiently small to make the agent prefer a shorter route over exploring. 
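Under one reading of this scheme (a checkpoint reward replacing the step cost, and the exploration bonus added on top; both are our interpretation), the per-step reward could be computed as:

```java
public class Reward {
    // Sketch of the reward scheme described above, using the
    // report's values: -1 per time step, 0 at checkpoints/finish,
    // plus 0.3 per newly explored cell.
    static double step(boolean atCheckpointOrFinish, int newlyExploredCells) {
        double r = atCheckpointOrFinish ? 0.0 : -1.0;
        return r + 0.3 * newlyExploredCells;
    }
}
```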

\section{Reinforcement learning method}

We've chosen to implement the agent using the $Sarsa(\lambda)$ reinforcement learning algorithm. We selected $Sarsa(\lambda)$ because

\begin{itemize*}
\item it is straightforward to implement
\item our action set is discrete and small
\item we wanted an on-policy algorithm to allow continual learning
\item eligibility traces were expected to give an advantage
\end{itemize*}

We thought eligibility traces would speed up learning quite a bit, since a track can be quite large and traces propagate rewards faster into the past. We opted for replacing traces, because the agent often encounters the same state repeatedly, which would probably result in overfitting on that particular state. In hindsight, the addition of eligibility traces didn't give much of an advantage and didn't produce the better results we had hoped for, because good behaviour depends very much on the current state (it is reactive).

\begin{figure}[h]
\lstset{commentstyle=\textit}
\begin{lstlisting}[frame=trbl,mathescape=true]{}
Initialize Q(s,a) arbitrarily and $e(s,a) = 0$ for all s,a
Repeat (for each episode)
   Initialize s,a
   Repeat (for each step of the episode)
      Take action a, observe r, s'
      Choose a' from s' using policy derived from Q (e.g. $\epsilon$-greedy)
      $\delta$ = $r + \gamma$Q(s',a') - Q(s,a)
      e(s,a) = 1
      For all s,a:
         Q(s,a) $\leftarrow$ Q(s,a) + $\alpha \delta$e(s,a)
         e(s,a) $\leftarrow \gamma \lambda e(s,a)$
      s $\leftarrow$ s'; a $\leftarrow$ a'
   until s is terminal
\end{lstlisting}
\caption{$Sarsa(\lambda)$ algorithm with replacing traces}
\end{figure}
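For concreteness, the listing above can be sketched in tabular form in Java. Our agent actually approximates $Q$ with neural networks, so the integer state index here is a simplification for illustration:

```java
import java.util.Random;

// Tabular sketch of Sarsa(lambda) with replacing traces,
// following the listing above.
public class SarsaLambda {
    final double alpha, gamma, lambda, epsilon;
    final double[][] q, e;   // indexed [state][action]
    final Random rng = new Random(0);

    SarsaLambda(int nStates, int nActions, double alpha,
                double gamma, double lambda, double epsilon) {
        this.alpha = alpha; this.gamma = gamma;
        this.lambda = lambda; this.epsilon = epsilon;
        q = new double[nStates][nActions];
        e = new double[nStates][nActions];
    }

    // epsilon-greedy action selection over Q(s, .)
    int selectAction(int s) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(q[s].length);
        int best = 0;
        for (int a = 1; a < q[s].length; a++)
            if (q[s][a] > q[s][best]) best = a;
        return best;
    }

    // one backup of the listing: TD error, replacing trace, trace decay
    void update(int s, int a, double r, int s2, int a2) {
        double delta = r + gamma * q[s2][a2] - q[s][a];
        e[s][a] = 1.0; // replacing trace
        for (int i = 0; i < q.length; i++)
            for (int j = 0; j < q[i].length; j++) {
                q[i][j] += alpha * delta * e[i][j];
                e[i][j] *= gamma * lambda;
            }
    }
}
```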

The way the environment and the agent communicate is similar to \cite{reactive}, in that at each time step the environment gives the agent its inputs and processes the output action. Many papers implementing an RL racing agent use the SARS simulation environment. We decided not to use it, because it isn't in our target language (Java) and we would have had less control over the environment (using relative views instead of the curvature of corners and such).

A neural network is used as a function approximator for the Q-values. We selected a neural net because the possible state space is potentially huge, considering an arbitrary view size with real-valued cells. The neural net also learns a smooth value function, which facilitates a general policy for driving instead of being tied to a specific track. The network has a single sigmoid hidden layer and one output unit. Another consideration for using a neural net is that we could add or remove inputs relatively easily from the input layer.

The number of input units is $viewsize^2 + 2$ (speed and direction). For most experiments we used a view size of 7, which results in an input layer of 51 units.

Each of the 4 actions has a separate neural net, safeguarding against interference effects when reinforcing another action.
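The forward pass and greedy action selection over the four nets could be sketched as follows; the weight shapes and the omission of bias units are simplifications on our part:

```java
public class ActionValueNets {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // forward pass of one net: a single sigmoid hidden layer and one
    // linear output unit; w1[hidden][inputs], w2[hidden]
    static double qValue(double[] input, double[][] w1, double[] w2) {
        double out = 0;
        for (int h = 0; h < w1.length; h++) {
            double sum = 0;
            for (int i = 0; i < input.length; i++) sum += w1[h][i] * input[i];
            out += w2[h] * sigmoid(sum);
        }
        return out;
    }

    // greedy action: evaluate one net per action and take the argmax
    static int greedy(double[] input, double[][][] w1, double[][] w2) {
        int best = 0;
        for (int a = 1; a < w1.length; a++)
            if (qValue(input, w1[a], w2[a]) > qValue(input, w1[best], w2[best]))
                best = a;
        return best;
    }
}
```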

\section{Experiments and Results}

Appendix A includes a screenshot of each track mentioned in this paper.

\subsection{Hidden layers}

\begin{figure}[ht]
  \includegraphics[scale=0.45]{graphs/HiddenLayer_AroundTheMud}
  \caption{Influence of the number of hidden units on the speed of convergence of the agent}
\end{figure}

To compare the effect of different sizes of the hidden layer, we ran several tests with different numbers of hidden neurons on our 'around the mud' track. The constant values we used in these experiments are $\alpha = 0.0005$, $\lambda = 0.9$ and $\gamma = 0.95$. We trained the agent for 5000 episodes with 30, 50, 75 and 100 hidden neurons. As always, we used a linear function to slowly decrease the exploration rate from complete exploration to $\epsilon=0.01$.

As expected, the larger the hidden layer, the longer it takes to converge. We didn't expect the agent to converge already with 30 hidden units, but apparently the optimal policy is simple enough to be represented by a neural network this small. For more complex tracks the larger hidden layers slowly converged to a better policy than the one with a mere 30 hidden neurons, though we never saw a big improvement beyond a hidden layer of 50 neurons.


\subsection{Learning a general policy}

\begin{figure}[h]
  \includegraphics[scale=0.4]{graphs/trainedagent}
  \caption{Performance of a trained agent on a new track. Results of untrained agents are averaged over 10 runs.}
\end{figure}

In this experiment we first trained an agent on the more complex circular track with $\alpha = 0.0001$, 50 hidden neurons, $\lambda = 0.9$ and $\gamma = 0.95$ until it converged towards an optimal policy at 5000 episodes. For the exploration rate we used a linear function which starts off with full exploration and decreases slowly to 0.01. When the agent had converged to an optimal policy, we saved the trained agent and ran it on the simple circular track with a completely greedy policy and without any learning. Since the agent learned only from relative values, it should have learned a general policy for driving and achieve reasonable results when run on a different track. To compare the results, we also ran an untrained agent with the same training values (learning rate 0.0001, 50 hidden neurons, discount value 0.95) on the new track until it converged.

As we can see in the results, the trained agent drives optimally right from the start, while it has never driven on this track before. A similar agent which isn't trained takes over 8000 episodes to reach the same policy. Obviously the trained agent always takes the same number of steps to complete the track, since it neither explores nor learns. We can conclude from this that the agent did indeed learn a kind of general driving policy based on the relative inputs and did not just learn the track layout.


\subsection{Effect of lambda}

\begin{figure}[ht]
  \includegraphics[scale=0.4]{graphs/lambda_circularcomplex}
  \caption{Comparison of different values of $\lambda$ on Circular complex}
\end{figure}

We tried three different $\lambda$-values on the complex circular track to see how they affect the agent. The other values we used were $\alpha = 0.0005$, $\gamma = 0.95$ and 50 hidden neurons. The results show that a lower $\lambda$-value gives better results. This is because the previous states have little to do with the action the agent takes in the current state; the current state itself is much more important. If you alter the previous states too much, the agent drives less reflexively, which is a bad policy adjustment on a track like this. Furthermore, a high $\lambda$-value can decrease the Q-value of a previous state-action pair even though the agent did not have the information in that previous state to choose the correct action (if it could not see the wall yet, it cannot respond to it).

\subsection{Effect of discount factor}

\begin{figure}[ht]
  \includegraphics[scale=0.9]{graphs/DicountFactor_circularcomplex}
  \caption{Effect of different discount values on the speed of convergence}
\end{figure}

We tested the discount value with the same settings as in the lambda experiments and fixed $\lambda$ at 0.9. For the other discount values, 0.5 and 0.8, run on the 'circular complex' racetrack, the same convergence behaviour appeared. The difference between 0.95 and 0.8 is distinctive, though. As we have already seen in the lambda experiment, the advantages of a high discount value cannot be observed, because of the nature of the task. A small discount finds a suitable policy much faster and more stably, whereas a higher value takes unimportant states too much into account. This does not affect convergence itself, but the speed of convergence. The advantages of a high discount might become observable when the racetrack is shaped more like a maze, i.e., when more indirect decisions have to be made to drive successfully. Our tracks are more or less straightforward and just require basic driving skills, so there is no need for a high discount factor. Still, the 0.95 discount converges to the same number of steps per episode as 0.8 and 0.5. It is also interesting that the 0.95 discount value decreases nearly in the same manner as the exploration rate. Analyzing this similarity would require further experiments, but since high discount values do not seem attractive to us, we will not investigate this. Nevertheless, it may be a hint of the importance of the exploring behaviour.

\subsection{Exploring behaviours}

\begin{figure}[ht]
  \includegraphics[scale=0.35]{graphs/ExploringBehavior_aroundTheWall}
  \caption{Comparison of different methods of decreasing the exploration rate}
\end{figure}

	
At first we tried to train the agent with a static $\epsilon$-greedy exploration rate, but it quickly became apparent that a low exploration rate at the start wastes far too much time. Instead, we resorted to starting off with a high exploration rate and decreasing it after each episode until a minimum of 0.01 was reached. We tried three different methods for this:

\begin{itemize}

	\item Steeply decreasing exploration rate, where the exploration rate is multiplied by 0.98 every episode
	\item Linearly decreasing exploration rate over the full number of episodes
	\item Linearly decreasing exploration rate over half the episodes (the exploration rate reaches its minimum at half of the training episodes)

\end{itemize}
For each of the methods we used $\alpha = 0.0005$, 50 hidden neurons, $\lambda = 0.9$ and $\gamma = 0.95$.

The first method turned out to decrease the exploration rate too fast for anything but the simplest track, resulting in the agent taking far too long, since for 99\% of the time it gets lost following its far-from-optimal policy. A big downside of this method is also that it does not take the number of episodes into account. We can see in the graph how badly it converges; obviously we had to use another method.

The second method was to simply decrease the exploration rate linearly from 1 to 0.01 over the number of episodes. As the graph shows, this made the convergence much steadier; the agent's learning greatly improved with better exploration.

We thought a downside of the second method was that it still had quite a high exploration rate while getting close to the optimal policy, so an improvement would be to lower the exploration rate to the minimum during the first half of the episodes and let the agent learn with a static low exploration rate during the second half. The graph shows, however, that this approach did not improve the results at all, but gave remarkably similar ones. We conclude that only two factors really matter: the agent needs a high exploration rate in the first part of training, decreased to a low exploration rate towards the end.
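The three schedules above can be sketched as follows; the exact functional forms are our reconstruction from the descriptions in this section:

```java
public class ExplorationSchedules {
    static final double MIN_EPS = 0.01;

    // method 1: multiply by 0.98 each episode (the steep schedule)
    static double geometric(int episode) {
        return Math.max(MIN_EPS, Math.pow(0.98, episode));
    }

    // method 2: linear from 1.0 down to MIN_EPS over all episodes
    static double linear(int episode, int totalEpisodes) {
        double eps = 1.0 - (1.0 - MIN_EPS) * episode / (double) totalEpisodes;
        return Math.max(MIN_EPS, eps);
    }

    // method 3: linear over the first half, then flat at MIN_EPS
    static double linearHalf(int episode, int totalEpisodes) {
        return linear(episode * 2, totalEpisodes);
    }
}
```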


\subsection{Increasing tracksize}


\begin{figure}[ht]
  \includegraphics[scale=0.35]{graphs/distance_aroundTheWall_smallVersion}
  \caption{Around the wall with different track sizes}
\end{figure}

The next experiment takes place on three differently sized tracks, similar to the around the wall track.

As one can see, for a distance of 17 squares between the agent and the goal, the agent's behaviour is stable, producing a nearly optimal solution. For the other tracks, oscillating behaviour is observed. The angle the agent needs to take to avoid the wall while keeping the distance travelled as short as possible changes when the wall is kept in the middle but the distance from the wall to start and finish varies. The angle is steeper for a smaller track, because only the length of the track changes. Therefore, on the smallest track the agent has to perform many more steering actions (larger angle). This results in robustness to exploring actions while driving towards the open spot of the wall. Since the graph is averaged over five runs, and every point of the trend line is plotted over 50 episodes, this robustness results in a flat line. The need to take more steering actions also lowers the number of speed-up actions, which explains why the 21-squares track is sometimes completed in a shorter time. In general, the green line, representing the track with 25 squares distance from start to goal, oscillates more intensely. Because the high exploration rate resulted in three very bad initial episodes on the middle-sized track, learning on this track is slowed significantly in the first $\sim$1000 trials, but we think this is not related to the track size. Averaging the results over more episodes should make the effect disappear.



\section {Conclusions}

We have demonstrated that it is possible to construct an RL racing agent with a complex input state instead of simple distance values to walls. The agent does learn to drive efficiently around corners and avoids obstacles. This approach can be extended to basically any kind of perception by mapping it to some kind of deceleration value for the surface type. The learned policy is also not tied to a specific track; it is a general policy for driving on the road and does not rely on information describing the agent's location on the map.



\section{Discussion}

Though the initial results of running a trained agent on an unknown track are quite encouraging, it is still far from perfect. For a truly general policy the agent would have to be trained on much more varied tracks, dealing with many different terrain configurations. Initial experiments confirm this: a greedy agent trained on the track 'Circular Complex' was not able to finish 'Big circular', because it had never experienced that specific terrain configuration before.

Future extensions to these experiments could be:

\begin{itemize}
\item Modelling the opponents' cars differently, so that the agent can discriminate between walls and opponents, since opponents move and walls do not.
\item Our program supports multiple agents racing at the same time. We also did some small experiments with them, but due to time constraints could not include them in this report. There could be interesting results in multi-agent simulations.
\end{itemize}


\bibliography{references}
\bibliographystyle{plain}



\newpage

\section{Appendix A}

The source code and the raw data for the graphs in this report can be found at: \texttt{http://code.google.com/p/nngridracer}

\subsection{Tracks}
\begin{figure}[h]
\includegraphics[scale=0.5]{images/aroundthewall.png}
\caption{Around the wall - A simple track for wall avoidance}
\end{figure}

\begin{figure}[h]
\includegraphics[scale=0.5]{images/circularsimple.png}
\caption{Circular simple - A simple circular track; the agent needs to complete a single lap}
\end{figure}

\begin{figure}[h]
\includegraphics[scale=0.5]{images/circularcomplex.png}
\caption{Circular complex - A complex track with mud and sand; the agent needs to complete 1 lap}
\end{figure}

\begin{figure}[h]
\includegraphics[scale=0.5]{images/bigcircular.png}
\caption{Big Circular - A large track with diagonal parts of the course}
\end{figure}






\end{document}
