\documentclass{article}
\usepackage{fancyhdr,utopia,enumerate,fancyvrb,relsize,graphicx,amsmath}
\usepackage[hang, small,bf]{caption}
\usepackage[usenames,dvipsnames]{color}

\title{\textbf{Decision Making in Intelligent Systems}\\Assignment 2 - Mountain Car}
\author{Chaim Bastiaan [\#5742889] \& Kai Krabben [\#5743036]}
\date{\today}

\lhead{DMIS 2 - Mountain Car}
\rhead{Chaim Bastiaan \& Kai Krabben}
\pagestyle{fancy}
\parindent 0pt

\begin{document}
\maketitle

\section{Introduction}
In this report we present our implementation of function approximation combined with Sarsa learning to solve the Mountain Car task as described by Sutton and Barto. The Mountain Car task consists of driving an underpowered car up a steep mountain. Since gravity is stronger than the car's engine, full throttle alone is not enough to reach the top. The only solution is to first drive away from the goal, up the opposite slope, to build up enough inertia to reach the top on the other side.\\

The next section provides an overview of the relevant theory for this project. Section \ref{application} describes how this theory translates to the Mountain Car problem, and section \ref{implementation} explains the key points and design choices of our implementation. Section \ref{experiments} provides an overview of the experiments we ran, and the conclusion is found in section \ref{conclusion}.

\section{Theory}\label{theory}
\subsection{Function Approximation}
In the previous assignment we considered a problem with a finite state space. In this problem the state space is continuous. In order to use our standard reinforcement learning algorithms, it is therefore necessary to use some form of state space discretization. One way to do this is \emph{coarse coding}. In coarse coding the state is defined by several continuous parameters, and the state space is partitioned into smaller, possibly overlapping sets known as \emph{features}. Since features can overlap, we can represent a continuous state discretely by the set of features it lies in.\\ 

A special form of coarse coding is \emph{tile coding}. In tile coding the state parameters are mapped into multiple partitions. Each partition, or tiling, has a small offset. We define the discrete value of a state by the tiles it activates. Since tile coding uses binary features, the total number of features active at any time is always the same (one per tiling). For 2-D state spaces, a grid is the simplest way to partition the state space.
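As an illustration, tile coding with a grid per tiling can be sketched in a few lines of Python. The function below is only a sketch: the names, the $10\times10$ resolution and the state ranges are chosen for this example.

```python
def active_tiles(x, v, offsets, n_tiles=10,
                 x_range=(-1.2, 0.5), v_range=(-0.07, 0.07)):
    """Return one active tile (tiling, row, col) per tiling for state (x, v)."""
    features = []
    for t, (ox, ov) in enumerate(offsets):
        # Shift the state by this tiling's offset, then bin it into the grid.
        col = int((x - x_range[0] + ox) / (x_range[1] - x_range[0]) * n_tiles)
        row = int((v - v_range[0] + ov) / (v_range[1] - v_range[0]) * n_tiles)
        # Clamp so offset states near the boundary stay inside the grid.
        col = min(max(col, 0), n_tiles - 1)
        row = min(max(row, 0), n_tiles - 1)
        features.append((t, row, col))
    return features
```

With $n$ tilings, exactly $n$ features are active for any state, which is the constancy property mentioned above.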

\section{Application}\label{application}
In the case of the Mountain Car task we defined a state by its velocity and position. We partitioned the state space into a grid of $10\times10$ tiles. Two forms of discretization were tested: 
\begin{itemize}
	\item a simple discretization consisting of one grid
	\item tile coding with ten grids, each with a different random offset within the width and height of one tile. 
\end{itemize}
Sarsa learning was used to learn the Q-values for each state-action pair. Since both function approximation methods above discretize only the state and not the state-action pair, a distinct grid or tiling was kept for each action. After the Q-values were learned, they were used to derive the best policy.
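With binary features and linear function approximation, $Q(s,a)$ is simply the sum of the weights of the tiles the state activates in the tiling kept for action $a$, and the Sarsa step spreads the TD error over those weights. A sketch (illustrative names, not our exact code; \texttt{theta} maps (action, feature) pairs to weights):

```python
def q_value(theta, features, a):
    # With binary features, Q(s, a) is the sum of the weights of the
    # tiles that state s activates, taken from action a's weight table.
    return sum(theta[(a, f)] for f in features)

def sarsa_update(theta, features, a, r, next_features, next_a,
                 alpha=0.1, gamma=1.0, terminal=False):
    """One Sarsa step: move the active weights towards r + gamma * Q(s', a')."""
    target = r if terminal else r + gamma * q_value(theta, next_features, next_a)
    delta = target - q_value(theta, features, a)
    step = alpha / len(features) * delta   # spread the step over the active tiles
    for f in features:
        theta[(a, f)] += step
```

Dividing the learning rate by the number of active features keeps the effective step size comparable between the single-grid and the ten-grid case.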

\section{Implementation}\label{implementation}
We implemented the Mountain Car problem in Python (2.6). Running the software (that is, running the learning algorithm and performing a drive experiment) is as easy as hitting \textit{run} in the Python runtime environment. The remainder of this section describes the key points and design choices.

\subsection{Representation of the problem}
\paragraph{Actions}
The set of possible actions is the simple list $[-1,0,1]$, of which the elements stand for full throttle reverse, zero throttle and full throttle forward, respectively. 
\paragraph{States}
A state is represented by its velocity and position in a simple list $[v,x]$. This is a continuous representation that needs to be discretized before learning can take place.
\paragraph{Transition Model}
The transition model is a function mapping a state $s$ and action $a$ to a next state. It returns the next state according to the following (deterministic) equation:
\begin{eqnarray*}
	v_{t+1} &=& \textrm{bound}[v_t + 0.001a_t - 0.0025\cos(3x_t)]\\
	x_{t+1} &=& \textrm{bound}[x_t + v_{t+1}]
\end{eqnarray*}

The bound operation enforces $-1.2\leq x_{t+1} \leq0.5$ and $-0.07\leq v_{t+1} \leq0.07$.

\paragraph{Reward Model}
The reward after each state transition is $-1$ until the terminal state is reached.
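Together, the transition and reward models translate into a few lines of Python. This is a sketch using the Sutton and Barto dynamics; the velocity reset at the left wall follows their description of the task:

```python
import math

def step(x, v, a):
    """Deterministic Mountain Car transition plus the -1-per-step reward."""
    v = v + 0.001 * a - 0.0025 * math.cos(3 * x)
    v = min(max(v, -0.07), 0.07)      # bound the velocity
    x = min(max(x + v, -1.2), 0.5)    # bound the position
    if x <= -1.2:
        v = 0.0                       # inelastic left wall (Sutton & Barto)
    reward = -1.0                     # -1 until the terminal state is reached
    done = x >= 0.5
    return x, v, reward, done
```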

\subsection{State discretization}
Class \texttt{Grid} represents a 2-D grid. It can be initialized with a number of tiles in the $x$ and $y$ direction and with an offset in these directions. A tiling is made of several grids, each with a different offset within a range of one cell. Thus, for each action, several grids are used together to calculate its Q-value. 
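A sketch of what such a class might look like (illustrative names; our actual implementation may differ in details):

```python
class Grid(object):
    """An n_x-by-n_y grid over the (x, v) state space,
    shifted by a fixed offset of at most one cell."""

    def __init__(self, n_x=10, n_y=10, offset_x=0.0, offset_y=0.0,
                 x_range=(-1.2, 0.5), y_range=(-0.07, 0.07)):
        self.n_x, self.n_y = n_x, n_y
        self.offset_x, self.offset_y = offset_x, offset_y
        self.x_range, self.y_range = x_range, y_range
        # One weight (theta) per cell.
        self.theta = [[0.0] * n_y for _ in range(n_x)]

    def cell(self, x, y):
        """Map a continuous state to the (i, j) index of its cell."""
        i = int((x - self.x_range[0] + self.offset_x)
                / (self.x_range[1] - self.x_range[0]) * self.n_x)
        j = int((y - self.y_range[0] + self.offset_y)
                / (self.y_range[1] - self.y_range[0]) * self.n_y)
        return (min(max(i, 0), self.n_x - 1),
                min(max(j, 0), self.n_y - 1))

    def value(self, x, y):
        i, j = self.cell(x, y)
        return self.theta[i][j]
```

The Q-value of an action is then the sum of \texttt{value} over the grids kept for that action.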

\section{Experiments and Results}\label{experiments}
Two plots of the resulting drive can be seen in figure \ref{plot}, where the x-axis corresponds to timesteps in the experiment and the y-axis corresponds to the absolute distance to the goal in the x-direction. Evidently, the car with the underpowered engine has to drive away from the goal at first and oscillate, in order to build enough inertia to reach the goal.
Both plots show the same drive, but in the left plot the speed is also represented by the size of each point, and its sign by the colour: positive velocity is drawn in red, negative velocity in blue. This plot, too, is intuitive: the velocity increases during the experiment, drops while the car drives up the mountain again, then swaps sign as the car drives back, which, combined with gravity, allows the car to run up the other side.

Given our internal representation and the plotting capability of Python, the main difference we could observe between pure discretization (one grid without offset) and multiple grids, of which at least one has an offset (we tried 1 to 10 extra grids), was that pure discretization ran much faster, while multiple grids did not seem to change the trajectory.
Our preferred hypothesis is that although multiple tilings should converge faster within episodes, this cannot easily be demonstrated in our current implementation. We merely observe (even using one episode) that the car correctly reaches the final destination and that the number of states passed in between is the same or very similar.

When we plotted the grids (see figure \ref{gridplot}), interesting patterns were observed: for example, every x-position passed through at a certain velocity received the same final value. In the plot (using a $10\times10$ grid), the x-axis represents the discretized velocity, the y-axis the discretized x-position, and the whiteness of each pixel the value of $\theta$.
However, this observation is likely due to an error in implementation or plotting, as the output grid plots contained highly interpolated maps, whereas we expected, at least in the discretized case, a more discrete image. It is also likely that values higher than 255 were simply cut off while plotting, resulting in unobservable value differences in the large white areas.


\begin{figure}%
\includegraphics[scale=0.3]{plot_fix2.png}%
\includegraphics[scale=0.3]{plot_speed.png}%
\caption{Plotting the final drive experiment (x-axis: time step, y-axis: absolute x-distance to goal)}%
\label{plot}%
\end{figure}

\begin{figure}%
\includegraphics[scale=0.3]{agridsample.png}%
\caption{An example plot of $\theta$ on a $10\times10$ grid (x-axis: discretized velocity, y-axis: discretized x-position, whiteness: $\theta$ value)}%
\label{gridplot}%
\end{figure}

\section{Conclusion and Discussion}\label{conclusion}
We found the theory for this assignment quite difficult to understand. In the end, however, we succeeded in solving the Mountain Car task, and we think we understand the basic ideas of reinforcement learning with continuous states through discretization. We also understand the difficulty and importance of feature engineering: although the algorithms themselves are quite straightforward, it is the feature engineering where the expertise comes in and that eventually determines the rate of success. 

While the car in our experiments reaches its goal regardless of its initial position, velocity and action, it may be that our learning algorithm has a small bug yet still performs well. The number of grids (or the resolution of the tiling, whichever one prefers to call it) does not affect the results in an obvious way, which may or may not be due to a mistake in the implementation. In any case, the car finds a good policy to reach the goal, driving faster as it oscillates between the slopes of the mountain in order to achieve its goal in a non-straightforward manner.
\end{document}