%&latex
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{amssymb}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage{subfigure}
% \usepackage[T1]{fontenc}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{booktabs}

\newcommand{\RR}{\mathbb{R}}
\newcommand{\ra}{\rightarrow}
\newcommand{\eps}{\epsilon}

% Include other packages here, before hyperref.

% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex.  (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
%\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}

\cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{****} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

% Pages are numbered in submission mode, and unnumbered in camera-ready
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}

%%%%%%%%% TITLE
\title{Learning to Play 2D Video Games}

\author{Justin Johnson\\
{\tt\small jcjohns@stanford.edu}
% For a paper whose authors are all at the same institution,
% omit the following lines up until the closing ``}''.
% Additional authors and addresses can be added with ``\and'',
% just like the second author.
% To save space, use either the email address or home page, not both
\and
Mike Roberts\\
{\tt\small mlrobert@stanford.edu}
\and
Matt Fisher\thanks{Mike and Justin are enrolled in CS 229, but Matt is not. Matt is a senior PhD student in the Stanford Graphics Group, who has advised and collaborated with Mike and Justin on this project. He wrote the game model learning algorithm mentioned in Section~\ref{section:model-based}.}\\
{\tt\small mdfisher@stanford.edu}
}

\maketitle
\thispagestyle{empty}

%%%%%%%%% ABSTRACT
\input{abstract.tex}
\vspace{-1pc}

\input{introduction.tex}

\section{Games}
\newcommand{\gamespace}{6pt}
To evaluate our system, we implemented a number of games comparable in complexity to early arcade games. Each game contains a number of distinct object types, and a game state consists of a configuration of these objects. The state space $S$ of a game is the set of all possible object configurations. Unless otherwise noted, the action space of each game is $A=\{L,R,U,D,\emptyset\}$, consisting of one action for each of the four cardinal directions plus the do-nothing action $\emptyset$.

Each game is played over a series of episodes, where an episode consists of many frames. To prevent a perfect player from playing indefinitely, we cap episode length where appropriate. In all situations, early termination of an episode due to capping earns the player zero reward in the final frame of the episode.
\\*[\gamespace]
\textbf{\textsc{Grid-World.~~~~}}In this game, the player controls a character on a $5\times5$ grid. During each frame of the game, the player may move in any direction or remain stationary. The player begins each episode in the lower left corner of the grid and must reach the upper right corner. When this goal is achieved, the player receives a positive reward and the episode ends. In addition, the player receives a negative reward for stepping on the central square. We evaluate the player's performance by counting the number of frames per episode. Fewer frames per episode indicates better performance, since it means that the player navigated to the goal square more quickly.
\\*[\gamespace]
\textbf{\textsc{Eat-The-Fruit.~~~~}}
Similar to {\sc Grid-World}, the player controls a character on a fixed-size grid. At the start of each episode, an apple appears on a randomly chosen square. The player begins in the lower left corner of the grid and must move to the apple. After eating the apple, the player receives a positive reward and the episode ends. As in {\sc Grid-World}, we measure the player's performance on this game by counting the number of frames per episode.
\\*[\gamespace]
\textbf{\textsc{Dodge-The-Missile.~~~~}}
In this game the player controls a space ship which can move left or right across the bottom of the screen, so the action set is $A=\{L,R,\emptyset\}$. Missiles and powerups spawn at the top of the screen and fall toward the player. The player receives a positive reward for collecting powerups; being hit by a missile incurs a negative reward and causes the episode to end. We cap the episode length at 5000 frames. We evaluate the player's performance by counting both the number of frames per episode and the number of powerups collected per episode. Larger numbers for each metric indicate better performance.
\\*[\gamespace]
\textbf{\textsc{Frogger.~~~~}}
In this game, the player controls a frog which can move in any direction or remain stationary. The player begins each episode at the bottom of the screen and must guide the frog to the top of the screen. This goal is made more challenging by cars that move horizontally across the screen. The episode ends when the frog either reaches the top of the screen or is hit by a car; the former earns a reward of $r_1>0$ and the latter a reward of $r_2<0$. We evaluate the player's performance by computing the average reward per episode.
\\*[\gamespace]
\textbf{\textsc{Pong.~~~~}}
In this game, two paddles move up and down across the left and right sides of the screen while volleying a ball back and forth. The player controls the left paddle, whereas the game controls the right paddle. The action space is $A=\{U,D,\emptyset\}$. Failing to bounce the ball yields a negative reward and ends the episode. We cap the episode length at 50 successful bounces. We evaluate the player's performance by counting the number of successful bounces per episode.
\\*[\gamespace]
\begin{figure}
\centering
\includegraphics[width=0.39\textwidth]{figures/absoluteFeatures.png}
\includegraphics[width=0.39\textwidth]{figures/relativeFeatures.png}
\caption{Our tile-coded feature representation. We encode the absolute positions of game objects (top) as well as the relative positions of game objects (bottom) in spatial bins. Relative positions are computed separately for all pairs of object types. For any game state $s\in S$, this results in a feature vector $\phi(s)$ of dimension $d=O(k^2)$, where $k$ is the number of distinct object types in the game. To be used in the SARSA learning algorithm, the feature transform must also encode the action $a_i\in A=\{a_0,\ldots,a_{|A|-1}\}$ that is to be taken from the current game state $s$.
To this end, our final feature vector $\phi(s,a_i)$ has dimension $|A|d$; it contains $\phi(s)$ at indices $id$ through $(i+1)d-1$, with zeros at all other positions.}
\label{fig:schematic}
\end{figure}
\textbf{\textsc{Snake.~~~~}}
In this game, the player controls a snake of fixed length that moves around a maze. With no player input, the snake moves forward at a constant rate; pressing a direction key changes the direction in which the snake travels. The episode ends with a negative reward if the snake's head intersects either a wall or the snake's body. We cap the episode length at 180 frames. We evaluate the player's performance by counting the number of frames the snake survives per episode.
\\*[\gamespace]
\textbf{\textsc{Dance-Dance-Revolution.~~~~}}
In this game, arrows appear at the bottom of the screen and scroll toward targets at the top of the screen. Whenever an arrow overlaps its corresponding target, the player must press the direction key corresponding to the direction of the arrow. The trivial strategy of pressing every direction key at every frame is impossible, since the player can press at most one direction per frame. Each episode lasts for 1000 frames, and the player's performance is measured by the fraction of arrows successfully hit.

\input{features.tex}
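To make the action encoding of Figure~\ref{fig:schematic} concrete, the following Python sketch writes the tile-coded state features $\phi(s)$ into the block of $\phi(s,a_i)$ reserved for action $a_i$. The function name and arguments are illustrative, not part of our actual implementation.

```python
import numpy as np

def encode_state_action(phi_s, action_index, num_actions):
    """Embed the state feature vector phi(s) (dimension d) into the
    block of phi(s, a_i) reserved for action a_i; all other entries
    are zero, so the result has dimension num_actions * d."""
    d = phi_s.shape[0]
    phi_sa = np.zeros(num_actions * d)
    phi_sa[action_index * d:(action_index + 1) * d] = phi_s
    return phi_sa
```

Because only one block is nonzero, at most $d$ of the $|A|d$ entries are active for any state-action pair, which is the sparsity we later exploit for computational efficiency.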

\newcommand{\cmt}[1]{\textcolor{OliveGreen}{//\,\,#1}}
\begin{algorithm}
\begin{algorithmic}
\Function{LEARN-GAME}{\,}
\State $\tau\gets0$, $w\gets0$
\State $s \gets$ Initial state, $a \gets$ Initial action
\State $\eps\gets\eps_0$ \cmt{Initial exploration rate}
\Repeat
        \State Take action $a$, observe next state $s'$ and reward $r$
        \State $a' \gets$ CHOOSE-ACTION($s', \eps$) \cmt{$\eps$-greedy action selection}
        \State $\delta \gets r + \gamma w^T\phi(s',a') - w^T\phi(s, a)$ \cmt{Compute TD error}
        \State $\tau \gets \lambda \tau$ \cmt{Decay trace vector}
        \For{$\phi_i(s,a) \neq 0$}
                \State $\tau_i \gets 1$ \cmt{Replacing traces}
        \EndFor
        \State $w \gets w + \alpha\delta \tau$ \cmt{Update weight vector}
        \State $s \gets s'$, $a \gets a'$
        \State $\eps\gets\eps_d\eps$ \cmt{Decay exploration rate}
\Until{termination}
\EndFunction
\end{algorithmic}
\caption{Learn to play a game using the SARSA($\lambda$) algorithm with linear function approximation. See Section~\ref{section:learning} for definitions of the variables.}
\label{alg:sarsa}
\end{algorithm}

\input{model-based.tex}

\begin{figure}
\begin{flushright}
\subfigure{\includegraphics[width=0.5\textwidth]{figures/perf1.pdf}} \\*
\subfigure{\hspace{0.79in}\includegraphics[width=0.39\textwidth]{figures/perf2.pdf}}
\end{flushright}
\caption{Top: Comparison of total computation time for learning \textsc{Dodge-The-Missile} using different combinations of sparse and dense feature and trace vectors. Bottom: The average number of nonzero features for \textsc{Dodge-The-Missile}. We observe that the average number of nonzero features is very small, which we exploit to drastically reduce computation times. }
\label{fig:sparse}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/gridWorld1.pdf}
%\includegraphics[width=0.23\textwidth]{figures/gridWorld2.pdf}
\caption{The effect of different values of $\lambda$ on the convergence of our algorithm when playing {\sc Grid-World}. We show convergence rates for various values of $\lambda$ relative to random movement. Lower is better. We observe that our algorithm outperforms random play across a wide range of values for $\lambda$.}
\label{fig:lambda}
\end{figure}


\section{Model-Free Reinforcement Learning}
\label{section:learning}
Our system uses the SARSA($\lambda$) algorithm with linear function approximation (see Algorithm~\ref{alg:sarsa}) to learn a gameplay policy. SARSA \cite{mohri2012foundations} is an online, model-free algorithm for reinforcement learning. The algorithm iteratively computes a state-action value function $Q:S\times A\ra\RR$ based on the rewards received in the two most recently observed states. SARSA($\lambda$) \cite{wiering1998fast} is a variant that updates the state-action value function based on the rewards received over a larger window of recently observed states. In our case, the full state space $S$ of an unknown game may be very large, so we approximate the state-action value function as $Q(s,a)=w^T\phi(s,a)$, where $\phi:S\times A\ra\RR^n$ is the feature transform described in Section~\ref{section:features} and $w\in\RR^n$ is a weight vector.
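Under these definitions, a single SARSA($\lambda$) iteration with replacing traces can be sketched as follows. This Python fragment is an illustrative rendering of the update in Algorithm~\ref{alg:sarsa}, not our actual implementation; `phi` is assumed to return the feature vector of a state-action pair.

```python
import numpy as np

def sarsa_lambda_step(w, tau, phi, s, a, r, s_next, a_next,
                      alpha, gamma, lam):
    """One SARSA(lambda) update of the weight vector w and trace
    vector tau, where Q(s, a) = w^T phi(s, a)."""
    f = phi(s, a)
    # TD error: delta = r + gamma * Q(s', a') - Q(s, a)
    delta = r + gamma * (w @ phi(s_next, a_next)) - w @ f
    tau = lam * tau        # decay the trace vector
    tau[f != 0] = 1.0      # replacing traces for active features
    w = w + alpha * delta * tau
    return w, tau
```

Only the entries of $\tau$ touched by recently active features are nonzero, so the weight update need not visit all $n$ components.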

At each game state $s$, the algorithm chooses an action $a$ using an $\epsilon$-greedy policy: with probability $\epsilon$ the action is chosen randomly, and with probability $1-\epsilon$ the action is chosen to satisfy $a=\arg\max_{a'\in A}Q(s,a')$. The parameter $\epsilon\in[0,1]$ controls the trade-off between exploration and exploitation, and as such is known as the \emph{exploration rate}. In our implementation we decay $\epsilon$ exponentially over time, which encourages exploration near the beginning of the learning process and exploitation near the end.
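The $\epsilon$-greedy policy and the exponential decay of $\epsilon$ can be sketched as below. Names are illustrative; in particular, `decay_epsilon` is a hypothetical helper corresponding to the $\eps\gets\eps_d\eps$ step.

```python
import numpy as np

def epsilon_greedy(w, phi, s, num_actions, eps, rng):
    """With probability eps choose a random action (exploration);
    otherwise choose argmax_a Q(s, a) = w^T phi(s, a) (exploitation)."""
    if rng.random() < eps:
        return int(rng.integers(num_actions))
    return int(np.argmax([w @ phi(s, a) for a in range(num_actions)]))

def decay_epsilon(eps, eps_decay):
    """Exponential decay applied once per frame: eps <- eps_d * eps."""
    return eps_decay * eps
```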

The algorithm keeps track of recently seen states using a \emph{trace vector} $\tau\in\RR^n$. More specifically, $\tau$ records the recency with which each feature has been observed to be nonzero. The sparsity of typical feature vectors causes $\tau$ to be sparse as well, which we exploit for computational efficiency (see Figure~\ref{fig:sparse}).
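One way to exploit this sparsity is to store only the nonzero entries of $\tau$. The following sketch uses a Python dictionary for illustration; the function name and the `cutoff` threshold for discarding negligible traces are assumptions, not a description of our actual data structures.

```python
def sparse_trace_update(trace, active_indices, lam, cutoff=1e-4):
    """Decay a sparse trace vector (dict mapping feature index ->
    trace value) by lam, drop entries that fall below cutoff, and
    apply replacing traces of 1 for the currently active features."""
    decayed = {i: lam * t for i, t in trace.items() if lam * t >= cutoff}
    for i in active_indices:
        decayed[i] = 1.0
    return decayed
```

Each weight update then only touches the surviving entries, i.e.\ it costs time proportional to the number of nonzero traces rather than to $n$.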

We update the trace vector using a parameter $\lambda\in[0,1]$, which controls the extent to which recently seen features contribute to state-action value function updates. Varying $\lambda$ affects the rate at which the learning algorithm converges (see Figure~\ref{fig:lambda}). In practice, we found that a different value of $\lambda$ was required for each game in order to achieve the best possible performance. For example, we used $\lambda=0.3$ for \textsc{Dance-Dance-Revolution} and $\lambda=0.8$ for \textsc{Dodge-The-Missile}.

The algorithm also depends on a \emph{learning rate} $\alpha\in[0,1]$, which plays a role similar to the step size in gradient descent. We found that $\alpha$ also required tuning for each game. For example, \textsc{Frogger} performed best with $\alpha=0.01$, whereas \textsc{Snake} performed best with $\alpha=0.00001$.

\section{Results}
Our system successfully learns to play \textsc{Grid-World}, \textsc{Eat-The-Fruit}, \textsc{Dodge-The-Missile}, \textsc{Frogger}, \textsc{Pong}, and \textsc{Dance-Dance-Revolution}. To show that substantial learning takes place, we evaluate our system on these games by comparing against agents that choose a random action at every frame (see Figures~\ref{fig:lambda} and~\ref{fig:perf}). Our system learns to play \textsc{Snake} successfully only when we simplify the game by reducing the length of the snake body to 1 (see Figure~\ref{fig:snake-perf}).

\newcommand{\perfFigWidth}{0.375\textwidth}
\begin{figure*}
\centering
\hspace{-3pc}
\subfigure{\label{subfig:fruitEater-perf}\includegraphics[width=\perfFigWidth]{figures/eatTheFruit.pdf}}
\hspace{-1.7pc}
\subfigure{\label{subfig:ddr-perf}\includegraphics[width=\perfFigWidth]{figures/ddr.pdf}}
\hspace{-1.7pc}
\subfigure{\label{subfig:dtm1-perf}\includegraphics[width=\perfFigWidth]{figures/dtm1.pdf}}
\hspace{-3pc}
\vspace{-0.5pc} \\*
\hspace{-3pc}
\subfigure{\label{subfig:dtm2-perf}\includegraphics[width=\perfFigWidth]{figures/dtm2.pdf}}
\hspace{-1.7pc}
\subfigure{\label{subfig:frogger-perf}\includegraphics[width=\perfFigWidth]{figures/frogger.pdf}}
\hspace{-1.7pc}
\subfigure{\label{subfig:pong-perf}\includegraphics[width=\perfFigWidth]{figures/pong.pdf}}
\hspace{-3pc}
\caption{
	Performance of our algorithm relative to random play for {\sc Eat-The-Fruit} (top-left, lower is better), {\sc Dance-Dance-Revolution} (top-middle), {\sc Dodge-The-Missile} (top-right and bottom-left), {\sc Frogger} (bottom-middle), and {\sc Pong} (bottom-right). For {\sc Dodge-The-Missile}, we capped the episode length at 5000 frames. For {\sc Pong}, we capped the episode length at 50 bounces. For {\sc Frogger}, we set $r_1=0.2$ and $r_2=-1$. Note that after our algorithm has learned to play {\sc Dodge-The-Missile} effectively, it is capable of collecting powerups while simultaneously avoiding missiles. Since continuously trying to move upwards can be a viable strategy in {\sc Frogger}, we also compare the performance of our algorithm to an AI player that continuously tries to move upwards.
\label{fig:perf}
\end{figure*}


\begin{figure*}
\centering
\hspace{-6pc}
\subfigure{\label{subfig:snake-easy-short}\includegraphics[width=0.26\textwidth]{figures/snake1.pdf}}
\subfigure{\label{subfig:snake-easy-long}\includegraphics[width=0.26\textwidth]{figures/snake2.pdf}}
\subfigure{\label{subfig:snake-hard-short}\includegraphics[width=0.26\textwidth]{figures/snake3.pdf}}
\subfigure{\label{subfig:snake-hard-long}\includegraphics[width=0.26\textwidth]{figures/snake4.pdf}}
\hspace{-6pc}
\caption{Performance of our algorithm on {\sc Snake} relative to random play. We evaluate our algorithm on four game variations: an empty game board with a snake body length of 1 (left), an empty game board with a snake body length of 10 (left-middle), a relatively cluttered game board with a snake body length of 1 (right-middle), and a relatively cluttered game board with a snake body length of 10 (right). Our algorithm was able to learn effectively on the cluttered game board, but not with the longer body, because a longer body requires longer-term decision making. A short body, on the other hand, makes it possible to play according to a relatively greedy strategy, even on a relatively cluttered game board.}
\label{fig:snake-perf}
\end{figure*}


\begin{algorithm}
\begin{algorithmic}
\Function{CHOOSE-ACTION}{$s, \eps$}
	\State $x \gets$ a random number drawn uniformly from $[0,1]$
	\If{$x < \eps$}
		\State\Return a randomly chosen action $a\in A$ \cmt{Exploration}
	\Else
		\State\Return $\arg\max_{a\in A} w^T\phi(s,a)$ \cmt{Exploitation}
	\EndIf
\EndFunction
\end{algorithmic}
\caption{The $\eps$-greedy action selection used by Algorithm~\ref{alg:sarsa}.}
\label{alg:choose-action}
\end{algorithm}
%\begin{algorithm}
%\begin{algorithmic}
%\State \cmt{Global Variables}
%\State $S \gets$ Set of states
%\State $A \gets$ Finite set of actions
%\State $\phi\gets$ Feature transform $S\rightarrow\mathbb{R}^n$
%\State $w\gets$ Weight vector $\in\mathbb{R}^n$
%\State $\tau\gets$ Trace vector $\in\mathbb{R}^n$
%\State $\alpha\gets$ Learning rate $\in[0,1]$
%\State $\gamma\gets$ Discount factor $\in[0,1]$
%\State $\epsilon_0\gets$ Initial exploration rate $\in[0,1]$
%\State $\epsilon_d\gets$ Exploration decay rate $\in[0,1]$ 
%\end{algorithmic}
%\end{algorithm}
%
{\footnotesize
\bibliographystyle{ieee}
\bibliography{refs}
}

\end{document}

