%&latex
\documentclass[10pt,twocolumn,letterpaper]{article}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{booktabs}

% Include other packages here, before hyperref.

% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex.  (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
%\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}

\cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{****} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

% Pages are numbered in submission mode, and unnumbered in camera-ready
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}

%%%%%%%%% TITLE
\title{Learning to Play 2D Video Games}

\author{Justin Johnson\\
{\tt\small jcjohns@stanford.edu}
% For a paper whose authors are all at the same institution,
% omit the following lines up until the closing ``}''.
% Additional authors and addresses can be added with ``\and'',
% just like the second author.
% To save space, use either the email address or home page, not both
\and
Mike Roberts\\
{\tt\small mlrobert@stanford.edu}
\and
Matt Fisher\thanks{Note that Mike and Justin are enrolled in CS 229, but Matt is not. Matt is a senior PhD student in the Stanford Graphics Group, who will be advising and collaborating with Mike and Justin on this project.}\\
{\tt\small mdfisher@stanford.edu}
}

\maketitle
\thispagestyle{empty}

%%%%%%%%% ABSTRACT

\begin{abstract}
In this report, we outline the progress we have made on our \emph{Learning to Play 2D Video Games} project. Our goal in this project is to implement a machine learning system which can learn to model and play simple 2D video games. More specifically, we focus on the problem of building a general system that is capable of learning to play a variety of different games well, rather than trying to build a system that can play a single game perfectly. With this in mind, we collected 10,000 frames of gameplay from two simple 2D games. We use this data to train and evaluate a set of decision tree classifiers that predict future game state based on current game state. We refer to the learned mapping from current game state to future game state as the \emph{game model}. Despite using general visual cues as features, we are able to learn a highly accurate game model. We have started to experiment with using our game model to subsequently learn a gameplay policy using fitted value iteration.
\end{abstract}

\section{Introduction}

AI systems are capable of playing specific video games, such as Mario \cite{Karakovskiy2012}  and Starcraft \cite{Young2012}, with comparable skill to expert human players. However, all such AI systems rely on a human to somehow perform the challenging and tedious task of specifying the game rules, objectives and entities.

For example, state-of-the-art AI systems for playing Mario and Starcraft can play these games effectively, even when faced with challenging and complex game states. However, these systems rely heavily on hand-crafted heuristics and search algorithms that are specific to the game they target, and are not readily generalizable to other games.

In contrast, systems for General Game Playing (GGP) \cite{Genesereth2005}, such as CadiaPlayer \cite{Bjornsson2009}, can play novel games for which they were not specifically designed. However, GGP systems rely on a human to provide a complete formal specification of the game rules, objectives, and entities in a logical programming language similar to Prolog. Arriving at such a formal specification is very tedious even for the simplest games. This limitation significantly constrains the applicability of GGP systems.

In this project, we aim for greater generality than is available in state-of-the-art AI game playing systems. Although the 2D games we consider in this project seem trivial compared to most modern video games, they remain beyond the reach of general AI game playing systems. Therefore, designing a general AI system that can play these seemingly simple games effectively is an important step forward in the field of AI game playing.

\section{Collecting Training Data\footnote{Disclosure: Matt implemented both of the games described in this section, as well as the hand-written AI players for each game.}}

\begin{figure}
\begin{center}
\includegraphics[width=0.475\textwidth]{figures/games.png}
\end{center}
   \caption{The games we are using to train and test our learning system: \textsc{Snake} (left) and \textsc{Dodge-the-Missile} (right).}
\label{fig:teaser}
\end{figure}

The goal of our system is to learn to play games by observing examples of gameplay. To ensure that our system is capable of generalizing across games, we train and test using examples of gameplay collected from two distinct games (see Figure \ref{fig:teaser}):

\textsc{Snake} is a simple variant of the classic arcade game of the same name, where the player controls a long articulated snake that can move freely in a 2D grid. The goal of the game is to collect apples that appear randomly throughout the 2D grid while dodging fixed obstacles.

\textsc{Dodge-the-Missile} is a simple variant of Space Invaders, where the player controls a small spaceship on the bottom of the screen by moving it left and right. Small missile objects, as well as apples, fall at a fixed rate from the top of the screen. The objective of the game is to dodge the missiles and collect the apples, while staying alive for as long as possible.

For each game, we collect examples of gameplay by recording the observable state of the game as it is being played by some competent player. In our system, we record the game's observable state over 10,000 game timesteps. We refer to a single game timestep as a \textit{frame}.

The examples of gameplay we collect may be generated by a human or a competent AI player. In our system, we used competent AI players that were hand-written for each game.

\section{Learning a Game Model\footnote{Disclosure: Matt designed the features used in this section and implemented the ID3 algorithm for training the binary decision tree classifiers.}}

After having collected sufficient training data for our system, our next step is to learn how the game behaves. We formulate this problem as a supervised learning task. Roughly speaking, our input features encode the current observable game state at time $t$, as well as the input provided by the player at time $t$. Our target variables encode the game state at time $t+1$. Our goal is to learn a mapping from current game states to future game states, and we refer to this learned mapping as the \emph{game model}. Ideally, our learned game model would be able to predict the game state  at time $t+1$, given the observed game state and input from the player at time $t$. In this section, we describe our approach for learning a game model. 

\subsection{Feature Design}

Since we want our learning system to generalize across games, we must avoid including any game-specific state in our features. For example, explicitly encoding the position of Mario, along with the positions of game entities that we know can harm Mario, into our features would run counter to our goal of generality. However, we must encode the observable game state with sufficient fidelity to make accurate predictions.
On the other hand, we must carefully design features of sufficiently low dimensionality that our supervised learning problem remains computationally tractable.

With these competing concerns in mind, we quantize the positions of rendered objects, which we refer to as \textit{sprites}, to a small 2D grid (e.g., 32$\times$32). We also encode game state \textit{locally}. In other words, we do not encode the \textit{global} game state at time $t$ into a single training example. Instead, we encode the \textit{local} game state at time $t$ for each cell $c$ of our quantized grid into a distinct training example. The input features for each training example encode the following information:
\begin{itemize}
\item 
The local neighborhood (e.g., 3$\times$3) around $c$ on our quantized grid. We mark each cell in this neighborhood with an ID to indicate what sprite type, if any, is in that cell.
\item
The location on our quantized grid of any sprites that are rendered only once per frame, as well as the local neighborhoods around them.
\item
The number of occurrences for each type of sprite.
\item
The player input.
\end{itemize}
The target variables for each training example encode the type of sprite located at $c$ at time $t+1$.
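To make this encoding concrete, the following Python sketch builds the input feature vector for a single cell. The sprite IDs, grid layout, and function names are our own illustrative stand-ins, not part of our implementation; for brevity, the sketch omits the features for once-per-frame sprites.

```python
# Sketch of the per-cell feature encoding (names and IDs are hypothetical).
# A frame is a 2D grid of sprite-type IDs; 0 denotes background.

def encode_cell(grid, r, c, sprite_ids, player_input, radius=1):
    """Build the input feature vector for cell (r, c) at time t."""
    h, w = len(grid), len(grid[0])
    features = []
    # Local neighborhood (e.g., 3x3): sprite-type ID per cell, -1 off-grid.
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            features.append(grid[rr][cc] if 0 <= rr < h and 0 <= cc < w else -1)
    # Occurrence count of each sprite type over the whole grid.
    for sid in sprite_ids:
        features.append(sum(row.count(sid) for row in grid))
    # Player input at time t (e.g., 0=none, 1=up, 2=right, 3=down, 4=left).
    features.append(player_input)
    return features

grid = [[0, 0, 0],
        [0, 2, 1],
        [0, 0, 0]]
x = encode_cell(grid, 1, 1, sprite_ids=[1, 2], player_input=2)
```

The corresponding target variable would simply be the sprite-type ID observed at the same cell at time $t+1$.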

\subsection{Predicting Future Game States}

\begin{figure}
\begin{center}
\includegraphics[width=0.475\textwidth]{figures/evaluation.png}
\end{center}
   \caption{Accuracy of our learned game models as a function of the temporal distance into the future we are trying to predict (frame look-ahead). Since the background is the most common visual element in our games, we include baseline classifiers that always predict background for every cell, to establish meaningful baseline measurements. \textsc{Single Level Snake} is the same as \textsc{Snake}, but trained and tested on a single game level. We believe the performance of \textsc{Dodge-the-Missile} falls off relatively rapidly because our model does not accurately capture the stochastic appearance of new missiles at the top of the screen.}
\label{fig:evaluation}
\end{figure}

% \begin{table*}
% \small
% \begin{center}
% \begin{tabular}{@{}lllllllllll@{}}
% \toprule
% & \multicolumn{10}{c}{\textbf{Predicted Sprite Type}}\\
% \cmidrule(r){2-11}
% \textbf{Actual Sprite Type} & block & body & headU & headR & headD & headL & apple & goldenApple & badApple & none \\
% \midrule
% block & 747756 & 45 & 0 & 0 & 15 & 0 & 92 & 62 & 9 & 918 \\
% body & 0 & 207873 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 1142 \\
% headU & 0 & 0 & 1212 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
% headR & 0 & 0 & 0 & 1046 & 0 & 0 & 0 & 0 & 0 & 0 \\
% headD & 0 & 0 & 0 & 0 & 1717 & 0 & 0 & 0 & 0 & 1 \\
% headL & 0 & 0 & 0 & 0 & 0 & 1023 & 0 & 0 & 0 & 0 \\
% apple & 0 & 1 & 0 & 0 & 0 & 0 & 4330 & 1 & 1 & 173 \\
% goldenApple & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 1022 & 0 & 113 \\
% badApple & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 3222 & 29 \\
% none & 1171 & 2519 & 0 & 0 & 340 & 0 & 61627 & 35410 & 11454 & 2364972 \\
% \bottomrule
% \end{tabular}
% \end{center}
%    \caption{Confusion matrix for our \textsc{snake} game model with 1 frame look-ahead.}
% \label{table1}
% \end{table*}

% \begin{table*}
% \small
% \begin{center}
% \begin{tabular}{@{}lllllllllll@{}}
% \toprule
% & \multicolumn{10}{c}{\textbf{Predicted Sprite Type}}\\
% \cmidrule(r){2-11}
% \textbf{Actual Sprite Type} & block & body & headU & headR & headD & headL & apple & goldenApple & badApple & none \\
% \midrule
% block & 684785 & 27403 & 30 & 17 & 176 & 32 & 7870 & 244 & 1123 & 24816 \\
% body & 193 & 148977 & 41 & 15 & 106 & 48 & 5361 & 450 & 1660 & 52092 \\
% headU & 0 & 46 & 912 & 0 & 0 & 0 & 54 & 1 & 10 & 189 \\
% headR & 3 & 52 & 0 & 790 & 1 & 0 & 10 & 1 & 19 & 170 \\
% headD & 26 & 172 & 0 & 0 & 1240 & 0 & 24 & 3 & 1 & 236 \\
% headL & 5 & 88 & 0 & 0 & 0 & 776 & 12 & 1 & 4 & 134 \\
% apple & 12 & 190 & 3 & 4 & 18 & 3 & 1233 & 12 & 155 & 2868 \\
% goldenApple & 1 & 45 & 1 & 0 & 2 & 0 & 45 & 135 & 6 & 903 \\
% badApple & 0 & 87 & 2 & 0 & 16 & 2 & 95 & 10 & 2198 & 843 \\
% none & 11787 & 122498 & 783 & 774 & 5607 & 1075 & 109484 & 11228 & 73508 & 2129449 \bottomrule
% \end{tabular}
% \end{center}
%    \caption{Confusion matrix for our \textsc{snake} game model with 20 frame look-ahead.}
% \label{table1}
% \end{table*}

In the interest of simplicity, we formulate the task of predicting the next game state as a series of binary classification problems on individual cells of our quantized grid. Recall that our training examples encode local game state. So for each cell $c$ in our quantized grid, and for each distinct type of sprite $k$, we predict whether or not an instance of $k$ will be located at $c$ at time $t+1$.

To be clear, if we assume there are $n_k$ different types of sprites in our game, then we must train $n_k$ distinct binary classifiers. This formulation implies that, in order to predict a complete game state, we must invoke each of these $n_k$ binary classifiers for each cell $c$ in our quantized grid.

Since our features encode some categorical data (i.e., IDs representing which kinds of sprites are located in local neighborhoods), and since the game models we are trying to learn can exhibit discontinuities (i.e., two similar input states might occasionally map to two very different output states), we use decision trees to represent our hypotheses for the binary classification problems described above. We train each decision tree using the ID3 algorithm \cite{Quinlan1986}.
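The following minimal ID3 implementation illustrates the training procedure on categorical features; it is a generic sketch of the algorithm from \cite{Quinlan1986}, not our actual implementation, and the dictionary-based tree representation is our own stand-in.

```python
# Minimal ID3 sketch: greedy entropy-based splitting on categorical features.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(X, y, features=None):
    """Train a decision tree on categorical features X and labels y.
    Leaves are labels; internal nodes map a feature index to subtrees."""
    if features is None:
        features = list(range(len(X[0])))
    if len(set(y)) == 1 or not features:
        return Counter(y).most_common(1)[0][0]  # pure node or no features left
    def gain(f):  # information gain of splitting on feature f
        total = entropy(y)
        for v in set(x[f] for x in X):
            ys = [yi for xi, yi in zip(X, y) if xi[f] == v]
            total -= len(ys) / len(y) * entropy(ys)
        return total
    best = max(features, key=gain)
    node = {"feature": best, "children": {},
            "default": Counter(y).most_common(1)[0][0]}
    rest = [f for f in features if f != best]
    for v in set(x[best] for x in X):
        idx = [i for i, xi in enumerate(X) if xi[best] == v]
        node["children"][v] = id3([X[i] for i in idx], [y[i] for i in idx], rest)
    return node

def predict(tree, x):
    while isinstance(tree, dict):
        tree = tree["children"].get(x[tree["feature"]], tree["default"])
    return tree
```

In our setting, one such tree is trained per sprite type $k$, with binary labels indicating whether an instance of $k$ occupies the cell at time $t+1$.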

\subsection{Evaluation}

It is worth noting that by repeatedly querying our game model, we can make long-range predictions about future game states. This allows us to evaluate the accuracy of our model as a function of the distance into the future we are trying to predict. We refer to this temporal distance into the future as \textit{look-ahead}.
We believe that evaluating model accuracy as a function of look-ahead is more meaningful than evaluating model accuracy for a single look-ahead value (e.g., a look-ahead of 1 frame).

To evaluate the accuracy of our model, we captured an additional 5,000 frames of gameplay for each game. To be clear, our learning system had no access to these additional frames during training. For each frame of this testing data, we measured all classification errors and grouped them by sprite type. We show the error rate of our model as a function of look-ahead in Figure \ref{fig:evaluation}.
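The look-ahead evaluation above can be sketched as follows; the model, frames, and inputs are illustrative stand-ins (the toy model in the usage example simply shifts sprites one cell to the right each frame).

```python
# Sketch of look-ahead evaluation: roll the learned model forward k steps
# from each test frame and compare against the observed frame.

def lookahead_accuracy(model_step, frames, inputs, k):
    """Fraction of grid cells predicted correctly at look-ahead k."""
    correct = total = 0
    for t in range(len(frames) - k):
        pred = frames[t]
        for j in range(k):  # repeatedly query the game model
            pred = model_step(pred, inputs[t + j])
        actual = frames[t + k]
        for pred_row, actual_row in zip(pred, actual):
            for p, a in zip(pred_row, actual_row):
                correct += p == a
                total += 1
    return correct / total

# Toy dynamics: every sprite moves one cell to the right (with wraparound).
def shift_right(grid, _input):
    return [row[-1:] + row[:-1] for row in grid]

frames = [[[1, 0, 0, 0]]]
for _ in range(5):
    frames.append(shift_right(frames[-1], None))
acc = lookahead_accuracy(shift_right, frames, [None] * 5, k=3)
```

A perfect model yields an accuracy of 1.0 at every look-ahead; in practice, prediction errors compound as the model is queried repeatedly, which is the fall-off visible in Figure \ref{fig:evaluation}.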

\section{Learning a Gameplay Policy}

After having learned a game model, we use this model as a simulator and attempt to learn a sensible strategy for playing the game. We formulate this problem as a reinforcement learning task. Roughly speaking, our goal is to learn a mapping from the set of possible game states to a set of sensible actions to perform in each state, and we refer to this mapping as the \textit{gameplay policy}. In this section, we describe our approach for learning a gameplay policy.

We begin with the observation that, although the state spaces of the games we consider are very large, the action spaces of the games we consider are very small. For example, there are roughly 10$^{32\times32}$ different possible states in \textsc{Snake}, but only 4 possible actions. This observation motivates our use of fitted value iteration to learn a gameplay policy.

\subsection{Feature Design}

Whereas game model learning allowed us to define our game state locally, fitted value iteration requires us to define game state globally. Moreover, if we use a linear regression model for estimating our value function, then we are effectively assuming that we can estimate the value of a state with a linear combination of that state's features. This assumption is violated by our previous encoding of game state, since our previous encoding includes categorical data for which linear combinations are not well-defined. Therefore, we must encode game state differently when applying fitted value iteration.

When applying fitted value iteration, our encoding of game state takes inspiration from the bag-of-visual-words representation for images \cite{Wallraven2003}. We precompute a codebook of small local neighborhoods (e.g., 3$\times$3) that appear somewhere in our training data. Given a new state $S$ as input, we encode $S$ as a histogram over codebook entries. Additionally, we assume that we have access to all the integers being displayed on the screen. We concatenate the histogram over codebook entries with these on-screen integers to form a feature vector representing $S$.
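A possible sketch of this encoding is given below; the codebook construction, patch size, and on-screen integers are illustrative assumptions rather than a description of our exact implementation.

```python
# Sketch of the bag-of-visual-words state encoding for fitted value iteration.

def patches(grid, size=3):
    """All size x size local neighborhoods of the grid, as hashable tuples."""
    h, w = len(grid), len(grid[0])
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            yield tuple(tuple(grid[r + i][c + j] for j in range(size))
                        for i in range(size))

def build_codebook(training_frames, size=3):
    """Codebook: every distinct local neighborhood seen in the training data."""
    book = sorted(set(p for f in training_frames for p in patches(f, size)))
    return {p: i for i, p in enumerate(book)}

def encode_state(grid, codebook, screen_integers, size=3):
    """Histogram over codebook entries, concatenated with on-screen integers."""
    hist = [0] * len(codebook)
    for p in patches(grid, size):
        if p in codebook:
            hist[codebook[p]] += 1
    return hist + list(screen_integers)
```

Unlike the categorical per-cell features used for game model learning, this histogram representation is numeric, so a linear combination of its entries is well-defined.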

\subsection{Defining a Reward Function}

Fitted value iteration requires us to define a reward function for each of our states. Defining a reward function for a particular game is relatively straightforward. However, defining a reward function that generalizes across a variety of games is more challenging. Rather than choosing a single reward function that attempts to generalize across every 2D game, we define a \textit{representative set} of reward functions that we believe will generalize across most 2D games.
With this representative set of reward functions in hand, we modify the inner loop of fitted value iteration (i.e., the loop that iterates over possible actions) to also iterate over possible reward functions. We leave the remainder of the fitted value iteration algorithm unchanged.

Our representative set of reward functions currently consists of the integers that appear on screen, although we are actively experimenting with different reward functions.
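The modified algorithm can be sketched as follows; the simulator, feature map, and reward functions in this sketch are stand-in assumptions, and the only change from standard fitted value iteration is that the inner maximization ranges over candidate reward functions as well as actions.

```python
# Sketch of fitted value iteration with the modified inner loop:
# the max ranges over both actions and candidate reward functions.
import numpy as np

def fitted_value_iteration(states, phi, simulate, actions, rewards,
                           gamma=0.9, iters=20):
    """Learn V(s) ~ theta . phi(s) from a sample of states.

    states   : list of sampled game states
    phi      : state -> feature vector (e.g., the bag-of-words encoding)
    simulate : (state, action) -> next state (the learned game model)
    actions  : list of possible player inputs
    rewards  : representative set of reward functions, each state -> float
    """
    Phi = np.array([phi(s) for s in states], dtype=float)
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        targets = []
        for s in states:
            # Inner loop over actions AND reward functions.
            q = [R(s2) + gamma * (theta @ np.array(phi(s2)))
                 for a in actions
                 for s2 in [simulate(s, a)]
                 for R in rewards]
            targets.append(max(q))
        # Least-squares fit of theta to the Bellman backup targets.
        theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)
    return theta

# Toy usage: a 5-state chain where only the terminal state is rewarding.
states = [0, 1, 2, 3, 4]
phi = lambda s: [1.0, float(s)]
simulate = lambda s, a: min(s + a, 4)
theta = fitted_value_iteration(states, phi, simulate,
                               actions=[0, 1],
                               rewards=[lambda s: 1.0 if s == 4 else 0.0])
```

A gameplay policy is then obtained greedily: at each timestep, choose the action whose simulated successor state has the highest estimated value.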
%! translational invariance?

%! kernel trick to try many reward functions?

%! decision trees can treat categorical data fine, highly discontinuous game model

%! entropy to pick suitable bag-of-words features?

{\small
\bibliographystyle{ieee}
\bibliography{refs}
}

\end{document}
