%&latex
\documentclass{article} % For LaTeX2e
\usepackage{nips12submit_e,times}
%\documentstyle[nips12submit_09,times,art10]{article} % For LaTeX 2.09


\title{Learning to Play 2D Video Games}


\author{
Justin Johnson\\
Stanford University\\
\texttt{jcjohns@stanford.edu} \\
\And
Mike Roberts\\
Stanford University\\
\texttt{mlrobert@stanford.edu} \\
\And
Matt Fisher\\
Stanford University\\
\texttt{mdfisher@stanford.edu}
\thanks{Note that Mike and Justin are enrolled in CS 229, but Matt is not. Matt is a senior PhD student in the Stanford Graphics Group, who will be advising and collaborating with Mike and Justin on this project.}\\
}

% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to \LaTeX{} to determine where to break
% the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{}
% puts 3 of 4 authors names on the first line, and the last on the second
% line, try using \AND instead of \And before the third author name.

\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}


\maketitle

\begin{abstract}
In this report, we outline the progress we have made on our \emph{Learning to Play 2D Video Games} project. Our goal in this project is to implement a machine learning system which can learn to model and play simple 2D video games. More specifically, we focus on the problem of building a general system that is capable of learning to play a variety of different games well, rather than trying to build a system that can play a single game perfectly. With this in mind, we collected 10,000 frames of gameplay from two simple 2D video games. We use this data to train and evaluate a decision tree classifier that predicts future game state based on current game state. We refer to the learned mapping from current game state to future game state as the \emph{game model}. Despite using general visual cues as features, we are able to learn a highly accurate game model. Finally, we use our game model to learn a gameplay policy using fitted value iteration. We demonstrate preliminary results of our gameplay policy learning on synthetic test data.
\end{abstract}

\section{Introduction}

AI systems are capable of playing specific games, such as Mario [REF] and Starcraft [REF], with skill comparable to that of expert human players. However, all such AI systems rely on a human to perform the challenging and tedious task of formally specifying the game's rules and objectives.

For example, state-of-the-art AI systems for playing Mario [REF] and Starcraft [REF] can play these games effectively, even when faced with challenging and complex game states. However, these systems rely heavily on hand-crafted heuristics and search algorithms that are specific to the game they target, and are not readily generalizable to other games.

In contrast, the General Game Playing (GGP) framework of [REF] can play novel games for which the framework was not specifically designed. In other words, GGP explicitly addresses the problem of generalizing across multiple games. However, GGP relies on a human to provide as input a complete formal specification of the game rules, objectives, and relevant game entities. Arriving at such a formal specification is very tedious even for the simplest games (see Figure [FIG]). This limitation significantly constrains the applicability of GGP.

In this project, we aim for greater generality than is available in state-of-the-art AI game playing systems. Although the 2D games we consider in this project seem trivial compared to most modern video games, they remain beyond the reach of general AI game playing systems. Therefore, designing a general AI system that can play these seemingly simple games effectively is an important step forward in the field of AI game playing.

\section{Collecting Training Data\footnote{Disclosure: Matt implemented both of the games described in this section, as well as the hand-written AI players for each game.}}

The goal of our system is to learn to play video games from examples of gameplay. Moreover, we want our algorithm to learn in a way that is generalizable across games. With these goals in mind, we have collected training data for two distinct games (see Figure [FIG]): 

\textsc{Snake} is a simple variant of the classic arcade game of the same name, where the player controls a long articulated snake that can move freely in a 2D grid. The goal of the game is to collect apples that appear randomly throughout the 2D grid while dodging fixed obstacles.

\textsc{Dodge-the-Missile} is a simple variant of Space Invaders, where the player controls a small spaceship on the bottom of the screen by moving it left and right. Small missile objects, as well as apples, fall at a fixed rate from the top of the screen. The objective of the game is to dodge the missiles and collect the apples, while staying alive for as long as possible.

We generate training data using a competent AI player that was hand-written for each game. As the hand-written AI player proceeds through the game, we simply record the game's observable state over 10,000 game timesteps. We refer to a single game timestep as a \textit{frame}.
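The data-collection loop can be sketched as follows. This is an illustrative Python sketch rather than our actual implementation; the \texttt{game} and \texttt{ai\_player} interfaces are hypothetical.

```python
# Illustrative sketch of the frame-capture loop; the game and AI-player
# interfaces here are hypothetical, not our actual implementation.
def record_gameplay(game, ai_player, num_frames=10000):
    """Record one (observable state, controller input) pair per timestep."""
    frames = []
    state = game.reset()
    for _ in range(num_frames):
        controller_input = ai_player.choose_action(state)
        frames.append((state, controller_input))
        state = game.step(controller_input)
    return frames
```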
\section{Learning a Game Model\footnote{Disclosure: Matt designed the features used in this section and implemented the ID3 algorithm for training the binary decision tree classifiers.}}

After having collected sufficient training data for our system, our next step is to learn a model for how the game behaves. We formulate learning a game model as a supervised learning task. Roughly speaking, our input features encode the current observable game state at time $t$, as well as the controller input provided by the player at time $t$. Our target variables encode the game state at time $t+1$. Our goal is to learn a mapping from current game states to future game states, and we refer to this learned mapping as the \emph{game model}. Ideally, our learned game model would be able to predict the game state $S_{t+1}$ at time $t+1$, given the observed game state $S_t$ and controller input $I_t$ at time $t$. In this section, we describe our approach for learning the game model. 

\subsection{Designing Features for Game Model Learning}

Since we want our learning system to generalize across games, we must avoid including any game-specific state in our features. For example, explicitly encoding the position of Mario, along with the positions of game entities that we know can harm Mario, into our features would run counter to our goal of generality. However, we must encode the observable game state with sufficient fidelity to make accurate predictions.
On the other hand, we must carefully design features of sufficiently low dimensionality that our supervised learning problem remains computationally tractable.

With these competing concerns in mind, we quantize the positions of rendered objects (fixed-size image patches), which we refer to as \textit{sprites}, to a small 2D grid (e.g., $32\times32$). We also encode game state \textit{locally}: rather than encoding the \textit{global} game state at time $t$ into a single training example, we encode the \textit{local} game state at time $t$ for each cell $c$ of our quantized grid into a distinct training example. The input features for each training example encode the following information:
\begin{itemize}
\item 
The local neighborhood (e.g., $3\times3$) around $c$ on our quantized grid. We mark each cell in this neighborhood with an ID to indicate what kind of sprite, if any, is in that cell.
\item
The location on our quantized grid of any sprites that are rendered only once per frame, as well as the local neighborhoods around them.
\item
The number of occurrences of each type of sprite.
\item
The controller input state.
\end{itemize}
The target variables for each training example encode the type of sprite located at $c$ at time $t+1$.
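The per-cell feature extraction above can be sketched as follows. The grid representation, sprite IDs, \texttt{EMPTY} marker, and controller encoding are illustrative assumptions; for brevity, the sketch omits the features for sprites rendered only once per frame.

```python
# Illustrative per-cell feature extraction on the quantized grid; sprite IDs,
# the EMPTY marker, and the controller encoding are assumptions. For brevity
# this sketch omits the features for sprites rendered once per frame.
EMPTY = 0

def cell_features(grid, row, col, sprite_counts, controller_state):
    """Encode the local game state around cell (row, col) as one example."""
    n_rows, n_cols = len(grid), len(grid[0])
    features = []
    # 3x3 neighborhood of sprite IDs around the cell (EMPTY if off-grid).
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            in_bounds = 0 <= r < n_rows and 0 <= c < n_cols
            features.append(grid[r][c] if in_bounds else EMPTY)
    # Per-sprite-type occurrence counts, then the controller input state.
    features.extend(sprite_counts)
    features.extend(controller_state)
    return features
```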

\subsection{Predicting Future Game States}

In the interest of simplicity, we formulate the task of predicting the next game state as a series of binary classification problems on individual cells of our quantized grid. Recall that our training examples encode local game state: for each cell $c$ in our quantized grid, and for each distinct type of sprite $k$, we use a binary classifier to predict whether or not an instance of $k$ will be located at $c$ at time $t+1$.

If we assume there are $n_k$ different types of sprites in our game, then this formulation requires us to train $n_k$ distinct binary classifiers, and to predict a complete game state we must invoke each of them once for each cell $c$ in our quantized grid. We represent each binary classifier as a decision tree, which we train using the ID3 algorithm [REF].
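For reference, a textbook version of ID3 with categorical feature values and binary labels can be sketched as follows; this is a minimal sketch, not our implementation, and it omits pruning and continuous-feature handling.

```python
# Minimal textbook ID3 sketch for the per-sprite binary classifiers.
# Assumes categorical feature values and 0/1 labels (sprite absent/present).
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def id3(examples, labels, features):
    """Grow a decision tree; leaves are (majority) labels."""
    if len(set(labels)) == 1:
        return labels[0]
    if not features:
        return Counter(labels).most_common(1)[0][0]
    # Pick the feature with the highest information gain.
    def gain(f):
        split = {}
        for x, y in zip(examples, labels):
            split.setdefault(x[f], []).append(y)
        return entropy(labels) - sum(
            len(ys) / len(labels) * entropy(ys) for ys in split.values())
    best = max(features, key=gain)
    tree = {'feature': best, 'children': {},
            'default': Counter(labels).most_common(1)[0][0]}
    rest = [f for f in features if f != best]
    branches = {}
    for x, y in zip(examples, labels):
        branches.setdefault(x[best], ([], []))
        branches[x[best]][0].append(x)
        branches[x[best]][1].append(y)
    for value, (xs, ys) in branches.items():
        tree['children'][value] = id3(xs, ys, rest)
    return tree

def predict(tree, x):
    """Descend to a leaf; unseen feature values fall back to the majority."""
    while isinstance(tree, dict):
        tree = tree['children'].get(x[tree['feature']], tree['default'])
    return tree
```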



To evaluate the accuracy of our game model, we captured an additional 5000 frames of gameplay for each game; our learning algorithm had no access to these frames during training. For each frame of our testing data, we measured all classification errors and grouped them by sprite type.
It is worth noting that by repeatedly querying our learned game model, we can make long-range predictions about future game states. With this in mind, we also measured classification error rates as a function of how far ahead we predict (see Figure [FIG]). For each game, we show confusion matrices for 1-frame-lookahead, 10-frame-lookahead, and 20-frame-lookahead in Tables 1--9.
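The $n$-frame-lookahead evaluation simply feeds the model's own predictions back in as input. In the sketch below, \texttt{predict\_next\_state} is a placeholder name standing in for one full pass of the per-cell binary classifiers.

```python
# n-frame lookahead by feeding the model's own predictions back in as input.
# predict_next_state is a placeholder for one full pass of the per-cell
# binary classifiers; controller_inputs is the sequence of player inputs.
def lookahead(predict_next_state, state, controller_inputs):
    """Roll the learned game model forward through successive predictions."""
    for controller_input in controller_inputs:
        state = predict_next_state(state, controller_input)
    return state
```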

\section{Learning a Gameplay Policy}

Given a learned game model, the final component of our system is a gameplay policy that maps game states to controller inputs. Because the set of possible controller inputs is small, while the set of possible game states is very large and, in aggregate, behaves somewhat continuously, fitted value iteration is an appropriate algorithm for learning such a policy.

Fitted value iteration approximates the value of a game state as a linear combination of state features. This imposes requirements on our features that differ from those of game model learning. First, categorical data, such as sprite type IDs, cannot be encoded directly into a linear combination, and must instead be re-encoded (e.g., as indicator features). Second, the features cannot be purely local, since the value of a game state depends on the global configuration of sprites. Third, the features should be translationally invariant, so that the learned value function generalizes across sprite positions. By contrast, the decision trees we use for game model learning handle categorical data naturally, but they produce a highly discontinuous game model, which complicates value function approximation. We demonstrate preliminary results of our gameplay policy learning on synthetic test data.
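For reference, fitted value iteration with a linear value-function approximator can be sketched as follows. The sampled states, feature map, reward, and one-step simulator hook below are illustrative assumptions rather than our final design, and the sketch assumes deterministic game dynamics.

```python
# Sketch of fitted value iteration with a linear value-function approximator.
# The sampled states, feature map phi, reward, and one-step simulator are
# illustrative assumptions, not our final design; dynamics are assumed
# deterministic here.
import numpy as np

def fitted_value_iteration(states, actions, simulate, reward, phi,
                           gamma=0.95, num_iters=50):
    """Fit V(s) = theta . phi(s) by repeated least-squares regression."""
    Phi = np.array([phi(s) for s in states])
    theta = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        # Bellman backup: best one-step lookahead value over the action set.
        targets = [max(reward(s, a) + gamma * (phi(simulate(s, a)) @ theta)
                       for a in actions)
                   for s in states]
        # Least-squares fit of theta to the backed-up value targets.
        theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)
    return theta
```

The greedy policy then picks, at each frame, the action whose simulated successor state has the highest approximate value.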


\subsubsection*{References}

\small{
[1] Quinlan, J.R. (1986) Induction of decision trees. {\it Machine Learning} {\bf 1}(1):81--106.

[2] Karakovskiy, S. \& Togelius, J. (2012) The Mario AI benchmark and competitions. {\it IEEE Transactions on Computational Intelligence and AI in Games} {\bf 4}(1):55--67.

[3] Karpathy, A. Tetris AI. \texttt{http://karpathy.ca/portfolio/tetris.php}

[4] Love, N., Hinrichs, T., Haley, D., Schkufza, E. \& Genesereth, M. (2008) General game playing: game description language specification. Stanford Logic Group Technical Report.
}




\end{document}
