\section{Model-Based Reinforcement Learning}
\label{section:model-based}

To learn an appropriate gameplay policy, we began with the following observation: although the state spaces of the games we consider are very large, their action spaces are very small. For example, \textsc{Snake} has roughly $10^{32\times 32}$ possible states, but only 5 actions. This observation motivated us to learn gameplay policies via fitted value iteration.

Since fitted value iteration requires access to a game model, we learned one from recorded examples of gameplay. More specifically, we formulated the problem of learning a game model as a supervised learning problem, which we addressed by training a collection of decision trees. Roughly speaking, our input features encoded the observable game state at time $t$ together with the input provided by the player at time $t$, and our target variables encoded the game state at time $t+1$. See our midterm progress report for details.
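The supervised formulation above can be sketched as follows. This is an illustrative toy, not our project code: the feature encodings and dimensions are placeholders, and we fit one decision tree per next-state component.

```python
# Sketch: learning a game model as supervised learning with decision trees.
# Feature encodings and dimensions are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy recorded gameplay: each row encodes (observable state at time t,
# player input at time t); each target row encodes the state at time t+1.
X = rng.integers(0, 2, size=(200, 10))  # state + action features
Y = rng.integers(0, 2, size=(200, 4))   # next-state components

# One decision tree per target variable (next-state component).
trees = [DecisionTreeClassifier().fit(X, Y[:, j]) for j in range(Y.shape[1])]

# Predicted next state for a new (state, action) pair.
x_new = X[:1]
s_next = np.array([t.predict(x_new)[0] for t in trees])
```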

Using our learned game model, as well as the feature transform $\phi$ described in Section~\ref{section:features}, we were equipped to apply fitted value iteration to learn a gameplay policy. At its core, fitted value iteration approximates the value function as $V(s)\approx\theta^T\phi(s)$. The weight vector $\theta$ is found by solving a linear regression problem with design matrix $\Phi(\vec s)=(\phi(s_1),\ldots,\phi(s_m))^T$ where $\vec s=(s_1,\ldots,s_m)$ is a vector of states. Unfortunately, we observed severe numeric instability using this approach for games as simple as \textsc{Grid-World}. We speculate that this instability stems from a severe rank-deficiency of the feature matrix $\Phi(\vec s)$.
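The regression step at the core of fitted value iteration can be sketched in a few lines. This is a minimal stand-in, assuming random features and targets in place of our actual $\phi$ and Bellman backup values; note that \texttt{lstsq} returns the minimum-norm solution when the design matrix is rank-deficient.

```python
# Sketch of the regression step of fitted value iteration:
# V(s) ~ theta^T phi(s), fitted by least squares.
# Phi and y are random stand-ins for phi(s_i) and the backup targets.
import numpy as np

rng = np.random.default_rng(1)
m, d = 50, 8                    # number of sampled states, feature dimension
Phi = rng.normal(size=(m, d))   # design matrix: row i is phi(s_i)
y = rng.normal(size=m)          # Bellman backup targets for the sampled states

# theta minimizes ||Phi @ theta - y||^2; when Phi is rank-deficient,
# lstsq returns the minimum-norm solution among the infinitely many minimizers.
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
V = Phi @ theta                 # approximate values at the sampled states
```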

In the case of \textsc{Grid-World}, the rank-deficiency of $\Phi(\vec s)$ occurs because there are only 25 unique states, assuming a $5\times 5$ game area. Therefore, $\Phi(\vec s)$ can have at most 25 unique rows no matter the dimension of $\vec s$, so $\Phi(\vec s)$ has rank $\leq 25$. However, using the feature transform described in Section~\ref{section:features}, $\Phi(\vec s)$ will have many more than 25 columns. The linear regression problem is therefore underconstrained and has infinitely many solutions. We considered taking the minimum-norm solution, but it is not clear that this approach would sensibly approximate our unknown value function.
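The rank-deficiency argument can be verified numerically. This illustration (not project code) builds a design matrix from 25 distinct feature rows sampled with repetition, mimicking a wide $\phi$ over a small state space:

```python
# Illustration: with only 25 unique states, Phi can have at most 25
# distinct rows, so rank(Phi) <= 25 no matter how many states we sample.
import numpy as np

rng = np.random.default_rng(2)
n_states, d = 25, 100                         # 25 states, >25 feature columns
unique_rows = rng.normal(size=(n_states, d))  # one feature row per state

# Sample many states with repetition, as fitted value iteration would.
idx = rng.integers(0, n_states, size=500)
Phi = unique_rows[idx]                        # 500 x 100 design matrix

rank = np.linalg.matrix_rank(Phi)             # bounded above by 25
```

Since the rank is at most 25 while there are 100 columns, the normal equations are singular and the least-squares problem has infinitely many solutions.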

To make matters worse, this numeric instability becomes more pronounced for more complex games. The dimension of the feature transform $\phi(s)$ (and hence the number of columns of $\Phi(\vec s)$) grows quadratically with a large constant as the number of unique object types increases. Even if this is smaller than the number of sensible game states, a numerically stable regression step would require a large number of training samples, which could quickly become computationally infeasible.


The numerical stability problems associated with fitted value iteration prompted us to markedly reconsider our technical approach. This led us to the model-free reinforcement learning algorithm we describe in the following section.
