\section{Cross Entropy Method}
One of the first things we tried was to naively apply the Cross Entropy
Method to Tetris as a black-box optimization problem. The Cross Entropy
Method is an optimization strategy which repeatedly samples inputs to a
black-box reward function and then re-fits a Gaussian distribution to the
``elite'' set of samples which performed best. We slightly modified Cross
Entropy to include a noise inflation constant which inflates the current
covariance estimate so that the distribution does not converge prematurely
(Algorithm \ref{alg.cross_entropy}).

\begin{algorithm}
\caption{Cross Entropy Method}
\label{alg.cross_entropy}
\begin{algorithmic}
\State Given $N$, the number of samples
\State $K$, the size of the elite set
\State $M$, the number of iterations
\State $R(\theta)$ the reward function of weight vector $\theta$
\State $\Theta = \{\theta_1 \ldots \theta_N\}$ the sample set of weight vectors
\State $S = \{s_1 \ldots s_N\}$ a score set
\State $\mu_0$ the starting mean
\State $\Sigma_0$ the starting covariance
\State $\alpha$ a noise constant
\State Sample $\Theta$ according to $\mathcal{N}(\mu_0, \Sigma_0)$
\State \textbf{for} iteration $i = 1$ to $M$:
    \State \hspace{\algorithmicindent} Score the samples: $s_n := R(\theta_n)$ for $n = 1 \ldots N$
    \State \hspace{\algorithmicindent} \textbf{sort} $\Theta$ in descending order of $S$
    \State \hspace{\algorithmicindent} $\mu_i := \frac{1}{K} \sum^K_{k=1}{\theta_k}$
    \State \hspace{\algorithmicindent} $\Sigma_i := \frac{1}{K - 1}\sum^K_{k=1}{(\theta_k - \mu_i)(\theta_k - \mu_i)^T}$
    \State \hspace{\algorithmicindent} $\Sigma_i := \Sigma_i + \alpha\mathbf{I}$
    \State \hspace{\algorithmicindent} Resample $\Theta$ according to $\mathcal{N}(\mu_i, \Sigma_i)$
\State \textbf{return} $\mu_M, \Sigma_M$
\end{algorithmic}
\end{algorithm}
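Algorithm \ref{alg.cross_entropy} can be sketched in a few lines of NumPy. This is an illustrative implementation, not our actual Tetris code: the reward function passed in below stands in for the forward simulation described next.

```python
import numpy as np

def cross_entropy_method(R, mu0, Sigma0, N=100, K=10, M=100, alpha=0.0, seed=0):
    """Cross Entropy Method: repeatedly sample weight vectors, score them
    with R, and re-fit a Gaussian to the K best ("elite") samples."""
    rng = np.random.default_rng(seed)
    mu = np.array(mu0, dtype=float)
    Sigma = np.array(Sigma0, dtype=float)
    d = mu.shape[0]
    for _ in range(M):
        # Sample N weight vectors from the current Gaussian
        thetas = rng.multivariate_normal(mu, Sigma, size=N)
        scores = np.array([R(t) for t in thetas])
        # Keep the K highest-scoring samples and re-fit mean and covariance
        elite = thetas[np.argsort(scores)[-K:]]
        mu = elite.mean(axis=0)
        # Noise inflation: alpha * I keeps the covariance from collapsing
        Sigma = np.cov(elite, rowvar=False) + alpha * np.eye(d)
    return mu, Sigma
```

On a toy quadratic reward the mean converges to the maximizer within a few iterations; in our setting $R$ is far noisier, which is what motivates the averaging and noise inflation discussed below.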

In practice, we chose the reward function $R$ to forward-simulate the policy
induced by $\theta$ to completion between 4 and 30 times and average the
number of lines cleared. Averaging over several runs proved important because
of the high variance in the performance of our controller.
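The averaging step is a thin wrapper around the simulator. Here `simulate_episode` is a hypothetical stand-in for our forward simulation: it plays one full game with weights `theta` and returns the number of lines cleared.

```python
def averaged_reward(simulate_episode, theta, n_runs=8):
    """Average lines cleared over n_runs full games, reducing the variance
    of the reward estimate fed to the Cross Entropy Method."""
    return sum(simulate_episode(theta) for _ in range(n_runs)) / n_runs
```

Raising `n_runs` trades wall-clock time for a lower-variance estimate; we varied it between 4 and 30 depending on how long the games ran.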

\begin{figure}
  \centering
    \includegraphics[width=0.9\textwidth]{cross_entropy_noise}
      \caption{Convergence of the Cross Entropy Method when $\alpha=0$ compared to
  when $\alpha$ decreases linearly from $100$ to $0$.}
  	\label{fig.noise}
\end{figure}

Using $N=100$, $K=10$, $M=100$, $\Sigma_0=100\,\mathbf{I}$, and $\mu_0 = \mathbf{0}$, we
consistently obtain a best Tetris player which clears, on average, 30,000 lines
before death. Adding a noise constant $\alpha$ which decreases linearly from 100 to 0
over the iterations significantly improved the convergence rate of Cross Entropy on
this problem (Figure \ref{fig.noise}), but unfortunately we were unable to push
performance much beyond the average of 30,000 lines. We believe this points to a
deficiency either in our control policy or in our features.
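The decreasing noise constant is a simple linear interpolation from $\alpha = 100$ at the first iteration to $0$ at the last; a minimal sketch:

```python
def linear_noise(i, M, alpha_max=100.0):
    """Noise constant at iteration i (0-indexed), decaying linearly from
    alpha_max at i = 0 down to 0 at the final iteration i = M - 1."""
    return alpha_max * (1.0 - i / (M - 1)) if M > 1 else 0.0
```

At each iteration, `linear_noise(i, M)` replaces the fixed $\alpha$ when inflating the covariance estimate.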

\begin{figure}
  \centering
    \includegraphics[width=0.9\textwidth]{cross_entropy_convergence}
      \caption{Comparing the sampled weights on two features, \textbf{OVERHANGS} and
  \textbf{LINES CLEARED}, over the course of 10 iterations.}
  \label{fig.convergence}
\end{figure}

Analysis of the convergence of the algorithm suggests that two features,
\textbf{OVERHANGS} and \textbf{ISOLATION}, are by far the most important: they
have the least variance and the highest absolute weight. Features based on
height, number of squares, or lines cleared do not appear to be nearly as
important; removing all height features has a negligible impact on the final
performance of the Tetris player.
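One way to make this importance analysis concrete (an illustrative sketch, not the exact procedure we used) is to rank features by the magnitude of their converged mean weight relative to their spread across the final sample set: a feature with a large, stable weight dominates the policy, while one with a small or noisy weight contributes little.

```python
import numpy as np

def rank_features(thetas, names):
    """Rank features by |mean weight| / std over an (N, d) sample matrix.
    High ratio = large, consistent weight = likely important feature."""
    mu = thetas.mean(axis=0)
    sd = thetas.std(axis=0)
    score = np.abs(mu) / (sd + 1e-9)  # epsilon guards against zero spread
    return [names[i] for i in np.argsort(-score)]
```

Applied to the final iteration's samples, a ranking like this surfaces \textbf{OVERHANGS} and \textbf{ISOLATION} ahead of the height-based features.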




