\section{Policy Gradient}
Policy gradient methods are a family of reinforcement-learning techniques that maximize the expected total reward by gradient ascent on the policy parameters.

Two flavors of policy gradient methods are implemented and compared in this project: \textbf{Likelihood Ratio Methods and REINFORCE} and \textbf{Natural Gradient}.

\subsection{Likelihood Ratio Methods and REINFORCE}
We assume a stochastic policy parametrized by $\theta$: $\pi_{\theta}: s \rightarrow p_{\theta}(a | s)$. For a stochastic process such as the game Tetris, we consider the expected total reward $J$ over the distribution of trajectories $\xi$:

\begin{align}
J_{\theta} & = \sum_{\xi \in \Xi} p(\xi | \theta) R(\xi) \\
\nabla_{\theta} J & = \nabla_{\theta}\sum_{\xi \in \Xi} p(\xi | \theta) R(\xi) = \sum_{\xi \in \Xi} \nabla_{\theta}p(\xi | \theta) R(\xi)
\end{align}

The trick is to express the gradient itself as an expectation, so that it can be estimated by an average over $N$ sampled trajectories. Concretely:

\begin{align}
\nabla_{\theta} J & = \sum_{\xi \in \Xi} \nabla_{\theta}p(\xi | \theta) R(\xi) = \sum_{\xi \in \Xi} \frac{p(\xi | \theta) }{p(\xi | \theta) } \nabla_{\theta}p(\xi | \theta) R(\xi)\\
& = \sum_{\xi \in \Xi} p(\xi | \theta)  \frac{\nabla_{\theta} p(\xi | \theta) }{p(\xi | \theta) } R(\xi)\\
& = \sum_{\xi \in \Xi} p(\xi | \theta) \nabla_{\theta} \log p(\xi | \theta) R(\xi) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta} \log p(\xi_i | \theta) R(\xi_i)
\end{align}
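This Monte Carlo estimator can be sketched in a few lines of NumPy. The function names and the trajectory representation below are illustrative, not from the project code:

```python
import numpy as np

def reinforce_gradient(trajectories, grad_log_prob):
    """Monte Carlo estimate of the policy gradient.

    trajectories:  list of (xi, R) pairs, where xi is a sampled
                   trajectory and R its total reward.
    grad_log_prob: function xi -> gradient of log p(xi | theta),
                   as a NumPy vector.
    """
    # Average grad log p(xi | theta) * R(xi) over the N samples.
    grads = [grad_log_prob(xi) * R for xi, R in trajectories]
    return np.mean(grads, axis=0)
```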

In the case of Tetris we use a softmax policy, $\pi_{\theta}(a|s) = \frac{e^{\theta^T f(s, a)}}{\sum_{a'} e^{\theta^T f(s, a')}}$, where $f(s, a)$ is a feature vector computed from the state-action pair. The gradient of the log-probability of an entire trajectory is then simply the sum, over time, of the difference between the feature of the chosen action and the expected feature under the policy:

\begin{equation}
\nabla_{\theta} \log p(\xi | \theta) = \sum_{t=1}^{T}\left(f(s_t, a_t) - \sum_{a'}\pi_{\theta}(a'|s_t)f(s_t, a')\right)
\end{equation}
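A per-step version of this gradient can be written directly from the softmax definition. This is a minimal sketch (the feature-matrix layout is an assumption, not taken from the project code):

```python
import numpy as np

def softmax_probs(theta, feats):
    """feats: (num_actions, d) matrix whose row a is f(s, a)."""
    z = feats @ theta
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def grad_log_policy(theta, feats, a):
    """grad_theta log pi(a|s) = f(s, a) - E_pi[f(s, a')]."""
    p = softmax_probs(theta, feats)
    return feats[a] - p @ feats
```

Summing `grad_log_policy` over the steps of a trajectory gives the expression above.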

This method, however, does not take advantage of the Markov property, namely that a decision made at time $t$ can only affect the trajectory after it. One key insight from the actor-critic family of methods is that instead of scaling every time step by the total reward $R(\xi)$, each step should receive a scale factor proportional to its reward-to-go:

\begin{equation}
\nabla_{\theta} J = E_{p(\xi)}\left[\sum_{t=1}^{T}\nabla_\theta \log \pi_{\theta}(a_t|s_t)\sum_{\tau=t+1}^{T}r(s_\tau) \right]
\end{equation}
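The reward-to-go weights can be computed in one backward pass over the reward sequence. A small sketch, following the convention above that step $t$ is scaled by rewards from $\tau = t+1$ onward:

```python
import numpy as np

def reward_to_go(rewards):
    """rewards[t] = r(s_t); returns G with G[t] = sum_{tau > t} r(s_tau)."""
    rewards = np.asarray(rewards, dtype=float)
    tails = np.cumsum(rewards[::-1])[::-1]   # tails[t] = sum_{tau >= t} r(s_tau)
    return np.append(tails[1:], 0.0)         # shift by one: sum_{tau >= t+1}
```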

\subsection{Natural Policy Gradient}
When the features are badly scaled or highly correlated, naive gradient ascent can perform poorly because a small step in parameter space can cause a large change in the policy. We therefore measure step sizes with the Fisher-information metric instead of the Euclidean metric; the resulting update direction is called the natural gradient.

The algorithm is very similar to the likelihood-ratio method and is summarized as follows:
\begin{algorithm}
\caption{Compute Natural Policy Gradient}
\begin{algorithmic}
\State Sample $M$ games with the current $\theta$
\State Average over the games to obtain the following statistics:

    \State \hspace{\algorithmicindent} Policy derivative for each step: $\psi_t=\nabla_\theta \log \pi_{\theta}(a_t|s_t)$
    \State \hspace{\algorithmicindent} Eligibility trace: $\phi = E\left[\sum_{t=1}^{T}\psi_t\right]$
    \State \hspace{\algorithmicindent} Vanilla gradient: $g = E\left[\sum_{t=1}^{T}\psi_t \sum_{\tau=t+1}^{T}r(s_\tau)\right]$
    \State \hspace{\algorithmicindent} Fisher matrix:\quad $F_\theta = E\left[(\sum_{t=1}^{T}\psi_t)(\sum_{t=1}^{T}\psi_t)^T\right]$
\State The natural gradient: $g_{NG} = F_\theta^{-1}g$ \\
\Return gradient estimates $g_{NG}$
\end{algorithmic}
\end{algorithm}
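The algorithm above can be sketched in NumPy once the per-game statistics are collected. The array layout and the `eps` ridge term are illustrative assumptions, not details from the project:

```python
import numpy as np

def natural_gradient(psi_sums, reward_weighted_grads, eps=1e-6):
    """psi_sums:             (M, d) array; row m is sum_t psi_t for game m.
    reward_weighted_grads:   (M, d) array; row m is sum_t psi_t * (reward-to-go).
    """
    # Vanilla gradient g: average of the reward-weighted policy derivatives.
    g = reward_weighted_grads.mean(axis=0)
    # Fisher matrix F: average outer product of the per-game psi sums.
    F = (psi_sums[:, :, None] * psi_sums[:, None, :]).mean(axis=0)
    F = F + eps * np.eye(F.shape[0])   # small ridge to keep F invertible
    # Natural gradient: solve F g_NG = g instead of forming F^{-1}.
    return np.linalg.solve(F, g)
```

Solving the linear system with `np.linalg.solve` is cheaper and more stable than explicitly inverting $F_\theta$.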

The natural gradient method is normally faster than the vanilla gradient thanks to the Fisher matrix. The drawback, however, is that $F_\theta$ can be close to singular; we have to manually inject noise into the diagonal until the condition number drops below a threshold.
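The diagonal-noise fix can be sketched as a short loop. The threshold, initial noise level, and growth factor below are illustrative choices, not values from the project:

```python
import numpy as np

def regularize_fisher(F, max_cond=1e6, noise=1e-6, growth=10.0):
    """Add noise to the diagonal of F until its condition number
    drops below max_cond (all three parameters are illustrative)."""
    while np.linalg.cond(F) > max_cond:
        F = F + noise * np.eye(F.shape[0])
        noise *= growth   # escalate if the matrix is still ill-conditioned
    return F
```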

Using the same set of features, both methods achieve an average of 30,000 lines cleared (200,000 maximum), but natural policy gradient in general takes about five times fewer iterations to converge.