% !TEX root = POLYATHLONrEPORT.tex
In this section we explain our preliminary ideas and how we built on them to design the final algorithm.
\subsection{Preliminary algorithm}
The preliminary reinforcement learning algorithm is based on action-value actor-critic policy gradient \cite{silvernotes}. The advantage of this algorithm is that it learns both the value function and the policy. This is particularly useful in dynamic environments, where the agent can benefit from having a stochastic policy. The pseudo-code of our original algorithm is given in Algorithm \ref{algorithm1}. While the actor learns by ascending the policy gradient, the critic learns the action-value function using the temporal difference (TD) method. In our design, the policy originally followed a Gibbs distribution
\begin{eqnarray}
\pi_{\theta}(s,a)&=&\frac{e^{\theta_{a}^T \phi}}{\sum_{b}e^{\theta_{b}^T \phi}},
\label{gibbs_softmax}
\end{eqnarray}
where $\theta_{a}$ is the policy parameter vector for action $a$ and $\phi$ is the feature vector. The gradient of the log of this policy with respect to $\theta_{a}$ is
\begin{eqnarray}
\nabla_{\theta_{a}}\log \pi_{\theta}(s,a)&=& \phi \left(1-\pi_{\theta}(s,a)\right).
\end{eqnarray}
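As a concrete illustration, the policy of equation \ref{gibbs_softmax} and the gradient above can be sketched in a few lines of NumPy (function names are ours; one parameter vector per action is assumed, as in the text):

```python
import numpy as np

def gibbs_policy(theta, phi):
    # Gibbs/softmax policy: theta has shape (n_actions, n_features),
    # phi has shape (n_features,). Returns pi(s, a) for every action a.
    prefs = theta @ phi                # theta_a^T phi for each action a
    prefs = prefs - prefs.max()        # shift for numerical stability
    e = np.exp(prefs)
    return e / e.sum()

def log_policy_gradient(theta, phi, a):
    # Gradient of log pi_theta(s, a) with respect to theta_a:
    # phi * (1 - pi_theta(s, a)), matching the equation above.
    pi = gibbs_policy(theta, phi)
    return phi * (1.0 - pi[a])
```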

\begin{algorithm}
\textbf{Action-Value Actor-Critic} \;
Initialize $\theta$, $w$, and $s$ \;
Sample $a \sim \pi_{\theta}$ \;
\For {each step} {
Receive the reward $r$ from the environment \;
Receive the observation $\acute{s}$ from the environment \;
Perform an action $\acute{a} \sim \pi_{\theta}(\acute{s},\cdot)$ \;
$\delta=r+\gamma Q_{w}(\acute{s},\acute{a})- Q_{w}(s,a)$ \;
$\theta = \theta + \alpha \nabla_{\theta}\log \pi_{\theta}(s,a)Q_{w}(s,a)$ \;
$w = w + \beta \delta \phi$ \;
$a = \acute{a}$ \;
$s = \acute{s}$ \;
}
\caption{action-value actor-critic policy gradient}
\label{algorithm1}
\end{algorithm}
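A minimal sketch of one loop iteration of Algorithm \ref{algorithm1}, assuming a linear critic $Q_w(s,a) = w_a^T \phi(s)$ with one critic weight vector per action (this parameterisation, and the rate values, are assumptions of the sketch, not details fixed by the algorithm):

```python
import numpy as np

def softmax(prefs):
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def actor_critic_step(theta, w, phi_s, a, r, phi_s_next, a_next,
                      alpha=0.01, beta=0.1, gamma=0.99):
    # One iteration of the action-value actor-critic loop.
    q_sa = w[a] @ phi_s                     # Q_w(s, a)
    q_next = w[a_next] @ phi_s_next         # Q_w(s', a')
    delta = r + gamma * q_next - q_sa       # TD error
    # Actor: ascend the policy gradient, weighted by Q_w(s, a).
    pi = softmax(theta @ phi_s)
    theta[a] = theta[a] + alpha * phi_s * (1.0 - pi[a]) * q_sa
    # Critic: TD(0) update of the chosen action's weight vector.
    w[a] = w[a] + beta * delta * phi_s
    return delta
```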

\subsection{Choice of features}
For the features, we decided to use linear function approximation for its simplicity in gradient descent. Tile coding \cite{sutton1998introduction} is a popular and simple linear approximation method, but the choice of tiling depends on the task, and the binary nature of the tiles means that tile coding generalises uniformly: fully within a tile and not at all outside it. Since we wanted to make full use of generalisation in the value function approximation to speed up learning in the initial phase, we decided instead to use a continuous linear approximation method, and settled on radial basis function networks (RBFNs) \cite{broomhead:1988}.

The main problems with radial basis functions are that they require a large amount of fine-tuning of their parameters by the programmer and, like tile coding, that the programmer constructs the features specifically for the task at hand. While one can always place many units close together at equal spacing over the whole state space, this introduces computational issues and slows down learning. Ideally, one wants just as many features as are necessary to represent the value function well, with no redundancy. For the Polyathlon domain, then, we would need an algorithm that dynamically allocates units in the network as learning progresses. We implemented one such algorithm, described in \cite{wang:2007}, for actor-critic TD learning, but modified it to use ordinary radial basis functions rather than the fuzzy rules described in the paper (their fuzzy approach does offer some advantages, such as different widths for each input dimension, but we wanted to keep things as simple as possible) and to use eligibility traces. The activation of each unit in state $\vect{s}$ is given by
\begin{eqnarray}
    \varphi(\vect{s}) = \exp \left( - \frac{\lVert \vect{s} - \vect{\mu} \rVert^2}{2 \sigma} \right),
\label{eq:rbf}
\end{eqnarray}
where $\vect{\mu}$ is the centre and $\sigma$ the width of the unit.
The algorithm uses a normalised RBFN. Normalisation improves performance when the receptive fields of the RBF units overlap, by making the network less sensitive to poor selection of the centre and width parameters, as noted in \cite{shorten:1994}. However, it may also cause undesirable side effects, such as shifting maxima away from the centres and reactivating units far from their centres.
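The raw and normalised activations can be sketched directly from equation \ref{eq:rbf} (NumPy, with our convention of one row per unit centre; $\sigma$ appears in the denominator as in the equation, rather than $\sigma^2$):

```python
import numpy as np

def rbf_activations(s, centres, widths):
    # Raw Gaussian activations: exp(-||s - mu||^2 / (2 * sigma)) per unit.
    # centres has shape (n_units, n_dims), widths has shape (n_units,).
    d2 = ((s - centres) ** 2).sum(axis=1)   # squared distance to each centre
    return np.exp(-d2 / (2.0 * widths))

def normalised_rbf(s, centres, widths):
    # Normalised RBFN features: each activation divided by the sum over units.
    phi = rbf_activations(s, centres, widths)
    return phi / phi.sum()
```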

We originally implemented the actor-critic algorithm described in Algorithm \ref{algorithm1} using a radial basis function network for the features, but in order to get the dynamic allocation of features from \cite{wang:2007} we decided to follow their implementation more closely so as not to risk unnecessary bugs. We therefore abandoned the policy-gradient approach and switched from an action-value critic to a state-value critic.

The adaptive algorithm in \cite{wang:2007} introduces a number of new parameters to control the addition of new units and the merging of similar units, which, we noticed, trades one problem (feature selection) for another (parameter tuning). A very small variation in one parameter could mean the difference between endlessly adding new units and adding none at all in a given task, and the algorithm also introduces two additional learning rates, one each for the centre and width parameters.
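To make the kind of criteria involved concrete, here are illustrative versions of an add test and a merge test (thresholds, names, and the specific tests are our own placeholders, not the criteria of \cite{wang:2007}):

```python
import numpy as np

def should_add_unit(phi, novelty_threshold=0.1):
    # Illustrative novelty test: add a unit when no existing unit
    # responds strongly enough to the current state. The threshold is
    # one of the extra tuning parameters discussed above.
    return phi.size == 0 or float(phi.max()) < novelty_threshold

def mergeable_pairs(centres, merge_distance=0.05):
    # Illustrative merge test: index pairs of units whose centres
    # nearly coincide and are therefore candidates for merging.
    pairs = []
    for i in range(len(centres)):
        for j in range(i + 1, len(centres)):
            if np.linalg.norm(centres[i] - centres[j]) < merge_distance:
                pairs.append((i, j))
    return pairs
```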

\subsection{Action selection}
To ensure a good exploration/exploitation tradeoff, we initially chose to implement Gibbs/Boltzmann softmax action selection, as described in equation \ref{gibbs_softmax}. It has the theoretical advantage over $\epsilon$-greedy selection that actions with higher values (or preferences, using actor-critic TD learning) have a higher probability of being selected. However, in the mountain car task, the first episode would sometimes go on for so long that the absolute value of the value function became large enough for the computation to fail in the exponentiation step and produce NaN. We tried normalising the weights after every update to avoid this, but it introduced problems of its own. After trying the action selection scheme suggested in \cite{wang:2007}, where a normally distributed noise factor is added to each actor output before greedy selection, we decided in the end to use $\epsilon$-greedy action selection for its simplicity. Although the exploration parameter should be decreased over time to facilitate exploitation of the learned policy, we left this decay out until we could show that the algorithm was performing well.
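For reference, the overflow can also be avoided with the standard trick of subtracting the maximum preference before exponentiating, which leaves the distribution unchanged; a sketch of that trick alongside the $\epsilon$-greedy rule we settled on:

```python
import numpy as np

def stable_softmax(prefs):
    # Subtracting the maximum before exponentiating avoids the overflow
    # that produced NaN, without changing the resulting distribution.
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def epsilon_greedy(values, epsilon, rng):
    # Random action with probability epsilon, greedy action otherwise.
    if rng.random() < epsilon:
        return int(rng.integers(len(values)))
    return int(np.argmax(values))
```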

\subsection{Final algorithm}
The final algorithm, adapted from \cite{wang:2007} with the main difference that equation \ref{eq:rbf} is used to compute the activation of each RBF unit, is given in Algorithm \ref{algorithm2}. For details, see \cite{wang:2007}.

\begin{algorithm}
\textbf{Adaptive Radial Basis Function Network Actor-Critic} \;
Initialize the actor weights $\vect{w}_j$, critic weight $v_j$, centre $\vect{\mu}_j$ and width $\sigma_j$ of each initial node $j$ \;
\For {each step until end of episode} {
Receive the observation $s_t$ from the environment at time $t$ \;
Compute the normalised activation of each basis function \;
Compute the actor output $A_k(s_t)$ for each action $a_k$ and the critic output $V(s_t)$ \;
Select an action using $\epsilon$-greedy selection, execute the action, receive reward $r$ and observe new state $s_{t+1}$ \;
Compute the value function $V(s_{t+1})$ for the new state and the TD error $\delta$ \;
If necessary, add a new unit \;
Adjust actor weights $\vect{w}_j$ and critic weight $v_j$ for each node $j$ \;
Adjust center $\vect{\mu}_j$ and width $\sigma_j$ of each node $j$ \;
If necessary, merge similar units \;
}
\caption{adaptive RBFN actor-critic with $\epsilon$-greedy action selection}
\label{algorithm2}
\end{algorithm}
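A condensed Python sketch of Algorithm \ref{algorithm2}, omitting the unit addition and merging steps and the centre/width adaptation (class structure, names, and default rates are illustrative, not taken from \cite{wang:2007}):

```python
import numpy as np

class AdaptiveRBFNActorCritic:
    # Linear actor and state-value critic on normalised RBF features.
    def __init__(self, centres, widths, n_actions,
                 alpha=0.05, beta=0.1, gamma=0.99, epsilon=0.1, seed=0):
        self.centres, self.widths = centres, widths
        self.W = np.zeros((n_actions, len(centres)))   # actor weights w_j
        self.v = np.zeros(len(centres))                # critic weights v_j
        self.alpha, self.beta = alpha, beta
        self.gamma, self.epsilon = gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def features(self, s):
        # Normalised activations of the RBF units for state s.
        d2 = ((s - self.centres) ** 2).sum(axis=1)
        phi = np.exp(-d2 / (2.0 * self.widths))
        return phi / phi.sum()

    def act(self, s):
        # Epsilon-greedy selection over the actor outputs A_k(s).
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.W.shape[0]))
        return int(np.argmax(self.W @ self.features(s)))

    def update(self, s, a, r, s_next, done):
        phi = self.features(s)
        v_next = 0.0 if done else self.v @ self.features(s_next)
        delta = r + self.gamma * v_next - self.v @ phi   # TD error
        self.W[a] += self.alpha * delta * phi            # actor update
        self.v += self.beta * delta * phi                # critic update
        return delta
```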