\section{Methods}\label{sec:methods}

\begin{figure*}[!t]
  \centerline{
    \subfloat[TD network]{
      \resizebox{0.4\textwidth}{!}{
        $ \mbox{State } s \rightarrow \left\{ \vcenter {
          \vbox {
            \xymatrix@C=4pc@R=0.25pc {
              *++[o][F-]{}  \ar@{-}[dr] \ar@{-}[dddr] \ar@{-}[dddddddr] & &\\
              & *++[o][F-]{} \ar@{-}[dddr]& \\
              *++[o][F-]{} \ar@{-}[ur] \ar@{-}[dr] \ar@{-}[dddddr] & &\\
              & *++[o][F-]{} \ar@{-}[dr]& \\
              & & *++[o][F-]{} \ar[r]^-{V(s)} & \\
              \vdots & \vdots & \\
              & & \\
              & *++[o][F-]{} \ar@{-}[uuur]& \\
              *++[o][F-]{} \ar@{-}[uuuuuuur] \ar@{-}[uuuuur] \ar@{-}[ur] & &\\
            }
          }
        }
        \right.  $
      }
      \label{subfig:td}
    }
    \hfil
    \subfloat[Q-learning network]{
      \resizebox{0.4\textwidth}{!}{
        $ \mbox{State } s \rightarrow \left\{ \vcenter {
          \vbox {
            \xymatrix@C=4pc@R=0.25pc {
              *++[o][F-]{} \ar@{-}[dr] \ar@{-}[dddr] \ar@{-}[dddddddr] & & *++[o][F-]{} \ar[r]^-{Q(s,a_1)} &\\
              & *++[o][F-]{} \ar@{-}[ur] \ar@{-}[dr] \ar@{-}[dddddddr] & \\
              *++[o][F-]{} \ar@{-}[ur] \ar@{-}[dr] \ar@{-}[dddddr] & & *++[o][F-]{} \ar[r]^-{Q(s,a_2)} &\\
              & *++[o][F-]{} \ar@{-}[uuur] \ar@{-}[ur] \ar@{-}[dddddr] & \\
              & & \\
              \vdots & \vdots & \vdots \\
              & & \\
              & *++[o][F-]{} \ar@{-}[uuuuuuur] \ar@{-}[uuuuur] \ar@{-}[dr]  & \\
              *++[o][F-]{} \ar@{-}[uuuuuuur] \ar@{-}[uuuuur] \ar@{-}[ur] & & *++[o][F-]{} \ar[r]^-{Q(s,a_n)} & \\
            }
          }
        } \right.  $
      }
      \label{subfig:qlrn}
    }
  }
  \caption{Topologies of function approximators. A TD network (a)
    approximates the value of the state presented at the input. A Q-learning
    network (b) approximates the values of all possible actions in
    the state presented at the input.} \label{fig:nns}
\end{figure*}


In this section we give a short introduction to reinforcement learning and
sequential decision problems. In reinforcement learning, the learner is a
decision-making agent that takes actions in an environment and receives a reward
(or penalty) for its actions while trying to solve a problem \cite{SuttonBarto,
wiering2012sota}.  After a set of trial-and-error runs, it should learn the best
policy, which is the sequence of actions that maximizes the total reward.

We assume an underlying Markov decision process, which is formally defined by
(1) a finite set of states $s \in S$; (2) a finite set
of actions $a \in A$; (3) a transition function $T(s,a,s')$, specifying the
probability of ending in state $s'$ after taking action $a$ in state $s$; (4) a
reward function $R(s,a)$, providing the reward the agent will receive for
executing action $a$ in state $s$, where $r_{t}$ denotes the reward obtained at
time $t$; and (5) a discount factor $0 \leq \gamma \leq 1$, which discounts later
rewards compared to immediate rewards.
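These five components can be written down directly for a toy problem. The
two-state MDP below (its states, actions, $T$, $R$, and $\gamma$) is invented
purely for illustration:

```python
import random

# A hypothetical two-state MDP, purely for illustration.
states = ["s0", "s1"]
actions = ["left", "right"]
gamma = 0.9  # discount factor, 0 <= gamma <= 1

# T[(s, a)] maps each next state s' to the probability P(s' | s, a)
T = {
    ("s0", "left"):  {"s0": 1.0},
    ("s0", "right"): {"s0": 0.2, "s1": 0.8},
    ("s1", "left"):  {"s0": 1.0},
    ("s1", "right"): {"s1": 1.0},
}

# R[(s, a)] is the reward for executing action a in state s
R = {("s0", "left"): 0.0, ("s0", "right"): 1.0,
     ("s1", "left"): 0.0, ("s1", "right"): 2.0}

def step(s, a):
    """Sample one transition (s', r) from the MDP."""
    dist = T[(s, a)]
    s_next = random.choices(list(dist), weights=dist.values())[0]
    return s_next, R[(s, a)]
```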


\subsection{Value Functions} We want our agent to learn an optimal
\textit{policy} for mapping states to actions. The policy defines the action to
be taken in any state $s: a = \pi(s)$. The value of a policy $\pi$,
$V^{\pi}(s)$, is the expected cumulative reward that will be received when the
agent follows the policy starting at state $s$. It is defined as
\cite{SuttonBarto}
\begin{equation}
  V^{\pi}(s) = E \left[ \sum_{i=0}^{\infty}\gamma^{i}r_{i} | s_{0} = s, \pi \right].
\end{equation}
The optimal policy is the one which has the largest state-value in all states.

Instead of learning the values of states $V(s_{t})$, we could also choose to work
with the values of state-action pairs $Q(s_{t},a_{t})$. $V(s_{t})$ denotes how good
it is for the agent to be in state $s_{t}$, whereas $Q(s_{t},a_{t})$ denotes how
good it is for the agent to perform action $a_{t}$ in state $s_{t}$. The Q-value
of such a state-action pair $(s,a)$ is given by \cite{sutton1988learning}:
\begin{equation}   
  Q^{\pi}(s,a) = E \left[\sum_{i=0}^{\infty}\gamma^{i}r_{i}|s_{0} = 
  s,a_{0}=a,\pi\right].
\end{equation} 
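As a concrete check of the discounted-return definitions above, the expected
sum can be computed directly for a single short episode; the reward sequence
and $\gamma$ below are invented for illustration:

```python
# Discounted return of a hypothetical 3-step episode with gamma = 0.9.
gamma = 0.9
rewards = [1.0, 0.0, 2.0]  # illustrative r_0, r_1, r_2

ret = sum(gamma**i * r for i, r in enumerate(rewards))
# ret = 1.0 + 0.9 * 0.0 + 0.81 * 2.0, approximately 2.62
```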

\subsection{Reinforcement Learning Algorithms} When playing against an opponent,
the results of the agent's actions are not deterministic. After it has made its
move, its opponent moves.  In such a case, the Q-value of a certain state-action
pair is given by
\begin{equation}
  Q(s_{t},a_{t}) = E\left[r_{t+1}\right]
  + \gamma \sum_{s_{t+1}}T(s_{t},a_{t},s_{t+1})\max_{a_{t+1}}Q(s_{t+1},a_{t+1}).
\end{equation}
Here, $s_{t+1}$ is the state the agent encounters \emph{after}
its opponent has made its move. We cannot do a direct assignment in this case
because, for the same state and action, we may receive a different reward or move
to a different next state. What we \emph{can} do is keep a running average. This
is known as the \textit{Q-learning algorithm} \cite{watkins1992q}:
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
  \label{eq:qlearning}
  \hat{Q}(s_{t},a_{t}) &{}\leftarrow{}& \hat{Q}(s_{t},a_{t}) \nonumber \\
  &&{+}\: \alpha(r_{t+1} + \gamma \max_{a_{t+1}}\hat{Q}(s_{t+1},a_{t+1})
  - \hat{Q}(s_{t},a_{t}))
\end{eqnarray}
where $0 < \alpha \leq 1$ is the learning rate. We can think of
(\ref{eq:qlearning}) as reducing the difference between the current $Q$-value
and the backed-up estimate. Such algorithms are called temporal difference
algorithms \cite{sutton1988learning}. Once learning is finished, the agent can use
the values of state-action pairs to select the action with the best expected
outcome:
\begin{equation}
  \pi (s) = \arg\max_{a} \hat{Q}(s,a).
\end{equation}
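In tabular form, the backup in (\ref{eq:qlearning}) can be sketched as follows;
the values of $\alpha$ and $\gamma$ and the action set are illustrative choices,
and the table stands in for the function approximator introduced later:

```python
from collections import defaultdict

# Tabular sketch of the Q-learning backup; alpha and gamma are illustrative.
alpha, gamma = 0.1, 0.9
Q = defaultdict(float)  # Q[(s, a)], initialised to 0

def q_update(s, a, r, s_next, actions):
    """One backup of Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```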

If an agent only followed the strategy it estimates to be optimal, it might
never learn better strategies. To circumvent this, an exploration strategy
should be used. In $\varepsilon$-greedy exploration, there is a probability
of $\varepsilon$ that the agent executes a random action. $\varepsilon$ is
typically decreased gradually during training.
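A minimal sketch of $\varepsilon$-greedy selection; the annealing schedule
mentioned in the comment is an illustrative choice, not the one used in our
experiments:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """q_values: dict mapping each legal action to its estimated Q(s, a)."""
    if random.random() < epsilon:
        return random.choice(list(q_values))   # explore: random action
    return max(q_values, key=q_values.get)     # exploit: greedy action

# epsilon is typically annealed during training,
# e.g. epsilon *= 0.999 after every episode.
```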

Sarsa, the on-policy variant of Q-learning, takes this exploration strategy into
account. It differs from Q-learning in that it does not bootstrap from the
discounted Q-value of the best action in the subsequent state. Instead, it uses
the discounted Q-value of the action that is actually selected by the
exploration strategy:
\begin{equation}
  \hat{Q}(s_t,a_t) \leftarrow \hat{Q}(s_t,a_t) + \alpha(r_{t+1} + \gamma
  \hat{Q}(s_{t+1},a_{t+1}) - \hat{Q}(s_t,a_t)) \label{eq:sarsa-update}
\end{equation}
where $a_{t+1}$ is the action prescribed by the exploration strategy.
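The Sarsa backup in (\ref{eq:sarsa-update}) differs from the Q-learning sketch
only in the bootstrap term; again, $\alpha$ and $\gamma$ are illustrative:

```python
from collections import defaultdict

# Tabular sketch of the Sarsa backup; a_next is the action that the
# exploration policy actually chose in the next state.
alpha, gamma = 0.1, 0.9
Q = defaultdict(float)

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy backup of Q(s, a) toward r + gamma * Q(s', a')."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
```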

The idea of temporal differences can also be used to learn $V(s)$ values,
instead of $Q(s,a)$. TD learning (or TD(0) \cite{sutton1988learning}) uses the
following update rule to update a state value:
\begin{equation}
  \label{eq:tdupdate}
  V(s_{t}) \leftarrow V(s_{t}) + \alpha \left(r_{t+1}+\gamma V(s_{t+1}) -
  V(s_{t}) \right)
\end{equation}
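The TD(0) backup in (\ref{eq:tdupdate}) has the same one-step structure,
applied to state values instead of state-action values; $\alpha$ and $\gamma$
are again illustrative:

```python
from collections import defaultdict

# Tabular sketch of the TD(0) backup on state values.
alpha, gamma = 0.1, 0.9
V = defaultdict(float)

def td0_update(s, r, s_next):
    """Move V(s) toward the one-step target r + gamma * V(s')."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
```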

\subsection{Function Approximators}

In problems of modest complexity, it might be feasible to store the
values of all states or state-action pairs in lookup tables. However, Othello's
state space size is approximately $10^{28}$ \cite{vaneck2008application}. This
is problematic for at least two reasons. First, the state space is far too
large to store. Furthermore, after training, our agent might be asked to
evaluate states or state-action pairs it has not encountered during training,
and a lookup table would give it no way to do so, crippling the agent's
ability to generalize to unseen input patterns.

For these two reasons, we instead train multi-layer neural networks to estimate
the $V(s)$ and $Q(s,a)$ values. During the learning process, the neural network
learns a mapping from state descriptions to either $V(s)$ or $Q(s,a)$ values.
This is done by computing a `target' value according to (\ref{eq:qlearning}) or
(\ref{eq:sarsa-update}) in the case of Q-learning or Sarsa, or
(\ref{eq:tdupdate}) in the case of TD-learning. The learning rate $\alpha$ in
these functions is set to 1, since we already have the learning rate of the
neural network to control the effect training examples have on the estimates of
$V(s)$ or $Q(s,a)$. This means that (\ref{eq:qlearning}) and
(\ref{eq:sarsa-update}) respectively simplify to
\begin{equation}
  \label{eq:qsimple}   
  \hat{Q}(s_t,a_t) \leftarrow r_{t+1} + \gamma
  \max_{a_{t+1}}\hat{Q}(s_{t+1},a_{t+1}) 
\end{equation}
and
\begin{equation}
  \label{eq:ssimple}  
  \hat{Q}(s_t,a_t) \leftarrow r_{t+1} + \gamma\hat{Q}(s_{t+1},a_{t+1}).  
\end{equation} 
  
Similarly, (\ref{eq:tdupdate}) simplifies to
\begin{equation}
  \label{eq:tdsimple}   
  V(s_{t}) \leftarrow r_{t+1}+\gamma V(s_{t+1}).
\end{equation}

A Q-learning or Sarsa network consists of one or more units to represent a
state. The output consists of as many units as there are actions that can be
chosen.  A TD-learning network also has one or more units to represent a
state, but it has a single output approximating the value of the state at the
input. Figure \ref{fig:nns} illustrates the layout of both networks.
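The two topologies in Figure \ref{fig:nns} can be sketched as one-hidden-layer
networks. The 64-unit board encoding, the layer sizes, the 60-action output,
and the tanh activation below are assumptions for illustration, not the
settings used in our experiments:

```python
import math
import random

def make_mlp(n_in, n_hidden, n_out, seed=0):
    """Random weights for a one-hidden-layer network (illustrative sizes)."""
    rnd = random.Random(seed)
    w1 = [[rnd.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[rnd.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_out)]
    return w1, w2

def forward(net, x):
    """Forward pass: tanh hidden layer, linear outputs."""
    w1, w2 = net
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

td_net = make_mlp(n_in=64, n_hidden=32, n_out=1)   # single output: V(s)
q_net = make_mlp(n_in=64, n_hidden=32, n_out=60)   # one output per action: Q(s, a_i)
```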



\subsection{Application to Othello}
\label{subsec:appl}
In implementing all three learning
algorithms in our Othello framework, there is one important factor to account
for: the fact that we have to wait for our opponent's move before we can learn
either a $V(s)$ or a $Q(s,a)$ value.  Therefore, we learn the value of the
\textit{previous} state or state-action pair at the beginning of each turn --
that is, before a move is performed.
Every turn except the first, our Q-learning agent goes through the following
steps:
\begin{enumerate}
  \item Observe the current state $s_t$
  \item For all possible actions $a'_t$ in $s_t$ use NN to compute $\hat{Q}(s_t,a'_t)$
  \item Select an action $a_t$ using a policy $\pi$
  \item \label{item:q_upd} According to (\ref{eq:qsimple}) compute the target value of the previous
    state-action pair $\hat{Q}^{\mathrm{new}}(s_{t-1},a_{t-1})$
  \item Use NN to compute current estimate of the value of the previous
    state-action pair $\hat{Q}(s_{t-1},a_{t-1})$
  \item Adjust the NN by backpropagating the error
    $\hat{Q}^{\mathrm{new}}(s_{t-1},a_{t-1}) - \hat{Q}(s_{t-1},a_{t-1})$
  \item $s_{t-1} \leftarrow s_t$, $a_{t-1} \leftarrow a_t$
  \item Execute action $a_t$
\end{enumerate}
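The eight steps above can be sketched as a single per-turn function. The
helpers \texttt{net\_predict}, \texttt{net\_train}, \texttt{legal\_actions},
and \texttt{select} are hypothetical stand-ins for the neural network and the
policy, and the default $\gamma$ is illustrative:

```python
def q_agent_turn(state, prev, net_predict, net_train, legal_actions, select,
                 reward=0.0, gamma=0.9):
    """One turn of the Q-learning agent; `prev` is (s_{t-1}, a_{t-1}) or None."""
    # Steps 1-2: observe s_t and compute Q(s_t, a') for all legal actions
    q_now = {a: net_predict(state, a) for a in legal_actions(state)}
    # Step 3: select an action a_t using the policy
    action = select(q_now)
    if prev is not None:  # skipped on the first turn
        s_prev, a_prev = prev
        # Step 4: target for the previous pair, r + gamma * max_a' Q(s_t, a')
        target = reward + gamma * max(q_now.values())
        # Step 5: current estimate of the previous pair
        estimate = net_predict(s_prev, a_prev)
        # Step 6: backpropagate the difference
        net_train(s_prev, a_prev, target - estimate)
    # Steps 7-8: remember (s_t, a_t) and return the action to execute
    return action, (state, action)
```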

The Sarsa implementation is very similar, except that in step \ref{item:q_upd}
it uses (\ref{eq:ssimple}) to compute the target value of the previous
state-action pair instead of (\ref{eq:qsimple}).

In online TD-learning we learn the values of \textit{afterstates}, that is,
the state directly following the execution of an action, before the opponent
has made its move. During play, the agent can then evaluate all accessible
afterstates and choose the one with the highest $V(s^a)$. Each turn except the
first, our TD-agent performs the following steps:

\begin{enumerate}  
  \item Observe the current state $s_t$  
  \item For all afterstates $s'_t$ reachable from $s_t$ use NN to compute
    $V(s'_t)$  
  \item Select an action leading to afterstate $s^a_t$ using a policy $\pi$   
  \item According to (\ref{eq:tdsimple}) compute the target value of previous
    afterstate $V^{\mathrm{new}}(s^a_{t-1})$   
  \item Use NN to compute the current value of the previous afterstate
    $V(s^a_{t-1})$   
  \item Adjust the NN by backpropagating the error $V^{\mathrm{new}}(s^a_{t-1}) -
    V(s^a_{t-1})$   
  \item $s^a_{t-1} \leftarrow s^a_t$   
  \item Execute action resulting in afterstate $s^a_t$  
\end{enumerate}
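The TD-agent's turn can be sketched in the same style; \texttt{predict\_V},
\texttt{train\_V}, \texttt{afterstates}, and \texttt{select} are again
hypothetical stand-ins for the neural network and the policy:

```python
def td_agent_turn(state, prev_after, predict_V, train_V, afterstates, select,
                  reward=0.0, gamma=0.9):
    """One turn of the TD-agent; `prev_after` is s^a_{t-1} or None."""
    # Steps 1-2: evaluate every afterstate reachable from s_t
    values = {sa: predict_V(sa) for sa in afterstates(state)}
    # Step 3: select the action leading to afterstate s^a_t
    chosen = select(values)
    if prev_after is not None:  # skipped on the first turn
        # Step 4: target for the previous afterstate, r + gamma * V(s^a_t)
        target = reward + gamma * values[chosen]
        # Steps 5-6: backpropagate target minus current estimate
        train_V(prev_after, target - predict_V(prev_after))
    # Steps 7-8: remember s^a_t and execute the corresponding action
    return chosen
```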



\subsection{Learning from Self-Play and Against an Opponent}

We compare three strategies by which an agent can learn from playing training
games: playing against itself; learning from playing against a fixed opponent,
using both its own moves and the opponent's moves; and learning from playing
against a fixed opponent using only its own moves.

\subsubsection{Learning from Self-Play}

When learning from self-play, both agents share the same neural
network, which is used for estimating the $Q(s,a)$ or $V(s)$ values. In this
case, both agents use the algorithm described in subsection \ref{subsec:appl},
adjusting the weights of the same neural network. 

\subsubsection{Learning from Both Own and Opponent's Moves}

When an agent learns from both its own moves and its opponent's moves, it
still learns from its own moves according to the algorithms described in
subsection \ref{subsec:appl}. In addition, it also keeps track of its
opponent's moves and previously visited (after-)states. Once the opponent
has chosen an action $a_t$ in state $s_t$, the Q-learning and Sarsa agent will

\begin{enumerate}
  \item Compute the target value of the opponent's previous state-action pair
    $\hat{Q}^{\mathrm{new}}(s_{t-1},a_{t-1})$ according to (\ref{eq:qsimple})
    (Q-learning) or (\ref{eq:ssimple}) (Sarsa)
  \item Use the NN to compute the current estimate of the value of the
    opponent's previous state-action pair $\hat{Q}(s_{t-1},a_{t-1})$
  \item Adjust the NN by backpropagating the difference between the target and
    the estimate
\end{enumerate}
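These three steps mirror the agent's own update, only without action
selection. As a minimal sketch, with \texttt{net\_predict} and
\texttt{net\_train} again hypothetical stand-ins for the neural network, and
\texttt{q\_next} the bootstrap value for the opponent's current state (the
maximum over actions for Q-learning, the selected action's value for Sarsa):

```python
def learn_from_opponent(s_prev, a_prev, q_next, net_predict, net_train,
                        reward=0.0, gamma=0.9):
    """Update on the opponent's previous state-action pair."""
    target = reward + gamma * q_next               # step 1: target value
    estimate = net_predict(s_prev, a_prev)         # step 2: current estimate
    net_train(s_prev, a_prev, target - estimate)   # step 3: backpropagate
```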

Similarly, when the TD-agent learns from its opponent, it will do the following
once the opponent has reached an afterstate $s_t^a$:

\begin{enumerate}
  \item According to (\ref{eq:tdsimple}) compute the target value of
    the opponent's previous afterstate $V^{\mathrm{new}}(s^a_{t-1})$
  \item Use NN to compute the current value of the opponent's previous
    afterstate $V(s^a_{t-1})$
  \item Adjust the NN by backpropagating the difference between the target and
    the estimate
\end{enumerate}

\subsubsection{Learning from Its Own Moves}

When an agent plays against a fixed opponent and only learns from its own
moves, it simply follows the algorithm described in subsection
\ref{subsec:appl}, without keeping track of the moves its opponent made and
the (after-)states its opponent visited.
