\section{Experiments and Results}\label{sec:experiments}

In training our learning agents, we use feedforward multi-layer perceptrons
with 50 hidden nodes as function approximators. A sigmoid function
\begin{equation} f(a) = \frac{1}{1+e^{-a}} \end{equation} is used on both the
hidden and the output layer. The weights of the network are randomly
initialized to values between -0.5 and 0.5. States are represented by an
input vector of 64 nodes, each corresponding to a square on the Othello board.
A square's value is 1 when the square is taken by the learning agent in
question, -1 when it is taken by its opponent and 0 when it is empty. The
reward associated with a terminal state is 1 for a win, 0 for a loss and 0.5
for a draw. The probability of exploration $\varepsilon$ is initialized to 0.1
and linearly decreases to 0 over the course of all training episodes. In all
experiments, the learning rate for the neural network was coarsely
optimized: for Q-learning and Sarsa a value of 0.01 was used and for
TD-learning a value of 0.001.
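The setup above can be sketched in a few lines of Python. This is a minimal illustration (using numpy, with our own names such as \texttt{encode\_board}, \texttt{ValueNetwork} and \texttt{epsilon}), not the authors' implementation: it shows only the board encoding, the sigmoid MLP with 50 hidden nodes and uniform weight initialization in $[-0.5, 0.5]$, and the linear decay of $\varepsilon$.

```python
import numpy as np

def sigmoid(a):
    # f(a) = 1 / (1 + e^(-a)), applied on both the hidden and output layer
    return 1.0 / (1.0 + np.exp(-a))

def encode_board(board, agent):
    # board: 8x8 array of disc owners (0 = empty). Returns the 64-node
    # input vector: +1 for the agent's own discs, -1 for the opponent's,
    # 0 for empty squares.
    x = np.zeros(64)
    for i, owner in enumerate(board.flatten()):
        if owner == agent:
            x[i] = 1.0
        elif owner != 0:
            x[i] = -1.0
    return x

def epsilon(episode, n_episodes, eps0=0.1):
    # Exploration probability: starts at 0.1 and decreases linearly to 0
    # over the course of all training episodes.
    return eps0 * (1.0 - episode / n_episodes)

class ValueNetwork:
    # Feedforward MLP: 64 inputs, 50 hidden nodes, 1 sigmoid output.
    def __init__(self, n_inputs=64, n_hidden=50, n_outputs=1, seed=0):
        rng = np.random.default_rng(seed)
        # Weights randomly initialized between -0.5 and 0.5
        self.W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_inputs))
        self.W2 = rng.uniform(-0.5, 0.5, (n_outputs, n_hidden))

    def forward(self, x):
        h = sigmoid(self.W1 @ x)
        return sigmoid(self.W2 @ h)
```

Note that the output of the sigmoid lies in $(0, 1)$, matching the reward scheme of 1 / 0.5 / 0 for win / draw / loss.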

\subsection{Fixed Players}

We create three fixed players: one random player \textsc{rand} and two
positional players, \textsc{heur} and \textsc{bench}. These players are used
both as fixed opponents and as benchmark players. The random player always
chooses a random move from the available actions. The positional players have
a table attributing values to all squares of the game board. They use the
following evaluation function: 
\begin{equation}
 V = \sum_{i=1}^{64} c_i w_i \label{eq:posval}
\end{equation}
where $c_i$ is 1 if square $i$ is occupied by the player's own disc, -1
when it is occupied by an opponent's disc and 0 when it is unoccupied, and $w_i$
is the positional value of square $i$.
The two positional players differ in the weights $w_i$ they attribute to
squares. Player \textsc{heur} uses weights that have appeared in several other
Othello studies \cite{yoshioka1999strategy, lucas2006temporal, wiering2012neural}.
Player \textsc{bench} uses an evaluation function created using co-evolution
\cite{lucas2006temporal} and has been used as a benchmark player before as well
\cite{wiering2012neural}. The weights used by \textsc{heur} and \textsc{bench}
are shown in Figure \ref{fig:posvalues}.

\begin{figure}[h]
  \centering
  \centerline{
    \subfloat[]{
      \resizebox{0.2\textwidth}{!}{
        \begin{squarecells}{8}
          100 & -25 & 10 & 5 & 5 & 10 & -25 & 100 \nl
          -25 & -25 & 2 & 2 & 2 & 2 & -25 & -25 \nl
          10 & 2 & 5 & 1 & 1 & 5 & 2 & 10 \nl
          5 & 2 & 1 & 2 & 2 & 1 & 2 & 5 \nl
          5 & 2 & 1 & 2 & 2 & 1 & 2 & 5 \nl
          10 & 2 & 5 & 1 & 1 & 5 & 2 & 10 \nl
          -25 & -25 & 2 & 2 & 2 & 2 & -25 & -25 \nl
          100 & -25 & 10 & 5 & 5 & 10 & -25 & 100 \nl
        \end{squarecells}
      }
    }
    \hfil
    \subfloat[]{
      \resizebox{0.2\textwidth}{!}{
        \begin{squarecells}{8}
          80 & -26 & 24 & -1 & -5 & 28 & -18 & 76 \nl
          -23 & -39 & -18 & -9 & -6 & -8 & -39 & -1 \nl
          46 & -16 & 4 & 1 & -3 & 6 & -20 & 52 \nl
          -13 & -5 & 2 & -1 & 4 & 3 & -12 & -2 \nl
          -5 & -6 & 1 & -2 & -3 & 0 & -9 & -5 \nl
          48 & -13 & 12 & 5 & 0 & 5 & -24 & 41 \nl
          -27 & -53 & -11 & -1 & -11 & -16 & -58 & -15 \nl
          87 & -25 & 27 & -1 & 5 & 36 & -3 & 100 \nl
        \end{squarecells}
      }
    }
  }
  \caption{Positional values used by player \textsc{heur} (a) and player
    \textsc{bench} (b, trained using co-evolution \cite{lucas2006temporal}).}
    \label{fig:posvalues}
\end{figure}

The positional players use (\ref{eq:posval}) to evaluate the state that
directly follows each of their own possible moves, i.e. before the opponent
has made a move in response. They choose the action that results in the
afterstate with the highest value according to (\ref{eq:posval}).
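This afterstate selection can be sketched as follows. The snippet is a minimal illustration in our own naming, not the authors' code; in particular, \texttt{apply\_move} is a hypothetical helper assumed to return the board after a move, including all flipped discs.

```python
import numpy as np

def positional_value(board, weights, player):
    # V = sum_i c_i * w_i: c_i is +1 for the player's own discs,
    # -1 for opponent discs and 0 for empty squares.
    c = np.where(board == player, 1.0, np.where(board == 0, 0.0, -1.0))
    return float(np.sum(c * weights))

def choose_move(board, legal_moves, apply_move, weights, player):
    # Evaluate the afterstate of each legal move (before the opponent
    # replies) and pick the move with the highest positional value.
    best_move, best_value = None, -np.inf
    for move in legal_moves:
        after = apply_move(board, move, player)
        value = positional_value(after, weights, player)
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

Players \textsc{heur} and \textsc{bench} then differ only in the \texttt{weights} table passed in.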



\subsection{Testing the Algorithms}

To gain a good understanding of the performances of both the learning and the
fixed players, we have them play multiple games, with each player playing both
black and white. All players except \textsc{rand} follow a deterministic strategy
during testing. To prevent one player from winning all games against a
deterministic opponent, we initialize the board as one of 236 possible
starting positions after four turns
have passed\footnote{In other literature, 244 possible board configurations
after four turns are mentioned. We found there to be 244 different sequences of
legal moves from the starting board to the fifth turn, but that they result in
236 unique positions.}. During both training and testing, we cycle through all
the possible positions, ensuring that all positions are used the same number of
times. Each position is used twice: the agent plays both as white and black.
Table \ref{tbl:performance_fixed} shows the average performance per game of the
fixed strategies when tested against each other in this way. We are interested
in whether the relative performances might be reflected in the learning player's
performance when training against the three fixed players.
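The test protocol described above can be sketched as a simple loop. This is an illustration in our own naming (the \texttt{play\_game} callback is a hypothetical helper assumed to return 1 for an agent win, 0.5 for a draw and 0 for a loss), not the authors' code.

```python
def evaluate(agent, opponent, start_positions, play_game):
    # Cycle through all starting positions (236 four-turn positions in
    # the paper's setup); the agent plays each position once as black
    # and once as white, for 472 test games in total.
    total = 0.0
    games = 0
    for position in start_positions:
        for agent_colour in ("black", "white"):
            total += play_game(position, agent, opponent, agent_colour)
            games += 1
    return total / games  # average performance per game
```

Averaging win/draw/loss scores over all position--colour pairs gives the per-game performance figures reported in the tables.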

\begin{table}[h]
  \caption{Performances of the fixed strategies when playing against each
    other. The performances of the games involving player \textsc{rand} are
    averages over 472.000 games (1.000 games from each of the 236 starting
    positions, played as both black and white).}
  \label{tbl:performance_fixed}
  \centering
  \small
  \begin{tabular}{|c|c|c|}
    \hline
    \textsc{heur} - \textsc{bench} & \textsc{bench} - \textsc{rand} &
    \textsc{rand} - \textsc{heur} \\
    \hline
    0.55 - 0.45 & 0.80 - 0.20 & 0.17 - 0.83 \\
    \hline
  \end{tabular}
\end{table}

\subsection{Comparison}

We use the fixed players both to train the algorithms and to test them.
In the experiments in which players \textsc{heur} and \textsc{bench} were used
as opponents in the test games, a total of 2.000.000 games were played during
training. After every 20.000 games of training, the algorithms played 472 test
games versus \textsc{bench} or \textsc{heur} respectively. Tables \ref{tbl:bench} and
\ref{tbl:heur} show the best performance of each algorithm when testing
against players \textsc{bench} and \textsc{heur} after having trained against
the various opponents through the different strategies: Itself, \textsc{heur},
\textsc{heur} when learning from its opponent's moves (\textsc{heur-lrn}),
\textsc{bench}, \textsc{bench} when learning from its opponent's moves
(\textsc{bench-lrn}), \textsc{rand} and \textsc{rand} when learning from its
opponent's moves (\textsc{rand-lrn}).
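The difference between the plain and the \textsc{-lrn} training conditions can be sketched as follows. This is a schematic illustration with our own method names on \texttt{learner} and \texttt{env} (they are assumed, not taken from the authors' code): in the \textsc{-lrn} variants the learner additionally performs an update for every move the opponent makes, doubling the number of perceived training moves per game.

```python
def training_episode(env, learner, opponent, learn_from_opponent):
    # One training game. The learner always updates on its own moves;
    # if learn_from_opponent is True, it also updates on the opponent's
    # moves (the *-lrn conditions in the tables).
    state = env.reset()
    while not env.game_over():
        if env.to_move() == learner.colour:
            action = learner.select_action(state)   # e.g. epsilon-greedy
            next_state, reward = env.step(action)
            learner.update(state, action, reward, next_state)
        else:
            action = opponent.select_action(state)
            next_state, reward = env.step(action)
            if learn_from_opponent:
                learner.update(state, action, reward, next_state)
        state = next_state
```

Training against "Itself" corresponds to using the learner on both sides of this loop.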

\begin{table}[h]
  \caption{Performances of the learning algorithms when tested versus player
    \textsc{bench}. Each column shows the performance in the test session where
    the learning player played best, averaged over a total of ten experiments.
    The standard error ($\hat{\sigma} / \sqrt{n}$) is shown as well.}
  \label{tbl:bench}
  \centering
  \footnotesize
  \scalebox{0.9}{
    \begin{tabular}{| c c c c |}
      \hline
      Train vs. & Q-learning & Sarsa & TD-Learning \\
      \hline
      \textsc{bench} & $\boldsymbol{ 0.871 \pm 0.009 }$ & $\boldsymbol{ 0.859 \pm 0.006 }$ & $ 0.700 \pm 0.007 $ \\
      \textsc{bench-lrn} & $ 0.780 \pm 0.008 $ & $ 0.816 \pm 0.006 $ & $ 0.628 \pm 0.009 $ \\
      Itself & $ 0.721 \pm 0.011 $ & $ 0.699 \pm 0.011 $ & $\boldsymbol{ 0.723 \pm 0.017 }$ \\
      \textsc{heur} & $ 0.582 \pm 0.008 $ & $ 0.574 \pm 0.018 $ & $ 0.522 \pm 0.008 $ \\
      \textsc{heur-lrn} & $ 0.563 \pm 0.008 $ & $ 0.427 \pm 0.015 $ & $ 0.355 \pm 0.010 $ \\
      \textsc{rand} & $ 0.330 \pm 0.010 $ & $ 0.307 \pm 0.009 $ & $ 0.356 \pm 0.011 $ \\
      \textsc{rand-lrn} & $ 0.418 \pm 0.018 $ & $ 0.300 \pm 0.012 $ & $ 0.332 \pm 0.008 $ \\
      \hline
    \end{tabular}
  }
\end{table}

\begin{table}[h]
  \caption{Performances of the learning algorithms when tested versus player
    \textsc{heur}. Each column shows the performance in the test session where
    the learning player played best, averaged over a total of ten experiments.
    The standard error ($\hat{\sigma} / \sqrt{n}$) is shown as well.}
  \label{tbl:heur}
  \centering
  \footnotesize
  \scalebox{0.9}{
    \begin{tabular}{| c c c c |}
      \hline
      Train vs. & Q-learning & Sarsa & TD-Learning \\
      \hline
      \textsc{heur} & $\boldsymbol{ 0.810 \pm 0.009 }$ & $ \boldsymbol{ 0.809 \pm
        0.005 }$ & $\boldsymbol{ 0.775 \pm 0.005 }$ \\
      \textsc{heur-lrn} &$ 0.651 \pm 0.006 $ & $ 0.728 \pm 0.013 $ & $ 0.666 \pm 0.007 $ \\
      Itself & $ 0.641 \pm 0.016 $ & $ 0.631 \pm 0.015 $ & $ 0.767 \pm 0.005 $ \\
      \textsc{bench-lrn} & $ 0.476 \pm 0.012 $ & $ 0.361 \pm 0.011 $ & $ 0.725 \pm 0.009 $ \\
      \textsc{bench} & $ 0.397 \pm 0.016 $ & $ 0.440 \pm 0.012 $ & $ 0.708 \pm 0.009 $ \\
      \textsc{rand-lrn} & $ 0.356 \pm 0.014 $ & $ 0.498 \pm 0.009 $ & $ 0.610 \pm 0.010 $ \\
      \textsc{rand} & $ 0.426 \pm 0.007 $ & $ 0.428 \pm 0.016 $ & $ 0.644 \pm 0.015 $ \\
      \hline
    \end{tabular}
  }
\end{table}


For each test session, the results were averaged over a total of ten
experiments. The tables show the averaged results in the session in which the
algorithms, on average, performed best. Figures \ref{fig:bench} and
\ref{fig:heur} show how the performance develops during training when tested
versus players \textsc{bench} and \textsc{heur}.

\begin{figure*}[ht!]
  \centerline{\subfloat[]{\includegraphics[width=0.3\textwidth]{img/QvsB}}
    \hfil
    \subfloat[]{\includegraphics[width=0.3\textwidth]{img/SvsB.pdf}}
    \hfil
    \subfloat[]{\includegraphics[width=0.3\textwidth]{img/TvsB.pdf}}
  }
  \caption{Average performance of the algorithms over ten experiments, with
  2.000.000 games of training against the various opponents, testing Q-learning,
  Sarsa and TD-learning versus player \textsc{bench} (a, b and c respectively).}
  \label{fig:bench}
\end{figure*}

\begin{figure*}[ht!]
  \centerline{\subfloat[]{\includegraphics[width=0.3\textwidth]{img/QvsH}}
    \hfil
    \subfloat[]{\includegraphics[width=0.3\textwidth]{img/SvsH.pdf}}
    \hfil
    \subfloat[]{\includegraphics[width=0.3\textwidth]{img/TvsH.pdf}}
  }
  \caption{Average performance of the algorithms over ten experiments, with
  2.000.000 games of training against the various opponents, testing Q-learning,
  Sarsa and TD-learning versus player \textsc{heur} (a, b and c respectively).}
  \label{fig:heur}
\end{figure*}

In the experiments in which the algorithms are tested versus player
\textsc{rand}, a total of 500.000 training games were played. Table
\ref{tbl:rand} shows the best performance when training against each of the
various opponents through the different strategies. Figure \ref{fig:rand} shows
how the performance develops during training when testing versus player
\textsc{rand}.

\begin{table}[h]
  \caption{Performances of the learning algorithms when tested versus player
    \textsc{rand}. Each column shows the performance in the test session where the
    learning player played best, averaged over a total of ten experiments. The
    standard error ($\hat{\sigma} / \sqrt{n}$) is shown as well.}
  \label{tbl:rand}
  \centering
  \footnotesize
  \scalebox{0.9}{
    \begin{tabular}{| c c c c |}
      \hline
      Train vs. & Q-learning & Sarsa & TD-Learning \\
      \hline
      Itself & $\boldsymbol{ 0.949 \pm 0.003 }$ & $\boldsymbol{ 0.946 \pm 0.002 }$
      & $ \boldsymbol{0.975 \pm 0.002 }$ \\

      \textsc{rand} & $ 0.893 \pm 0.006 $ & $ 0.906 \pm 0.005 $ & $ 0.928 \pm 0.003 $ \\
      \textsc{bench-lrn} & $ 0.893 \pm 0.004 $ & $ 0.896 \pm 0.003 $ & $ 0.924 \pm 0.007 $ \\
      \textsc{rand-lrn} & $ 0.892 \pm 0.007 $ & $ 0.885 \pm 0.004 $ & $ 0.917 \pm 0.003 $ \\
      \textsc{heur-lrn} & $ 0.914 \pm 0.004 $ & $ 0.837 \pm 0.007 $ & $ 0.814 \pm 0.008 $ \\
      \textsc{heur} & $ 0.850 \pm 0.007 $ & $ 0.827 \pm 0.007 $ & $ 0.912 \pm 0.004 $ \\
      \textsc{bench} & $ 0.792 \pm 0.017 $ & $ 0.783 \pm 0.018 $ & $ 0.879 \pm 0.007 $ \\
      \hline
    \end{tabular}
  }
\end{table}

\begin{figure*}[ht!]
  \centering
  \begin{tabular}{c c c}
    \includegraphics[width=0.3\textwidth]{img/QvsR.pdf} &
    \includegraphics[width=0.3\textwidth]{img/SvsR.pdf} &
    \includegraphics[width=0.3\textwidth]{img/TvsR.pdf} \\
    \small (a) & \small (b) & \small (c) \\
  \end{tabular}
  \caption{Average performance of the algorithms over ten experiments, with
    500.000 games of training against the various opponents, testing
    Q-learning, Sarsa and TD-learning versus player \textsc{rand} (a, b and
    c respectively).}
  \label{fig:rand}
\end{figure*}



These results allow for the following observations:

\begin{itemize}
  \item \textbf{Mixed policies} There is not a clear benefit to paying 
    attention to the opponent's moves when learning against a fixed player.
    Tables \ref{tbl:bench}, \ref{tbl:heur} and \ref{tbl:rand} suggest that
    doubling the number of perceived training moves does not improve performance
    as much as receiving input from a different policy decreases it.

  \item \textbf{Generalization} Q-learning and Sarsa perform best when having
    trained with the same player against which they are tested. When training
    against that player, the performance is best when the learning player does
    not pay attention to its opponent's moves. For both Q-learning and Sarsa,
    training against itself comes in third place in the experiments where
    the algorithms are tested versus \textsc{heur} and \textsc{bench}. For
    TD-learning, however, the performance when training against itself is similar
    to or even better than the performance after training against the same player
    used in testing. This seems to indicate that the TD-learner achieves a higher
    level of generalization, which is due to the fact that the TD-learner learns
    values of states while the other two algorithms learn values of actions in
    states.

  \item \textbf{Symmetry} The TD-learner achieves a low performance against
    \textsc{bench} when having trained against \textsc{heur-lrn}, \textsc{rand}
    and \textsc{rand-lrn}. However, the results of the TD-learner when tested
    against \textsc{heur} lack a similar result. We speculate that this can be
    attributed to the lack of symmetry in \textsc{bench}'s positional values.

\end{itemize}

Using our results, we can now return to the research questions posed in the
introduction:

\begin{itemize}
  \item \textbf{Question} How does the performance of each algorithm after
    learning through self-play compare to the performance after playing against a
    fixed opponent, whether paying attention to its opponent's moves or just its
    own?
    
    \textbf{Answer} Q-learning and Sarsa learn best when they train against the
    same opponent against which they are tested. TD-learning seems to learn best
    when training against itself. None of the algorithms benefits from paying
    attention to its opponent's moves when training against a fixed strategy.
  
  \item \textbf{Question} When each algorithm uses its best training strategy,
    which algorithm performs best?
    
    \textbf{Answer} When Q-learning and Sarsa train against \textsc{bench} and
    \textsc{heur} without learning from their opponent's moves while tested
    against the same players, they clearly outperform TD after it has trained
    against itself. However, if we compare the performance for each of the three
    algorithms after training against itself, TD significantly outperforms Q-learning
    and Sarsa when tested against \textsc{heur} and \textsc{rand}. When tested
    against \textsc{bench} after training against itself, the difference between
    TD-learning and Q-learning is insignificant.    
    
  \item \textbf{Question} How does the skill level of the fixed opponent
    affect the final performance when an agent learns against a fixed
    opponent?
    
    \textbf{Answer} From Table \ref{tbl:performance_fixed} we see that player
    \textsc{heur} performs better against \textsc{rand} than \textsc{bench} does.
    This is also reflected in the performances of the algorithms versus
    \textsc{rand} after having trained with \textsc{heur} and \textsc{bench}
    respectively. From the same table we see that
    \textsc{heur} has a better performance than \textsc{bench} when the two
    players play against each other. This difference in performance also
    seems to be partly reflected in our results: When Q-learning and Sarsa
    train against player \textsc{heur} they obtain a higher performance when tested against
    \textsc{bench} than vice versa. However, we do not find a similar result for
    TD-learning. This might be attributed to the fact that
    \textsc{bench}'s weight values are not symmetric, so that
    \textsc{bench} might pose a greater challenge to TD-learning than
    \textsc{heur}.

\end{itemize}
