\section{Introduction}\label{sec:introduction}

\IEEEPARstart{M}{any} real-life decision problems are sequential in nature.
People are often required to sacrifice an immediate pay-off for the benefit of a
greater reward later on. Reinforcement learning is the field of research
concerned with enabling artificial agents to make sequential decisions that
maximize the overall reward \cite{sutton1998reinforcement, wiering2012sota}.

Because of their sequential nature, games are a popular application of
reinforcement learning algorithms. The backgammon learning program TD-Gammon
\cite{tesauro1995temporal} showed the potential of reinforcement learning algorithms by
achieving an expert level of play from training games generated by self-play.
Other applications to games include chess \cite{thrun1995learning}, checkers
\cite{schaeffer2001temporal} and Go \cite{schraudolph1994temporal}. The game of Othello has also proven
to be a useful testbed to examine the dynamics of machine learning methods such
as evolutionary neural networks \cite{moriarty1995discovering}, $n$-tuple
systems \cite{lucas2008learning}, Q-learning \cite{vaneck2008application} and
structured neural networks \cite{wiering2012neural}. 

When using reinforcement learning to learn to play a game, an agent plays a
large number of training games. In this research we compare different ways of
learning from training games. Additionally, we look at how the level of play of
the training opponent affects the final performance. These issues are
investigated for three canonical reinforcement learning algorithms.  TD-learning
\cite{sutton1988learning} and Q-learning \cite{watkins1992q} have both been applied to
Othello before \cite{wiering2012neural, vaneck2008application}. Additionally,
we compare Sarsa \cite{rummery1995online}, the on-policy variant of Q-learning.
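For reference, the three algorithms differ mainly in their update targets. In standard tabular notation, with learning rate $\alpha$, discount factor $\gamma$, reward $r_{t+1}$, state-value function $V$ and action-value function $Q$ (the exact variants used in this work are described in section \ref{sec:methods}), the updates are:
\begin{align*}
  \text{TD(0):} \quad & V(s_t) \leftarrow V(s_t) + \alpha\bigl[r_{t+1} + \gamma V(s_{t+1}) - V(s_t)\bigr] \\
  \text{Q-learning:} \quad & Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\bigl[r_{t+1} + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t)\bigr] \\
  \text{Sarsa:} \quad & Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\bigl[r_{t+1} + \gamma Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\bigr]
\end{align*}
Sarsa's target uses the action $a_{t+1}$ actually selected in the next state, which makes it on-policy, whereas the $\max_a$ in the Q-learning target is independent of the behavior policy.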

In using reinforcement learning to play Othello, we can use at least three
different strategies. First, we can have a learning agent train against itself.
Its evaluation function then becomes more and more accurate during training, and
there is never a large difference in level of play between the training agent
and its opponent. A second strategy is to train while playing against a fixed
player, i.e., one whose playing style does not change during training. The agent
then learns from both its own moves and the moves its opponent makes. The skill
level of the non-learning player can be either high or low. A third strategy
consists of letting an agent train against a fixed opponent but having it learn
only from its own moves.

This paper examines the differences between these three strategies. It attempts
to answer the following research questions:

\begin{itemize}
  
  \item How does the performance of each algorithm after learning through
    self-play compare to its performance after playing against a fixed opponent,
    whether it learns from its opponent's moves as well as its own, or from its
    own moves only?

  \item When each algorithm uses its best training strategy, which algorithm
    performs best?

  \item When an agent learns against a fixed opponent, how does the skill level
    of that opponent affect the final performance?

\end{itemize}

Earlier research considered similar issues for backgammon
\cite{wiering2010self}. There, it was shown that learning from playing against
an expert is the best strategy.

When learning by paying attention to a fixed opponent's moves as well, an agent
doubles the amount of training data it receives. However, it then tries to learn
a policy while half of the experience it observes was generated by following a
different policy. This research will show whether the doubling of training data
can compensate for this inconsistency between policies.

In our experimental setup, the same three benchmark players are used in both the
training runs and the test runs. The results will therefore also show, for each
of the three algorithms, how this similarity between training and testing
conditions affects the test performance.

\textbf{Outline.} In section \ref{sec:othello} we briefly explain the game of
Othello and give an overview of earlier research on the game. In section
\ref{sec:methods}, we discuss the theory behind the algorithms used. Section
\ref{sec:experiments} describes the experiments that we performed and the
results obtained. Section \ref{sec:conclusion} concludes the paper.
