%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% !TEX root = ../sutton_learning_1988.tex
\chapter{Examples of faster learning with TD methods}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In this section
we begin to address the claim that
TD methods make more efficient use of their experience
than do supervised-learning methods,
that they converge more rapidly and
make more accurate predictions along the way.
TD methods have this advantage
whenever the data sequences have a certain statistical structure
that is ubiquitous in prediction problems.
This structure naturally arises
whenever the data sequences are generated by a dynamical system,
that is,
by a system that has a state
which evolves and is partially revealed over time.
Almost any real system is a dynamical system,
including the weather,
national economies,
and chess games.
In this section,
we develop two illustrative examples:
a game-playing example to help develop intuitions,
and a random-walk example as a simple demonstration with experimental results.

\section{A game-playing example}\label{sec:3_1}

It seems counter-intuitive that
TD methods might learn more efficiently than supervised-learning methods.
In learning to predict an outcome,
how can one do better than
by knowing and using the actual outcome as a performance standard?
How can using a biased and potentially inaccurate subsequent prediction
possibly be a better use of the experience?
The following example is meant to
provide an intuitive understanding of how this is possible.  

Suppose there is a game position that you have learned is bad for you,
that has resulted most of the time in a loss and only rarely in a win for your side.
For example,
this position might be a backgammon race in which you are behind,
or a disadvantageous configuration of cards in blackjack.
\autoref{fig:1} represents a simple case of such a position as a single ``bad'' state that has led 90\% of the time to a loss and only 10\% of the time to a win.
Now suppose you play a game that reaches a novel position (one that you have never seen before),
that then progresses to reach the bad state,
and that finally ends nevertheless in a victory for you.
That is,
over several moves it follows the path shown by dashed lines in
\autoref{fig:1}.
As a result of this experience,
your opinion of the bad state would presumably improve,
but what of the \emph{novel} state?
What value would you associate with \emph{it} as a result of this experience?
\begin{figure}[htpb]
    \centering
    \includegraphics[width=0.5\textwidth]{figure1.png}
    \caption{
        A game-playing example showing the inefficiency of supervised-learning
        methods.
        Each circle represents a position or class of positions from a
        two-person board game.
        The ``bad'' position is known from long experience
        to lead 90\% of the time to a loss and only 10\% of the time to a win.
        The first game in which the ``novel'' position occurs evolves
        as shown by the dashed arrows.
        What evaluation should the novel position receive
        as a result of this experience?
        Whereas TD methods correctly conclude that
        it should be considered another bad state,
        supervised-learning methods associate it fully with winning,
        the only outcome that has followed it.
    }
    \label{fig:1}
\end{figure}

A supervised-learning method would form a pair from the novel state and the win that followed it,
and would conclude that the novel state is likely to lead to a win.
A TD method,
on the other hand,
would form a pair from the novel state and the bad state that immediately followed it,
and would conclude that the novel state is also a bad one,
that it is likely to lead to a loss.
Assuming we have properly classified the ``bad'' state,
the TD method's conclusion is the correct one;
the novel state led to a position that you know usually leads to defeat;
what happened after that is irrelevant.
Although both methods should converge to the same evaluations with infinite experience,
the TD method learns a better evaluation from this limited experience.

The TD method's prediction would also be better had the game been lost after reaching the bad state,
as is more likely.
In this case,
a supervised-learning method would tend to associate the novel position fully with losing,
whereas a TD method would tend to associate it with the bad position's 90\% chance of losing,
again a presumably more accurate assessment.
In either case,
by adjusting its evaluation of the novel state towards the bad state's evaluation,
rather than towards the actual outcome,
the TD method makes better use of the experience.
The bad state's evaluation is a better performance standard because it is uncorrupted by random factors that subsequently influence the final outcome.
It is by eliminating this source of noise that TD methods can outperform supervised-learning procedures.

In this example,
we have ignored the possibility that the bad state's previously learned evaluation is in error.
Such errors will inevitably exist and will affect the efficiency of TD methods in ways that cannot easily be evaluated in an example like this.
The example does not prove TD methods will be better on balance,
but it does demonstrate that a subsequent prediction can easily be a better performance standard than the actual outcome.

This game-playing example can also be used to show how TD methods can fail.
Suppose the bad state is usually followed by defeats except when it is preceded by the novel state,
in which case it always leads to a victory.
In this odd case,
TD methods could not perform better and might perform worse than supervised-learning methods.
Although there are several techniques for eliminating or minimizing this sort of problem,
it remains a greater difficulty for TD methods than it does for supervised-learning methods.
TD methods try to take advantage of the information provided by the temporal sequence of states,
whereas supervised-learning methods ignore it.
It is possible for this information to be misleading,
but more often it should be helpful.

Finally,
note that although this example involved learning an evaluation function,
nothing about it was specific to evaluation functions.
The methods can equally well be used to predict outcomes unrelated to the player's goals,
such as the number of pieces left at the end of the game.
If TD methods are more efficient than supervised-learning methods in learning evaluation functions,
then they should also be more efficient in general prediction-learning problems.

\section{A random-walk example}\label{sec:3_2}

The game-playing example is too complex to analyze in great detail.
Previous experiments with TD methods have also used complex domains
(e.g., \cite{samuel_studies_1959};
\cite{sutton_temporal_1984};
\cite{barto_neuronlike_1983};
\cite{anderson_learning_1986, anderson_strategy_1987}).
Which aspects of these domains can be simplified or eliminated,
and which aspects are essential in order for TD methods to be effective?
In this paper,
we propose that the only required characteristic is that the system predicted be a dynamical one,
that it have a state that can be observed evolving over time.
If this is true,
then TD methods should learn more efficiently than supervised-learning methods even on very simple prediction problems,
and this is what we illustrate in this subsection.
Our example is one of the simplest of dynamical systems,
that which generates \emph{bounded random walks}.
\begin{figure}[htpb]
    \centering
    \includegraphics[width=0.8\textwidth]{figure2}
    \caption{
        A generator of bounded random walks.
        This Markov process generated the data sequences in the example.
        All walks begin in state D. From states B, C, D, E, and F,
        the walk has a 50-50 chance of moving either to the right
        or to the left.
        If either edge state, A or G, is entered, then the walk terminates.
    }
    \label{fig:2}
\end{figure}

A bounded random walk is a state sequence generated by
taking random steps to the right or to the left until a boundary is reached.
\autoref{fig:2} shows a system that generates such state sequences.
Every walk begins in the center state \(D\).
At each step the walk moves to a neighboring state,
either to the right or to the left with equal probability.
If either edge state (\(A\) or \(G\)) is entered,
the walk terminates.
A typical walk might be \(DCDEFG\).
Suppose we wish to estimate the probabilities of
a walk ending in the rightmost state, \(G\),
given that it is in each of the other states.
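As a concrete illustration, the generator of \autoref{fig:2} can be sketched in a few lines of Python; the function name and the encoding of states \(B\) through \(F\) as indices 0 through 4 are our own choices, not part of the original experiments.

```python
import random

# States B..F are indexed 0..4; every walk starts in D (index 2).
# Stepping left of B reaches A, stepping right of F reaches G;
# either absorbing state terminates the walk.
def generate_walk(start=2, rng=random):
    """Return (state_indices, z): the non-terminal states visited, and
    the outcome z = 1 for a rightside (G) termination, else z = 0."""
    states = [start]
    i = start
    while True:
        i += 1 if rng.random() < 0.5 else -1
        if i < 0:            # terminated at A
            return states, 0
        if i > 4:            # terminated at G
            return states, 1
        states.append(i)
```

Each call yields one observation sequence together with its outcome \(z\); the walk \(DCDEFG\), for instance, corresponds to the index sequence 2, 1, 2, 3, 4 with \(z = 1\).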

We applied linear supervised-learning and TD methods to
this problem in a straightforward way.
A walk's outcome was defined to be \(z = 0\) for a walk
ending on the left at \(A\) and
\(z = 1\) for a walk ending on the right at \(G\).
The learning methods estimated the expected value of \(z\);
for this choice of \(z\),
its expected value is equal to the probability of a rightside termination.
For each non-terminal state \(i\),
there was a corresponding observation vector \(\x_i\);
if the walk was in state \(i\) at time \(t\), then \(\x_t = \x_i\).
Thus,
if the walk \(DCDEFG\) occurred,
then the learning procedure would be given
the sequence \(\x_D,\x_C,\x_D,\x_E,\x_F\), 1.
The vectors \(\{\x_i\}\) were the unit basis vectors of length 5,
that is,
four of their components were 0 and the fifth was 1
(e.g., \(\x_D = (0,0,1,0,0)^T\)),
with the one appearing at a different component for each state.
Thus,
if the state the walk was in at time \(t\) has its 1
at the \(i\)\oth component of its observation vector,
then the prediction \(P_t = w^T\x_t\) was simply the value of the \(i\)\oth component of \(w\).
We use this particularly simple case to make this example as clear as possible.
The theorems we prove later for a more general class of dynamical systems require only that the set of observation vectors \(\{\x_i\}\) be linearly independent.
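With this representation, forming the observation vectors and the prediction is trivial; a minimal sketch (the helper name is ours):

```python
import numpy as np

def unit_vector(i, n=5):
    """Unit basis observation vector x_i for non-terminal state i."""
    x = np.zeros(n)
    x[i] = 1.0
    return x

w = np.full(5, 0.5)      # weight vector, one component per state B..F
x_D = unit_vector(2)     # D is the third of the five non-terminal states
P = w @ x_D              # prediction P_t = w^T x_t
# with unit basis vectors, the inner product is just a table lookup:
# P equals the component of w corresponding to the current state
```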

Two computational experiments were performed using observation-outcome sequences generated as described above.
In order to obtain statistically reliable results,
100 training sets,
each consisting of 10 sequences,
were constructed for use by all learning procedures.
For all procedures,
weight increments were computed according to TD(\(\lambda\)),
as given by \eqref{eq:compute_Delta_w_t_discounted_incremently}.
Seven different values were used for \(\lambda\).
These were \(\lambda\) = 1,
resulting in the Widrow-Hoff supervised-learning procedure,
\(\lambda\) = 0,
resulting in linear TD(0),
and also \(\lambda\) = 0.1, 0.3, 0.5, 0.7, and 0.9,
resulting in a range of intermediate TD procedures.

In the first experiment,
the weight vector was not updated after each sequence as indicated by
\eqref{eq:update_w}.
Instead,
the \(\Delta w\)'s were accumulated over sequences and only used to update the weight vector after the complete presentation of a training set.
Each training set was presented repeatedly to each learning procedure until the procedure no longer produced any significant changes in the weight vector.
For small \(\alpha\),
the weight vector always converged in this way,
and always to the same final value,
independent of its initial value.
We call this the \emph{repeated presentations} training paradigm.
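A sketch of this training paradigm, assuming the TD(\(\lambda\)) increments of \eqref{eq:compute_Delta_w_t_discounted_incremently} with unit basis observation vectors, so that \(\nabla_w P_k = \x_k\) and the inner sum becomes an eligibility trace; the function names, step size, and convergence tolerance are illustrative choices, not those of the original experiments.

```python
import numpy as np

def td_lambda_increments(seq, z, w, lam, alpha):
    """Accumulated weight increments for one observation sequence.
    seq: observation vectors x_1..x_m; z: the walk's outcome."""
    dw = np.zeros_like(w)
    e = np.zeros_like(w)          # eligibility trace: sum of lam^(t-k) x_k
    for t, x in enumerate(seq):
        e = lam * e + x
        P_t = w @ x
        # the prediction following the last observation is the outcome z
        P_next = z if t == len(seq) - 1 else w @ seq[t + 1]
        dw += alpha * (P_next - P_t) * e
    return dw

def train_repeated(training_set, lam, alpha=0.01, tol=1e-6, max_iter=100000):
    """Repeated-presentations paradigm: accumulate the dw's over the
    whole training set, update w once, and repeat until w stops moving."""
    w = np.full(5, 0.5)
    for _ in range(max_iter):
        dw = sum(td_lambda_increments(seq, z, w, lam, alpha)
                 for seq, z in training_set)
        w += dw
        if np.abs(dw).max() < tol:
            break
    return w
```

Setting \(\lambda = 1\) makes the trace an undiscounted sum of past observation vectors, recovering the Widrow-Hoff procedure; \(\lambda = 0\) leaves only the current vector, giving linear TD(0).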

The true probabilities of rightside termination---the ideal predictions---for each of the nonterminal states can be computed as described in section 4.1.
These are \(1/6\), \(1/3\), \(1/2\), \(2/3\), and \(5/6\) for
states \(B\), \(C\), \(D\), \(E\) and \(F\),
respectively.
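Setting aside the construction of section 4.1, these values can also be verified directly by solving the absorption equations of the Markov process in \autoref{fig:2}; a sketch:

```python
import numpy as np

# The ideal prediction p_i satisfies p_i = 0.5*p_{i-1} + 0.5*p_{i+1}
# for the non-terminal states B..F (indices 0..4), with the absorbing
# boundaries contributing p(A) = 0 on the left and p(G) = 1 on the right.
n = 5
M = np.eye(n)
b = np.zeros(n)
for i in range(n):
    if i > 0:
        M[i, i - 1] -= 0.5
    else:
        b[i] += 0.5 * 0.0   # left neighbor is the absorbing state A (z = 0)
    if i < n - 1:
        M[i, i + 1] -= 0.5
    else:
        b[i] += 0.5 * 1.0   # right neighbor is the absorbing state G (z = 1)
p = np.linalg.solve(M, b)   # -> [1/6, 1/3, 1/2, 2/3, 5/6]
```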
As a measure of the performance of a learning procedure on a training set,
we used the root mean squared (RMS) error between the procedure's asymptotic predictions using that training set and the ideal predictions.
Averaging over training sets,
we found that performance improved rapidly as \(\lambda\) was reduced below 1 (the supervised-learning method) and was best at \(\lambda\) = 0 (the extreme TD method),
as shown in \autoref{fig:3}.
\begin{figure}[htpb]
    \centering
    \includegraphics[width=0.8\textwidth]{figure3}
    \caption{
        Average error on the random walk problem under repeated presentations.
        All data are from TD(\(\lambda\))
        with different values of \(\lambda\).
        The dependent measure used is the RMS error
        between the ideal predictions and those found
        by the learning procedure after being repeatedly presented with
        the training set until convergence of the weight vector.
        This measure was averaged over 100 training sets
        to produce the data shown.
        The \(\lambda\) = 1 data point is
        the performance level attained by the Widrow-Hoff procedure.
        For each data point,
the standard error is approximately \(\sigma = 0.01\),
        so the differences between the Widrow-Hoff procedure
        and the other procedures are highly significant.
    }
    \label{fig:3}
\end{figure}

This result contradicts conventional wisdom.
It is well known that,
under repeated presentations,
the Widrow-Hoff procedure minimizes the RMS error between its predictions and the actual outcomes in the training set (Widrow and Stearns, 1985).
How can it be that this optimal method performed worse than all the TD methods for \(\lambda < 1\)?
The answer is that the Widrow-Hoff procedure only minimizes error on the training set;
it does not necessarily minimize error for future experience.
In the following section,
we prove that in fact it is linear TD(0) that converges to what can be considered the optimal estimates for matching future experience---those consistent with the maximum-likelihood estimate of the underlying Markov process.

The second experiment concerns the question of learning rate when the training set is presented just once rather than repeatedly until convergence.
Although it is difficult to prove a theorem concerning learning rate,
it is easy to perform the relevant computational experiment.
We presented the same data to the learning procedures,
again for several values of \(\lambda\),
with the following procedural changes.
First,
each training set was presented once to each procedure.
Second,
weight updates were performed after each sequence,
as in \eqref{eq:update_w},
rather than after each complete training set.
Third,
each learning procedure was applied with a range of values for the learning-rate parameter \(\alpha\).
Fourth,
so that there was no bias either toward rightside or leftside terminations,
all components of the weight vector were initially set to 0.5.
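Under these procedural changes, the experiment reduces to a single pass with per-sequence updates; a sketch under the same assumptions as before (unit basis vectors, with names and an error measure of our own choosing):

```python
import numpy as np

def train_single_pass(training_set, lam, alpha):
    """Single-presentation paradigm: w starts at 0.5 in every component
    (no left/right bias) and is updated after each sequence; the
    training set is seen only once."""
    w = np.full(5, 0.5)
    for seq, z in training_set:
        dw = np.zeros_like(w)
        e = np.zeros_like(w)                  # eligibility trace
        for t, x in enumerate(seq):
            e = lam * e + x
            P_t = w @ x
            P_next = z if t == len(seq) - 1 else w @ seq[t + 1]
            dw += alpha * (P_next - P_t) * e
        w += dw                               # update after each sequence
    return w

def rms_error(w, ideal=(1/6, 1/3, 1/2, 2/3, 5/6)):
    """RMS error between the predictions w and the ideal predictions."""
    return float(np.sqrt(np.mean((w - np.asarray(ideal)) ** 2)))
```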

The results for several representative values of \(\lambda\) are shown in \autoref{fig:4}.
Not surprisingly,
the value of \(\alpha\) had a significant effect on performance,
with best results obtained with intermediate values.
For all values,
however,
the Widrow-Hoff (TD(1)) procedure produced the worst estimates.
All of the TD methods with \(\lambda < 1\) performed better both in absolute terms and over a wider range of \(\alpha\) values than did the supervised-learning method.
\begin{figure}[htpb]
    \centering
    \includegraphics[width=0.8\textwidth]{figure4}
    \caption{
        Average error on random walk problem after experiencing 10 sequences.
        All data are from TD(\(\lambda\)) with
different values of \(\alpha\) and \(\lambda\).
        The dependent measure is the RMS error
        between the ideal predictions and
        those found by the learning procedure
        after a single presentation of a training set.
        This measure was averaged over 100 training sets.
        The \(\lambda\) = 1 data points represent
        performances of the Widrow-Hoff supervised-learning procedure.
    }
    \label{fig:4}
\end{figure}

\autoref{fig:5} plots the best error level achieved for each \(\lambda\) value,
that is,
using the \(\alpha\) value that was best for that \(\lambda\) value.
As in the repeated-presentation experiment,
all \(\lambda\) values less than 1 were superior to the \(\lambda\) = 1 case.
However,
in this experiment the best \(\lambda\) value was not 0,
but somewhere near 0.3.
\begin{figure}[htpb]
    \centering
    \includegraphics[width=0.8\textwidth]{figure5}
    \caption{
Average error at best \(\alpha\) value on random walk problem.
        Each data point represents the average
        over 100 training sets of the error in
        the estimates found by TD(\(\lambda\)),
for particular \(\lambda\) and \(\alpha\) values,
        after a single presentation of a training set.
        The \(\lambda\) value is given by the horizontal coordinate.
The \(\alpha\) value was selected from those shown in \autoref{fig:4}
        to yield the lowest error for that \(\lambda\) value.
    }
    \label{fig:5}
\end{figure}

One reason \(\lambda\) = 0 is not optimal for this problem is that TD(0) is relatively slow at propagating prediction levels back along a sequence.
For example,
suppose states \(D\), \(E\), and \(F\) all start with the prediction value 0.5,
and the sequence \(\x_D,\x_E,\x_F\), 1 is experienced.
TD(0) will change only \(F\)'s prediction,
whereas the other procedures will also change \(E\)'s and \(D\)'s to decreasing extents.
If the sequence is repeatedly presented,
this is no handicap,
as the change works back an additional step with each presentation,
but for a single presentation it means slower learning.
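This propagation behavior is easy to verify; in the following sketch (state encoding our own), a single TD(0) pass over the sequence moves only \(F\)'s prediction:

```python
import numpy as np

# Sequence x_D, x_E, x_F followed by outcome 1; every prediction starts
# at 0.5.  State indices 0..4 stand for B..F, so D, E, F are 2, 3, 4.
alpha = 0.1
w = np.full(5, 0.5)
seq = [np.eye(5)[i] for i in (2, 3, 4)]
for t, x in enumerate(seq):
    P_t = w @ x
    P_next = 1.0 if t == len(seq) - 1 else w @ seq[t + 1]
    # the TD(0) error P_{t+1} - P_t is zero except at the final step,
    # because all predictions are still at their initial value 0.5
    w = w + alpha * (P_next - P_t) * x
# only F's prediction, w[4], has moved toward 1 (0.5 -> 0.55)
```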

This handicap could be avoided by working backwards through the sequences.
For example,
for the sequence \(\x_D,\x_E,\x_F\), 1,
first \(F\)'s prediction could be updated in light of the 1,
then \(E\)'s prediction could be updated toward \(F\)'s new level,
and so on.
In this way the effect of the 1 could be propagated back to the beginning of the sequence with only a single presentation.
The drawback to this technique is that it loses the implementation advantages of TD methods.
Since it changes the last prediction in a sequence first,
it has no incremental implementation.
However,
when this is not an issue,
such as when learning is done offline from an existing database,
working backward in this way should produce the best predictions.
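A sketch of this backward-working variant; the function is our own illustration, not a procedure from the experiments.

```python
import numpy as np

def backward_update(seq_states, z, w, alpha):
    """Sweep backwards through a walk: update the last state's
    prediction toward the outcome z, then each earlier state's
    prediction toward its successor's NEW prediction.
    seq_states: indices of the non-terminal states visited, in order."""
    w = w.copy()
    target = z
    for i in reversed(seq_states):
        w[i] += alpha * (target - w[i])
        target = w[i]        # the preceding state now aims at this level
    return w
```

For the sequence \(D\), \(E\), \(F\) with outcome 1 and all predictions initially 0.5, a single call propagates the outcome's effect all the way back to \(D\), whereas a single forward TD(0) pass would move only \(F\)'s prediction.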
