%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% !TEX root = ../sutton_learning_1988.tex
\chapter{Theory of temporal-difference methods}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In this section,
we provide a theoretical foundation for temporal-difference methods.
Such a foundation is particularly needed for these methods
because most of their learning is done
on the basis of previously learned quantities.
``Bootstrapping'' in this way may be what makes TD methods efficient,
but it can also make them difficult to analyze
and to have confidence in.
In fact,
no TD method has previously been proved stable
or convergent to the correct predictions.
The theory developed here concerns the linear TD(0) procedure
and a class of tasks typified by the random walk example discussed
in the preceding section.
Two major results are presented:
1) an asymptotic convergence theorem for linear TD(0)
when presented with new data sequences;
and 2) a theorem that linear TD(0) converges
under repeated presentations to the optimal (maximum likelihood) estimates.
Finally,
we discuss how TD methods can be viewed as gradient-descent procedures.

\section{Convergence of linear TD(0)}\label{sec:4_1}

The theory presented here is for data sequences generated
by absorbing Markov processes
such as the random walk process discussed in the preceding section.
Such processes,
in which each next state depends only on the current state,
are among the formally simplest dynamical systems.
They are defined by a set of terminal states T,
a set of nonterminal states N,
and a set of transition probabilities \(p_{ij}\) \((i\in N, j\in N\cup T)\),
where each \(p_{ij}\) is
the probability of a transition from state \(i\) to state \(j\),
given that the process is in state \(i\).
\uwave{%
The ``absorbing'' property means
that indefinite cycles among the nonterminal states are not possible;
all sequences (except for a set of zero probability) eventually terminate.%
}

Given an initial state \(q_1\),
an absorbing Markov process provides a way of generating a state sequence
\(q_1,q_2,\ldots,q_{m+1}\),
where \(q_{m+1} \in T\).
We will assume the initial state is chosen probabilistically from among the nonterminal states,
each with probability \(\mu_i\).
As in the random walk example,
we do not give the learning algorithms direct knowledge of the state sequence,
but only of a related observation-outcome sequence \(x_1,x_2,\ldots,x_m,z\).
Each numerical observation vector \(x_t\) is chosen dependent only on the corresponding nonterminal state \(q_t\),
and the scalar outcome \(z\) is chosen
dependent only on the terminal state \(q_{m+1}\).
In what follows,
we assume that there is a specific observation vector \(\x_i\) corresponding
to each nonterminal state \(i\) such that if \(q_t = i\),
then \(x_t = \x_i\).
For each terminal state \(j\),
we assume outcomes \(z\) are selected from an arbitrary probability
distribution with expected value \(\overline{z}_j\).

The first step toward a formal understanding of any learning procedure is to prove that it converges asymptotically to the correct behavior with experience.
The desired behavior in this case is to map each nonterminal state's
observation vector \(\x_i\) to the true expected value of the outcome
\(z\) given that the state sequence starts in \(i\).
That is,
we want the predictions \(P(\x_i,w)\) to equal \(E\{z|i\}\), \(\forall i \in N\).
Let us call these the ideal predictions.
Given complete knowledge of the Markov process,
they can be computed as follows:
\begin{equation*}
    E\{z|i\} =
    \sum_{j\in T}p_{ij}\overline{z}_j +
    \sum_{j\in N}p_{ij}\sum_{k\in T}p_{jk}\overline{z}_k +
    \sum_{j\in N}p_{ij}\sum_{k\in N}p_{jk}\sum_{l\in T}p_{kl}\overline{z}_l +
    \ldots
\end{equation*}

For any matrix \(M\),
let \([M]_{ij}\) denote its \(ij\)\oth component,
and,
for any vector \(v\),
let \([v]_i\) denote its \(i\)\oth component.
Let \(Q\) denote
the matrix with entries \([Q]_{ij} = p_{ij}\) for \(i,j \in N\),
and let \(h\) denote the vector
with components \([h]_i = \sum_{j\in T} p_{ij}\overline{z}_j\)
for \(i \in N\).
Then we can write the above equation as
\begin{equation}\label{eq:5}
    E \{z|i\} = \left[ \sum_{k=0}^{\infty} Q^kh\right]_i
              = \left[ (I-Q)^{-1}h\right]_i
\end{equation}
The second equality and the existence of the limit and the inverse are assured by Theorem A.1.
This theorem can be applied here
because the elements of \(Q^k\) are
the probabilities of going from one nonterminal state to another in \(k\) steps;
for an absorbing Markov process,
these probabilities must all converge to 0 as \(k\to \infty\).
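Equation \eqref{eq:5} can be checked numerically. The following sketch (hypothetical, not part of the original analysis) builds \(Q\) and \(h\) for the bounded random walk of the preceding section, whose ideal predictions are known to be \(1/6, 2/6, \ldots, 5/6\), and evaluates the series \(\sum_k Q^k h\) by fixed-point iteration on \(v = h + Qv\) instead of inverting \(I-Q\):

```python
# Ideal predictions E{z|i} = [(I - Q)^{-1} h]_i for the bounded random walk
# of the preceding section: five nonterminal states, absorption at either
# end, outcome z = 1 on the right and z = 0 on the left.

N = 5                      # nonterminal states, indexed 0..4
# Q[i][j]: transition probability between nonterminal states i and j
Q = [[0.0] * N for _ in range(N)]
for i in range(N):
    if i > 0:
        Q[i][i - 1] = 0.5
    if i < N - 1:
        Q[i][i + 1] = 0.5
# h[i] = sum over terminal states j of p_ij * zbar_j
h = [0.0] * N
h[0] = 0.5 * 0.0           # leftmost state may terminate with outcome 0
h[N - 1] = 0.5 * 1.0       # rightmost state may terminate with outcome 1

# Since Q^k -> 0, sum_k Q^k h is the fixed point of v = h + Q v;
# iterate to convergence rather than inverting I - Q.
v = [0.0] * N
for _ in range(10_000):
    v = [h[i] + sum(Q[i][j] * v[j] for j in range(N)) for i in range(N)]

print([round(x, 4) for x in v])   # -> [0.1667, 0.3333, 0.5, 0.6667, 0.8333]
```

The printed values agree with the known ideal predictions \(i/6\) for the \(i\)\oth nonterminal state.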

If the set of observation vectors \(\{\x_i | i \in N \}\) is linearly independent,
and if \(\alpha\) is chosen small enough,
then it is known that the predictions of the Widrow-Hoff rule converge in expected value to the ideal predictions
(e.g., see \cite{widrow_adaptive_1985}).
We now prove the same result for linear TD(0):
\begin{thm}\label{thm:2}
    For any absorbing Markov chain,
    for any distribution of starting probabilities \(\mu_i\),
    for any outcome distributions
    with finite expected values \(\overline{z}_j\),
    and for any linearly independent set of observation vectors
    \(\{\x_i | i \in N \}\),
    there exists an \(\epsilon>0\) such that,
    for all positive \(\alpha<\epsilon\) and for any initial weight vector,
    the predictions of linear TD(0)
    (with weight updates after each sequence)
    converge in expected value to the ideal predictions \eqref{eq:5}.
    That is,
    if \(w_n\) denotes the weight vector
    after \(n\) sequences have been experienced,
    then
    \( \lim_{n \to \infty} E\{\x_i^T w_n\} = E\{z|i\}
    = [(I-Q)^{-1}h]_i\),
    \(\forall i\in N\).
\end{thm}
\begin{proof}\label{pf:2}
Linear TD(0) updates \(w_n\) after each sequence as follows,
where \(m\) denotes the number of observation vectors in the sequence:
\begin{align*}
      w_{n+1}=
      & w_n + \sum_{t=1}^{m}\alpha(P_{t+1}-P_t)\grad_wP_t
      \qquad \mbox{where}\; P_{m+1}\define z\\
    = & w_n + \sum_{t=1}^{m-1}\alpha(P_{t+1}-P_t)\grad_wP_t +
    \alpha(z-P_m)\grad_wP_m \\
    = & w_n + \sum_{t=1}^{m-1}\alpha(w_n^T\x_{q_{t+1}}-w_n^T\x_{q_t})\x_{q_t} +
    \alpha(z-w_n^T\x_{q_m})\x_{q_m},
\end{align*}
where \(\x_{q_t}\) is the observation vector corresponding to the state \(q_t\)
entered at time \(t\) within the sequence.
This equation groups the weight increments
according to their time of occurrence within the sequence.
Each increment corresponds to a particular state transition,
and so we can alternatively group them
according to the source and destination states of the transitions:
\begin{equation*}
    w_{n+1} = w_n +
    \sum_{i\in N}\sum_{j\in N}\eta_{ij}\alpha (w_n^T\x_j-w_n^T\x_i)\x_i +
    \sum_{i\in N}\sum_{j\in T}\eta_{ij}\alpha (z-w_n^T\x_i)\x_i,
\end{equation*}
where \(\eta_{ij}\) denotes the number of times
the transition \(i \to j\) occurs in the sequence.
(For \(j \in T\), all but one of the \(\eta_{ij}\) is 0.)

Since the random processes generating state transitions and outcomes
are independent of each other,
we can take the expected value of each term above,
yielding
\begin{multline}\label{eq:6}
    E\{w_{n+1}|w_n\} = w_n + 
    \sum_{i\in N}\sum_{j\in N}d_ip_{ij}\alpha (w_n^T\x_j-w_n^T\x_i)\x_i + \\
    \sum_{i\in N}\sum_{j\in T}d_ip_{ij}\alpha (\overline{z}_j -w_n^T\x_i)\x_i,
\end{multline}
where \(d_i\) is the expected number of times the Markov chain is in state \(i\) in one sequence,
so that \(d_ip_{ij}\) is the expected value of \(\eta_{ij}\).
For an absorbing Markov chain \citep[e.g.,][p. 46]{kemeny_finite_1976}:
\begin{equation}\label{eq:7}
    d^T = \mu^T(I-Q)^{-1},
\end{equation}
where \([d]_i = d_i\) and \([\mu]_i = \mu_i\), \(i \in  N\).
Each \(d_i\) is strictly positive,
because any state for which \(d_i = 0\) has no probability of
being visited and can be discarded.

Let \(\overline{w}_n\) denote the expected value of \(w_n\).
Then,
since the dependence of \(E\{w_{n+1} | w_n\}\) on \(w_n\) is linear,
we can write
\begin{equation*}
    \overline{w}_{n+1} = \overline{w}_n + 
    \sum_{i\in N}\sum_{j\in N}d_ip_{ij}
        \alpha (\overline{w}_n^T\x_j-\overline{w}_n^T\x_i)\x_i +
    \sum_{i\in N}\sum_{j\in T}d_ip_{ij}
        \alpha (\overline{z}_j -\overline{w}_n^T\x_i)\x_i,
\end{equation*}
an iterative update formula in \(\overline{w}_n\) that depends only on initial conditions.
Now we rearrange terms and convert to matrix and vector notation,
letting \(D\) denote the diagonal matrix with diagonal entries
\([D]_{ii} = d_i\) and \(X\) denote the matrix with columns \(\x_i\):
\begin{align*}
\bar{w}_{n+1} &=\bar{w}_{n}+\alpha \sum_{i \in N} d_{i} \mathbf{x}_{i}\left(\sum_{j \in T} p_{i j} \bar{z}_{j}+\sum_{j \in N} p_{i j} \bar{w}_{n}^{T} \mathbf{x}_{j}-\bar{w}_{n}^{T} \mathbf{x}_{i} \sum_{j \in N \cup T} p_{i j}\right) \\
&=\bar{w}_{n}+\alpha \sum_{i \in N} d_{i} \mathbf{x}_{i}\left([h]_{i}+\sum_{j \in N} p_{i j} \bar{w}_{n}^{T} \mathbf{x}_{j}-\bar{w}_{n}^{T} \mathbf{x}_{i}\right) \\
&=\bar{w}_{n}+\alpha X D\left(h+Q X^{T} \bar{w}_{n}-X^{T} \bar{w}_{n}\right)
\end{align*}

Multiplying both sides by \(X^T\) and unrolling the recursion yields
\begin{align*}
& X^{T} \bar{w}_{n+1} \\
=& X^{T} \bar{w}_{n}+\alpha X^{T} X D\left(h+Q X^{T} \bar{w}_{n}-X^{T} \bar{w}_{n}\right) \\
=& \alpha X^{T} X D h+\left(I-\alpha X^{T} X D(I-Q)\right) X^{T} \bar{w}_{n} \\
=& \alpha X^{T} X D h+\left(I-\alpha X^{T} X D(I-Q)\right) \alpha X^{T} X D h 
+ \left(I-\alpha X^{T} X D(I-Q)\right)^{2} X^{T} \bar{w}_{n-1} \\
& \vdots \\
=& \sum_{k=0}^{n-1}\left(I-\alpha X^{T} X D(I-Q)\right)^{k} \alpha X^{T} X D h 
+\left(I-\alpha X^{T} X D(I-Q)\right)^{n} X^{T} w_{0}
\end{align*}

Assuming for the moment that
\(\lim_{n\to\infty}(I - \alpha X^TXD(I - Q))^n = 0\),
then,
by Theorem A.1,
the sequence \(\{X^T\overline{w}_n\}\) converges to
\begin{align*}
\lim_{n \rightarrow \infty} X^{T} \bar{w}_{n} &=\left(I-\left(I-\alpha X^{T} X D(I-Q)\right)\right)^{-1} \alpha X^{T} X D h \\
&=(I-Q)^{-1} D^{-1}\left(X^{T} X\right)^{-1} \alpha^{-1} \alpha X^{T} X D h \\
&=(I-Q)^{-1} h;
\end{align*}
so that
\begin{align*}
\lim_{n \rightarrow \infty} E\left\{\mathbf{x}_{i}^{T}
w_{n}\right\}=\left[(I-Q)^{-1} h\right]_{i} \quad \forall i \in N,
\end{align*}
which is the desired result.
Note that \(D^{-1}\) must exist because \(D\) is diagonal with all positive diagonal entries,
and \((X^TX)^{-1}\) must exist by Theorem A.2.

It thus remains to show that \(\lim_{n\to\infty}(I - \alpha X^TXD(I - Q))^n = 0\).
We do this by first showing that \(D(I-Q)\) is positive definite,
and then that \(X^TXD(I-Q)\) has a full set of eigenvalues all of whose real parts are positive.
This will enable us to show that
\(\alpha\) can be chosen such that all eigenvalues of \(I-\alpha X^TXD(I-Q)\) are less than 1 in modulus,
which assures us that its powers converge.

We show that \(D(I-Q)\) is positive definite by applying the following
lemma \citep[see][p.~23, for a proof]{varga_matrix_1962}:

\emph{Lemma.
If \(A\) is a real,
symmetric,
and strictly diagonally dominant matrix with positive diagonal entries,
then \(A\) is positive definite.}

We cannot apply this lemma directly to \(D(I-Q)\) because it is not symmetric.
However,
by Theorem A.3,
any matrix \(A\) is positive definite exactly when the symmetric matrix
\(A + A^T\) is positive definite,
so we can prove that \(D(I-Q)\) is positive definite by applying the
lemma to \(S = D(I-Q)+(D(I-Q))^T\).
\(S\) is clearly real and symmetric;
it remains to show that it has positive diagonal entries and is strictly diagonally dominant.

First,
we note that
\begin{align*}
[D(I-Q)]_{i j}=\sum_{k}[D]_{i k}[I-Q]_{k j}=[D]_{i i}[I-Q]_{i j}=d_{i}[I-Q]_{i j}.
\end{align*}
We will use this fact several times in the following.

\(S\)'s diagonal entries are positive,
because
\begin{align*}
      [S]_{ii} 
    = & [D(I-Q)]_{ii} + [(D(I-Q))^T]_{ii} \\
    = & 2[D(I-Q)]_{ii} \\
    = & 2d_i[I-Q]_{ii} \\
    = & 2d_i(1-p_{ii}) > 0,\; i\in N.
\end{align*}
Furthermore,
\(S\)'s off-diagonal entries are non-positive,
because,
for \(i \neq j\),
\begin{align*}
      [S]_{ij} 
    = & [D(I-Q)]_{ij} + [(D(I-Q))^T]_{ij} \\
    = & d_i[I-Q]_{ij} + d_j[I-Q]_{ji} \\
    = & -d_ip_{ij} - d_jp_{ji} \leq 0.
\end{align*}

\(S\) is strictly diagonally dominant if and only if
\(|[S]_{ii}| \geq \sum_{j\neq i}|[S]_{ij}|\),
for all \(i\),
with strict inequality holding for at least one \(i\).
However,
since \([S]_{ii} > 0\) and \([S]_{ij} \leq 0\),
we need only show that \([S]_{ii} \geq -\sum_{j\neq i}[S]_{ij}\),
in other words,
that \(\sum_j[S]_{ij} \geq 0\),
which can be directly shown:
\begin{align*}
\sum_{j}[S]_{i j} &=\sum_{j}\left([D(I-Q)]_{i j}+\left[(D(I-Q))^{T}\right]_{i j}\right) \\
&=\sum_{j} d_{i}[I-Q]_{i j}+\sum_{j} d_{j}[I-Q]_{j i} \\
&=d_{i} \sum_{j}[I-Q]_{i j}+\left[d^{T}(I-Q)\right]_{i} \\
&=d_{i}\left(1-\sum_{j} p_{i j}\right)+\left[\mu^{T}(I-Q)^{-1}(I-Q)\right]_{i} \qquad \mbox{by \eqref{eq:7}}\\
&=d_{i}\left(1-\sum_{j} p_{i j}\right)+\mu_{i} \\
& \geq 0
\end{align*}
Furthermore,
strict inequality must hold for at least one \(i\),
because \(\mu_i\) must be strictly positive for at least one \(i\).
Therefore,
\(S\) is strictly diagonally dominant and the lemma applies,
proving that \(S\) and \(D(I-Q)\) are both positive definite.

Next we show that \(X^TXD(I-Q)\) has a full set of eigenvalues all of whose real parts are positive.
First of all,
the set of eigenvalues is clearly full,
because the
matrix is nonsingular,
being the product of three matrices, \(X^TX\), \(D\), and \(I-Q\),
that we have already established as nonsingular.
Let \(\lambda\) and \(y\) be any eigenvalue-eigenvector pair.
Let \(y = a + bi\) and \(z = (X^TX)^{-1}y \neq 0\) (i.e., \(y = X^TXz\)).
Then \[
y^*D(I-Q)y = z^*X^TXD(I-Q)y = z^*\lambda y = \lambda z^*X^TXz = \lambda (Xz)^*Xz,
\]
where ``*'' denotes the conjugate-transpose.
This implies that \[
    \operatorname{Re} \left( y^*D(I-Q)y\right) =
    \operatorname{Re} \left( \lambda(Xz)^*Xz\right);
\] 
that is,
\begin{align*}
    a^{T} D(I-Q) a+b^{T} D(I-Q) b=\left((X z)^{*} X z\right) \operatorname{Re} \lambda.
\end{align*}

Since the left side and \((Xz)^*Xz\) must both be strictly positive,
so must the real part of \(\lambda\).

Furthermore,
\(y\) must also be an eigenvector of \(I - \alpha X^TXD(I-Q)\),
because \[
    (I - \alpha X^TXD(I - Q))y = y - \alpha \lambda y = (1 - \alpha \lambda )y.\]
Thus,
all eigenvalues of \(I - \alpha X^TXD(I - Q)\) are of the form \(1 - \alpha \lambda \),
where \(\lambda\) has positive real part.
For each \(\lambda  = a+bi, a > 0\),
if \(\alpha\) is chosen \(0<\alpha<\frac{2a}{a^2+b^2}\),
then \(1−\alpha \lambda \) will have modulus\footnote{%
	The modulus of a complex number \(a + bi\) is	\(\sqrt{a^2 + b^2} \).
}
less than 1:
\begin{align*}
|1-\alpha \lambda| &=\sqrt{(1-\alpha a)^{2}+(-\alpha b)^{2}} \\
&=\sqrt{1-2 \alpha a+\alpha^{2} a^{2}+\alpha^{2} b^{2}} \\
&=\sqrt{1-2 \alpha a+\alpha^{2}\left(a^{2}+b^{2}\right)} \\
&<\sqrt{1-2 \alpha a+\alpha \frac{2 a}{a^{2}+b^{2}}\left(a^{2}+b^{2}\right)}=\sqrt{1-2 \alpha a+2 \alpha a}=1.
\end{align*}
The criterial value \(\frac{2a}{a^2+b^2}\) will be different for different \(\lambda\);
choose \(\epsilon\) to be the smallest such value.
Then,
for any positive \(\alpha<\epsilon\),
\emph{all} eigenvalues \(1 - \alpha \lambda \) of \(I - \alpha X^TXD(I - Q)\) are less than 1 in modulus.
And this immediately implies \citep[e.g., see][p.~13]{varga_matrix_1962} that
\[\lim_{n\to\infty}(I - \alpha X^TXD(I - Q))^n = 0,\]
completing the proof.
\end{proof}

We have just shown that the expected values of the predictions found by linear TD(0) converge to the ideal predictions for data sequences generated by absorbing Markov processes.
Of course,
just as with the Widrow-Hoff procedure,
the predictions themselves do not converge;
they continue to vary around their expected values according to their most recent experience.
In the case of the Widrow-Hoff procedure,
it is known that the asymptotic variance of the predictions is finite and can be made arbitrarily small by the choice of the learning-rate parameter \(\alpha\).
Furthermore,
if \(\alpha\) is reduced according to an appropriate schedule,
e.g., \(\alpha=\frac{1}{n}\),
then the variance converges to zero as well.
We conjecture that these stronger forms of convergence hold for linear TD(0) as well,
but this remains an open question.
Also open is the question of
convergence of linear TD(\(\lambda\)) for \(0 < \lambda < 1\).
We now know that both TD(0) and TD(1) (the Widrow-Hoff rule) converge in the mean to the ideal predictions;
we conjecture that the intermediate TD(\(\lambda\)) procedures do as well.
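Theorem \ref{thm:2} can be illustrated empirically (this simulation is ours, not part of the proof). With unit-basis observation vectors, each weight component directly holds one state's prediction \(P(\x_i,w) = [w]_i\). A minimal Python sketch, assuming the five-state random walk starting at the center state, a fixed small \(\alpha\), and weight updates after each sequence:

```python
import random

# Simulate linear TD(0) on the five-state bounded random walk with
# unit-basis observation vectors; the weights should approach the ideal
# predictions 1/6, 2/6, ..., 5/6 in the mean.

random.seed(0)
N, alpha = 5, 0.05
w = [0.5] * N                      # with unit bases, w[i] = P(x_i, w)

for _ in range(20_000):
    i = N // 2                     # every walk starts in the center state
    updates = [0.0] * N
    while True:
        j = i + random.choice((-1, 1))
        if j < 0:                  # terminate on the left, outcome z = 0
            updates[i] += alpha * (0.0 - w[i]); break
        if j >= N:                 # terminate on the right, outcome z = 1
            updates[i] += alpha * (1.0 - w[i]); break
        updates[i] += alpha * (w[j] - w[i])   # TD(0) increment for i -> j
        i = j
    w = [wi + u for wi, u in zip(w, updates)]  # update after each sequence

print([round(x, 2) for x in w])
```

Consistent with the discussion above, with constant \(\alpha\) the weights keep fluctuating around the ideal predictions according to recent experience; a decreasing schedule such as \(\alpha = 1/n\) would be needed to drive the variance to zero.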

\section{Optimality and learning rate}

The result obtained in the previous subsection assures us that both TD methods and supervised learning methods converge asymptotically to the ideal estimates for data sequences generated by absorbing Markov processes.
However,
if both kinds of procedure converge to the same result,
which gets there faster?
In other words,
which kind of procedure makes the better predictions from a finite rather than an infinite amount of experience?
Despite the previously noted empirical results showing faster learning with TD methods,
this has not been proved for any general case.
In this subsection we present a related formal result that helps explain the empirical result of faster learning with TD methods.
We show that the predictions of linear TD(0) are optimal in an important sense for repeatedly presented finite training sets.

In the following,
we first define what we mean by optimal predictions for finite training sets.
Though optimal,
these predictions are extremely expensive to compute,
and neither TD nor supervised-learning methods compute them directly.
However,
TD methods do have a special relationship with them.
One common training process is to present a finite amount of data over and over again until the learning process converges
(e.g., see Ackley, Hinton and Sejnowski, 1985; Rumelhart, Hinton and Williams, 1985).
We prove that linear TD(0) converges under this repeated presentations training paradigm to the optimal predictions,
while supervised-learning procedures converge to suboptimal predictions.
This result also helps explain TD methods' empirically faster learning rates.
Since they are stepping toward a better final result,
it makes sense that they would also be better after the first step.

The word \emph{optimal} can be misleading because it suggests a universally agreed upon criterion for the best way of doing something.
In fact,
there are many kinds of optimality,
and choosing among them is often a critical decision.
Suppose that one observes a training set consisting of a finite number of observation-outcome sequences,
and that one knows the sequences to be generated by an absorbing Markov process as described in the previous section.
What might one mean by the ``best'' predictions given such a training set?

If the \emph{a priori} distribution of possible Markov processes is known,
then the predictions that are optimal in the mean square sense can be calculated through Bayes's rule.
Unfortunately,
it is very difficult to justify any \emph{a priori} assumptions about possible Markov processes.
In order to avoid making any such assumptions,
mathematicians have developed another kind of optimal estimate,
known as the \emph{maximum-likelihood estimate}.
This is the kind of optimality with which we will be concerned.
For example,
suppose one flips a coin ten times and gets seven heads.
What is the best estimate of the probability of getting a head on the next toss?
In one sense, the best estimate depends entirely on
\emph{a priori} assumptions about
how likely one is to run into fair and biased coins,
and thus cannot be uniquely determined.
On the other hand,
the best answer in the maximum-likelihood sense requires no such assumptions;
it is simply \(\frac{7}{10}\).
In general,
the maximum-likelihood estimate of the process that produced a set of data
is that process whose probability of producing the data is the largest.

What is the maximum-likelihood estimate for our prediction problem?
If the observation vectors \(\x_i\) for each nonterminal state \(i\) are distinct,
then one can enumerate the nonterminal states appearing in the training set and effectively
know which state the process is in at each time.
Since terminal states do not produce observation vectors,
but only outcomes,
it is not possible to tell when two sequences end in the same terminal state;
thus we will assume that all sequences terminate in different states.

Let \(\hat{T}\) and \(\hat{N}\) denote the sets of terminal and nonterminal states respectively,
as observed in the training set.
Let \([\hat{Q}]_{ij} = \hat{p}_{ij}\) (\(i,j \in  \hat{N}\)) be the fraction of the times that state \(i\) was entered in which a transition occurred to state \(j\).
Let \(z_j\) be the outcome of the sequence in which termination occurred at
state \(j \in \hat{T}\),
and let \([\hat{h}]_i = \sum_{j\in \hat{T}} \hat{p}_{ij}z_j\),
\(i \in  \hat{N}\).
\(\hat{Q}\) and \(\hat{h}\) are the maximum-likelihood estimates of
the true process parameters \(Q\) and \(h\).
Finally,
estimate the expected value of the outcome \(z\),
given that the process is in state \(i \in \hat{N}\),
as
\begin{equation}\label{eq:8}
    \left[ \sum_{k=0}^{\infty} \hat{Q}^k \hat{h}\right]_i =
    \left[ (I - \hat{Q})^{-1} \hat{h}\right]_i.
\end{equation}
That is,
choose the estimate that would be ideal if in fact the maximum-likelihood estimate of the underlying process were exactly correct.
Let us call these estimates the optimal predictions.
Note that even though \(\hat{Q}\) is an estimated quantity,
it still corresponds to some absorbing Markov chain.
Thus,
\(\lim_{n\to\infty} \hat{Q}^n = 0\),
and Theorem A.1 applies,
assuring the existence of the limit and inverse in the above equation.
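The construction of the optimal predictions \eqref{eq:8} can be made concrete with a small sketch. The training set below is purely hypothetical and chosen only for illustration; the code tallies \(\hat{p}_{ij}\) and \(\hat{h}\) from the observed sequences and evaluates \((I-\hat{Q})^{-1}\hat{h}\) through the series \(\sum_k \hat{Q}^k \hat{h}\):

```python
# Maximum-likelihood (optimal) predictions from a finite training set:
# estimate Q-hat and h-hat from observed state sequences, then compute
# (I - Q-hat)^{-1} h-hat by fixed-point iteration.

# each item: (sequence of nonterminal states visited, terminal outcome z)
training_set = [
    ([0, 1], 1.0),
    ([0], 0.0),
    ([1, 0, 1], 1.0),
    ([1], 0.0),
]

states = sorted({i for seq, _ in training_set for i in seq})
n = len(states)
visits = [0] * n                      # d-hat_i: times state i was entered
Qhat = [[0.0] * n for _ in range(n)]  # transition counts, then fractions
hhat = [0.0] * n                      # becomes sum_j p-hat_ij * z_j

for seq, z in training_set:
    for t, i in enumerate(seq):
        visits[i] += 1
        if t + 1 < len(seq):
            Qhat[i][seq[t + 1]] += 1.0
        else:
            hhat[i] += z              # transition into a terminal state
for i in range(n):
    Qhat[i] = [c / visits[i] for c in Qhat[i]]
    hhat[i] /= visits[i]

v = [0.0] * n                         # optimal predictions (8)
for _ in range(1000):
    v = [hhat[i] + sum(Qhat[i][j] * v[j] for j in range(n)) for i in range(n)]
print([round(x, 4) for x in v])       # -> [0.4, 0.6]
```

For this training set, \(\hat{Q} = \bigl(\begin{smallmatrix}0 & 2/3\\ 1/4 & 0\end{smallmatrix}\bigr)\) and \(\hat{h} = (0, 1/2)^T\), giving optimal predictions \(0.4\) and \(0.6\).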

Although the procedure outlined above serves well as a definition of optimal performance,
note that it itself would be impractical to implement.
First of all,
it relies heavily on the observation vectors \(\x_i\) being distinct,
and on the assumption that they map one-to-one onto states.
Second,
the procedure involves keeping statistics on each pair of states (e.g.,
the \(\hat{p}_{ij}\)) rather than on each state or component of the observation vector.
If \(n\) is the number of states,
then this procedure requires \(O(n^2)\) memory whereas the other learning
procedures require only \(O(n)\) memory.
In addition,
the right side of \eqref{eq:8} must be re-computed
each time additional data become available and new estimates are needed.
This procedure may require as much as \(O(n^3)\) computation per time step
as compared to \(O(n)\) for the supervised-learning and TD methods.

Consider the case in which the observation vectors are linearly independent,
the training set is repeatedly presented,
and the weights are updated after each complete presentation of the training set.
In this case,
the Widrow-Hoff procedure converges so as to minimize the root mean squared error between its predictions and the actual outcomes in the training set \citep{widrow_adaptive_1985}.
As illustrated earlier in the random-walk example,
linear TD(0) converges to a different set of predictions.
We now show that those predictions are in fact the optimal predictions in the maximum-likelihood sense discussed above.
That is,
we prove the following theorem:
\begin{thm}\label{thm:3}
    For any training set whose observation vectors
    \(\{\x_i | i \in \hat{N} \}\) are linearly independent,
    there exists an \(\epsilon>0\) such that,
    for all positive \(\alpha<\epsilon\) and for any initial weight vector,
    the predictions of linear TD(0) converge,
    under repeated presentations of the training set with weight updates
    after each complete presentation,
    to the optimal predictions \eqref{eq:8}.
    That is,
    if \(w_n\) is the value of the weight vector
    after the training set has been presented \(n\) times,
    then \(\lim_{n\to\infty}\x^T_i w_n = [(I - \hat{Q})^{-1}\hat{h}]_i\),
    \(\forall i \in \hat{N}\).
\end{thm}

\begin{proof}
The proof of Theorem \ref{thm:3} is almost the same as that of Theorem \ref{thm:2},
so here we only highlight the differences.
Linear TD(0) updates \(w_n\) after each presentation of the training set:
\begin{equation*}
    w_{n+1} = w_n + \sum_s\sum_{t=1}^{m_s}\alpha (P_{t+1}^s -
    P_t^s)\grad_wP_t^s,
\end{equation*}
where \(m_s\) is the number of observation vectors in the \(s\)\oth sequence in the training set,
\(P_t^s\) is the \(t\)\oth prediction in the \(s\)\oth sequence,
and \(P_{m_s+1}^s\) is defined to be the outcome of the \(s\)\oth sequence.
Let \(\eta_{ij}\) be the number of times the transition \(i \to j\) appears in the training set;
then the sums can be regrouped as
\begin{align*}
w_{n+1} &=w_{n}+\sum_{i \in \hat{N}} \sum_{j \in \hat{N}} \eta_{i j} \alpha\left(w_{n}^{T} \mathbf{x}_{j}-w_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i}+\sum_{i \in \hat{N}} \sum_{j \in \hat{T}} \eta_{i j} \alpha\left(z_{j}-w_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i} \\
&=w_{n}+\sum_{i \in \hat{N}} \sum_{j \in \hat{N}} \hat{d}_{i} \hat{p}_{i j} \alpha\left(w_{n}^{T} \mathbf{x}_{j}-w_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i}+\sum_{i \in \hat{N}} \sum_{j \in \hat{T}} \hat{d}_{i} \hat{p}_{i j} \alpha\left(z_{j}-w_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i},
\end{align*}
where \(\hat{d}_i\) is the number of times
state \(i \in \hat{N}\) appears in the training set.
The rest of the proof for Theorem \ref{thm:2},
starting at \eqref{eq:6},
carries through with estimates substituting for actual values throughout.
The only step in the proof that requires additional support is to show
that \eqref{eq:7} still holds,
i.e., that \(\hat{d}^T = \hat{\mu}^T(I - \hat{Q})^{-1}\),
where \([\hat{\mu}]_i\) is the number of sequences in
the training set that begin in state \(i \in \hat{N}\).
Note that \(\sum_{i\in \hat{N}}\eta_{ij} =
\sum_{i\in \hat{N}} \hat{d}_i\hat{p}_{ij}\) is
the number of times state \(j\) appears in the training set
as the destination of a transition.
Since all occurrences of state \(j\) must be either as the destination of a transition or as the beginning state of a sequence,
\(\hat{d}_j = [\hat{\mu}]_j + \sum_i \hat{d}_i\hat{p}_{ij}\).
Converting this to matrix notation,
we have \(\hat{d}^T = \hat{\mu}^T +\hat{d}^T\hat{Q}\),
which yields the desired conclusion,
\(\hat{d}^T = \hat{\mu}^T(I - \hat{Q})^{-1}\),
after algebraic manipulations.
\end{proof}
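Theorem \ref{thm:3} is also easy to observe numerically (this simulation is ours). With unit-basis observation vectors and a hypothetical fixed training set, the update after each complete presentation is deterministic, and the weights converge to the optimal predictions:

```python
# Linear TD(0) under repeated presentations of a fixed training set, with
# unit-basis observation vectors and weight updates after each complete
# presentation of the training set.

training_set = [([0, 1], 1.0), ([0], 0.0), ([1, 0, 1], 1.0), ([1], 0.0)]

alpha, n = 0.05, 2
w = [0.0] * n                     # with unit bases, w[i] = P(x_i, w)
for _ in range(5000):
    delta = [0.0] * n
    for seq, z in training_set:
        for t, i in enumerate(seq):
            # TD(0) target: next prediction, or the outcome at termination
            nxt = w[seq[t + 1]] if t + 1 < len(seq) else z
            delta[i] += alpha * (nxt - w[i])
    w = [wi + d for wi, d in zip(w, delta)]   # update per presentation

print([round(x, 3) for x in w])   # -> [0.4, 0.6]
```

The fixed point here satisfies exactly the maximum-likelihood relations \(w_0 = \tfrac{2}{3}w_1\) and \(w_1 = \tfrac{1}{2} + \tfrac{1}{4}w_0\), i.e., the optimal predictions for this training set, rather than the minimum-error estimates the Widrow-Hoff rule would reach.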

We have just shown that if linear TD(0) is repeatedly presented with a finite training set,
then it converges to the optimal estimates.
The Widrow-Hoff rule,
on the other hand,
converges to the estimates that minimize error on the training set;
as we saw in the random-walk example,
these are in general different from the optimal estimates.
That TD(0) converges to a better set of estimates with repeated presentations helps explain how and why it could learn better estimates from a single presentation,
but it does not prove that.
What is still needed is a characterization of the learning rate of TD methods that can be compared with those already available for supervised-learning methods.

\section{Temporal-difference methods as gradient descent}

Like many other statistical learning methods,
TD methods can be viewed as gradient descent (hillclimbing)
in the space of the modifiable parameters (weights).
That is,
their goal can be viewed as minimizing an overall error measure \(J(w)\)
over the space of weights by repeatedly incrementing the weight vector in
(an approximation to) the direction in which \(J(w)\) decreases most steeply.
Denoting the approximation to this direction of steepest descent,
or gradient,
as \(\tilde{\grad}_{w} J(w)\),
such methods are typically written as
\begin{equation*}
    \Delta w_t = -\alpha \tilde{\grad}_{w}J(w_t),
\end{equation*}
where \(\alpha\) is a positive constant determining step size.

For a multi-step prediction problem in which \(P_t = P(x_t,w)\) is meant to
approximate \(E \{z | x_t\}\),
a natural error measure is the expected value of the square of the difference between these two quantities:
\begin{equation*}
    J(w) = E_{\x} \left\{ \left( E \{z|\x\} - P(\x,w)\right)^2 \right\},
\end{equation*}
where \(E_{\x}\{\cdot\}\) denotes the expectation operator over observation vectors
\(\x\).
\(J(w)\) measures the error for a weight vector averaged over all observation vectors,
but at each time step one usually obtains additional information about only a single observation vector.
The usual next step,
therefore,
is to define a per-observation error measure \(Q(w,x)\) with the property that
\(E_{\x}\{Q(w,x)\} = J(w)\).
For a multi-step prediction problem,
\begin{equation*}
    Q(w,\x) = \left( E \{z|\x\} - P(\x,w)\right)^2.
\end{equation*}
Each time step's weight increments are then determined using
\(\grad_wQ(w,x_t)\),
relying on the fact that \(E_{\x}\{\grad_wQ(w,\x)\} = \grad_wJ(w)\),
so that the overall effect of the equation for \(\Delta w_t\) given above can be approximated over many steps using small \(\alpha\) by
\begin{equation*}
    \Delta w_t = -\alpha \grad_wQ(w,x_t).
\end{equation*}
The quantity \(E\{z | x_t\}\) is not directly known and must be estimated.
Depending on how this is done,
one gets either a supervised-learning method or a TD method.
If \(E\{z | x_t\}\) is approximated by \(z\),
the outcome that actually occurs following \(x_t\),
then we get the classical supervised-learning procedure (2).
Alternatively,
if \(E\{z | x_t\}\) is approximated by \(P(x_{t+1},w)\),
the immediately following prediction,
then we get the extreme TD method, TD(0).
Key to this analysis is the recognition,
in the definition of \(J(w)\),
that our real goal is for each prediction to
match the expected value of the subsequent outcome,
not the actual outcome occurring in the training set.
TD methods can perform better than supervised-learning methods
because the actual outcome of a sequence is often not
the best estimate of its expected value.
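For the linear case \(P(\x,w) = w^T\x\), the two choices of target can be written side by side. A sketch (the function names are ours, not from the original):

```python
# Two approximations to E{z | x_t} in the gradient-descent view, for the
# linear case P(x, w) = w . x.

def supervised_increment(w, x, z, alpha):
    """Delta w = alpha * (z - w.x) x: target is the actual outcome z."""
    err = z - sum(wi * xi for wi, xi in zip(w, x))
    return [alpha * err * xi for xi in x]

def td0_increment(w, x, x_next, alpha):
    """Delta w = alpha * (P_{t+1} - P_t) x: target is the next prediction."""
    p = sum(wi * xi for wi, xi in zip(w, x))
    p_next = sum(wi * xi for wi, xi in zip(w, x_next))
    return [alpha * (p_next - p) * xi for xi in x]
```

The two functions differ only in the target substituted for \(E\{z \mid x_t\}\): the actual outcome for the supervised rule, the immediately following prediction for TD(0).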

\chapter{Generalizations of TD(\(\lambda\))}

In this article,
we have chosen to analyze particularly simple cases of temporal-difference methods.
This has clarified their operation and made it possible to prove theorems.
However,
more realistic problems may require more complex TD methods.
In this section,
we briefly explore some ways in
which the simple methods can be extended.
Except where explicitly noted,
the theorems presented earlier do not strictly apply to these extensions.

\section{Predicting cumulative outcomes}

Temporal-difference methods are not limited to
predicting only the final outcome of a sequence;
they can also be used to predict a quantity that accumulates over a sequence.
That is,
each step of a sequence may incur a cost,
where we wish to predict the expected total cost over the sequence.
A common way for this to arise is for the costs to be elapsed time.
For example,
in a bounded random walk one might want to
predict how many steps will be taken before termination.
In a pole-balancing problem
one may want to predict time until a failure in balancing,
and in a packet-switched telecommunications network
one may want to predict the total delay in sending a packet.
In game playing,
points may be lost or won throughout a game,
and we may be interested in predicting the expected net gain or loss.
In all of these examples,
the quantity predicted is the cumulative sum of a number of parts,
where the parts become known as the sequence evolves.
For convenience,
we will continue to refer to these parts as costs,
even though their minimization will not be a goal in all applications.

In such problems,
it is natural to use the observation vector received at each step to
predict the total cumulative cost \emph{after that step},
rather than the total cost for the sequence as a whole.
Thus,
we will want \(P_t\) to predict the \emph{remaining} cumulative cost given
the \(t\)\oth observation rather than the overall cost for the sequence.
Since the cost for the preceding portion of the sequence is already known,
the total sequence cost can always be estimated as
the sum of the known cost-so-far and the estimated cost-remaining
(cf. the \(\mathrm{A}^*\) algorithm, dynamic programming).

The procedures presented earlier are easily generalized to
include the case of predicting cumulative outcomes.
Let \(c_{t+1}\) denote the actual cost incurred
between times \(t\) and \(t + 1\),
and let \(\overline{c}_{ij}\) denote the expected value of the cost incurred on transition from state \(i\) to state \(j\).
We would like \(P_t\) to equal the expected value of
\(z_t=\sum_{k=t}^{m} c_{k+1}\)
where \(m\) is the number of observation vectors in the sequence.
The prediction error can be represented in terms of temporal differences as
\begin{equation*}
    z_t - P_t = \sum_{k=t}^{m}\left(c_{k+1} + P_{k+1} - P_k\right),
\end{equation*}
where we define \(P_{m+1} = 0\).
Then,
following the same steps used to derive the TD(\(\lambda\)) family of
procedures defined by (4),
one can also derive the cumulative TD(\(\lambda\)) family defined by
\begin{equation*}
    \Delta w_t =
    \alpha (c_{t+1} + P_{t+1} - P_t)\sum_{k=1}^t\lambda^{t-k}\grad_wP_k.
\end{equation*}
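As a concrete sketch of this rule in the linear case (where \(P_t = w^T x_t\) and \(\grad_wP_t = x_t\)), one might accumulate the weight changes for a single sequence as follows; the function and variable names are illustrative, not part of the original procedure:

```python
import numpy as np

def cumulative_td_lambda(w, xs, cs, alpha=0.1, lam=0.5):
    """Accumulate linear cumulative TD(lambda) weight changes for one sequence.

    w  : current weight vector (left unchanged; the increments are returned)
    xs : observation vectors x_1 .. x_m
    cs : cs[t] is the cost incurred on the transition out of xs[t]
    """
    e = np.zeros_like(w)    # eligibility trace: sum_k lam^(t-k) grad_w P_k
    dw = np.zeros_like(w)
    m = len(xs)
    for t in range(m):
        P_t = w @ xs[t]
        P_next = w @ xs[t + 1] if t + 1 < m else 0.0  # P_{m+1} = 0
        e = lam * e + xs[t]       # grad_w P_t = x_t in the linear case
        dw += alpha * (cs[t] + P_next - P_t) * e
    return dw
```

With \(\lambda = 1\) the accumulated change reduces, as in the non-cumulative case, to the supervised-learning change toward each \(z_t\).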
The three theorems presented earlier in this article carry over to the cumulative outcome case with the obvious modifications.
For example,
the ideal prediction for each state \(i \in  N\) is the expected value of the cumulative sum of the costs:
\begin{multline*}
E\left\{z_{t} \mid x_{t}=\mathbf{x}_{i}\right\}=\sum_{j \in N \cup T} p_{i j} \bar{c}_{i j}+\sum_{j \in N} p_{i j} \sum_{k \in N \cup T} p_{j k} \bar{c}_{j k} \\
+\sum_{j \in N} p_{i j} \sum_{k \in N} p_{j k} \sum_{l \in N \cup T} p_{k l} \bar{c}_{k l}+\cdots
\end{multline*}
If we let \(h\) be the vector with components
\([h]_i = \sum_j p_{ij}\overline{c}_{ij}\), \(i \in  N\),
then \eqref{eq:5} holds for this case as well.
Following steps similar to those in the proof of Theorem \ref{thm:2},
one can show that,
using linear cumulative TD(0),
the expected value of the weight vector after \(n\) sequences have been experienced is
\begin{align*}
\bar{w}_{n+1}=& \bar{w}_{n}+\sum_{i \in N} \sum_{j \in N} d_{i} p_{i j} \alpha\left(\bar{c}_{i j}+\bar{w}_{n}^{T} \mathbf{x}_{j}-\bar{w}_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i} \\
&+\sum_{i \in N} \sum_{j \in T} d_{i} p_{i j} \alpha\left(\bar{c}_{i j}-\bar{w}_{n}^{T} \mathbf{x}_{i}\right) \mathbf{x}_{i} \\
=& \bar{w}_{n}+\alpha \sum_{i \in N} d_{i} \mathbf{x}_{i}\left(\sum_{j \in N \cup T} p_{i j} \bar{c}_{i j}+\sum_{j \in N} p_{i j} \bar{w}_{n}^{T} \mathbf{x}_{j}-\bar{w}_{n}^{T} \mathbf{x}_{i} \sum_{j \in N \cup T} p_{i j}\right) \\
=& \bar{w}_{n}+\alpha \sum_{i \in N} d_{i} \mathbf{x}_{i}\left([h]_{i}+\sum_{j \in N} p_{i j} \bar{w}_{n}^{T} \mathbf{x}_{j}-\bar{w}_{n}^{T} \mathbf{x}_{i}\right),
\end{align*}
after which the rest of the proof of Theorem \ref{thm:2} follows unchanged.

\section{Intra-sequence weight updating}

So far we have concentrated on TD procedures in which the weight vector is
updated after the presentation of a complete sequence or training set.
Since each observation of a sequence generates an increment to the weight vector,
in many respects it would be simpler to update the weight vector immediately after each observation.
In fact,
all previously studied TD methods have operated in this more fully incremental way.

Extending TD(\(\lambda\)) to allow for intra-sequence updating requires a bit of care.
The obvious extension is
\begin{equation*}
    w_{t+1} =
    w_t + \alpha (P_{t+1} - P_t)\sum_{k=1}^t\lambda^{t-k}\grad_wP_k,
    \mbox{ where } P_t \stackrel{\mathrm{def}}{=} P(x_t,w_{t-1}).
\end{equation*}
However,
if \(w\) is changed within a sequence,
then the temporal changes in prediction during the sequence,
as defined by this procedure,
will be due to changes in \(w\) as well as to changes in \(x\).
This is probably an undesirable feature;
in extreme cases it may even lead to instability.
The following update rule ensures that only changes in prediction due to \(x\) are effective in causing weight alterations:
\begin{equation*}
    w_{t+1}=w_{t}+\alpha\left(P\left(x_{t+1}, w_{t}\right)-P\left(x_{t}, w_{t}\right)\right) \sum_{k=1}^{t} \lambda^{t-k} \nabla_{w} P\left(x_{k}, w_{t}\right).
\end{equation*}
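A minimal sketch of this refined rule in the linear case, where \(\grad_wP(x_k, w_t) = x_k\) regardless of \(w_t\), so the eligibility trace can be accumulated incrementally; the names are our own illustration:

```python
import numpy as np

def intra_sequence_td_lambda(w, xs, z, alpha=0.1, lam=0.5):
    """Linear TD(lambda) with intra-sequence weight updating.

    The weights are updated after every step, but both predictions in
    each temporal difference are evaluated with the same (current)
    weight vector, so only changes in the observation x, not changes
    in w, drive the weight alterations.

    xs : observation vectors x_1 .. x_m;  z : the sequence's outcome.
    """
    w = np.array(w, dtype=float)
    e = np.zeros_like(w)
    m = len(xs)
    for t in range(m):
        e = lam * e + xs[t]        # grad_w P(x_k, w) = x_k (linear case)
        P_t = w @ xs[t]            # P(x_t, w_t)
        P_next = w @ xs[t + 1] if t + 1 < m else z
        w = w + alpha * (P_next - P_t) * e
    return w
```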
This refinement is used in Samuel's (\cite*{samuel_studies_1959}) checker player
and in Sutton's (\cite*{sutton_temporal_1984}) Adaptive Heuristic Critic,
but not in Holland's (1986) bucket brigade
or in the system described by \citet{barto_neuronlike_1983}.

\section{Prediction by a fixed interval}

Finally,
consider the problem of making a prediction for a particular fixed amount of time later.
For example,
suppose you are interested in predicting one week in advance whether or not it will rain---on each Monday,
you predict whether it will rain on the following Monday,
on each Tuesday,
you predict whether it will rain on the following Tuesday,
and so on for each day of the week.
Although this problem involves a sequence of predictions,
TD methods cannot be directly applied because each prediction is of a different event and thus there is no clear desired relationship between them.

In order to apply TD methods,
this problem must be embedded within a larger family of prediction problems.
At each day \(t\),
we must form not only \(P_t^7\),
our estimate of the probability of rain seven days later,
but also \(P_t^6\), \(P_t^5\),
\ldots, \(P_t^1\),
where each \(P_t^\delta\) is an estimate of the probability of rain \(\delta\) days later.

This will provide for overlapping sequences of inter-related predictions,
e.g.,
\(P_t^7, P_{t+1}^6, P_{t+2}^5, \ldots, P_{t+6}^1\),
all of the same event,
in this case of whether it will
rain on day \(t + 7\).
If the predictions are accurate,
we will have \(P_t^\delta = P_{t+1}^{\delta-1}\),
\(\forall t, 1 \le \delta \le 7\),
where \(P_t^0\) is defined as the actual outcome at time \(t\) (e.g., 1 if it rains, 0 if it doesn't rain).
The update rule for the weight vector \(w^\delta\) used to compute \(P_t^\delta\) would be
\begin{equation*}
    \Delta w_t^\delta =
    \alpha (P_{t+1}^{\delta-1} - P_t^\delta)\sum_{k=1}^t\lambda^{t-k}\grad_wP_k^\delta.
\end{equation*}
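A minimal sketch of this scheme for the rain example, assuming linear predictors and specializing to \(\lambda = 0\) (so the eligibility sum reduces to the single most recent gradient); the function and names are illustrative only:

```python
import numpy as np

def fixed_interval_td0_day(ws, x_prev, x_today, rain_today, alpha=0.1):
    """One day's TD(0) updates for a family of fixed-interval predictors.

    ws         : dict mapping delta = 1 .. 7 to the weight vector
                 w^delta of the linear predictor P^delta(x) = w^delta . x
    x_prev     : yesterday's observation vector
    x_today    : today's observation vector
    rain_today : today's outcome P^0 (1.0 if rain, else 0.0)

    Yesterday's prediction P^delta is moved toward today's P^(delta-1);
    both are predictions of the same future day's weather.
    """
    old = {d: w.copy() for d, w in ws.items()}  # freeze pre-update weights
    for d in ws:
        # Target is P_{t+1}^{delta-1}: the actual outcome when delta = 1.
        target = rain_today if d == 1 else old[d - 1] @ x_today
        error = target - old[d] @ x_prev          # TD error for P^delta
        ws[d] = old[d] + alpha * error * x_prev   # grad = x_prev (linear)
    return ws
```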
As illustrated here,
there are three key steps in constructing a TD method for a particular problem.
First,
embed the problem of interest in a larger class of problems,
if necessary in order to produce an appropriate sequence of predictions.
Second,
write down recursive equations expressing the desired relationship between predictions at different times in the sequence.
For the simplest cases,
with which this article has been mostly concerned,
these are just \(P_t = P_{t+1}\),
whereas in the cumulative outcome case these are \(P_t = P_{t+1} + c_{t+1}\).
Third,
construct an update rule that uses the mismatch in the recursive equations to drive weight changes towards a better match.
These three steps are very similar to those taken in formulating a dynamic programming problem (e.g., Denardo, 1982).

\chapter{Related research}
Although temporal-difference methods have never previously been identified or studied on their own,
we can view some previous machine learning research as having used them.
In this section we briefly review some of this previous work in light of the ideas developed here.

\section{Samuel's checker-playing program}
The earliest known use of a TD method was in Samuel's (\cite*{samuel_studies_1959}) celebrated checker-playing program.
This was in his ``learning by generalization'' procedure,
which modified the parameters of the function used to evaluate board positions.
The evaluation of a position was thought of as an estimate or prediction of how the game would eventually turn out starting from that position.
Thus,
the sequence of positions from an actual game or an anticipated continuation naturally gave rise to a sequence of predictions,
each about the game's final outcome.

In Samuel's learning procedure,
the difference between the evaluations of each pair of successive positions occurring in a game was used as an error;
that is,
it was used to alter the prediction associated with the first position of the pair to be more like the prediction associated with the second.
The predictions for the two positions were computed in different ways.
In most versions of the program,
the prediction for the first position was simply the result of applying the current evaluation function to that position.
The prediction for the second position was the ``backed-up'' or minimax score from a lookahead search started at that position,
using the current evaluation function.
Samuel referred to the difference between these two predictions as delta.
Although his updating procedure was much more complicated than TD(0),
his intent was to use delta much as \(P_{t+1} − P_t\) is used in (linear) TD(0).

However,
Samuel's learning procedure significantly differed from all the TD methods discussed here in its treatment of the final step of a sequence.
We have considered each sequence to end with a definite,
externally-supplied outcome (e.g.,
1 for a victory and 0 for a defeat).
The prediction for the last position in a sequence was altered so as to match this final outcome.
In Samuel's procedure,
on the other hand,
no position had a definite \emph{a priori} evaluation,
and the evaluation for the last position in a sequence was never explicitly altered.
Thus,
while both procedures constrained the evaluations (predictions) of non-terminal positions to match those that follow them,
Samuel's provided no additional constraint on the evaluation of terminal positions.
As he himself pointed out,
many useless evaluation functions satisfy just the first constraint (e.g., any function that is constant for all positions).

To discourage his learning procedure from finding useless evaluation functions,
Samuel included in the evaluation function a non-modifiable term measuring how many more pieces his program had than its opponent.
However,
although this modification may have decreased the likelihood of finding useless evaluation functions,
it did not prohibit them.
For example,
a constant function could still have been attained by setting the modifiable terms so as to cancel the effect of the non-modifiable one.

If Samuel's learning procedure was not constrained to find useful evaluation functions,
then it should have been possible for it to become worse with experience.
In fact,
Samuel reported observing this during extensive self-play training sessions.
He found that a good way to get the program improving again was to set the weight with the largest absolute value back to zero.
His interpretation was that this drastic intervention jarred the program out of local optima,
but another possibility is that it jarred the program out of evaluation functions that changed little,
but that also had little to do with winning or losing the game.

Nevertheless,
Samuel's learning procedure was overall very successful;
it played an important role in significantly improving the play of his checker-playing program until it rivaled human checker masters.
Christensen and Korf have investigated a simplification of Samuel's procedure that also does not constrain the evaluations of terminal positions,
and have obtained promising preliminary results (Christensen, 1986;
Christensen and Korf, 1986).
Thus,
although a terminal constraint may be critical to good temporal-difference theory,
apparently it is not strictly necessary to obtain good performance.

\section{Backpropagation in connectionist networks}
The backpropagation technique of Rumelhart et al.
(1985) is one of the most exciting recent developments in incremental learning methods.
This technique extends the Widrow-Hoff rule so that it can be applied to the interior ``hidden'' units of multi-layer connectionist networks.
In a backpropagation network,
the input-output functions of all units are deterministic and differentiable.
As a result,
the partial derivatives of the error measure with respect to each connection weight are well-defined,
and one can apply a gradient-descent approach such as that used in the original Widrow-Hoff rule.
The term ``backpropagation'' refers to the way the partial derivatives are efficiently computed in a backward propagating sweep through the network.
As presented by Rumelhart et al.,
backpropagation is explicitly a supervised-learning procedure.

The purpose of both backpropagation and TD methods is accurate credit assignment.
Backpropagation decides which part(s) of a network to change so as to influence the network's output and thus to reduce its overall error,
whereas TD methods decide how each output of a temporal sequence of outputs should be changed.
Backpropagation addresses a structural credit-assignment issue whereas TD methods address a temporal credit-assignment issue.

Although it currently seems that backpropagation and TD methods address different parts of the credit-assignment problem,
it is important to note that they are perfectly compatible and easily combined.
In this article,
we have emphasized the linear case,
but the TD methods presented are equally applicable to predictions formed by nonlinear functions,
such as backpropagation-style networks.
The key requirement is that the gradient \(\grad_wP_t\) be computable.
In a linear system,
this is just \(x_t\).
In a network of differentiable nonlinear elements,
it can be computed by a backpropagation process.
For example,
Anderson (1986, 1987) has implemented such a combination of backpropagation and a temporal-difference method (the Adaptive Heuristic Critic, see below),
successfully applying it to both a nonlinear broomstick-balancing task and the Towers of Hanoi problem.
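To illustrate this compatibility, here is a minimal sketch of TD(0) with a one-hidden-layer tanh network, where \(\grad_wP_t\) is obtained by backpropagation; the architecture and names are our own illustration, not Anderson's system:

```python
import numpy as np

def nonlinear_td0_step(params, x_t, x_next, alpha=0.01, terminal_outcome=None):
    """One TD(0) step for the network prediction P(x) = w2 . tanh(W1 x).

    params = (W1, w2): hidden-layer and output weights.
    If terminal_outcome is given, it replaces P(x_next) as the target.
    """
    W1, w2 = params

    def forward(x):
        h = np.tanh(W1 @ x)
        return h, w2 @ h

    h_t, P_t = forward(x_t)
    if terminal_outcome is not None:
        target = terminal_outcome
    else:
        _, P_next = forward(x_next)
        target = P_next
    err = target - P_t                             # TD error

    # Backpropagation gives grad_w P_t for each parameter block.
    grad_w2 = h_t                                  # dP/dw2 = h
    grad_W1 = np.outer(w2 * (1 - h_t**2), x_t)     # dP/dW1 via chain rule

    return W1 + alpha * err * grad_W1, w2 + alpha * err * grad_w2
```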

\section{Holland's bucket brigade}
Holland's (1986) bucket brigade is a technique for learning sequences of rule invocations in a kind of adaptive production system called a classifier system.
The production rules in a classifier system compete to become active and have their right-hand sides (called messages) posted to a working-memory data structure (called the message list).
Conflict resolution is carried out by a competitive auction.
Each rule that matches the current contents of the message list makes a bid that depends on the product of its specificity and its strength,
a modifiable numerical parameter.
The highest bidders become active and post their messages to a new message list for the next round of the auction.

The bucket brigade is the process that adjusts the strengths of the rules and thereby determines which rules will become active at which times.
When a rule becomes active,
it loses strength by the amount of its bid,
but also gains strength if the message it posts triggers other rules to become active in the next round of the auction.
The strength gained is exactly the bids of the other rules.
If several rules post the same message,
then the bids of all responders are pooled and divided equally among the posting rules.
In principle,
long chains of rule invocations can be learned in this way,
with strength being passed back from rule to rule,
thus the name ``bucket brigade.''
For a chain to be stable,
its final rule must affect the environment,
achieve a goal,
and thereby receive new strength in the form of a payoff from the external environment.

Temporal-difference methods and the bucket brigade both borrow the same key idea from Samuel's work---that the steps in a sequence should be evaluated and adjusted according to their immediate or near-immediate successors,
rather than according to the final outcome.
The similarity between TD methods and the bucket brigade can be seen at a more detailed level by considering the latter's effect on a linear chain of rule invocations.
Each rule's strength can be thought of as a prediction of the payoff that will ultimately be obtained from the environment.
Assuming equal specificities,
the strength of each rule experiences a net change dependent on the difference between that strength and the strength of the succeeding rule.
Thus,
like TD(0),
the bucket brigade updates each strength (prediction) in a sequence of strengths (predictions) according to the immediately following temporal difference in strength (prediction).

There are also numerous differences between the bucket brigade and the TD methods presented here.
The most important of these is that the bucket brigade assigns credit based on which rules caused which other rules to become active,
whereas TD methods assign credit based solely on temporal succession.
The bucket brigade thus performs both temporal and structural credit assignment in a single mechanism.
This contrasts with the TD/backpropagation combination discussed in the preceding subsection,
which uses separate mechanisms for each kind of credit assignment.
The relative advantages of these two approaches are still to be determined.

\section{Infinite discounted predictions and the Adaptive Heuristic Critic}
All the prediction problems we have considered so far have had definite outcomes.
That is,
after some point in time the actual outcome corresponding to each prediction became known.
Supervised-learning methods require this property,
because they make no learning changes until the actual outcome is discovered,
but in some problems it never becomes completely known.
For example,
suppose you wish to predict the total return from investing in the stock of various companies;
unless a company goes out of business,
total return is never fully determined.

Actually,
there is a problem of definition here:
if a company never goes out of business and earns income every year,
the total return can be infinite.
For reasons of this sort,
infinite-horizon prediction problems usually include some form of discounting.
For example,
if some process generates costs \(c_{t+1}\) at each transition from \(t\) to \(t + 1\),
we may want \(P_t\) to predict the discounted sum
\begin{equation*}
    z_t = \sum_{k=0}^{\infty}\gamma^k c_{t+k+1},
\end{equation*}
where the discount-rate parameter \(\gamma\), \(0 \le \gamma < 1\),
determines the extent to which we are concerned with short-range or long-range prediction.

If \(P_t\) should equal the above \(z_t\),
then what are the recursive equations defining the desired relationship between temporally successive predictions?
If the predictions are accurate,
we can write
\begin{equation*}
    P_t = c_{t+1} + \gamma P_{t+1}.
\end{equation*}
The mismatch or TD error is the difference between the two sides of this equation,
\((c_{t+1} + \gamma P_{t+1}) - P_t\).
Sutton's (\cite*{sutton_temporal_1984}) Adaptive Heuristic Critic uses this error in a learning rule otherwise identical to TD(\(\lambda\))'s:
\begin{equation*}
    \Delta w_t =
    \alpha (c_{t+1} + \gamma P_{t+1} - P_t)\sum_{k=1}^t\lambda^{t-k}\grad_wP_k,
\end{equation*}
where \(P_t\) is the linear form \(w^T x_t\),
so that \(\grad_wP_t = x_t\).
Thus,
the Adaptive Heuristic Critic is probably best understood as using the linear TD method for predicting discounted cumulative outcomes.
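In the linear case this learning rule can be sketched as follows; a finite sequence with terminal prediction zero is assumed here for illustration, whereas in a genuinely infinite-horizon problem the loop would simply run on-line without terminating:

```python
import numpy as np

def ahc_td_lambda(w, xs, cs, alpha=0.1, lam=0.5, gamma=0.9):
    """Linear TD(lambda) with discounting, as in the Adaptive
    Heuristic Critic's learning rule.

    xs : observation vectors x_1 .. x_m
    cs : cs[t] is the cost c_{t+1} on the transition out of xs[t]
    The TD error at each step is c_{t+1} + gamma*P_{t+1} - P_t.
    """
    w = np.array(w, dtype=float)
    e = np.zeros_like(w)
    m = len(xs)
    for t in range(m):
        P_t = w @ xs[t]
        P_next = w @ xs[t + 1] if t + 1 < m else 0.0
        e = lam * e + xs[t]              # grad_w P_t = x_t
        w = w + alpha * (cs[t] + gamma * P_next - P_t) * e
    return w
```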

\chapter{Conclusion}
These analyses and experiments suggest that TD methods may be the learning methods of choice for many real-world learning problems.
We have argued that many of these problems involve temporal sequences of observations and predictions.
Whereas conventional,
supervised-learning approaches disregard this temporal structure,
TD methods are specially tailored to it.
As a result,
they can be computed more incrementally and require significantly less memory and peak computation.
One TD method makes exactly the same predictions and learning changes as a supervised-learning method,
while retaining these computational advantages.
Another TD method makes different learning changes,
but has been proved to converge asymptotically to the same correct predictions.
Empirically,
TD methods appear to learn faster than supervised-learning methods,
and one TD method has been proved to make optimal predictions for finite training sets that are presented repeatedly.
Overall,
TD methods appear to be computationally cheaper and to learn faster than conventional approaches to prediction learning.

The progress made in this paper has been due primarily to treating TD methods as general methods for learning to predict rather than as specialized methods for learning evaluation functions,
as they were in all previous work.
This simplification makes their theory much easier and also greatly broadens their range of applicability.
It is now clear that TD methods can be used for any pattern recognition problem in which data are gathered over time---for example,
speech recognition,
process monitoring,
target identification,
and market-trend prediction.
Potentially,
all of these can benefit from the advantages of TD methods vis-a-vis supervised-learning methods.
In speech recognition,
for example,
current learning methods cannot be applied until the correct classification of the word is known.
This means that all critical information about the waveform and how it was processed must be stored for later credit assignment.
If learning proceeded simultaneously with processing,
as in TD methods,
this storage would be avoided,
making it practical to consider far more features and combinations of features.

As general prediction-learning methods,
temporal-difference methods can also be applied to the classic problem of learning an internal model of the world.
Much of what we mean by having such a model is the ability to predict the future based on current actions and observations.
This prediction problem is a multi-step one,
and the external world is well modeled as a causal dynamical system;
hence TD methods should be applicable and advantageous.
Sutton and Pinette (1985) and Sutton and Barto (1981b) have begun to pursue one approach along these lines,
using TD methods and recurrent connectionist networks.

Animals must also face the problem of learning internal models of the world.
The learning process that seems to perform this function in animals is called Pavlovian or classical conditioning.
For example,
if a dog is repeatedly presented with the sound of a bell and then fed,
it will learn to predict the meal given just the bell,
as evidenced by salivation to the bell alone.
Some of the detailed features of this learning process suggest that animals may be using a TD method (Kehoe,
Schreurs and Graham, 1987;
Sutton and Barto, 1987).

\chapter*{Acknowledgements}
The author acknowledges especially the assistance of Andy Barto,
Martha Steenstrup,
Chuck Anderson,
John Moore,
and Harry Klopf.
I also thank Oliver Selfridge,
Pat Langley,
Ron Rivest,
Mike Grimaldi,
John Aspinall,
Gene Cooperman,
Bud Frawley,
Jonathan Bachrach,
Mike Seymour,
Steve Epstein,
Jim Kehoe,
Les Servi,
Ron Williams and Marie Goslin.
The early stages of this research were supported by AFOSR contract F33615-83-C-1078.
