Monte Carlo vs Temporal Difference Learning

The last thing we need to discuss before diving into Q-Learning is the two learning strategies.

Remember that an RL agent learns by interacting with its environment. The idea is that given the experience and the received reward, the agent will update its value function or policy.

Monte Carlo and Temporal Difference Learning are two different strategies for training our value function or our policy function. Both of them use experience to solve the RL problem.

On one hand, Monte Carlo uses an entire episode of experience before learning. On the other hand, Temporal Difference uses only a step $(S_t, A_t, R_{t+1}, S_{t+1})$ to learn.

We’ll explain both of them using a value-based method example.

Monte Carlo: learning at the end of the episode

Monte Carlo waits until the end of the episode, calculates $G_t$ (the return) and uses it as a target for updating $V(S_t)$.

So it requires a complete episode of interaction before updating our value function.

[Figure: Monte Carlo]

If we take an example:

[Figure: Monte Carlo]

By running more and more episodes, the agent will learn to play better and better.

[Figure: Monte Carlo]

For instance, if we train a state-value function using Monte Carlo:

[Figure: Monte Carlo]
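As a rough sketch of what that looks like in code (the `(state, reward)` episode format, the learning rate `lr`, and the discount `gamma` below are illustrative assumptions, not course code), the whole episode is collected first and the return $G_t$ is then used as the update target, $V(S_t) \leftarrow V(S_t) + lr * [G_t - V(S_t)]$:

```python
def monte_carlo_update(V, episode, gamma=1.0, lr=0.1):
    """Update a tabular value function from ONE complete episode.

    `episode` is a list of (S_t, R_{t+1}) pairs: each visited state and the
    reward received right after leaving it. Monte Carlo has to wait for the
    whole list before any return G_t can be computed.
    """
    G = 0.0
    # Walk the episode backwards so G_t = R_{t+1} + gamma * G_{t+1} accumulates.
    for state, reward in reversed(episode):
        G = reward + gamma * G
        # V(S_t) <- V(S_t) + lr * [G_t - V(S_t)]
        V[state] += lr * (G - V[state])
    return V


# Toy usage with made-up numbers: a 4-step episode over 5 states,
# ending with a reward of 1.
V = {s: 0.0 for s in range(5)}
episode = [(0, 0.0), (1, 0.0), (2, 0.0), (3, 1.0)]
monte_carlo_update(V, episode)
print(V)  # with gamma=1, every visited state moves 0.1 of the way toward its return of 1
```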

Temporal Difference Learning: learning at each step

Temporal Difference, on the other hand, waits for only one interaction (one step) $S_{t+1}$ to form a TD target and update $V(S_t)$ using $R_{t+1}$ and $\gamma * V(S_{t+1})$.

The idea with TD is to update $V(S_t)$ at each step.

But because we didn’t experience an entire episode, we don’t have $G_t$ (the expected return). Instead, we estimate $G_t$ by adding $R_{t+1}$ and the discounted value of the next state.

This is called bootstrapping, because TD bases its update in part on an existing estimate $V(S_{t+1})$ and not on a complete sample $G_t$.

[Figure: Temporal Difference]
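Written out, the TD update simply replaces $G_t$ with this bootstrapped target:

$$V(S_t) \leftarrow V(S_t) + lr * [R_{t+1} + \gamma * V(S_{t+1}) - V(S_t)]$$

This is the general form of the calculation applied to $S_0$ in the example below.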

This method is called TD(0) or one-step TD (it updates the value function after each individual step).

[Figure: Temporal Difference]
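And here is a minimal sketch of that update in code (again, `lr` and `gamma` are illustrative assumptions), callable after every single step rather than at the end of the episode:

```python
def td0_update(V, state, reward, next_state, gamma=1.0, lr=0.1):
    """One TD(0) step: bootstrap from the current estimate V(S_{t+1}).

    TD target: R_{t+1} + gamma * V(S_{t+1})
    Update:    V(S_t) <- V(S_t) + lr * [TD target - V(S_t)]
    """
    td_target = reward + gamma * V[next_state]
    V[state] += lr * (td_target - V[state])
    return V
```

Unlike the Monte Carlo sketch above, this can be dropped straight into the interaction loop, since it never needs the full return $G_t$.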

If we take the same example:

[Figure: Temporal Difference]

We can now update $V(S_0)$:

New $V(S_0) = V(S_0) + lr * [R_1 + \gamma * V(S_1) - V(S_0)]$

New $V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0]$

New $V(S_0) = 0.1$

So we just updated our value function for State 0.
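For completeness, the same arithmetic in code, using the values from the example above (both state values start at 0, $R_1 = 1$, $\gamma = 1$, learning rate 0.1):

```python
V = {0: 0.0, 1: 0.0}          # V(S_0) and V(S_1) both start at 0
lr, gamma, R_1 = 0.1, 1.0, 1.0
V[0] = V[0] + lr * (R_1 + gamma * V[1] - V[0])
print(V[0])                   # 0.1
```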

Now we continue to interact with this environment using our updated value function.

[Figure: Temporal Difference]

If we summarize: