Q-Learning is an off-policy value-based method that uses a TD approach to train its action-value function:
Q-Learning is the algorithm we use to train our Q-function, an action-value function that determines the value of being at a particular state and taking a specific action at that state.
The Q comes from “the Quality” (the value) of that action at that state.
Let’s recap the difference between value and reward:
- The value of a state, or a state-action pair, is the expected cumulative reward our agent gets if it starts at this state (or state-action pair) and then acts according to its policy.
- The reward is the feedback we get from the environment after performing an action at a state.
Internally, our Q-function has a Q-table, a table where each cell corresponds to a state-action pair value. Think of this Q-table as the memory or cheat sheet of our Q-function.
Let’s go through an example of a maze.
The Q-table is initialized, which is why all the values are 0. This table contains, for each state, the four state-action values.
Here, for example, the state-action value of the initial state with the action “going up” is 0.
Therefore, our Q-function contains a Q-table that has the value of each state-action pair. Given a state and an action, our Q-function will search inside its Q-table to output the value.
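To make this concrete, here is a minimal Python sketch (not the course’s implementation) of a Q-table for a small maze: the table is just a 2D array with one cell per state-action pair, and the Q-function is a lookup into it. The sizes (6 states, 4 actions) are illustrative assumptions.

```python
import numpy as np

# A small maze with 6 states and 4 actions (up, down, left, right) -- assumed sizes.
n_states, n_actions = 6, 4
q_table = np.zeros((n_states, n_actions))  # one cell per state-action pair, all 0 at first

def q_value(q_table, state, action):
    """Given a state and an action, the Q-function simply looks up its Q-table."""
    return q_table[state, action]

print(q_value(q_table, state=0, action=0))  # 0.0: the cheat sheet is still empty
```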
If we recap, Q-Learning is the RL algorithm that:
- Trains a Q-function (an action-value function), whose internal memory is a Q-table containing all the state-action pair values.
- Given a state and an action, searches its Q-table for the corresponding value.
But, in the beginning, our Q-table is useless since it gives arbitrary values for each state-action pair (most of the time, we initialize the Q-table to 0). As the agent explores the environment and we update the Q-table, it will give us better and better approximations to the optimal policy.
Now that we understand what Q-Learning, Q-function, and Q-table are, let’s dive deeper into the Q-Learning algorithm.
This is the Q-Learning pseudocode; let’s study each part and see how it works with a simple example before implementing it. Don’t be intimidated by it, it’s simpler than it looks! We’ll go over each step.
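As a companion to the pseudocode, here is a minimal Python sketch of the whole training loop, assuming a Gymnasium-style environment with discrete observation and action spaces. The hyperparameter values and the exponential epsilon schedule are assumptions for illustration, not the course’s exact implementation.

```python
import numpy as np

def train_q_learning(env, n_episodes=10_000, alpha=0.1, gamma=0.99,
                     eps_start=1.0, eps_end=0.05, eps_decay=0.0005):
    # Step 1: initialize the Q-table with zeros, one cell per state-action pair
    q_table = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(n_episodes):
        # Reduce epsilon over time: explore a lot early, exploit more and more later
        epsilon = eps_end + (eps_start - eps_end) * np.exp(-eps_decay * episode)
        state, _ = env.reset()
        done = False
        while not done:
            # Step 2: choose an action with the epsilon-greedy policy
            if np.random.random() < epsilon:
                action = env.action_space.sample()       # explore: random action
            else:
                action = int(np.argmax(q_table[state]))  # exploit: best known action
            # Step 3: take the action, observe the reward and the next state
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            # Step 4: update Q(S_t, A_t) using the TD target (greedy w.r.t. the next state)
            td_target = reward + gamma * np.max(q_table[next_state])
            q_table[state, action] += alpha * (td_target - q_table[state, action])
            state = next_state
    return q_table
```

For example, with a discrete environment such as FrozenLake-v1 from Gymnasium, calling train_q_learning(gym.make("FrozenLake-v1")) returns a trained Q-table. Each step of this loop (initializing the Q-table, choosing an action with epsilon-greedy, stepping the environment, updating the Q-value) is detailed below.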
We need to initialize the Q-table for each state-action pair. Most of the time, we initialize with values of 0.
The epsilon-greedy strategy is a policy that handles the exploration/exploitation trade-off.
The idea is that we define the initial epsilon ɛ = 1.0:
- With probability 1 - ɛ: we do exploitation (i.e., our agent selects the action with the highest state-action pair value).
- With probability ɛ: we do exploration (trying a random action).
At the beginning of training, the probability of exploring is very high since ɛ is very high, so most of the time we’ll explore. But as training goes on, and our Q-table consequently gets better and better in its estimations, we progressively reduce the epsilon value, since we need less and less exploration and more and more exploitation.
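A minimal sketch of this strategy, here with a linear decay schedule (the loop sketch above used an exponential one; any decreasing schedule works, and the function names and default values are assumptions):

```python
import numpy as np

def epsilon_greedy(q_table, state, epsilon):
    """With probability epsilon we explore (random action); otherwise we exploit (greedy action)."""
    n_actions = q_table.shape[1]
    if np.random.random() < epsilon:
        return np.random.randint(n_actions)       # exploration: try a random action
    return int(np.argmax(q_table[state]))         # exploitation: take the best known action

def decayed_epsilon(episode, eps_start=1.0, eps_end=0.05, n_decay_episodes=5_000):
    """Linearly reduce epsilon: lots of exploration early on, mostly exploitation later."""
    fraction = min(episode / n_decay_episodes, 1.0)
    return eps_start + fraction * (eps_end - eps_start)
```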
Remember that in TD Learning, we update our policy or value function (depending on the RL method we choose) after one step of the interaction.
To produce our TD target, we use the immediate reward R_{t+1} plus the discounted value of the best state-action pair of the next state (we call that bootstrapping).
Therefore, our Q(S_t, A_t) update formula goes like this:

Q(S_t, A_t) ← Q(S_t, A_t) + α [ R_{t+1} + γ max_a Q(S_{t+1}, a) - Q(S_t, A_t) ]

It means that to update our Q(S_t, A_t):
- We need S_t, A_t, R_{t+1}, S_{t+1}.
- To update our Q-value at this state-action pair, we use the TD target.

How do we form the TD target?
1. We obtain the reward R_{t+1} after taking the action A_t.
2. To get the best state-action pair value for the next state, we use a greedy policy to select the next best action. Note that this is not an epsilon-greedy policy: it always takes the action with the highest state-action value.
Then, when the update of this Q-value is done, we start in a new state and select our action using an epsilon-greedy policy again.
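A minimal sketch of this update step, where alpha (the learning rate) and gamma (the discount factor) are assumed hyperparameters and q_table is the NumPy array from the earlier sketches:

```python
import numpy as np

def q_learning_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-Learning update: the TD target bootstraps on the greedy (max) value of the next state."""
    td_target = reward + gamma * np.max(q_table[next_state])   # greedy, not epsilon-greedy
    td_error = td_target - q_table[state, action]
    q_table[state, action] += alpha * td_error
    return q_table
```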
This is why we say that Q-Learning is an off-policy algorithm.
The difference is subtle:
- Off-policy: using a different policy for acting (inference) and updating (training).
- On-policy: using the same policy for acting and updating.
For instance, with Q-Learning, the epsilon-greedy policy (the acting policy) is different from the greedy policy that is used to select the best next-state action value when updating our Q-value (the updating policy). The policy we use to act is therefore not the same as the policy we use during the training (updating) part.
With Sarsa, another value-based algorithm, on the other hand, the epsilon-greedy policy also selects the next state-action pair used in the update, not a greedy policy: acting and updating use the same policy, which makes Sarsa on-policy.
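To make the contrast concrete, here is a minimal sketch of the two TD targets side by side; next_action is assumed to have been chosen by the same epsilon-greedy behaviour policy used for acting, and gamma is the assumed discount factor:

```python
import numpy as np

def q_learning_target(q_table, reward, next_state, gamma=0.99):
    """Off-policy: the target uses the greedy (max) action, regardless of what the agent will actually do."""
    return reward + gamma * np.max(q_table[next_state])

def sarsa_target(q_table, reward, next_state, next_action, gamma=0.99):
    """On-policy: the target uses the next action actually chosen by the epsilon-greedy policy."""
    return reward + gamma * q_table[next_state, next_action]
```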