In other words, how do we build an RL agent that selects the actions that maximize its expected cumulative reward?
The Policy π is the brain of our Agent: it's the function that tells us what action to take given the state we are in. So it defines the agent's behavior at a given time.
This Policy is the function we want to learn. Our goal is to find the optimal policy π*, the policy that maximizes expected return when the agent acts according to it. We find this π* through training.
There are two approaches to training our agent to find this optimal policy π*: Policy-Based methods and Value-Based methods.
In Policy-Based methods, we learn a policy function directly.
This function defines a mapping between each state and the best corresponding action. Alternatively, it can define a probability distribution over the set of possible actions at that state.
We have two types of policies: deterministic policies, which always return the same action for a given state, and stochastic policies, which output a probability distribution over actions.
If we recap: a deterministic policy maps a state directly to an action, a = π(s), while a stochastic policy outputs a probability distribution over actions, π(a|s) = P[A = a | S = s].
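To make this concrete, here is a minimal Python sketch of what these two kinds of policies could look like. The action names, the lookup table, and the probability values are all invented for illustration; they are not part of the course's environments.

```python
import random

# Hypothetical action set for a tiny grid world (illustrative only).
ACTIONS = ["up", "down", "left", "right"]

def deterministic_policy(state):
    """Deterministic policy: the same state always returns the same action, a = π(s)."""
    # Toy lookup table mapping states to actions (made-up values).
    table = {(0, 0): "right", (0, 1): "right", (0, 2): "down"}
    return table.get(state, "up")  # default action for unseen states

def stochastic_policy(state):
    """Stochastic policy: sample an action from π(a|s), a distribution over actions."""
    # Toy probability distribution over the actions (made-up values).
    probs = [0.1, 0.1, 0.2, 0.6]
    return random.choices(ACTIONS, weights=probs, k=1)[0]

print(deterministic_policy((0, 0)))  # always "right"
print(stochastic_policy((0, 0)))     # "right" most of the time, but not always
```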
In value-based methods, instead of training a policy function, we train a value function that maps a state to the expected value of being at that state.
The value of a state is the expected discounted return the agent can get if it starts in that state and then acts according to our policy.
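Written out with the standard notation (where R denotes the rewards, γ the discount factor, and S_t the state at time t), the state-value function under a policy π is:

```latex
V_{\pi}(s) = \mathbb{E}_{\pi}\left[ R_{t+1} + \gamma R_{t+2} + \gamma^{2} R_{t+3} + \dots \mid S_t = s \right]
```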
Here, “act according to our policy” just means that our policy is “go to the state with the highest value”.
Here, we see that our value function defines a value for each possible state.
Thanks to our value function, at each step our policy will select the state with the highest value: -7, then -6, then -5 (and so on) until it reaches the goal.
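As a rough illustration, here is a minimal Python sketch of this greedy behavior: the agent compares the values of the states it can reach and moves to the one with the highest value. The state names, the value numbers, and the transition map below are invented for the example.

```python
# Toy state values, as in the maze example: states closer to the goal have higher values.
state_values = {"A": -7, "B": -6, "C": -5, "goal": 0}

# Hypothetical transition map: which states are reachable from each state.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "goal"], "goal": []}

def greedy_step(state):
    """Pick the neighboring state with the highest value (acting greedily w.r.t. the value function)."""
    return max(neighbors[state], key=lambda s: state_values[s])

# Starting from A, the agent moves through states of increasing value until the goal.
state = "A"
path = [state]
while state != "goal":
    state = greedy_step(state)
    path.append(state)
print(path)  # ['A', 'B', 'C', 'goal']
```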
If we recap: in value-based methods, rather than training the policy directly, we train a value function that tells us the expected return of each state, and our policy simply uses this function to choose the most valuable state to go to.