running reward: 128.99 at episode 270
running reward: 157.48 at episode 280
running reward: 174.54 at episode 290
running reward: 184.76 at episode 300
running reward: 190.87 at episode 310
running reward: 194.54 at episode 320
Solved at episode 322!
Visualizations
In early stages of training: [GIF hosted on Imgur]
In later stages of training: [GIF hosted on Imgur]
Implementing the DDPG algorithm on the Inverted Pendulum problem.
Introduction
Deep Deterministic Policy Gradient (DDPG) is a model-free, off-policy algorithm for learning continuous actions.
It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.
This tutorial closely follows the paper Continuous control with deep reinforcement learning.
Problem
We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.
What makes this problem challenging for Q-learning algorithms is that the actions are continuous instead of discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from an infinite number of actions ranging from -2 to +2.
Quick theory
Just like the Actor-Critic method, we have two networks:
Actor - It proposes an action given a state.
Critic - It predicts whether the action is good (positive value) or bad (negative value) given a state and an action. (A rough sketch of both networks follows below.)
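To make this concrete, here is a minimal sketch of what these two networks can look like in Keras. The layer sizes are illustrative assumptions, and the constants num_states, num_actions, and upper_bound are hard-coded to the Pendulum values we read from the environment further below; treat this as a sketch rather than the tutorial's definitive architecture.

import tensorflow as tf
from tensorflow.keras import layers

num_states = 3      # Pendulum observation: cos(theta), sin(theta), angular velocity
num_actions = 1     # a single continuous torque value
upper_bound = 2.0   # Pendulum's maximum torque magnitude

def get_actor():
    # Maps a state to a continuous action in [-upper_bound, upper_bound].
    inputs = layers.Input(shape=(num_states,))
    out = layers.Dense(256, activation="relu")(inputs)
    out = layers.Dense(256, activation="relu")(out)
    # tanh bounds the raw output to [-1, 1]; scaling stretches it to the action range.
    outputs = layers.Dense(num_actions, activation="tanh")(out) * upper_bound
    return tf.keras.Model(inputs, outputs)

def get_critic():
    # Maps a (state, action) pair to a single value estimate.
    state_input = layers.Input(shape=(num_states,))
    action_input = layers.Input(shape=(num_actions,))
    concat = layers.Concatenate()([state_input, action_input])
    out = layers.Dense(256, activation="relu")(concat)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1)(out)
    return tf.keras.Model([state_input, action_input], outputs)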
On top of DPG, DDPG uses two more techniques, both borrowed from DQN:
First, it uses two Target networks.
Why? Because it adds stability to training. In short, we are learning from estimated targets, and the Target networks are updated slowly, hence keeping our estimated targets stable.
Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move". See this StackOverflow answer.
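In code, the slow update is usually a "soft" blend of the learned weights into the target weights after each training step. Below is a minimal sketch assuming a small update rate tau (for example 0.005); the specific rate is a tuning choice, not something fixed by the algorithm.

import tensorflow as tf

@tf.function
def update_target(target_weights, weights, tau):
    # Blend a small fraction of the learned weights into the target weights,
    # so the targets drift slowly instead of jumping after every update.
    for (a, b) in zip(target_weights, weights):
        a.assign(b * tau + a * (1 - tau))

With tau = 1 this would copy the weights outright; with a small tau the targets stay nearly stationary between updates, which is what keeps the learning targets stable.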
Second, it uses Experience Replay.
We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far.
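For intuition, a replay buffer can be as small as the sketch below; the class name, capacity, and batch size are illustrative assumptions (the tutorial's actual Buffer class is defined later).

import random
from collections import deque

class SimpleReplayBuffer:
    def __init__(self, capacity=50000):
        # Once full, the oldest experience is discarded automatically.
        self.memory = deque(maxlen=capacity)

    def record(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size=64):
        # Uniform sampling breaks the correlation between consecutive steps.
        return random.sample(self.memory, batch_size)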
Now, let's see how it is implemented.
import gym
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
We use OpenAI Gym to create the environment. We will use the upper_bound parameter to scale our actions later.
problem = "Pendulum-v0"
env = gym.make(problem)

num_states = env.observation_space.shape[0]
print("Size of State Space -> {}".format(num_states))
num_actions = env.action_space.shape[0]
print("Size of Action Space -> {}".format(num_actions))

upper_bound = env.action_space.high[0]
lower_bound = env.action_space.low[0]

print("Max Value of Action -> {}".format(upper_bound))
print("Min Value of Action -> {}".format(lower_bound))
Size of State Space -> 3
Size of Action Space -> 1
Max Value of Action -> 2.0
Min Value of Action -> -2.0
To implement better exploration by the Actor network, we use noisy perturbations, specifically an Ornstein-Uhlenbeck process for generating noise, as described in the paper. It samples noise from a correlated normal distribution.
class OUActionNoise:
    def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None):
        self.theta = theta
        self.mean = mean
        self.std_dev = std_deviation
        self.dt = dt
        self.x_initial = x_initial
        self.reset()

    def __call__(self):
        # Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process.
        x = (
            self.x_prev
            + self.theta * (self.mean - self.x_prev) * self.dt
            + self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape)
        )
        # Store x into x_prev
        # Makes next noise dependent on current one
        self.x_prev = x
        return x

    def reset(self):
        if self.x_initial is not None:
            self.x_prev = self.x_initial
        else:
            self.x_prev = np.zeros_like(self.mean)
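For example, assuming illustrative values for the mean and standard deviation, the process is created once and then called at every step, so each sample stays correlated with the previous one:

ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=0.2 * np.ones(1))
noise_sample = ou_noise()  # depends on the previous sample via x_prev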
The Buffer class implements Experience Replay.
Algorithm