Critic loss - Mean Squared Error of y - Q(s, a), where y is the expected return as seen by the Target network and Q(s, a) is the action value predicted by the Critic network. y is a moving target that the critic model tries to achieve; we make this target stable by updating the Target model slowly.
Actor loss - This is computed as the mean of the value given by the Critic network for the actions taken by the Actor network. We seek to maximize this quantity.
Hence we update the Actor network so that it produces actions that get the maximum predicted value, as seen by the Critic, for a given state.
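The two losses above can be sketched in NumPy with toy numbers (these are hypothetical values standing in for real network outputs, not part of the actual training loop):

```python
import numpy as np

# Toy batch: rewards, target-critic values for the next states, and
# critic values for the actions actually taken (all made-up numbers).
gamma = 0.99
reward_batch = np.array([1.0, 0.5, -0.2])
target_q_next = np.array([2.0, 1.0, 0.0])  # Q'(s', mu'(s')) from the target nets
critic_q = np.array([2.5, 1.2, 0.1])       # Q(s, a) from the critic

# Critic loss: MSE between the moving target y and the critic's estimate.
y = reward_batch + gamma * target_q_next
critic_loss = np.mean((y - critic_q) ** 2)

# Actor loss: negative mean critic value for the actor's chosen actions;
# minimizing this negated quantity maximizes the value the critic assigns.
actor_q = np.array([2.4, 1.1, 0.05])       # Q(s, mu(s)) for the actor's actions
actor_loss = -np.mean(actor_q)
```

Gradient descent on `actor_loss` therefore pushes the actor toward actions the critic currently rates highly.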
class Buffer:
    def __init__(self, buffer_capacity=100000, batch_size=64):
        # Max number of "experiences" to store
        self.buffer_capacity = buffer_capacity
        # Number of tuples to train on
        self.batch_size = batch_size

        # This tells us the number of times record() was called
        self.buffer_counter = 0

        # Instead of a list of tuples, as in the classic experience-replay
        # formulation, we use a separate np.array for each tuple element
        self.state_buffer = np.zeros((self.buffer_capacity, num_states))
        self.action_buffer = np.zeros((self.buffer_capacity, num_actions))
        self.reward_buffer = np.zeros((self.buffer_capacity, 1))
        self.next_state_buffer = np.zeros((self.buffer_capacity, num_states))
    # Takes an (s, a, r, s') observation tuple as input
    def record(self, obs_tuple):
        # Set index to zero if buffer_capacity is exceeded,
        # replacing old records
        index = self.buffer_counter % self.buffer_capacity

        self.state_buffer[index] = obs_tuple[0]
        self.action_buffer[index] = obs_tuple[1]
        self.reward_buffer[index] = obs_tuple[2]
        self.next_state_buffer[index] = obs_tuple[3]

        self.buffer_counter += 1
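The modulo indexing in record() turns the arrays into a ring buffer. A minimal sketch with a hypothetical capacity of 3 shows how the fourth record overwrites the oldest slot:

```python
# Ring-buffer behaviour of `buffer_counter % buffer_capacity`:
# with capacity 3, the 4th record lands back in slot 0.
buffer_capacity = 3
slots = [None] * buffer_capacity
for buffer_counter, obs in enumerate(["t0", "t1", "t2", "t3"]):
    index = buffer_counter % buffer_capacity
    slots[index] = obs
# slots is now ["t3", "t1", "t2"]: the oldest record was replaced.
```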
    # Eager execution is turned on by default in TensorFlow 2. Decorating with
    # tf.function allows TensorFlow to build a static graph out of the logic
    # and computations in our function. This provides a large speed up for
    # blocks of code that contain many small TensorFlow operations such as this one.
    @tf.function
    def update(
        self, state_batch, action_batch, reward_batch, next_state_batch,
    ):
        # Training and updating Actor & Critic networks.
        # See Pseudo Code.
        with tf.GradientTape() as tape:
            target_actions = target_actor(next_state_batch, training=True)
            y = reward_batch + gamma * target_critic(
                [next_state_batch, target_actions], training=True
            )
            critic_value = critic_model([state_batch, action_batch], training=True)
            critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value))

        critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
        critic_optimizer.apply_gradients(
            zip(critic_grad, critic_model.trainable_variables)
        )

        with tf.GradientTape() as tape:
            actions = actor_model(state_batch, training=True)
            critic_value = critic_model([state_batch, actions], training=True)
            # Used `-value` as we want to maximize the value given
            # by the critic for our actions
            actor_loss = -tf.math.reduce_mean(critic_value)

        actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
        actor_optimizer.apply_gradients(
            zip(actor_grad, actor_model.trainable_variables)
        )
    # We compute the loss and update parameters
    def learn(self):
        # Get sampling range
        record_range = min(self.buffer_counter, self.buffer_capacity)
        # Randomly sample indices
        batch_indices = np.random.choice(record_range, self.batch_size)

        # Convert to tensors
        state_batch = tf.convert_to_tensor(self.state_buffer[batch_indices])
        action_batch = tf.convert_to_tensor(self.action_buffer[batch_indices])
        reward_batch = tf.convert_to_tensor(self.reward_buffer[batch_indices])
        reward_batch = tf.cast(reward_batch, dtype=tf.float32)
        next_state_batch = tf.convert_to_tensor(self.next_state_buffer[batch_indices])

        self.update(state_batch, action_batch, reward_batch, next_state_batch)
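Taking `min(buffer_counter, buffer_capacity)` ensures learn() only samples slots that have actually been filled; np.random.choice samples with replacement by default, so a full batch can be drawn even from a nearly empty buffer. A small sketch with hypothetical counts:

```python
import numpy as np

# With only 5 records in a capacity-100000 buffer, sampled indices stay < 5,
# yet we can still draw a full batch of 64 (sampling with replacement).
buffer_counter, buffer_capacity, batch_size = 5, 100000, 64
record_range = min(buffer_counter, buffer_capacity)
batch_indices = np.random.choice(record_range, batch_size)
```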
# This updates target parameters slowly,
# based on rate `tau`, which is much less than one.
@tf.function
def update_target(target_weights, weights, tau):
    for (a, b) in zip(target_weights, weights):
        a.assign(b * tau + a * (1 - tau))
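A numeric sketch of the soft update, using made-up scalar weights: with a small `tau`, each call moves a target weight only a small fraction of the way toward the online weight, which is what keeps the critic's target y stable.

```python
import numpy as np

# Soft update with tau = 0.005: the target weight moves 0.5% of the
# distance toward the online weight per call.
tau = 0.005
target_w = np.array([1.0])   # hypothetical target-network weight
online_w = np.array([3.0])   # hypothetical online-network weight
target_w = online_w * tau + target_w * (1 - tau)
# target_w is now [1.01]: 1.0 + 0.005 * (3.0 - 1.0)
```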
Here we define the Actor and Critic networks. These are basic Dense models with ReLU activation.
Note: We need the initialization for the last layer of the Actor to be between -0.003 and 0.003, as this prevents us from getting 1 or -1 output values in the initial stages, which would squash our gradients to zero, since we use the tanh activation.
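The saturation effect described above can be sketched directly from the derivative of tanh, which is 1 - tanh(x)^2 and vanishes for large |x|:

```python
import numpy as np

# Gradient of tanh(x) is 1 - tanh(x)^2: near zero it is almost 1,
# but for large pre-activations it collapses toward 0.
def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

small = tanh_grad(0.003)  # tiny init keeps the gradient near 1: learning proceeds
large = tanh_grad(10.0)   # a saturated output squashes the gradient to ~0
```

This is why the last layer is initialized in such a narrow range.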
def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)