model.compile(
    optimizer="adam",
    loss="mean_squared_error",
    metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.fit(
    x_train,
    y_train,
    epochs=200,
    verbose=0,
    callbacks=[keras.callbacks.TensorBoard(log_dir=f"logs/{model_name}")],
)
To load TensorBoard from a Jupyter notebook, you can use the %tensorboard magic:
%tensorboard --logdir logs
TensorBoard lets you monitor metrics and examine the training curve.
Tensorboard training graph
TensorBoard also allows you to explore the computation graph used in your models.
Tensorboard graph exploration
The ability to introspect into your models can be valuable during debugging.
Conclusion
Porting existing NumPy code to Keras models using the tensorflow_numpy API is easy! By integrating with Keras you gain the ability to use existing Keras callbacks, metrics, and optimizers, easily distribute your training, and use TensorBoard.
Migrating a more complex model, such as a ResNet, to the TensorFlow NumPy API would be a great follow-up learning exercise.
Several open source NumPy ResNet implementations are available online.
Implement Actor Critic Method in CartPole environment.
Introduction
This script shows an implementation of the Actor Critic method on the CartPole-v0 environment.
Actor Critic Method
As an agent takes actions and moves through an environment, it learns to map the observed state of the environment to two possible outputs:
Recommended action: A probability value for each action in the action space. The part of the agent responsible for this output is called the actor.
Estimated rewards in the future: The sum of all rewards it expects to receive in the future. The part of the agent responsible for this output is the critic.
The actor and critic learn to perform their tasks such that the recommended actions from the actor maximize the rewards.
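The critic's target, the sum of future rewards, is usually computed as a discounted return: rewards further in the future count for less. A minimal sketch of that computation, assuming a discount factor gamma of 0.99 (the value also used in the configuration below):

```python
gamma = 0.99  # discount factor for future rewards
rewards = [1.0, 1.0, 1.0]  # CartPole pays +1 per upright time step

# Walk backwards through the episode so each position's return
# accumulates all discounted rewards that come after it.
returns = []
discounted_sum = 0.0
for r in reversed(rewards):
    discounted_sum = r + gamma * discounted_sum
    returns.insert(0, discounted_sum)

print(returns)  # first entry is 1 + 0.99 * (1 + 0.99 * 1) ≈ 2.9701
```

The backwards pass makes the computation linear in the episode length, rather than quadratic as a naive forward double loop would be.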
CartPole-v0
A pole is attached to a cart placed on a frictionless track. The agent has to apply force to move the cart. It is rewarded for every time step the pole remains upright. The agent, therefore, must learn to keep the pole from falling over.
References
CartPole
Actor Critic Method
Setup
import gym
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Configuration parameters for the whole setup
seed = 42
gamma = 0.99  # Discount factor for past rewards
max_steps_per_episode = 10000
env = gym.make("CartPole-v0")  # Create the environment
env.seed(seed)
eps = np.finfo(np.float32).eps.item()  # Smallest number such that 1.0 + eps != 1.0
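The eps constant is the float32 machine epsilon. A typical use for it in scripts like this one (shown here as a hypothetical normalization step, not code from this example) is to keep a division safe when standardizing returns whose standard deviation could be zero:

```python
import numpy as np

eps = np.finfo(np.float32).eps.item()
assert 1.0 + eps != 1.0  # tiny, but still representable

# Hypothetical normalization step: if every return is identical,
# std() is 0, and eps prevents a division by zero.
returns = np.array([0.5, 0.5, 0.5])
normalized = (returns - returns.mean()) / (returns.std() + eps)
```

Here normalized comes out as all zeros instead of raising a warning or producing NaNs.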
Implement Actor Critic network
This network learns two functions:
Actor: This takes as input the state of our environment and returns a probability value for each action in its action space.
Critic: This takes as input the state of our environment and returns an estimate of total rewards in the future.
In our implementation, they share the initial layer.
num_inputs = 4
num_actions = 2
num_hidden = 128

inputs = layers.Input(shape=(num_inputs,))
common = layers.Dense(num_hidden, activation="relu")(inputs)
action = layers.Dense(num_actions, activation="softmax")(common)
critic = layers.Dense(1)(common)

model = keras.Model(inputs=inputs, outputs=[action, critic])
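To make the two heads concrete, here is a NumPy-only sketch of the forward pass with random stand-in weights (illustrative only, not the trained model): the shared relu layer feeds both heads, the softmax actor head yields a probability for each of the two actions, and the critic head yields a single value estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in weights for the three Dense layers (illustrative only)
W_common = rng.normal(scale=0.1, size=(4, 128))
W_actor = rng.normal(scale=0.1, size=(128, 2))
W_critic = rng.normal(scale=0.1, size=(128, 1))

state = np.array([0.02, -0.01, 0.03, 0.01])  # one CartPole observation

common = np.maximum(state @ W_common, 0.0)            # shared Dense + relu
logits = common @ W_actor
action_probs = np.exp(logits) / np.exp(logits).sum()  # softmax actor head
value = (common @ W_critic)[0]                        # scalar critic estimate

# action_probs has shape (2,) and sums to 1; value is a single float
```

Sharing the initial layer means both heads are trained on the same state features, which is the design choice the text above describes.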
Train
optimizer = keras.optimizers.Adam(learning_rate=0.01)
huber_loss = keras.losses.Huber()
action_probs_history = []
critic_value_history = []
rewards_history = []
running_reward = 0
episode_count = 0

while True:  # Run until solved
    state = env.reset()
    episode_reward = 0
    with tf.GradientTape() as tape:
        for timestep in range(1, max_steps_per_episode):
            # env.render(); Adding this line would show the attempts
            # of the agent in a pop up window.
            state = tf.convert_to_tensor(state)
            state = tf.expand_dims(state, 0)