Episode * 77 * Avg Reward is ==> -158.68906473090686
Episode * 78 * Avg Reward is ==> -164.60260866654318
Episode * 79 * Avg Reward is ==> -161.5493472156026
Episode * 80 * Avg Reward is ==> -152.48077012719403
Episode * 81 * Avg Reward is ==> -149.52532010375975
Episode * 82 * Avg Reward is ==> -149.61942419730423
Episode * 83 * Avg Reward is ==> -149.82443455067468
Episode * 84 * Avg Reward is ==> -149.80009937226978
Episode * 85 * Avg Reward is ==> -144.51659331262107
Episode * 86 * Avg Reward is ==> -150.7545561142967
Episode * 87 * Avg Reward is ==> -153.84772667131307
Episode * 88 * Avg Reward is ==> -151.35200443047225
Episode * 89 * Avg Reward is ==> -148.30392250041828
Episode * 90 * Avg Reward is ==> -151.33886235855053
Episode * 91 * Avg Reward is ==> -151.153096135589
Episode * 92 * Avg Reward is ==> -151.19626034791332
Episode * 93 * Avg Reward is ==> -151.15870791946685
Episode * 94 * Avg Reward is ==> -154.2673372216281
Episode * 95 * Avg Reward is ==> -150.40737651480134
Episode * 96 * Avg Reward is ==> -147.7969116731913
Episode * 97 * Avg Reward is ==> -147.88640802454557
Episode * 98 * Avg Reward is ==> -144.88997165191319
Episode * 99 * Avg Reward is ==> -142.22158276699662
[Plot: average episodic reward per episode]
If training proceeds correctly, the average episodic reward will increase over time.
Feel free to try different learning rates, tau values, and architectures for the Actor and Critic networks.
The Inverted Pendulum problem has low complexity, but DDPG works well on many other problems.
Another great environment to try this on is LunarLanderContinuous-v2, but it will take more episodes to obtain good results.
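The tau value mentioned above controls the Polyak averaging used to update DDPG's target networks. A minimal sketch of that soft update, using plain NumPy arrays in place of real layer weights (the function name and the scalar example below are illustrative, not part of the script above):

```python
import numpy as np

def soft_update(target_weights, online_weights, tau=0.005):
    # Polyak averaging: theta_target <- tau * theta + (1 - tau) * theta_target
    return [tau * w + (1.0 - tau) * tw
            for tw, w in zip(target_weights, online_weights)]

# Toy demonstration with single-element "weights"
target = [np.array(0.0)]
online = [np.array(1.0)]
target = soft_update(target, online, tau=0.5)
```

A small tau makes the target networks track the online networks slowly, which stabilizes the bootstrapped critic targets.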
# Save the weights
actor_model.save_weights("pendulum_actor.h5")
critic_model.save_weights("pendulum_critic.h5")
target_actor.save_weights("pendulum_target_actor.h5")
target_critic.save_weights("pendulum_target_critic.h5")
Before training:
[GIF: pendulum before training]
After 100 episodes:
[GIF: pendulum after 100 episodes]
Play Atari Breakout with a Deep Q-Network.
Introduction
This script shows an implementation of Deep Q-Learning on the BreakoutNoFrameskip-v4 environment.
Deep Q-Learning
As an agent takes actions and moves through an environment, it learns to map the observed state of the environment to an action. An agent chooses an action in a given state based on a "Q-value", an estimate of the long-term reward expected from taking that action in that state. A Q-Learning agent learns to perform its task such that the recommended action maximizes the potential future rewards. Q-Learning is considered an "off-policy" method because its Q-values are updated assuming the best next action is taken, even if the behavior policy actually chose a different one.
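The off-policy update described above is the classic Q-Learning rule: Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)). A tabular sketch of one update step, where the 2-state table, the learning rate, and the transition values are hypothetical toy numbers chosen for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action Q-table for illustration
Q = np.zeros((2, 2))
alpha, gamma = 0.1, 0.99

def q_update(Q, s, a, r, s_next, alpha, gamma):
    # Off-policy target: bootstrap from the best next action,
    # regardless of which action the agent actually takes next
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# One transition: state 0, action 1, reward 1.0, next state 1
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1, alpha=alpha, gamma=gamma)
```

Deep Q-Learning replaces this table with a neural network that predicts all action values for a given state, but the target being regressed toward has the same max-over-next-actions form.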
Atari Breakout
In this environment, a paddle moves along the bottom of the screen, returning a ball that destroys blocks at the top of the screen. The aim of the game is to remove all blocks and break out of the level. The agent must learn to control the paddle by moving left and right, returning the ball and removing all the blocks without letting the ball pass the paddle.
Note
The DeepMind paper trained for "a total of 50 million frames (that is, around 38 days of game experience in total)". However, this script will give good results at around 10 million frames, which can be processed in less than 24 hours on a modern machine.
References
Q-Learning
Deep Q-Learning
Setup
from baselines.common.atari_wrappers import make_atari, wrap_deepmind
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Configuration parameters for the whole setup
seed = 42
gamma = 0.99  # Discount factor for past rewards
epsilon = 1.0  # Epsilon greedy parameter
epsilon_min = 0.1  # Minimum epsilon greedy parameter
epsilon_max = 1.0  # Maximum epsilon greedy parameter
epsilon_interval = (
    epsilon_max - epsilon_min
)  # Rate at which to reduce chance of random action being taken
batch_size = 32  # Size of batch taken from replay buffer
max_steps_per_episode = 10000

# Use the Baselines Atari environment because of DeepMind helper functions
env = make_atari("BreakoutNoFrameskip-v4")
# Wrap the frames: grayscale, stack four frames, and scale to a smaller ratio
env = wrap_deepmind(env, frame_stack=True, scale=True)
env.seed(seed)
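A sketch of how the epsilon parameters above are typically used during training: explore with probability epsilon, otherwise act greedily on the predicted Q-values, and anneal epsilon linearly toward epsilon_min. The epsilon_greedy_frames constant and the function names here are assumptions for illustration, not part of the setup code above:

```python
import random

epsilon = 1.0
epsilon_min = 0.1
epsilon_interval = epsilon - epsilon_min
epsilon_greedy_frames = 1_000_000.0  # Frames over which to anneal (assumed value)
num_actions = 4

def select_action(q_values, epsilon):
    # Explore with probability epsilon, otherwise act greedily
    if random.random() < epsilon:
        return random.randrange(num_actions)
    return max(range(num_actions), key=lambda a: q_values[a])

def decay_epsilon(epsilon):
    # Linearly reduce the chance of a random action, floored at epsilon_min
    epsilon -= epsilon_interval / epsilon_greedy_frames
    return max(epsilon, epsilon_min)
```

Starting fully random (epsilon_max = 1.0) and decaying toward epsilon_min keeps some exploration throughout training while letting the learned policy dominate later on.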
Implement the Deep Q-Network
This network learns an approximation of the Q-table, which is a mapping between the states and the actions that an agent can take. For every state we'll have four actions that can be taken. The environment provides the state, and the action is chosen by selecting the largest of the four Q-values predicted in the output layer.
num_actions = 4

def create_q_model():
    # Network defined by the DeepMind paper
    inputs = layers.Input(shape=(84, 84, 4,))
    # Convolutions on the frames on the screen
    layer1 = layers.Conv2D(32, 8, strides=4, activation="relu")(inputs)
    layer2 = layers.Conv2D(64, 4, strides=2, activation="relu")(layer1)
    layer3 = layers.Conv2D(64, 3, strides=1, activation="relu")(layer2)
    layer4 = layers.Flatten()(layer3)
    layer5 = layers.Dense(512, activation="relu")(layer4)
    action = layers.Dense(num_actions, activation="linear")(layer5)
    return keras.Model(inputs=inputs, outputs=action)