A2C Agent playing PandaReachDense-v3
General information about the project:
This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. It controls a robotic arm, moving its end-effector to a target position (a dense-reward reach task).
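For context, a minimal sketch of creating the environment (assuming panda-gym is installed; importing panda_gym registers the Panda environments with Gymnasium). Its observations are dictionaries, which is why MultiInputPolicy is used below instead of MlpPolicy:

import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3 with Gymnasium

env = gym.make("PandaReachDense-v3")
# The observation is a dict ("observation", "achieved_goal", "desired_goal"),
# so stable-baselines3 needs MultiInputPolicy rather than MlpPolicy.
print(env.observation_space)
print(env.action_space)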
What I did:
Manually tuned hyperparameters by passing learning_rate=0.0007, n_steps=5, gamma=0.99, gae_lambda=0.95 to the A2C constructor:
import panda_gym  # registers the Panda environments
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env

# assumed environment setup; a single env created with gym.make also works
env = make_vec_env("PandaReachDense-v3")

model = A2C(policy="MultiInputPolicy",
            env=env,
            learning_rate=0.0007,
            n_steps=5,
            gamma=0.99,
            gae_lambda=0.95,
            verbose=1)
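After constructing the model, training, saving, and a quick evaluation look like this (a minimal sketch; the timestep budget and file name are illustrative assumptions, not the exact values behind the reported result):

model.learn(total_timesteps=1_000_000)
model.save("a2c-PandaReachDense-v3")

from stable_baselines3.common.evaluation import evaluate_policy
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")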
Links to relevant resources, such as tutorials:
Reinforcement Learning Tips and Tricks: https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html
A GitHub training framework (RL Baselines3 Zoo): https://github.com/DLR-RM/rl-baselines3-zoo
Poe (GPT-4): Showed me how to use Optuna for automated hyperparameter optimization, but I was still learning how it worked and couldn't get it to run properly.
import optuna
import panda_gym  # registers the Panda environments
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

def optimize_agent(trial):
    # suggest_loguniform/suggest_uniform are deprecated in Optuna; use suggest_float
    learning_rate = trial.suggest_float('learning_rate', 1e-5, 1, log=True)
    gamma = trial.suggest_float('gamma', 0.8, 0.9999)
    gae_lambda = trial.suggest_float('gae_lambda', 0.8, 0.99)
    n_steps = trial.suggest_int('n_steps', 5, 20)
    # create the env inside the trial so each run starts fresh
    env = make_vec_env("PandaReachDense-v3")
    # MultiInputPolicy is required because the observation space is a dict
    model = A2C('MultiInputPolicy', env, verbose=0, learning_rate=learning_rate,
                gamma=gamma, gae_lambda=gae_lambda, n_steps=n_steps)
    model.learn(total_timesteps=5000)
    # score the trial by evaluated mean episode reward, not the raw rollout buffer
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=5)
    return mean_reward

study = optuna.create_study(direction='maximize')
study.optimize(optimize_agent, n_trials=100)
print('Best hyperparameters:', study.best_params)
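Once the study finishes, the best parameters can be fed straight back into a full-length training run, since study.best_params maps one-to-one onto A2C keyword arguments (a sketch; the timestep budget and save name are assumptions):

env = make_vec_env("PandaReachDense-v3")
best_model = A2C('MultiInputPolicy', env, verbose=1, **study.best_params)
best_model.learn(total_timesteps=1_000_000)
best_model.save("a2c-PandaReachDense-v3-tuned")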
Evaluation results
- mean_reward on PandaReachDense-v3: -0.24 +/- 0.09 (self-reported)