ppo-LunarLander-v2 / README.md

Commit History

Parameters: PPO model, batch_size: 32, n_steps: 512, epochs: 10, gamma: 0.999, gae_lambda: 0.95, ent_coef: 0.01, total_timesteps=2000000
7125b9f

AigizK committed on

Upload PPO LunarLander-v2 trained agent
f436ed0

AigizK committed on
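
The hyperparameters listed in commit 7125b9f map directly onto the PPO constructor in stable-baselines3. The snippet below is a minimal reproduction sketch, assuming stable-baselines3 and Gymnasium were used; the `MlpPolicy` choice and the save path are assumptions, since the actual training script is not part of this commit history.

```python
# Minimal training sketch (not the exact script used for this repo), assuming
# stable-baselines3 and Gymnasium. Hyperparameters are taken from the
# commit message of 7125b9f.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v2")

model = PPO(
    policy="MlpPolicy",   # assumed: standard MLP policy for low-dimensional observations
    env=env,
    n_steps=512,          # rollout length collected before each update
    batch_size=32,        # minibatch size for each gradient step
    n_epochs=10,          # optimization epochs per rollout
    gamma=0.999,          # discount factor
    gae_lambda=0.95,      # GAE smoothing parameter
    ent_coef=0.01,        # entropy bonus to encourage exploration
    verbose=1,
)

model.learn(total_timesteps=2_000_000)
model.save("ppo-LunarLander-v2")  # assumed save path, matching the repo name
```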