ppo-LunarLander-v2 / README.md

Commit History

First PPO model trained for the LunarLander-v2 env
7a63367

egarciamartin committed on