ppo-LunarLander-v2 / results.json
Commit 0fa43dc: First PPO Model trained on LunarLander-v2
{"mean_reward": 256.2339051785674, "std_reward": 17.594155066877402, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2023-02-25T16:17:59.386843"}