nithiroj committed
Commit ea4a4cc
1 Parent(s): b0fb49a

Push Reinforce agent to the Hub

README.md CHANGED
@@ -1,16 +1,17 @@
  ---
- library_name: stable-baselines3
  tags:
  - LunarLander-v2
+ - ppo
  - deep-reinforcement-learning
  - reinforcement-learning
- - stable-baselines3
+ - custom-implementation
+ - deep-rl-class
  model-index:
  - name: PPO
  results:
  - metrics:
  - type: mean_reward
- value: 168.85 +/- 17.99
+ value: -131.97 +/- 97.59
  name: mean_reward
  task:
  type: reinforcement-learning
@@ -20,17 +21,42 @@ model-index:
  type: LunarLander-v2
  ---

- # **PPO** Agent playing **LunarLander-v2**
- This is a trained model of a **PPO** agent playing **LunarLander-v2**
- using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
-
- ## Usage (with Stable-baselines3)
- TODO: Add your code
-
-
- ```python
- from stable_baselines3 import ...
- from huggingface_sb3 import load_from_hub
-
- ...
- ```
+ # PPO Agent Playing LunarLander-v2
+
+ This is a trained model of a PPO agent playing LunarLander-v2.
+ To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
+
+ # Hyperparameters
+ ```python
+ {'exp_name': 'ppo',
+ 'seed': 1,
+ 'torch_deterministic': True,
+ 'cuda': True,
+ 'track': False,
+ 'wandb_project_name': 'cleanRL',
+ 'wandb_entity': None,
+ 'capture_video': False,
+ 'env_id': 'LunarLander-v2',
+ 'total_timesteps': 50000,
+ 'learning_rate': 0.00025,
+ 'num_envs': 4,
+ 'num_steps': 128,
+ 'anneal_lr': True,
+ 'gae': True,
+ 'gamma': 0.99,
+ 'gae_lambda': 0.95,
+ 'num_minibatches': 4,
+ 'update_epochs': 4,
+ 'norm_adv': True,
+ 'clip_coef': 0.2,
+ 'clip_vloss': True,
+ 'ent_coef': 0.01,
+ 'vf_coef': 0.5,
+ 'max_grad_norm': 0.5,
+ 'target_kl': None,
+ 'virtual_display': True,
+ 'repo_id': 'NithirojTripatarasit/ppo-LunarLander-v2',
+ 'batch_size': 512,
+ 'minibatch_size': 128}
+ ```
+
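Note that the derived values in the config above follow from the others: batch_size = num_envs * num_steps = 4 * 128 = 512, and minibatch_size = batch_size / num_minibatches = 512 / 4 = 128. For local use, the sketch below shows one way the committed model.pt could be loaded and rolled out in LunarLander-v2. It assumes the file is a full agent object saved with torch.save (as in the Unit 8 / CleanRL-style script) exposing get_action_and_value, and it uses the classic Gym reset/step API; none of this code is part of the commit.

```python
# Hypothetical evaluation sketch (not part of this commit).
# Assumes model.pt was written with torch.save(agent, "model.pt") and that the
# Agent class from the Unit 8 / CleanRL-style script is importable for unpickling.
import gym
import torch

env = gym.make("LunarLander-v2")
agent = torch.load("model.pt", map_location="cpu")
agent.eval()

obs = env.reset()
done, episode_return = False, 0.0
while not done:
    with torch.no_grad():
        # get_action_and_value(obs) -> (action, log_prob, entropy, value) in the
        # CleanRL-style agent; adapt this call if the saved agent exposes a different API.
        action, _, _, _ = agent.get_action_and_value(
            torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        )
    obs, reward, done, info = env.step(int(action.item()))
    episode_return += reward

print(f"episode return: {episode_return:.2f}")
```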
logs/events.out.tfevents.1662597235.nt-pc.13004.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1c2f18e2d759fe6a24b1bc769c3d0fbe687f7ef94a0a93038a814f5ad886acb
+ size 113680
model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cda0e6fa965668f39fdf95e4f2deb1cafd6cf6c90bf65a20693583a68d3d98e7
+ size 42689
replay.mp4 CHANGED
Binary files a/replay.mp4 and b/replay.mp4 differ
 
results.json CHANGED
@@ -1 +1 @@
- {"mean_reward": 168.84663404661714, "std_reward": 17.987145843328495, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2022-09-01T01:59:32.128173"}
+ {"env_id": "LunarLander-v2", "mean_reward": -131.97019709331514, "std_reward": 97.59489916371655, "n_evaluation_episodes": 10, "eval_datetime": "2022-09-08T07:34:34.425937"}