---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 301.97 +/- 19.65
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.
The agent was trained with a custom PPO implementation inspired by
[a tutorial by Costa Huang](https://www.youtube.com/watch?v=MEt6rrxH8W4).

This work relates to Unit 8, Part 1 of the Hugging Face Deep RL course. I had to slightly modify
some pieces of the provided notebook because I used gymnasium instead of gym; a sketch of the API
difference is shown below. The PPO implementation is available on GitHub:
[https://github.com/micdestefano/micppo](https://github.com/micdestefano/micppo).
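
As a minimal illustrative sketch (not code from the original notebook or from micppo), the main API change when moving from gym to gymnasium is that `reset()` returns an `(observation, info)` pair and `step()` returns five values, splitting the old `done` flag into `terminated` and `truncated`:

```python
import gymnasium as gym

# gymnasium (unlike classic gym) returns (obs, info) from reset()
# and (obs, reward, terminated, truncated, info) from step().
env = gym.make("LunarLander-v2")
obs, info = env.reset(seed=1)
done = False
while not done:
    action = env.action_space.sample()  # stand-in for the trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```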

# Hyperparameters
```python
{
  'exp_name': 'micppo',
  'gym_id': 'LunarLander-v2',
  'learning_rate': 0.00025,
  'min_learning_rate_ratio': 0.01,
  'seed': 1,
  'total_timesteps': 10000000,
  'torch_not_deterministic': False,
  'no_cuda': False,
  'capture_video': True,
  'hidden_size': 256,
  'num_hidden_layers': 3,
  'activation': 'leaky-relu',
  'num_checkpoints': 4,
  'num_envs': 8,
  'num_steps': 2048,
  'no_lr_annealing': False,
  'no_gae': False,
  'gamma': 0.99,
  'gae_lambda': 0.95,
  'num_minibatches': 16,
  'num_update_epochs': 32,
  'no_advantage_normalization': False,
  'clip_coef': 0.2,
  'no_value_loss_clip': False,
  'ent_coef': 0.01,
  'vf_coef': 0.5,
  'max_grad_norm': 0.5,
  'target_kl': None,
  'batch_size': 16384,
  'minibatch_size': 1024,
}
```
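
For context, in CleanRL-style PPO implementations such as the tutorial linked above, the two batch sizes are usually derived from the rollout settings rather than set independently, and the values listed here are consistent with that convention. The snippet below is a sketch of that relation, not code taken from micppo:

```python
num_envs = 8
num_steps = 2048
num_minibatches = 16

# Each rollout collects num_steps transitions from every parallel env.
batch_size = num_envs * num_steps               # 8 * 2048 = 16384
minibatch_size = batch_size // num_minibatches  # 16384 // 16 = 1024

# 'min_learning_rate_ratio': 0.01 presumably means the linear learning-rate
# annealing stops at 1% of the initial rate instead of decaying to zero
# (an assumption based on the parameter name).
```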