---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 271.63 +/- 13.52
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
The PPO model is saved as `ppo-LunarLander-v2.zip`.

There are two ways to load this model:
1. Load the model directly from the Hugging Face Hub. This requires the `load_from_hub` function from the `huggingface_sb3` package:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

repo_id = "buildthemachine/ppo-LunarLander-v2"  # The Hub repo id
filename = "ppo-LunarLander-v2.zip"  # The model filename

# The model was trained on Python 3.8, which uses pickle protocol 5,
# while Python 3.6 and 3.7 only support protocol 4.
# For compatibility we need to:
# 1. Install pickle5 (done at the beginning of the Colab notebook)
# 2. Pass a custom_objects dict to PPO.load() so that these schedule
#    objects are replaced instead of unpickled
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
```
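
As a quick sanity check against the reported mean reward, you can evaluate the loaded policy with stable-baselines3's `evaluate_policy` helper. This is a minimal sketch, assuming the classic `gym` package with Box2D installed:

```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# Create an evaluation environment with the same id the model was trained on
eval_env = gym.make("LunarLander-v2")

# Run 10 deterministic episodes and report the mean return
mean_reward, std_reward = evaluate_policy(
    model, eval_env, n_eval_episodes=10, deterministic=True
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```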

2. Load a local copy of the zip file directly with `PPO.load()`:
```python
import gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v2")

# For reference, these are the hyperparameters the agent was trained with
model = PPO(policy="MlpPolicy",
            env=env,
            learning_rate=3e-4,
            n_steps=1024,
            batch_size=64,
            n_epochs=4,
            gamma=0.999,
            gae_lambda=0.98,
            ent_coef=0.01,
            verbose=1)

# Loading the checkpoint replaces the freshly initialized model above;
# passing env lets the loaded model be evaluated or trained further
model_name = "ppo-LunarLander-v2"
model = PPO.load(f"/content/drive/MyDrive/Colab Notebooks/RL_tutorial_model_save/{model_name}", env=env)
```
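
Once loaded, you can watch the agent fly. A minimal sketch, assuming the classic Gym step API (gym < 0.26) and a working render backend:

```python
obs = env.reset()
for _ in range(1000):
    # Greedy action from the trained policy
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()
```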