lethebodies committed
Commit 2c1d722
1 Parent(s): 5adbf42

Update README.md


Add a code example to evaluate the PPO model in the README file

Files changed (1)
  1. README.md +32 -3
README.md CHANGED
@@ -26,12 +26,41 @@ This is a trained model of a **PPO** agent playing **LunarLander-v2**
  using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

  ## Usage (with Stable-baselines3)
- TODO: Add your code
-

  ```python
  from stable_baselines3 import ...
  from huggingface_sb3 import load_from_hub

- ...
  ```
+
+ Use the model like this:
+
+ ```python
+ import gym
+
+ from huggingface_sb3 import load_from_hub
+ from stable_baselines3 import PPO
+ from stable_baselines3.common.evaluation import evaluate_policy
+
+ # Retrieve the model from the hub
+ ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
+ ## filename = name of the model zip file from the repository
+ checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
+ model = PPO.load(checkpoint)
+
+ # Evaluate the agent
+ eval_env = gym.make('LunarLander-v2')
+ mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
+ print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
+
+ # Watch the agent play
+ obs = eval_env.reset()
+ for i in range(1000):
+     action, _state = model.predict(obs)
+     obs, reward, done, info = eval_env.step(action)
+     eval_env.render()
+     if done:
+         obs = eval_env.reset()
+ eval_env.close()
+
+ ```
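
Note that the snippet added above targets the legacy `gym` package and its four-value `step()` API. Below is a minimal sketch of the same load-evaluate-watch flow under the maintained Gymnasium API; it assumes stable-baselines3 >= 2.0 and a `gymnasium` version that still registers `LunarLander-v2`, which are my assumptions rather than part of this commit.

```python
import gymnasium as gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Retrieve the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# In Gymnasium the render mode is fixed when the env is constructed
eval_env = gym.make("LunarLander-v2", render_mode="human")

# Evaluate the agent over 10 episodes
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")

# Watch the agent play: Gymnasium's reset() returns (obs, info) and
# step() returns (obs, reward, terminated, truncated, info)
obs, info = eval_env.reset()
for _ in range(1000):
    action, _state = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = eval_env.step(action)
    if terminated or truncated:
        obs, info = eval_env.reset()
eval_env.close()
```

`evaluate_policy` handles episode resets internally, so only the viewing loop needs the manual reset logic.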