Update README.md
README.md
# **Reinforce** Agent playing **LunarLanderContinuous-v2**
This is a custom REINFORCE RL agent. Performance was measured over 900 episodes.

To try the agent, import the `ParameterisedPolicy` class from the `agent_class.py` file (see the minimal example below).
Training progress:

![training](training_graph.jpg)
Each number on the X axis is an average over 40 episodes, and each episode lasted about 500 timesteps on average, so in total the agent was trained for roughly 5e6 timesteps (about 10,000 episodes).
Learning rate decay schedule: <code>torch.optim.lr_scheduler.StepLR(opt, step_size=4000, gamma=0.7)</code>. The training code is provided in the `training.py` file for reference.
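As a sketch of how that schedule might be wired into training (only the `StepLR` arguments come from the line above; the optimizer type, initial learning rate, and the stand-in network are assumptions for illustration):

```python
import torch

# Stand-in policy network: the repo's ParameterisedPolicy (agent_class.py)
# is not fully specified here, so a small linear layer keeps the snippet
# self-contained. LunarLanderContinuous-v2 has 8-dimensional observations
# and 2-dimensional continuous actions.
policy = torch.nn.Linear(8, 2)

# Assumed optimizer and initial learning rate; only the StepLR arguments
# below are taken from the README.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=4000, gamma=0.7)

for episode in range(3):
    # ... collect an episode, compute the REINFORCE loss, call backward() ...
    opt.step()
    scheduler.step()  # multiplies the lr by 0.7 once every 4000 calls
    print(scheduler.get_last_lr())
```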

Minimal code to use the agent:
```
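# NOTE: the original snippet is truncated in this view, so the body below
# is an assumed sketch. The import is stated in the README, but the
# ParameterisedPolicy constructor and the inference call are illustrative.
import gym
import torch
from agent_class import ParameterisedPolicy

env = gym.make('LunarLanderContinuous-v2')
agent = ParameterisedPolicy()  # hypothetical: constructor args not shown
obs = env.reset()
done = False
while not done:
    with torch.no_grad():
        # Assumes the policy maps an observation tensor to an action tensor.
        action = agent(torch.as_tensor(obs, dtype=torch.float32))
    obs, reward, done, info = env.step(action.numpy())
env.close()
```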