Hawk91 committed
Commit 6ad2b77
1 Parent(s): 058fa8f

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -23,9 +23,8 @@ model-index:
 This is a trained model of **DQN** agent that plays **PongNoFrameskip-v4**
 Pong is a Atari 2600 game imported from Gym environment.
 Agent is implemented from Deep Reinforcement Learning by Max Lapan.
-The code is present in the github link:
-[https://github.com/mohit-ix/DeepRL/tree/main/Unit%206]
-The performance of agent at different steps is present here: [https://youtu.be/03Pl5Odc2jM]
+The code is present in the github link: https://github.com/mohit-ix/DeepRL/tree/main/Unit%206
+The performance of agent at different steps is present here: https://youtu.be/03Pl5Odc2jM
 
 To use the agent use "03_dqn_play.py" from the github link and type:
 ```python
@@ -33,5 +32,5 @@ python 03_dqn_play.py -m [model_name] -r [recording_location] --no-vis
 
 ```
 
-Add "-r [recoding_location]" if you want to save the recording.
+Add "-r [recoding_location]" if you want to save the recording.[]
 Remove "--no-vis" if you want to render the gamplay by the agent.
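
For context on what the play script referenced in this README does, here is a minimal sketch of a greedy DQN evaluation loop for PongNoFrameskip-v4. It is not the repository's `03_dqn_play.py`: the `DQN` network class, the use of gym's built-in `AtariPreprocessing`/`FrameStack` wrappers, the old (pre-0.26) gym `reset`/`step` API, and the checkpoint format are all assumptions, and the preprocessing must match whatever the checkpoint was actually trained with.

```python
# Minimal sketch of a greedy DQN evaluation loop, in the spirit of 03_dqn_play.py.
# The architecture, wrappers, and checkpoint format are assumptions, not the
# repository's actual code.
import argparse

import gym
import numpy as np
import torch
import torch.nn as nn


class DQN(nn.Module):
    """Nature-DQN style convolutional Q-network (assumed architecture)."""

    def __init__(self, input_shape, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        conv_out_size = int(np.prod(self.conv(torch.zeros(1, *input_shape)).shape))
        self.fc = nn.Sequential(
            nn.Linear(conv_out_size, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        conv_out = self.conv(x).view(x.size(0), -1)
        return self.fc(conv_out)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--model", required=True, help="path to saved state_dict")
    args = parser.parse_args()

    # Standard Atari preprocessing: 84x84 grayscale frames scaled to [0, 1],
    # frame skip of 4, and a stack of the last 4 frames (old gym <0.26 API,
    # matching the era of the book's code).
    env = gym.make("PongNoFrameskip-v4")
    env = gym.wrappers.AtariPreprocessing(env, scale_obs=True)
    env = gym.wrappers.FrameStack(env, 4)

    net = DQN((4, 84, 84), env.action_space.n)
    net.load_state_dict(torch.load(args.model, map_location="cpu"))
    net.eval()

    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        # Greedy policy: pick the action with the highest predicted Q-value.
        state = torch.as_tensor(np.asarray(obs, dtype=np.float32)).unsqueeze(0)
        with torch.no_grad():
            q_values = net(state)
        action = int(torch.argmax(q_values, dim=1).item())
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    print("Total reward: %.2f" % total_reward)
```

Invoked similarly to the real script, e.g. `python play_sketch.py -m [model_name]` (the script name here is hypothetical), it prints the total episode reward; recording and rendering are left out to keep the sketch short.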