zjowowen committed
Commit 0f93086
1 parent: fdebf87

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +61 -7
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
   type: OpenAI/Gym/Atari-SpaceInvadersNoFrameskip-v4
   metrics:
   - type: mean_reward
- value: 1176.0 +/- 248.31
+ value: 1614.5 +/- 554.23
   name: mean_reward
   ---
 
@@ -45,7 +45,7 @@ This is a simple **DQN** implementation to OpenAI/Gym/Atari **SpaceInvadersNoFra
   git clone https://github.com/opendilab/huggingface_ding.git
   pip3 install -e ./huggingface_ding/
   # install environment dependencies if needed
- pip3 install DI-engine[common_env]
+ pip3 install DI-engine[common_env,video]
   ```
   </details>
 
@@ -60,7 +60,23 @@ python3 -u run.py
   ```
   **run.py**
   ```python
- # [More Information Needed]
+ from ding.bonus import DQNAgent
+ from ding.config import Config
+ from easydict import EasyDict
+ import torch
+
+ # Pull model from files which are git cloned from huggingface
+ policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
+ cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
+ # Instantiate the agent
+ agent = DQNAgent(
+     env_id="SpaceInvadersNoFrameskip-v4", exp_name="SpaceInvadersNoFrameskip-v4-DQN", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
+ )
+ # Continue training
+ agent.train(step=5000)
+ # Render the new agent performance
+ agent.deploy(enable_save_replay=True)
+
   ```
   </details>
 
@@ -75,7 +91,20 @@ python3 -u run.py
   ```
   **run.py**
   ```python
- # [More Information Needed]
+ from ding.bonus import DQNAgent
+ from huggingface_ding import pull_model_from_hub
+
+ # Pull model from the Hugging Face hub
+ policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/SpaceInvadersNoFrameskip-v4-DQN")
+ # Instantiate the agent
+ agent = DQNAgent(
+     env_id="SpaceInvadersNoFrameskip-v4", exp_name="SpaceInvadersNoFrameskip-v4-DQN", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
+ )
+ # Continue training
+ agent.train(step=5000)
+ # Render the new agent performance
+ agent.deploy(enable_save_replay=True)
+
   ```
   </details>
 
@@ -92,7 +121,32 @@ python3 -u train.py
   ```
   **train.py**
   ```python
- # [More Information Needed]
+ from ding.bonus import DQNAgent
+ from huggingface_ding import push_model_to_hub
+
+ # Instantiate the agent
+ agent = DQNAgent(env_id="SpaceInvadersNoFrameskip-v4", exp_name="SpaceInvadersNoFrameskip-v4-DQN")
+ # Train the agent
+ return_ = agent.train(step=int(20000000), collector_env_num=8, evaluator_env_num=8, debug=False)
+ print("-----wandb url is----:", return_.wandb_url)
+ # Push model to huggingface hub
+ push_model_to_hub(
+     agent=agent.best,
+     env_name="OpenAI/Gym/Atari",
+     task_name="SpaceInvadersNoFrameskip-v4",
+     algo_name="DQN",
+     wandb_url=return_.wandb_url,
+     github_repo_url="https://github.com/opendilab/DI-engine",
+     github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/dqn.html",
+     github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html",
+     installation_guide="pip3 install DI-engine[common_env,video]",
+     usage_file_by_git_clone="./dqn/spaceinvaders_dqn_deploy.py",
+     usage_file_by_huggingface_ding="./dqn/spaceinvaders_dqn_download.py",
+     train_file="./dqn/spaceinvaders_dqn.py",
+     repo_id="OpenDILabCommunity/SpaceInvadersNoFrameskip-v4-DQN",
+     create_repo=False
+ )
+
   ```
   </details>
 
@@ -220,13 +274,13 @@ exp_config = {
   - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/SpaceInvadersNoFrameskip-v4-DQN/blob/main/replay.mp4)
   <!-- Provide the size information for the model. -->
   - **Parameters total size:** 55703.03 KB
- - **Last Update Date:** 2023-07-24
+ - **Last Update Date:** 2023-10-07
 
   ## Environments
   <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
   - **Benchmark:** OpenAI/Gym/Atari
   - **Task:** SpaceInvadersNoFrameskip-v4
   - **Gym version:** 0.25.1
- - **DI-engine version:** v0.4.8
+ - **DI-engine version:** v0.4.9
   - **PyTorch version:** 2.0.1+cu117
   - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html)