zjowowen committed
Commit: 81fc7f8
1 Parent(s): 6b76469

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +61 -7
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
       type: OpenAI/Gym/Atari-QbertNoFrameskip-v4
     metrics:
     - type: mean_reward
-      value: 16375.0 +/- 0.0
+      value: 17175.0 +/- 0.0
       name: mean_reward
 ---
 
@@ -45,7 +45,7 @@ This is a simple **DQN** implementation to OpenAI/Gym/Atari **QbertNoFrameskip-v
 git clone https://github.com/opendilab/huggingface_ding.git
 pip3 install -e ./huggingface_ding/
 # install environment dependencies if needed
-pip3 install DI-engine[common_env]
+pip3 install DI-engine[common_env,video]
 ```
 </details>
 
@@ -60,7 +60,23 @@ python3 -u run.py
 ```
 **run.py**
 ```python
-# [More Information Needed]
+from ding.bonus import DQNAgent
+from ding.config import Config
+from easydict import EasyDict
+import torch
+
+# Pull model from files which are git cloned from huggingface
+policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
+cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
+# Instantiate the agent
+agent = DQNAgent(
+    env_id="QbertNoFrameskip-v4", exp_name="QbertNoFrameskip-v4-DQN", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
+)
+# Continue training
+agent.train(step=5000)
+# Render the new agent performance
+agent.deploy(enable_save_replay=True)
+
 ```
 </details>
 
@@ -75,7 +91,20 @@ python3 -u run.py
 ```
 **run.py**
 ```python
-# [More Information Needed]
+from ding.bonus import DQNAgent
+from huggingface_ding import pull_model_from_hub
+
+# Pull model from Hugggingface hub
+policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/QbertNoFrameskip-v4-DQN")
+# Instantiate the agent
+agent = DQNAgent(
+    env_id="QbertNoFrameskip-v4", exp_name="QbertNoFrameskip-v4-DQN", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
+)
+# Continue training
+agent.train(step=5000)
+# Render the new agent performance
+agent.deploy(enable_save_replay=True)
+
 ```
 </details>
 
@@ -92,7 +121,32 @@ python3 -u train.py
 ```
 **train.py**
 ```python
-# [More Information Needed]
+from ding.bonus import DQNAgent
+from huggingface_ding import push_model_to_hub
+
+# Instantiate the agent
+agent = DQNAgent(env_id="QbertNoFrameskip-v4", exp_name="QbertNoFrameskip-v4-DQN")
+# Train the agent
+return_ = agent.train(step=int(20000000), collector_env_num=8, evaluator_env_num=8, debug=False)
+print("-----wandb url is----:", return_.wandb_url)
+# Push model to huggingface hub
+push_model_to_hub(
+    agent=agent.best,
+    env_name="OpenAI/Gym/Atari",
+    task_name="QbertNoFrameskip-v4",
+    algo_name="DQN",
+    wandb_url=return_.wandb_url,
+    github_repo_url="https://github.com/opendilab/DI-engine",
+    github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/dqn.html",
+    github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html",
+    installation_guide="pip3 install DI-engine[common_env,video]",
+    usage_file_by_git_clone="./dqn/qbert_dqn_deploy.py",
+    usage_file_by_huggingface_ding="./dqn/qbert_dqn_download.py",
+    train_file="./dqn/qbert_dqn.py",
+    repo_id="OpenDILabCommunity/QbertNoFrameskip-v4-DQN",
+    create_repo=False
+)
+
 ```
 </details>
 
@@ -217,13 +271,13 @@ exp_config = {
 - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/QbertNoFrameskip-v4-DQN/blob/main/replay.mp4)
 <!-- Provide the size information for the model. -->
 - **Parameters total size:** 55703.03 KB
-- **Last Update Date:** 2023-07-23
+- **Last Update Date:** 2023-10-08
 
 ## Environments
 <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
 - **Benchmark:** OpenAI/Gym/Atari
 - **Task:** QbertNoFrameskip-v4
 - **Gym version:** 0.25.1
-- **DI-engine version:** v0.4.8
+- **DI-engine version:** v0.4.9
 - **PyTorch version:** 2.0.1+cu117
 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html)
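
Both deployment snippets added in this commit call `agent.train(step=5000)` before rendering. If you only want to roll out the published checkpoint as-is, a minimal evaluation-only sketch reuses the same `pull_model_from_hub`, `DQNAgent`, and `agent.deploy` calls shown above with the training step omitted; dropping that step is an assumption here, not something this commit demonstrates.

```python
# Evaluation-only sketch: load the published DQN checkpoint and render it,
# skipping the agent.train(step=5000) fine-tuning step used in the commit's scripts.
from ding.bonus import DQNAgent
from huggingface_ding import pull_model_from_hub

# Download the policy weights and experiment config from the Hugging Face Hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/QbertNoFrameskip-v4-DQN")

# Rebuild the agent from the published config and weights
agent = DQNAgent(
    env_id="QbertNoFrameskip-v4",
    exp_name="QbertNoFrameskip-v4-DQN",
    cfg=cfg.exp_config,
    policy_state_dict=policy_state_dict,
)

# Roll out the trained policy; enable_save_replay=True saves a replay video of the episode
agent.deploy(enable_save_replay=True)
```

Everything in this sketch besides the omitted training call comes directly from the scripts added in this commit.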