araffin committed

Commit 870a1ae
1 Parent(s): 572fae9

Initial commit

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ library_name: stable-baselines3
+ tags:
+ - QbertNoFrameskip-v4
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - stable-baselines3
+ model-index:
+ - name: DQN
+   results:
+   - metrics:
+     - type: mean_reward
+       value: 5300.00 +/- 6528.41
+       name: mean_reward
+     task:
+       type: reinforcement-learning
+       name: reinforcement-learning
+     dataset:
+       name: QbertNoFrameskip-v4
+       type: QbertNoFrameskip-v4
+ ---
+
+ # **DQN** Agent playing **QbertNoFrameskip-v4**
+ This is a trained model of a **DQN** agent playing **QbertNoFrameskip-v4**
+ using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
+ and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
+
+ The RL Zoo is a training framework for Stable Baselines3
+ reinforcement learning agents,
+ with hyperparameter optimization and pre-trained agents included.
+
+ ## Usage (with SB3 RL Zoo)
+
+ RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
+ SB3: https://github.com/DLR-RM/stable-baselines3<br/>
+ SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
+
+ ```
+ # Download model and save it into the logs/ folder
+ python -m utils.load_from_hub --algo dqn --env QbertNoFrameskip-v4 -orga sb3 -f logs/
+ python enjoy.py --algo dqn --env QbertNoFrameskip-v4 -f logs/
+ ```
+
+ ## Training (with the RL Zoo)
+ ```
+ python train.py --algo dqn --env QbertNoFrameskip-v4 -f logs/
+ # Upload the model and generate video (when possible)
+ python -m utils.push_to_hub --algo dqn --env QbertNoFrameskip-v4 -f logs/ -orga sb3
+ ```
+
+ ## Hyperparameters
+ ```python
+ OrderedDict([('batch_size', 32),
+              ('buffer_size', 10000),
+              ('env_wrapper',
+               ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
+              ('exploration_final_eps', 0.01),
+              ('exploration_fraction', 0.1),
+              ('frame_stack', 4),
+              ('gradient_steps', 1),
+              ('learning_rate', 0.0001),
+              ('learning_starts', 100000),
+              ('n_timesteps', 10000000.0),
+              ('optimize_memory_usage', True),
+              ('policy', 'CnnPolicy'),
+              ('target_update_interval', 1000),
+              ('train_freq', 4),
+              ('normalize', False)])
+ ```
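The saved agent can also be run directly through the stable-baselines3 API, without the zoo's `enjoy.py`. A minimal sketch, assuming the archive `dqn-QbertNoFrameskip-v4.zip` from this repository has been downloaded locally, and recreating the preprocessing implied by the hyperparameters above (AtariWrapper plus a 4-frame stack):

```python
# Minimal sketch: run the trained agent with plain stable-baselines3.
# Assumes dqn-QbertNoFrameskip-v4.zip is available locally (adjust the path).
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stack
env = make_atari_env("QbertNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

model = DQN.load("dqn-QbertNoFrameskip-v4.zip")  # assumed local path to the archive

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=False)
    obs, rewards, dones, infos = env.step(action)
```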
args.yml ADDED
@@ -0,0 +1,59 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - algo
+     - dqn
+   - - env
+     - QbertNoFrameskip-v4
+   - - env_kwargs
+     - null
+   - - eval_episodes
+     - 10
+   - - eval_freq
+     - 10000
+   - - gym_packages
+     - []
+   - - hyperparams
+     - null
+   - - log_folder
+     - rl-trained-agents/
+   - - log_interval
+     - -1
+   - - n_evaluations
+     - 20
+   - - n_jobs
+     - 1
+   - - n_startup_trials
+     - 10
+   - - n_timesteps
+     - -1
+   - - n_trials
+     - 10
+   - - num_threads
+     - -1
+   - - optimize_hyperparameters
+     - false
+   - - pruner
+     - median
+   - - sampler
+     - tpe
+   - - save_freq
+     - -1
+   - - save_replay_buffer
+     - false
+   - - seed
+     - 3527742872
+   - - storage
+     - null
+   - - study_name
+     - null
+   - - tensorboard_log
+     - ''
+   - - trained_agent
+     - ''
+   - - truncate_last_trajectory
+     - true
+   - - uuid
+     - true
+   - - vec_env
+     - dummy
+   - - verbose
+     - 1
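`args.yml` (and `config.yml` below) serialize a Python `OrderedDict` via PyYAML's `!!python/object/apply` tag, so `yaml.safe_load` will refuse to parse them. A minimal sketch for reading them back, assuming the files are trusted (the unsafe loader instantiates the tagged Python object):

```python
# Minimal sketch: read the zoo-generated YAML back into Python.
# UnsafeLoader is required because of the !!python/object/apply tag;
# only use it on files you trust.
import yaml

with open("args.yml") as f:
    args = yaml.load(f, Loader=yaml.UnsafeLoader)

print(args["seed"])  # 3527742872
```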
config.yml ADDED
@@ -0,0 +1,29 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - batch_size
+     - 32
+   - - buffer_size
+     - 10000
+   - - env_wrapper
+     - - stable_baselines3.common.atari_wrappers.AtariWrapper
+   - - exploration_final_eps
+     - 0.01
+   - - exploration_fraction
+     - 0.1
+   - - frame_stack
+     - 4
+   - - gradient_steps
+     - 1
+   - - learning_rate
+     - 0.0001
+   - - learning_starts
+     - 100000
+   - - n_timesteps
+     - 10000000.0
+   - - optimize_memory_usage
+     - true
+   - - policy
+     - CnnPolicy
+   - - target_update_interval
+     - 1000
+   - - train_freq
+     - 4
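For orientation, a rough sketch of how this configuration would translate into a direct stable-baselines3 training run; the zoo-level keys `env_wrapper`, `frame_stack` and `n_timesteps` are approximated here with `make_atari_env`, `VecFrameStack` and the `learn()` call, so this is not the zoo's own `train.py` logic:

```python
# Rough sketch: the config above expressed as a direct SB3 training run.
# Approximates, but is not identical to, what train.py does in the RL Zoo.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# env_wrapper: AtariWrapper (applied by make_atari_env), frame_stack: 4
env = VecFrameStack(make_atari_env("QbertNoFrameskip-v4", n_envs=1), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=10_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    optimize_memory_usage=True,
    target_update_interval=1000,
    train_freq=4,
    verbose=1,
)
model.learn(total_timesteps=10_000_000)  # n_timesteps from the config
```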
dqn-QbertNoFrameskip-v4.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dba98ed6e76224ed8966db7b5971f82f3d4bd4939db14cdf459969d64574f3b2
+ size 27222385
dqn-QbertNoFrameskip-v4/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+ 1.5.1a8
dqn-QbertNoFrameskip-v4/data ADDED
The diff for this file is too large to render.
 
dqn-QbertNoFrameskip-v4/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09f470d57438d4e7005909fe9aea6cd39bc179559fe72f3e67275c80ffc4455e
+ size 13503145
dqn-QbertNoFrameskip-v4/policy.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:332359ee82cf0399381591e1e5effc15c50857bad9653ac25882f215c1ec2b90
+ size 13504937
dqn-QbertNoFrameskip-v4/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d030ad8db708280fcae77d87e973102039acd23a11bdecc3db8eb6c0ac940ee1
+ size 431
dqn-QbertNoFrameskip-v4/system_info.txt ADDED
@@ -0,0 +1,7 @@
+ OS: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid #49~20.04.1-Ubuntu SMP Wed May 18 18:44:28 UTC 2022
+ Python: 3.7.10
+ Stable-Baselines3: 1.5.1a8
+ PyTorch: 1.11.0
+ GPU Enabled: True
+ Numpy: 1.21.2
+ Gym: 0.21.0
env_kwargs.yml ADDED
@@ -0,0 +1 @@
+ {}
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f03aea8f0e25532e4db300f65da2fbb74452a8e7ef06e1d9c406e0f31d26422d
+ size 211053
results.json ADDED
@@ -0,0 +1 @@
+ {"mean_reward": 5300.0, "std_reward": 6528.409071129045, "is_deterministic": false, "n_eval_episodes": 10, "eval_datetime": "2022-06-02T20:27:50.613737"}
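These numbers summarize 10 evaluation episodes of the stochastic policy (`is_deterministic: false`). A minimal sketch of producing the same kind of statistics with the SB3 evaluation helper, assuming the model archive is available locally (exact values will vary between runs):

```python
# Minimal sketch: recompute mean/std reward as stored in results.json.
# Assumes dqn-QbertNoFrameskip-v4.zip has been downloaded locally.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

env = VecFrameStack(make_atari_env("QbertNoFrameskip-v4", n_envs=1), n_stack=4)
model = DQN.load("dqn-QbertNoFrameskip-v4.zip")

# is_deterministic is false in results.json, hence deterministic=False here
mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=False
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```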
train_eval_metrics.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfb49a14596308527c29e53058d41dbf854efb5a41eec6d9b271a412d3a7692b
+ size 467010