xaeroq committed on
Commit a2aa693
1 Parent(s): ff22057

Initial commit

.gitattributes CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ library_name: stable-baselines3
+ tags:
+ - ALE/Qbert-v5
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - stable-baselines3
+ model-index:
+ - name: DQN
+   results:
+   - task:
+       type: reinforcement-learning
+       name: reinforcement-learning
+     dataset:
+       name: ALE/Qbert-v5
+       type: ALE/Qbert-v5
+     metrics:
+     - type: mean_reward
+       value: 6665.00 +/- 1973.49
+       name: mean_reward
+       verified: false
+ ---
+
+ # **DQN** Agent playing **ALE/Qbert-v5**
+ This is a trained model of a **DQN** agent playing **ALE/Qbert-v5**
+ using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
+ and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
+
+ The RL Zoo is a training framework for Stable Baselines3
+ reinforcement learning agents,
+ with hyperparameter optimization and pre-trained agents included.
+
+ ## Usage (with SB3 RL Zoo)
+
+ RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
+ SB3: https://github.com/DLR-RM/stable-baselines3<br/>
+ SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
+
+ ```
+ # Download the model and save it into the logs/ folder
+ python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
+ python enjoy.py --algo dqn --env ALE/Qbert-v5 -f logs/
+ ```
+
+ If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
+ ```
+ python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
+ rl_zoo3 enjoy --algo dqn --env ALE/Qbert-v5 -f logs/
+ ```
+
+ ## Training (with the RL Zoo)
+ ```
+ python train.py --algo dqn --env ALE/Qbert-v5 -f logs/
+ # Upload the model and generate a video (when possible)
+ python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Qbert-v5 -f logs/ -orga xaeroq
+ ```
+
+ ## Hyperparameters
+ ```python
+ OrderedDict([('batch_size', 32),
+              ('buffer_size', 100000),
+              ('env_wrapper',
+               ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
+              ('exploration_final_eps', 0.01),
+              ('exploration_fraction', 0.1),
+              ('frame_stack', 4),
+              ('gradient_steps', 1),
+              ('learning_rate', 0.0001),
+              ('learning_starts', 100000),
+              ('n_timesteps', 1000000.0),
+              ('optimize_memory_usage', False),
+              ('policy', 'CnnPolicy'),
+              ('target_update_interval', 1000),
+              ('train_freq', 4),
+              ('normalize', False)])
+ ```
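Beyond the RL Zoo commands above, the checkpoint in `dqn-ALE-Qbert-v5.zip` can also be loaded directly with stable-baselines3. The sketch below is only an assumption about how to rebuild a matching environment (AtariWrapper preprocessing plus `frame_stack: 4`, per the hyperparameters); it expects the zip file to be available locally and `ale-py` with the Atari ROMs to be installed, and it is not the RL Zoo's own `enjoy` script.

```python
# Minimal sketch, not the RL Zoo workflow: load the saved agent directly
# with stable-baselines3 (API as of SB3 1.6.x / gym 0.21).
import gym
from stable_baselines3 import DQN
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# Rebuild the evaluation environment the agent expects:
# AtariWrapper preprocessing + 4 stacked frames (see the hyperparameters above).
env = DummyVecEnv([lambda: AtariWrapper(gym.make("ALE/Qbert-v5"))])
env = VecFrameStack(env, n_stack=4)

# Path assumes the repository files sit in the current directory.
model = DQN.load("dqn-ALE-Qbert-v5.zip", env=env)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=False)
    obs, rewards, dones, infos = env.step(action)  # VecEnv auto-resets on done
```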
args.yml ADDED
@@ -0,0 +1,79 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - algo
+     - dqn
+   - - device
+     - auto
+   - - env
+     - ALE/Qbert-v5
+   - - env_kwargs
+     - null
+   - - eval_episodes
+     - 5
+   - - eval_freq
+     - 25000
+   - - gym_packages
+     - []
+   - - hyperparams
+     - null
+   - - log_folder
+     - logs/
+   - - log_interval
+     - -1
+   - - max_total_trials
+     - null
+   - - n_eval_envs
+     - 1
+   - - n_evaluations
+     - null
+   - - n_jobs
+     - 1
+   - - n_startup_trials
+     - 10
+   - - n_timesteps
+     - -1
+   - - n_trials
+     - 500
+   - - no_optim_plots
+     - false
+   - - num_threads
+     - -1
+   - - optimization_log_path
+     - null
+   - - optimize_hyperparameters
+     - false
+   - - progress
+     - false
+   - - pruner
+     - median
+   - - sampler
+     - tpe
+   - - save_freq
+     - -1
+   - - save_replay_buffer
+     - false
+   - - seed
+     - 2663950829
+   - - storage
+     - null
+   - - study_name
+     - null
+   - - tensorboard_log
+     - ''
+   - - track
+     - false
+   - - trained_agent
+     - ''
+   - - truncate_last_trajectory
+     - true
+   - - uuid
+     - false
+   - - vec_env
+     - dummy
+   - - verbose
+     - 1
+   - - wandb_entity
+     - null
+   - - wandb_project_name
+     - sb3
+   - - yaml_file
+     - null
config.yml ADDED
@@ -0,0 +1,29 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - batch_size
+     - 32
+   - - buffer_size
+     - 100000
+   - - env_wrapper
+     - - stable_baselines3.common.atari_wrappers.AtariWrapper
+   - - exploration_final_eps
+     - 0.01
+   - - exploration_fraction
+     - 0.1
+   - - frame_stack
+     - 4
+   - - gradient_steps
+     - 1
+   - - learning_rate
+     - 0.0001
+   - - learning_starts
+     - 100000
+   - - n_timesteps
+     - 1000000.0
+   - - optimize_memory_usage
+     - false
+   - - policy
+     - CnnPolicy
+   - - target_update_interval
+     - 1000
+   - - train_freq
+     - 4
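`args.yml` and `config.yml` are dumped with the Python-specific `!!python/object/apply:collections.OrderedDict` tag, so PyYAML's safe loader refuses to parse them. The snippet below is an illustrative sketch (not part of this repository) of how they could be inspected locally.

```python
# Minimal sketch for inspecting args.yml / config.yml from this repository.
# The !!python/object/apply tag requires an unsafe loader (PyYAML >= 5.1);
# only use it on files you trust.
import yaml

with open("config.yml") as f:
    config = yaml.unsafe_load(f)  # yields a collections.OrderedDict

print(config["policy"])       # CnnPolicy
print(config["frame_stack"])  # 4
```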
dqn-ALE-Qbert-v5.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54ff41e9c00e7b0c7dfdee8d7f56e4c8512a5dab10b11f29ea51a1a1ffb633aa
+ size 27224959
dqn-ALE-Qbert-v5/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+ 1.6.2
dqn-ALE-Qbert-v5/data ADDED
The diff for this file is too large to render. See raw diff
 
dqn-ALE-Qbert-v5/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea03eed39ab180327dd2be7d49b5f11330211f087b02be67c0ec068f5cd9db0b
+ size 13505739
dqn-ALE-Qbert-v5/policy.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b8c4a954af3a325375f875c04ded81bf8ad9fd55e27c20e713b402d98145359
+ size 13504937
dqn-ALE-Qbert-v5/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d030ad8db708280fcae77d87e973102039acd23a11bdecc3db8eb6c0ac940ee1
+ size 431
dqn-ALE-Qbert-v5/system_info.txt ADDED
@@ -0,0 +1,7 @@
+ OS: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic #1 SMP Fri Aug 26 08:44:51 UTC 2022
+ Python: 3.7.15
+ Stable-Baselines3: 1.6.2
+ PyTorch: 1.12.1+cu113
+ GPU Enabled: True
+ Numpy: 1.21.6
+ Gym: 0.21.0
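For reference, stable-baselines3 can print a report like `system_info.txt` for the local machine. The sketch below assumes the `get_system_info` helper in `stable_baselines3.common.utils` as shipped with SB3 1.6.x.

```python
# Sketch: print a system report similar to system_info.txt for the current
# machine (assumes the stable_baselines3.common.utils API of SB3 1.6.x).
from stable_baselines3.common.utils import get_system_info

info_dict, info_str = get_system_info(print_info=True)
```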
env_kwargs.yml ADDED
@@ -0,0 +1 @@
+ {}
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd5677dacc7cf44f2fcf081cf7963c3a8bd01b12f0b40b25c2541a2bedbf61e4
+ size 256330
results.json ADDED
@@ -0,0 +1 @@
+ {"mean_reward": 6665.0, "std_reward": 1973.4867620533967, "is_deterministic": false, "n_eval_episodes": 10, "eval_datetime": "2022-11-23T07:49:47.570408"}
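The figures in `results.json` (mean and standard deviation of the episodic return over 10 non-deterministic evaluation episodes) can be reproduced in spirit with SB3's `evaluate_policy` helper. The sketch below reuses the `model` and `env` from the loading example above and approximates, rather than reproduces, the RL Zoo's own evaluation script.

```python
# Sketch: recompute a mean_reward / std_reward pair like the one in
# results.json. Exact numbers will differ because evaluation is stochastic.
from stable_baselines3.common.evaluation import evaluate_policy

mean_reward, std_reward = evaluate_policy(
    model,
    env,
    n_eval_episodes=10,    # matches n_eval_episodes in results.json
    deterministic=False,   # results.json reports is_deterministic: false
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```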
train_eval_metrics.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf04e67ebd7ca2f63ca2c81f9fb8dc8c7cf9c59c9f5839849922e255dd3c56d2
+ size 147151