SubhasishSaha committed
Commit 0be83ad
1 Parent(s): 3875324

Push to Hub

README.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ library_name: stable-baselines3
+ tags:
+ - LunarLander-v2
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - stable-baselines3
+ model-index:
+ - name: DQN
+   results:
+   - task:
+       type: reinforcement-learning
+       name: reinforcement-learning
+     dataset:
+       name: LunarLander-v2
+       type: LunarLander-v2
+     metrics:
+     - type: mean_reward
+       value: -446.72 +/- 136.10
+       name: mean_reward
+       verified: false
+ ---
+
+ # **DQN** Agent playing **LunarLander-v2**
+ This is a trained model of a **DQN** agent playing **LunarLander-v2**
+ using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
+
+ ## Usage (with Stable-baselines3)
+ The snippet below is a minimal loading sketch. The repo id is assumed from the
+ file names in this commit, so adjust it to wherever the model is actually hosted.
+
+ ```python
+ from huggingface_sb3 import load_from_hub
+ from stable_baselines3 import DQN
+
+ # Download the zipped SB3 checkpoint from the Hub (repo id assumed, not confirmed)
+ checkpoint = load_from_hub(
+     repo_id="SubhasishSaha/dqn-LunarLander-v2",
+     filename="dqn-LunarLander-v2.zip",
+ )
+ model = DQN.load(checkpoint)
+ ```
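+
+ Continuing from the loading snippet, you can re-run the evaluation behind the
+ reported score. This is a hedged sketch rather than the author's exact script:
+ it mirrors the settings recorded in results.json (10 deterministic episodes)
+ and assumes Gymnasium's LunarLander-v2 environment is installed locally.
+
+ ```python
+ import gymnasium as gym
+ from stable_baselines3.common.evaluation import evaluate_policy
+
+ # Evaluate over 10 deterministic episodes, matching results.json
+ env = gym.make("LunarLander-v2")
+ mean_reward, std_reward = evaluate_policy(
+     model, env, n_eval_episodes=10, deterministic=True
+ )
+ print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
+ ```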
config.json ADDED
@@ -0,0 +1 @@
+ {"policy_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": "gAWVMAAAAAAAAACMHnN0YWJsZV9iYXNlbGluZXMzLmRxbi5wb2xpY2llc5SMCURRTlBvbGljeZSTlC4=", "__module__": "stable_baselines3.dqn.policies", "__annotations__": "{'q_net': <class 'stable_baselines3.dqn.policies.QNetwork'>, 'q_net_target': <class 'stable_baselines3.dqn.policies.QNetwork'>}", "__doc__": "\n Policy class with Q-Value Net and target net for DQN\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ", "__init__": "<function DQNPolicy.__init__ at 0x2a0094790>", "_build": "<function DQNPolicy._build at 0x2a0094820>", "make_q_net": "<function DQNPolicy.make_q_net at 0x2a00948b0>", "forward": "<function DQNPolicy.forward at 0x2a0094940>", "_predict": "<function DQNPolicy._predict at 0x2a00949d0>", "_get_constructor_parameters": "<function DQNPolicy._get_constructor_parameters at 0x2a0094a60>", "set_training_mode": "<function DQNPolicy.set_training_mode at 0x2a0094af0>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x2a009b400>"}, "verbose": 1, "policy_kwargs": {}, "num_timesteps": 1000, "_total_timesteps": 1000, "_num_timesteps_at_start": 0, "seed": null, "action_noise": null, "start_time": 1710998178281685000, "learning_rate": 0.0001, "tensorboard_log": null, "_last_obs": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVlQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYgAAAAAAAAAAJIET95SNw+gGCDPxxenL+08ge/VUNLPQAAAAAAAAAAlIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksBSwiGlIwBQ5R0lFKULg=="}, "_last_episode_starts": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVdAAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYBAAAAAAAAAAGUjAVudW1weZSMBWR0eXBllJOUjAJiMZSJiIeUUpQoSwOMAXyUTk5OSv////9K/////0sAdJRiSwGFlIwBQ5R0lFKULg=="}, "_last_original_obs": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVlQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYgAAAAAAAAAPClDj+wXeo+EhyCPyeEmL9QlQi/n3jGPQAAAAAAAAAAlIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksBSwiGlIwBQ5R0lFKULg=="}, "_episode_num": 9, "use_sde": false, "sde_sample_freq": -1, "_current_progress_remaining": 0.0, "_stats_window_size": 100, "ep_info_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWVNgEAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHwHQRp3cHnlqMAWyUS3KMAXSURz/yWtMfzSThdX2UKGgGR8B1CQcp9ZzQaAdLYWgIRz/zNwrDqGDddX2UKGgGR8BdRInF5v9+aAdLVmgIRz/0lVo6CDmKdX2UKGgGR8BQRQcHWz4UaAdLYmgIRz/1ci0OVgQZdX2UKGgGR0BEguvt+kP+aAdLa2gIRz/2Pl+3H7xedX2UKGgGR8B3elj+aScLaAdLXWgIRz/23NHH3lCDdX2UKGgGR8BSHDC1qnFYaAdLSWgIRz/3P9LpRoAXdX2UKGgGR8BybYV8CxNZaAdLZ2gIRz/38/t6X0GvdX2UKGgGR8Br4lbRnezlaAdLaGgIRz/4bwjMV1wHdWUu"}, "ep_success_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="}, 
"_n_updates": 0, "buffer_size": 1000000, "batch_size": 32, "learning_starts": 50000, "tau": 1.0, "gamma": 0.99, "gradient_steps": 1, "optimize_memory_usage": false, "replay_buffer_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": "gAWVNQAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwMUmVwbGF5QnVmZmVylJOULg==", "__module__": "stable_baselines3.common.buffers", "__doc__": "\n Replay buffer used in off-policy algorithms like SAC/TD3.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param n_envs: Number of parallel environments\n :param optimize_memory_usage: Enable a memory efficient variant\n of the replay buffer which reduces by almost a factor two the memory used,\n at a cost of more complexity.\n See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195\n and https://github.com/DLR-RM/stable-baselines3/pull/28#issuecomment-637559274\n Cannot be used in combination with handle_timeout_termination.\n :param handle_timeout_termination: Handle timeout termination (due to timelimit)\n separately and treat the task as infinite horizon task.\n https://github.com/DLR-RM/stable-baselines3/issues/284\n ", "__init__": "<function ReplayBuffer.__init__ at 0x2a0068f70>", "add": "<function ReplayBuffer.add at 0x2a0071040>", "sample": "<function ReplayBuffer.sample at 0x2a00710d0>", "_get_samples": "<function ReplayBuffer._get_samples at 0x2a0071160>", "_maybe_cast_dtype": "<staticmethod object at 0x2a0070160>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x2a006ea80>"}, "replay_buffer_kwargs": {}, "train_freq": {":type:": "<class 'stable_baselines3.common.type_aliases.TrainFreq'>", ":serialized:": "gAWVYQAAAAAAAACMJXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi50eXBlX2FsaWFzZXOUjAlUcmFpbkZyZXGUk5RLBGgAjBJUcmFpbkZyZXF1ZW5jeVVuaXSUk5SMBHN0ZXCUhZRSlIaUgZQu"}, "use_sde_at_warmup": false, "exploration_initial_eps": 1.0, "exploration_final_eps": 0.1, "exploration_fraction": 0.1, "target_update_interval": 250, "_n_calls": 1000, "max_grad_norm": 10, "exploration_rate": 0.1, "observation_space": {":type:": "<class 'gymnasium.spaces.box.Box'>", ":serialized:": "gAWVZgIAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMDWJvdW5kZWRfYmVsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWCAAAAAAAAAABAQEBAQEBAZRoCIwCYjGUiYiHlFKUKEsDjAF8lE5OTkr/////Sv////9LAHSUYksIhZSMAUOUdJRSlIwNYm91bmRlZF9hYm92ZZRoESiWCAAAAAAAAAABAQEBAQEBAZRoFUsIhZRoGXSUUpSMBl9zaGFwZZRLCIWUjANsb3eUaBEoliAAAAAAAAAAAADAvwAAwL8AAKDAAACgwNsPScAAAKDAAAAAgAAAAICUaAtLCIWUaBl0lFKUjARoaWdolGgRKJYgAAAAAAAAAAAAwD8AAMA/AACgQAAAoEDbD0lAAACgQAAAgD8AAIA/lGgLSwiFlGgZdJRSlIwIbG93X3JlcHKUjFNbLTEuNSAgICAgICAtMS41ICAgICAgIC01LiAgICAgICAgLTUuICAgICAgICAtMy4xNDE1OTI3IC01LgogLTAuICAgICAgICAtMC4gICAgICAgXZSMCWhpZ2hfcmVwcpSMS1sxLjUgICAgICAgMS41ICAgICAgIDUuICAgICAgICA1LiAgICAgICAgMy4xNDE1OTI3IDUuICAgICAgICAxLgogMS4gICAgICAgXZSMCl9ucF9yYW5kb22UTnViLg==", "dtype": "float32", "bounded_below": "[ True True True True True True True True]", "bounded_above": "[ True True True True True True True True]", "_shape": [8], "low": "[-1.5 -1.5 -5. -5. -3.1415927 -5.\n -0. -0. ]", "high": "[1.5 1.5 5. 5. 3.1415927 5. 1.\n 1. ]", "low_repr": "[-1.5 -1.5 -5. -5. -3.1415927 -5.\n -0. -0. ]", "high_repr": "[1.5 1.5 5. 5. 3.1415927 5. 1.\n 1. 
]", "_np_random": null}, "action_space": {":type:": "<class 'gymnasium.spaces.discrete.Discrete'>", ":serialized:": "gAWVxgEAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIBAAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgLjAJpOJSJiIeUUpQoSwNoD05OTkr/////Sv////9LAHSUYowKX25wX3JhbmRvbZSMFG51bXB5LnJhbmRvbS5fcGlja2xllIwQX19nZW5lcmF0b3JfY3RvcpSTlIwFUENHNjSUaB+MFF9fYml0X2dlbmVyYXRvcl9jdG9ylJOUhpRSlH2UKIwNYml0X2dlbmVyYXRvcpSMBVBDRzY0lIwFc3RhdGWUfZQoaCqKENiCDz7lRkIfZO7EcAbgQXKMA2luY5SKETvd8cirrbbVtLWqO0LmSc4AdYwKaGFzX3VpbnQzMpRLAIwIdWludGVnZXKUigXGy7bsAHVidWIu", "n": "4", "start": "0", "_shape": [], "dtype": "int64", "_np_random": "Generator(PCG64)"}, "n_envs": 1, "lr_schedule": {":type:": "<class 'function'>", ":serialized:": "gAWVIgMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLAUsTQwSIAFMAlE6FlCmMAV+UhZSMhi9Vc2Vycy9zdWJoYXNpc2gvRG9jdW1lbnRzL2lOZXVyb24vUmVpbmZvcmNlbWVudC1MZWFybmluZy9kcmwtMmVkL3JsX2RybC9saWIvcHl0aG9uMy45L3NpdGUtcGFja2FnZXMvc3RhYmxlX2Jhc2VsaW5lczMvY29tbW9uL3V0aWxzLnB5lIwEZnVuY5RLg0MCAAGUjAN2YWyUhZQpdJRSlH2UKIwLX19wYWNrYWdlX1+UjBhzdGFibGVfYmFzZWxpbmVzMy5jb21tb26UjAhfX25hbWVfX5SMHnN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi51dGlsc5SMCF9fZmlsZV9flIyGL1VzZXJzL3N1Ymhhc2lzaC9Eb2N1bWVudHMvaU5ldXJvbi9SZWluZm9yY2VtZW50LUxlYXJuaW5nL2RybC0yZWQvcmxfZHJsL2xpYi9weXRob24zLjkvc2l0ZS1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaB99lH2UKGgWaA2MDF9fcXVhbG5hbWVfX5SMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUjA9fX2Fubm90YXRpb25zX1+UfZSMDl9fa3dkZWZhdWx0c19flE6MDF9fZGVmYXVsdHNfX5ROjApfX21vZHVsZV9flGgXjAdfX2RvY19flE6MC19fY2xvc3VyZV9flGgAjApfbWFrZV9jZWxslJOURz8aNuLrHEMthZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjAu"}, "batch_norm_stats": [], "batch_norm_stats_target": [], "exploration_schedule": {":type:": "<class 'function'>", ":serialized:": "gAWVxgMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLBEsTQyxkAXwAGACIAWsEchCIAFMAiAJkAXwAGACIAIgCGAAUAIgBGwAXAFMAZABTAJROSwGGlCmMEnByb2dyZXNzX3JlbWFpbmluZ5SFlIyGL1VzZXJzL3N1Ymhhc2lzaC9Eb2N1bWVudHMvaU5ldXJvbi9SZWluZm9yY2VtZW50LUxlYXJuaW5nL2RybC0yZWQvcmxfZHJsL2xpYi9weXRob24zLjkvc2l0ZS1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUjARmdW5jlEtxQwYAAQwBBAKUjANlbmSUjAxlbmRfZnJhY3Rpb26UjAVzdGFydJSHlCl0lFKUfZQojAtfX3BhY2thZ2VfX5SMGHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbpSMCF9fbmFtZV9flIwec3RhYmxlX2Jhc2VsaW5lczMuY29tbW9uLnV0aWxzlIwIX19maWxlX1+UjIYvVXNlcnMvc3ViaGFzaXNoL0RvY3VtZW50cy9pTmV1cm9uL1JlaW5mb3JjZW1lbnQtTGVhcm5pbmcvZHJsLTJlZC9ybF9kcmwvbGliL3B5dGhvbjMuOS9zaXRlLXBhY2thZ2VzL3N0YWJsZV9iYXNlbGluZXMzL2NvbW1vbi91dGlscy5weZR1Tk5oAIwQX21ha2VfZW1wdHlfY2VsbJSTlClSlGgdKVKUaB0pUpSHlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCN9lH2UKGgYaA2MDF9fcXVhbG5hbWVfX5SMG2dldF9saW5lYXJfZm4uPGxvY2Fscz4uZnVuY5SMD19fYW5ub3RhdGlvbnNfX5R9lChoCowIYnVpbHRpbnOUjAVmbG9hdJSTlIwGcmV0dXJulGgudYwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBmMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RHP7mZmZmZmZqFlFKUaDZHP7mZmZmZmZqFlFKUaDZHP/AAAAAAAACFlFKUh5SMF19jbG91ZHBpY2tsZV9zdWJtb2R1bGVzlF2UjAtfX2dsb2JhbHNfX5R9lHWGlIZSMC4="}, "system_info": {"OS": "macOS-14.2.1-arm64-arm-64bit Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; 
root:xnu-10002.61.3~2/RELEASE_ARM64_T8103", "Python": "3.9.19", "Stable-Baselines3": "2.1.0", "PyTorch": "2.2.1", "GPU Enabled": "False", "Numpy": "1.26.4", "Cloudpickle": "3.0.0", "Gymnasium": "0.29.1", "OpenAI Gym": "0.26.2"}}
dqn-LunarLander-v2.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2b7c388a37f7b912201fc7327c8d6d54d9d2deb3bf4e0b1e50a7816b616fa5e
+ size 58889
dqn-LunarLander-v2/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+ 2.1.0
dqn-LunarLander-v2/data ADDED
@@ -0,0 +1,122 @@
+ {
+ "policy_class": {
+ ":type:": "<class 'abc.ABCMeta'>",
+ ":serialized:": "gAWVMAAAAAAAAACMHnN0YWJsZV9iYXNlbGluZXMzLmRxbi5wb2xpY2llc5SMCURRTlBvbGljeZSTlC4=",
+ "__module__": "stable_baselines3.dqn.policies",
+ "__annotations__": "{'q_net': <class 'stable_baselines3.dqn.policies.QNetwork'>, 'q_net_target': <class 'stable_baselines3.dqn.policies.QNetwork'>}",
+ "__doc__": "\n Policy class with Q-Value Net and target net for DQN\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ",
+ "__init__": "<function DQNPolicy.__init__ at 0x2a0094790>",
+ "_build": "<function DQNPolicy._build at 0x2a0094820>",
+ "make_q_net": "<function DQNPolicy.make_q_net at 0x2a00948b0>",
+ "forward": "<function DQNPolicy.forward at 0x2a0094940>",
+ "_predict": "<function DQNPolicy._predict at 0x2a00949d0>",
+ "_get_constructor_parameters": "<function DQNPolicy._get_constructor_parameters at 0x2a0094a60>",
+ "set_training_mode": "<function DQNPolicy.set_training_mode at 0x2a0094af0>",
+ "__abstractmethods__": "frozenset()",
+ "_abc_impl": "<_abc._abc_data object at 0x2a009b400>"
+ },
+ "verbose": 1,
+ "policy_kwargs": {},
+ "num_timesteps": 1000,
+ "_total_timesteps": 1000,
+ "_num_timesteps_at_start": 0,
+ "seed": null,
+ "action_noise": null,
+ "start_time": 1710998178281685000,
+ "learning_rate": 0.0001,
+ "tensorboard_log": null,
+ "_last_obs": {
+ ":type:": "<class 'numpy.ndarray'>",
+ ":serialized:": "gAWVlQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYgAAAAAAAAAAJIET95SNw+gGCDPxxenL+08ge/VUNLPQAAAAAAAAAAlIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksBSwiGlIwBQ5R0lFKULg=="
+ },
+ "_last_episode_starts": {
+ ":type:": "<class 'numpy.ndarray'>",
+ ":serialized:": "gAWVdAAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYBAAAAAAAAAAGUjAVudW1weZSMBWR0eXBllJOUjAJiMZSJiIeUUpQoSwOMAXyUTk5OSv////9K/////0sAdJRiSwGFlIwBQ5R0lFKULg=="
+ },
+ "_last_original_obs": {
+ ":type:": "<class 'numpy.ndarray'>",
+ ":serialized:": "gAWVlQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYgAAAAAAAAAPClDj+wXeo+EhyCPyeEmL9QlQi/n3jGPQAAAAAAAAAAlIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksBSwiGlIwBQ5R0lFKULg=="
+ },
+ "_episode_num": 9,
+ "use_sde": false,
+ "sde_sample_freq": -1,
+ "_current_progress_remaining": 0.0,
+ "_stats_window_size": 100,
+ "ep_info_buffer": {
+ ":type:": "<class 'collections.deque'>",
+ ":serialized:": "gAWVNgEAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHwHQRp3cHnlqMAWyUS3KMAXSURz/yWtMfzSThdX2UKGgGR8B1CQcp9ZzQaAdLYWgIRz/zNwrDqGDddX2UKGgGR8BdRInF5v9+aAdLVmgIRz/0lVo6CDmKdX2UKGgGR8BQRQcHWz4UaAdLYmgIRz/1ci0OVgQZdX2UKGgGR0BEguvt+kP+aAdLa2gIRz/2Pl+3H7xedX2UKGgGR8B3elj+aScLaAdLXWgIRz/23NHH3lCDdX2UKGgGR8BSHDC1qnFYaAdLSWgIRz/3P9LpRoAXdX2UKGgGR8BybYV8CxNZaAdLZ2gIRz/38/t6X0GvdX2UKGgGR8Br4lbRnezlaAdLaGgIRz/4bwjMV1wHdWUu"
+ },
+ "ep_success_buffer": {
+ ":type:": "<class 'collections.deque'>",
+ ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="
+ },
+ "_n_updates": 0,
+ "buffer_size": 1000000,
+ "batch_size": 32,
+ "learning_starts": 50000,
+ "tau": 1.0,
+ "gamma": 0.99,
+ "gradient_steps": 1,
+ "optimize_memory_usage": false,
+ "replay_buffer_class": {
+ ":type:": "<class 'abc.ABCMeta'>",
+ ":serialized:": "gAWVNQAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwMUmVwbGF5QnVmZmVylJOULg==",
+ "__module__": "stable_baselines3.common.buffers",
+ "__doc__": "\n Replay buffer used in off-policy algorithms like SAC/TD3.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param n_envs: Number of parallel environments\n :param optimize_memory_usage: Enable a memory efficient variant\n of the replay buffer which reduces by almost a factor two the memory used,\n at a cost of more complexity.\n See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195\n and https://github.com/DLR-RM/stable-baselines3/pull/28#issuecomment-637559274\n Cannot be used in combination with handle_timeout_termination.\n :param handle_timeout_termination: Handle timeout termination (due to timelimit)\n separately and treat the task as infinite horizon task.\n https://github.com/DLR-RM/stable-baselines3/issues/284\n ",
+ "__init__": "<function ReplayBuffer.__init__ at 0x2a0068f70>",
+ "add": "<function ReplayBuffer.add at 0x2a0071040>",
+ "sample": "<function ReplayBuffer.sample at 0x2a00710d0>",
+ "_get_samples": "<function ReplayBuffer._get_samples at 0x2a0071160>",
+ "_maybe_cast_dtype": "<staticmethod object at 0x2a0070160>",
+ "__abstractmethods__": "frozenset()",
+ "_abc_impl": "<_abc._abc_data object at 0x2a006ea80>"
+ },
+ "replay_buffer_kwargs": {},
+ "train_freq": {
+ ":type:": "<class 'stable_baselines3.common.type_aliases.TrainFreq'>",
+ ":serialized:": "gAWVYQAAAAAAAACMJXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi50eXBlX2FsaWFzZXOUjAlUcmFpbkZyZXGUk5RLBGgAjBJUcmFpbkZyZXF1ZW5jeVVuaXSUk5SMBHN0ZXCUhZRSlIaUgZQu"
+ },
+ "use_sde_at_warmup": false,
+ "exploration_initial_eps": 1.0,
+ "exploration_final_eps": 0.1,
+ "exploration_fraction": 0.1,
+ "target_update_interval": 250,
+ "_n_calls": 1000,
+ "max_grad_norm": 10,
+ "exploration_rate": 0.1,
+ "observation_space": {
+ ":type:": "<class 'gymnasium.spaces.box.Box'>",
+ ":serialized:": "gAWVZgIAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMDWJvdW5kZWRfYmVsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWCAAAAAAAAAABAQEBAQEBAZRoCIwCYjGUiYiHlFKUKEsDjAF8lE5OTkr/////Sv////9LAHSUYksIhZSMAUOUdJRSlIwNYm91bmRlZF9hYm92ZZRoESiWCAAAAAAAAAABAQEBAQEBAZRoFUsIhZRoGXSUUpSMBl9zaGFwZZRLCIWUjANsb3eUaBEoliAAAAAAAAAAAADAvwAAwL8AAKDAAACgwNsPScAAAKDAAAAAgAAAAICUaAtLCIWUaBl0lFKUjARoaWdolGgRKJYgAAAAAAAAAAAAwD8AAMA/AACgQAAAoEDbD0lAAACgQAAAgD8AAIA/lGgLSwiFlGgZdJRSlIwIbG93X3JlcHKUjFNbLTEuNSAgICAgICAtMS41ICAgICAgIC01LiAgICAgICAgLTUuICAgICAgICAtMy4xNDE1OTI3IC01LgogLTAuICAgICAgICAtMC4gICAgICAgXZSMCWhpZ2hfcmVwcpSMS1sxLjUgICAgICAgMS41ICAgICAgIDUuICAgICAgICA1LiAgICAgICAgMy4xNDE1OTI3IDUuICAgICAgICAxLgogMS4gICAgICAgXZSMCl9ucF9yYW5kb22UTnViLg==",
+ "dtype": "float32",
+ "bounded_below": "[ True True True True True True True True]",
+ "bounded_above": "[ True True True True True True True True]",
+ "_shape": [
+ 8
+ ],
+ "low": "[-1.5 -1.5 -5. -5. -3.1415927 -5.\n -0. -0. ]",
+ "high": "[1.5 1.5 5. 5. 3.1415927 5. 1.\n 1. ]",
+ "low_repr": "[-1.5 -1.5 -5. -5. -3.1415927 -5.\n -0. -0. ]",
+ "high_repr": "[1.5 1.5 5. 5. 3.1415927 5. 1.\n 1. ]",
+ "_np_random": null
+ },
+ "action_space": {
+ ":type:": "<class 'gymnasium.spaces.discrete.Discrete'>",
+ ":serialized:": "gAWVxgEAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIBAAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgLjAJpOJSJiIeUUpQoSwNoD05OTkr/////Sv////9LAHSUYowKX25wX3JhbmRvbZSMFG51bXB5LnJhbmRvbS5fcGlja2xllIwQX19nZW5lcmF0b3JfY3RvcpSTlIwFUENHNjSUaB+MFF9fYml0X2dlbmVyYXRvcl9jdG9ylJOUhpRSlH2UKIwNYml0X2dlbmVyYXRvcpSMBVBDRzY0lIwFc3RhdGWUfZQoaCqKENiCDz7lRkIfZO7EcAbgQXKMA2luY5SKETvd8cirrbbVtLWqO0LmSc4AdYwKaGFzX3VpbnQzMpRLAIwIdWludGVnZXKUigXGy7bsAHVidWIu",
+ "n": "4",
+ "start": "0",
+ "_shape": [],
+ "dtype": "int64",
+ "_np_random": "Generator(PCG64)"
+ },
+ "n_envs": 1,
+ "lr_schedule": {
+ ":type:": "<class 'function'>",
+ ":serialized:": "gAWVIgMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLAUsTQwSIAFMAlE6FlCmMAV+UhZSMhi9Vc2Vycy9zdWJoYXNpc2gvRG9jdW1lbnRzL2lOZXVyb24vUmVpbmZvcmNlbWVudC1MZWFybmluZy9kcmwtMmVkL3JsX2RybC9saWIvcHl0aG9uMy45L3NpdGUtcGFja2FnZXMvc3RhYmxlX2Jhc2VsaW5lczMvY29tbW9uL3V0aWxzLnB5lIwEZnVuY5RLg0MCAAGUjAN2YWyUhZQpdJRSlH2UKIwLX19wYWNrYWdlX1+UjBhzdGFibGVfYmFzZWxpbmVzMy5jb21tb26UjAhfX25hbWVfX5SMHnN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi51dGlsc5SMCF9fZmlsZV9flIyGL1VzZXJzL3N1Ymhhc2lzaC9Eb2N1bWVudHMvaU5ldXJvbi9SZWluZm9yY2VtZW50LUxlYXJuaW5nL2RybC0yZWQvcmxfZHJsL2xpYi9weXRob24zLjkvc2l0ZS1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaB99lH2UKGgWaA2MDF9fcXVhbG5hbWVfX5SMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUjA9fX2Fubm90YXRpb25zX1+UfZSMDl9fa3dkZWZhdWx0c19flE6MDF9fZGVmYXVsdHNfX5ROjApfX21vZHVsZV9flGgXjAdfX2RvY19flE6MC19fY2xvc3VyZV9flGgAjApfbWFrZV9jZWxslJOURz8aNuLrHEMthZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjAu"
+ },
+ "batch_norm_stats": [],
+ "batch_norm_stats_target": [],
+ "exploration_schedule": {
+ ":type:": "<class 'function'>",
+ ":serialized:": "gAWVxgMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLBEsTQyxkAXwAGACIAWsEchCIAFMAiAJkAXwAGACIAIgCGAAUAIgBGwAXAFMAZABTAJROSwGGlCmMEnByb2dyZXNzX3JlbWFpbmluZ5SFlIyGL1VzZXJzL3N1Ymhhc2lzaC9Eb2N1bWVudHMvaU5ldXJvbi9SZWluZm9yY2VtZW50LUxlYXJuaW5nL2RybC0yZWQvcmxfZHJsL2xpYi9weXRob24zLjkvc2l0ZS1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUjARmdW5jlEtxQwYAAQwBBAKUjANlbmSUjAxlbmRfZnJhY3Rpb26UjAVzdGFydJSHlCl0lFKUfZQojAtfX3BhY2thZ2VfX5SMGHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbpSMCF9fbmFtZV9flIwec3RhYmxlX2Jhc2VsaW5lczMuY29tbW9uLnV0aWxzlIwIX19maWxlX1+UjIYvVXNlcnMvc3ViaGFzaXNoL0RvY3VtZW50cy9pTmV1cm9uL1JlaW5mb3JjZW1lbnQtTGVhcm5pbmcvZHJsLTJlZC9ybF9kcmwvbGliL3B5dGhvbjMuOS9zaXRlLXBhY2thZ2VzL3N0YWJsZV9iYXNlbGluZXMzL2NvbW1vbi91dGlscy5weZR1Tk5oAIwQX21ha2VfZW1wdHlfY2VsbJSTlClSlGgdKVKUaB0pUpSHlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCN9lH2UKGgYaA2MDF9fcXVhbG5hbWVfX5SMG2dldF9saW5lYXJfZm4uPGxvY2Fscz4uZnVuY5SMD19fYW5ub3RhdGlvbnNfX5R9lChoCowIYnVpbHRpbnOUjAVmbG9hdJSTlIwGcmV0dXJulGgudYwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBmMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RHP7mZmZmZmZqFlFKUaDZHP7mZmZmZmZqFlFKUaDZHP/AAAAAAAACFlFKUh5SMF19jbG91ZHBpY2tsZV9zdWJtb2R1bGVzlF2UjAtfX2dsb2JhbHNfX5R9lHWGlIZSMC4="
+ }
+ }
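
The saved data above pins down the DQN hyperparameters in effect at save time (replay buffer size, batch size, exploration schedule, target update interval, and so on). As a reading aid, here is a hedged sketch of how a model with the same settings could be constructed and trained with Stable-Baselines3. It is inferred from the saved fields, not the author's actual training script; in particular, the "MlpPolicy" string and the train_freq value are assumptions.

```python
import gymnasium as gym
from stable_baselines3 import DQN

# Hyperparameters are read off the saved config above; the constructor call
# itself is a reconstruction, not the author's original training script.
env = gym.make("LunarLander-v2")
model = DQN(
    "MlpPolicy",               # assumed; resolves to DQNPolicy with empty policy_kwargs
    env,
    learning_rate=1e-4,
    buffer_size=1_000_000,
    learning_starts=50_000,
    batch_size=32,
    tau=1.0,
    gamma=0.99,
    train_freq=4,              # assumed from the serialized TrainFreq (4 steps)
    gradient_steps=1,
    target_update_interval=250,
    exploration_fraction=0.1,
    exploration_initial_eps=1.0,
    exploration_final_eps=0.1,
    max_grad_norm=10,
    verbose=1,
)
model.learn(total_timesteps=1_000)  # num_timesteps recorded in the saved data
model.save("dqn-LunarLander-v2")
```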
dqn-LunarLander-v2/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d17122623157f8003aa40f4a6b9968afcefd2ff85a9d8261d6b0fe84f914b22d
+ size 1120
dqn-LunarLander-v2/policy.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d610b49debdcef9587847c0dc102ba4e1a25ae5551886b4b99a745f4819c3c0b
+ size 44338
dqn-LunarLander-v2/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebdad4b9cfe9cd22a3abadb5623bf7bb1f6eb2e408740245eb3f2044b0adc018
+ size 864
dqn-LunarLander-v2/system_info.txt ADDED
@@ -0,0 +1,9 @@
+ - OS: macOS-14.2.1-arm64-arm-64bit Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8103
+ - Python: 3.9.19
+ - Stable-Baselines3: 2.1.0
+ - PyTorch: 2.2.1
+ - GPU Enabled: False
+ - Numpy: 1.26.4
+ - Cloudpickle: 3.0.0
+ - Gymnasium: 0.29.1
+ - OpenAI Gym: 0.26.2
replay.mp4 ADDED
Binary file (216 kB).
 
results.json ADDED
@@ -0,0 +1 @@
+ {"mean_reward": -446.71560102050137, "std_reward": 136.10367483740197, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2024-03-21T10:47:07.306506"}