|
[2024-07-27 15:44:39,947][00200] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2024-07-27 15:44:39,949][00200] Rollout worker 0 uses device cpu |
|
[2024-07-27 15:44:39,952][00200] Rollout worker 1 uses device cpu |
|
[2024-07-27 15:44:39,954][00200] Rollout worker 2 uses device cpu |
|
[2024-07-27 15:44:39,955][00200] Rollout worker 3 uses device cpu |
|
[2024-07-27 15:44:39,956][00200] Rollout worker 4 uses device cpu |
|
[2024-07-27 15:44:39,960][00200] Rollout worker 5 uses device cpu |
|
[2024-07-27 15:44:39,961][00200] Rollout worker 6 uses device cpu |
|
[2024-07-27 15:44:39,962][00200] Rollout worker 7 uses device cpu |
|
[2024-07-27 15:44:40,159][00200] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 15:44:40,161][00200] InferenceWorker_p0-w0: min num requests: 2 |
|
[2024-07-27 15:44:40,207][00200] Starting all processes... |
|
[2024-07-27 15:44:40,209][00200] Starting process learner_proc0 |
|
[2024-07-27 15:44:41,862][00200] Starting all processes... |
|
[2024-07-27 15:44:41,872][00200] Starting process inference_proc0-0 |
|
[2024-07-27 15:44:41,873][00200] Starting process rollout_proc0 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc1 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc2 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc3 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc4 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc5 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc6 |
|
[2024-07-27 15:44:41,874][00200] Starting process rollout_proc7 |
|
[2024-07-27 15:44:56,790][08821] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 15:44:56,794][08821] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2024-07-27 15:44:56,851][08840] Worker 1 uses CPU cores [1] |
|
[2024-07-27 15:44:56,867][08821] Num visible devices: 1 |
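
For context: the learner pins itself to a single GPU by restricting device visibility before CUDA initializes, which is what the CUDA_VISIBLE_DEVICES line above reports. A minimal illustrative sketch of the same idea (not Sample Factory's actual code):

import os

# Restrict this process to GPU 0; must happen before the first CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # imported after the env var is set, so CUDA sees only GPU 0

print(torch.cuda.device_count())  # 1 on this setup, matching "Num visible devices: 1"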
|
[2024-07-27 15:44:56,900][08821] Starting seed is not provided |
|
[2024-07-27 15:44:56,901][08821] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 15:44:56,902][08821] Initializing actor-critic model on device cuda:0 |
|
[2024-07-27 15:44:56,903][08821] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 15:44:56,906][08821] RunningMeanStd input shape: (1,) |
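
The two RunningMeanStd lines above refer to running mean/variance normalizers: one over RGB observations of shape (3, 72, 128) and one over scalar returns of shape (1,). A minimal sketch of the underlying statistic, assuming the standard parallel-update formula (illustrative, not Sample Factory's implementation):

import numpy as np

class RunningMeanStd:
    """Tracks the running mean/variance of a stream of batches."""

    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps  # avoids division by zero before the first update

    def update(self, batch):
        # Merge batch statistics into the running ones (Chan et al. parallel update).
        batch_mean, batch_var = batch.mean(axis=0), batch.var(axis=0)
        batch_count = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        self.mean = self.mean + delta * batch_count / total
        m2 = (self.var * self.count + batch_var * batch_count
              + delta ** 2 * self.count * batch_count / total)
        self.var = m2 / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

# Shapes as reported in the log:
obs_norm = RunningMeanStd((3, 72, 128))
ret_norm = RunningMeanStd((1,))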
|
[2024-07-27 15:44:57,011][08821] ConvEncoder: input_channels=3 |
|
[2024-07-27 15:44:57,036][08844] Worker 5 uses CPU cores [1] |
|
[2024-07-27 15:44:57,045][08842] Worker 3 uses CPU cores [1] |
|
[2024-07-27 15:44:57,143][08843] Worker 4 uses CPU cores [0] |
|
[2024-07-27 15:44:57,240][08846] Worker 7 uses CPU cores [1] |
|
[2024-07-27 15:44:57,311][08839] Worker 0 uses CPU cores [0] |
|
[2024-07-27 15:44:57,343][08841] Worker 2 uses CPU cores [0] |
|
[2024-07-27 15:44:57,400][08845] Worker 6 uses CPU cores [0] |
|
[2024-07-27 15:44:57,414][08838] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 15:44:57,414][08838] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2024-07-27 15:44:57,417][08821] Conv encoder output size: 512 |
|
[2024-07-27 15:44:57,418][08821] Policy head output size: 512 |
|
[2024-07-27 15:44:57,436][08838] Num visible devices: 1 |
|
[2024-07-27 15:44:57,479][08821] Created Actor Critic model with architecture: |
|
[2024-07-27 15:44:57,479][08821] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
|
[2024-07-27 15:44:57,752][08821] Using optimizer <class 'torch.optim.adam.Adam'> |
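
The printed architecture maps to a fairly compact PyTorch model. A rough re-creation for orientation: the ELU activations, the 512-dim encoder/GRU/value/policy sizes, and the 5-way action head come from the repr above, while the conv filter counts, kernel sizes, and strides are assumptions, since the scripted repr hides them.

import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions=5):
        super().__init__()
        # Three Conv2d+ELU pairs, as in the printed conv_head (filter sizes assumed).
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened conv output size for (3, 72, 128) inputs
            n = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        self.mlp_layers = nn.Sequential(nn.Linear(n, 512), nn.ELU())  # encoder output: 512
        self.core = nn.GRU(512, 512)                             # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)                   # value head
        self.distribution_linear = nn.Linear(512, num_actions)   # action logits

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)      # seq_len of 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = ActorCriticSketch()
optimizer = torch.optim.Adam(model.parameters())  # the log reports torch.optim.adam.Adam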
|
[2024-07-27 15:44:58,497][08821] No checkpoints found |
|
[2024-07-27 15:44:58,498][08821] Did not load from checkpoint, starting from scratch! |
|
[2024-07-27 15:44:58,498][08821] Initialized policy 0 weights for model version 0 |
|
[2024-07-27 15:44:58,502][08821] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 15:44:58,524][08821] LearnerWorker_p0 finished initialization! |
|
[2024-07-27 15:44:58,634][08838] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 15:44:58,636][08838] RunningMeanStd input shape: (1,) |
|
[2024-07-27 15:44:58,649][08838] ConvEncoder: input_channels=3 |
|
[2024-07-27 15:44:58,755][08838] Conv encoder output size: 512 |
|
[2024-07-27 15:44:58,755][08838] Policy head output size: 512 |
|
[2024-07-27 15:44:58,810][00200] Inference worker 0-0 is ready! |
|
[2024-07-27 15:44:58,811][00200] All inference workers are ready! Signal rollout workers to start! |
|
[2024-07-27 15:44:59,104][08844] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,109][08842] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,127][08841] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,163][08845] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,165][08839] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,179][08846] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,181][08840] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 15:44:59,191][08843] Doom resolution: 160x120, resize resolution: (128, 72) |
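
All eight workers render Doom at 160x120 and downscale to the model's input size; note the resize target is given as (width, height) = (128, 72), matching the (3, 72, 128) channels-first observation shape above. A minimal sketch of that preprocessing, assuming OpenCV (illustrative, not the wrapper's actual code):

import cv2
import numpy as np

def preprocess(frame):
    # frame: uint8 RGB screen buffer of shape (120, 160, 3) from the game
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)  # dsize is (w, h)
    return np.transpose(resized, (2, 0, 1))  # -> (3, 72, 128), channels-first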
|
[2024-07-27 15:44:59,509][08842] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process... |
|
[2024-07-27 15:44:59,507][08843] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process... |
|
[2024-07-27 15:44:59,510][08842] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() |
|
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 515, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
|
[2024-07-27 15:44:59,516][08842] Unhandled exception in evt loop rollout_proc3_evt_loop |
|
[2024-07-27 15:44:59,509][08839] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process... |
|
[2024-07-27 15:44:59,518][08839] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() |
|
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 515, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
|
[2024-07-27 15:44:59,525][08839] Unhandled exception in evt loop rollout_proc0_evt_loop |
|
[2024-07-27 15:44:59,512][08843] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() |
|
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 515, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
|
[2024-07-27 15:44:59,526][08843] Unhandled exception in evt loop rollout_proc4_evt_loop |
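
Rollout workers 0, 3, and 4 all died inside game.init() with ViZDoomUnexpectedExitException; they are the same three components later reported as never started. One common cause on headless machines such as Colab is the lack of a display; resource contention while eight instances start at once is another possibility. A typical headless workaround, assuming Xvfb and the pyvirtualdisplay package are installed, is to start a virtual framebuffer before creating the environments:

from pyvirtualdisplay import Display

# Start a headless X server (Xvfb) so windowed programs like ViZDoom can render.
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()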
|
[2024-07-27 15:45:00,148][00200] Heartbeat connected on Batcher_0 |
|
[2024-07-27 15:45:00,163][00200] Heartbeat connected on LearnerWorker_p0 |
|
[2024-07-27 15:45:00,202][00200] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2024-07-27 15:45:00,721][00200] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
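
These status lines repeat roughly every 5 seconds and report throughput over sliding 10/60/300-second windows; each window reads nan until it holds at least two snapshots. A minimal sketch of windowed FPS computed from (timestamp, total-frames) snapshots, purely illustrative:

import time
from collections import deque

class WindowedFps:
    def __init__(self, window_sec):
        self.window_sec = window_sec
        self.snapshots = deque()  # (timestamp, total_frames) pairs

    def record(self, total_frames):
        now = time.time()
        self.snapshots.append((now, total_frames))
        # Drop snapshots that have fallen out of the window.
        while now - self.snapshots[0][0] > self.window_sec:
            self.snapshots.popleft()

    def fps(self):
        if len(self.snapshots) < 2:
            return float("nan")
        (t0, f0), (t1, f1) = self.snapshots[0], self.snapshots[-1]
        return (f1 - f0) / (t1 - t0) if t1 > t0 else float("nan")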
|
[2024-07-27 15:45:00,928][08841] Decorrelating experience for 0 frames... |
|
[2024-07-27 15:45:00,928][08844] Decorrelating experience for 0 frames... |
|
[2024-07-27 15:45:00,926][08846] Decorrelating experience for 0 frames... |
|
[2024-07-27 15:45:01,921][08844] Decorrelating experience for 32 frames... |
|
[2024-07-27 15:45:01,937][08846] Decorrelating experience for 32 frames... |
|
[2024-07-27 15:45:01,946][08845] Decorrelating experience for 0 frames... |
|
[2024-07-27 15:45:01,951][08841] Decorrelating experience for 32 frames... |
|
[2024-07-27 15:45:02,754][08845] Decorrelating experience for 32 frames... |
|
[2024-07-27 15:45:02,996][08841] Decorrelating experience for 64 frames... |
|
[2024-07-27 15:45:03,154][08840] Decorrelating experience for 0 frames... |
|
[2024-07-27 15:45:03,440][08844] Decorrelating experience for 64 frames... |
|
[2024-07-27 15:45:03,445][08846] Decorrelating experience for 64 frames... |
|
[2024-07-27 15:45:03,453][08845] Decorrelating experience for 64 frames... |
|
[2024-07-27 15:45:03,865][08840] Decorrelating experience for 32 frames... |
|
[2024-07-27 15:45:04,263][08841] Decorrelating experience for 96 frames... |
|
[2024-07-27 15:45:04,389][08845] Decorrelating experience for 96 frames... |
|
[2024-07-27 15:45:04,419][08846] Decorrelating experience for 96 frames... |
|
[2024-07-27 15:45:04,472][00200] Heartbeat connected on RolloutWorker_w2 |
|
[2024-07-27 15:45:04,595][00200] Heartbeat connected on RolloutWorker_w6 |
|
[2024-07-27 15:45:04,617][00200] Heartbeat connected on RolloutWorker_w7 |
|
[2024-07-27 15:45:05,204][08844] Decorrelating experience for 96 frames... |
|
[2024-07-27 15:45:05,245][08840] Decorrelating experience for 64 frames... |
|
[2024-07-27 15:45:05,352][00200] Heartbeat connected on RolloutWorker_w5 |
|
[2024-07-27 15:45:05,725][00200] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 4.0. Samples: 20. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 15:45:05,908][08840] Decorrelating experience for 96 frames... |
|
[2024-07-27 15:45:06,637][00200] Heartbeat connected on RolloutWorker_w1 |
|
[2024-07-27 15:45:10,165][08821] Signal inference workers to stop experience collection... |
|
[2024-07-27 15:45:10,190][08838] InferenceWorker_p0-w0: stopping experience collection |
|
[2024-07-27 15:45:10,721][00200] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 108.2. Samples: 1082. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 15:45:10,722][00200] Avg episode reward: [(0, '2.904')] |
|
[2024-07-27 15:45:13,074][08821] Signal inference workers to resume experience collection... |
|
[2024-07-27 15:45:13,075][08838] InferenceWorker_p0-w0: resuming experience collection |
|
[2024-07-27 15:45:15,721][00200] Fps is (10 sec: 1229.3, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 174.3. Samples: 2614. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
[2024-07-27 15:45:15,727][00200] Avg episode reward: [(0, '3.727')] |
|
[2024-07-27 15:45:20,721][00200] Fps is (10 sec: 3276.8, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 32768. Throughput: 0: 416.6. Samples: 8332. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:45:20,723][00200] Avg episode reward: [(0, '4.165')] |
|
[2024-07-27 15:45:22,549][08838] Updated weights for policy 0, policy_version 10 (0.0014) |
|
[2024-07-27 15:45:25,721][00200] Fps is (10 sec: 3276.8, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 433.2. Samples: 10830. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:45:25,726][00200] Avg episode reward: [(0, '4.423')] |
|
[2024-07-27 15:45:30,721][00200] Fps is (10 sec: 2867.1, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 61440. Throughput: 0: 496.3. Samples: 14890. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:45:30,728][00200] Avg episode reward: [(0, '4.388')] |
|
[2024-07-27 15:45:34,992][08838] Updated weights for policy 0, policy_version 20 (0.0014) |
|
[2024-07-27 15:45:35,721][00200] Fps is (10 sec: 3686.4, 60 sec: 2340.6, 300 sec: 2340.6). Total num frames: 81920. Throughput: 0: 592.6. Samples: 20742. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:45:35,728][00200] Avg episode reward: [(0, '4.500')] |
|
[2024-07-27 15:45:40,721][00200] Fps is (10 sec: 3686.5, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 98304. Throughput: 0: 587.5. Samples: 23498. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:45:40,729][00200] Avg episode reward: [(0, '4.644')] |
|
[2024-07-27 15:45:45,721][00200] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 110592. Throughput: 0: 603.6. Samples: 27160. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:45:45,728][00200] Avg episode reward: [(0, '4.535')] |
|
[2024-07-27 15:45:45,739][08821] Saving new best policy, reward=4.535! |
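
Best-policy tracking, as seen here and repeatedly below, is simple bookkeeping: whenever the average episode reward beats the previous best, the learner writes a "best" checkpoint. Roughly (illustrative, not Sample Factory's code):

best_reward = float("-inf")  # persisted with the experiment in practice

def maybe_save_best(avg_reward, save_fn):
    global best_reward
    if avg_reward > best_reward:
        best_reward = avg_reward
        save_fn()  # e.g. write a best-policy .pth for this reward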
|
[2024-07-27 15:45:47,907][08838] Updated weights for policy 0, policy_version 30 (0.0014) |
|
[2024-07-27 15:45:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 131072. Throughput: 0: 728.2. Samples: 32784. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:45:50,726][00200] Avg episode reward: [(0, '4.629')] |
|
[2024-07-27 15:45:50,728][08821] Saving new best policy, reward=4.629! |
|
[2024-07-27 15:45:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 2681.0, 300 sec: 2681.0). Total num frames: 147456. Throughput: 0: 764.7. Samples: 35492. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:45:55,728][00200] Avg episode reward: [(0, '4.408')] |
|
[2024-07-27 15:46:00,721][00200] Fps is (10 sec: 2867.2, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 159744. Throughput: 0: 813.4. Samples: 39218. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:00,727][00200] Avg episode reward: [(0, '4.346')] |
|
[2024-07-27 15:46:00,987][08838] Updated weights for policy 0, policy_version 40 (0.0026) |
|
[2024-07-27 15:46:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3003.9, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 815.8. Samples: 45044. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:05,728][00200] Avg episode reward: [(0, '4.334')] |
|
[2024-07-27 15:46:10,723][00200] Fps is (10 sec: 3685.6, 60 sec: 3276.7, 300 sec: 2808.6). Total num frames: 196608. Throughput: 0: 825.1. Samples: 47960. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:10,729][00200] Avg episode reward: [(0, '4.273')] |
|
[2024-07-27 15:46:12,774][08838] Updated weights for policy 0, policy_version 50 (0.0023) |
|
[2024-07-27 15:46:15,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 2785.3). Total num frames: 208896. Throughput: 0: 827.9. Samples: 52146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:46:15,722][00200] Avg episode reward: [(0, '4.402')] |
|
[2024-07-27 15:46:20,721][00200] Fps is (10 sec: 3277.5, 60 sec: 3276.8, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 819.9. Samples: 57638. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:46:20,728][00200] Avg episode reward: [(0, '4.690')] |
|
[2024-07-27 15:46:20,731][08821] Saving new best policy, reward=4.690! |
|
[2024-07-27 15:46:24,001][08838] Updated weights for policy 0, policy_version 60 (0.0016) |
|
[2024-07-27 15:46:25,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 2939.5). Total num frames: 249856. Throughput: 0: 823.8. Samples: 60570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:46:25,725][00200] Avg episode reward: [(0, '4.657')] |
|
[2024-07-27 15:46:30,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 2912.7). Total num frames: 262144. Throughput: 0: 838.5. Samples: 64894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:46:30,728][00200] Avg episode reward: [(0, '4.459')] |
|
[2024-07-27 15:46:35,724][00200] Fps is (10 sec: 3275.7, 60 sec: 3344.9, 300 sec: 2974.9). Total num frames: 282624. Throughput: 0: 830.2. Samples: 70148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:35,726][00200] Avg episode reward: [(0, '4.329')] |
|
[2024-07-27 15:46:35,737][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth... |
|
[2024-07-27 15:46:36,782][08838] Updated weights for policy 0, policy_version 70 (0.0022) |
|
[2024-07-27 15:46:40,723][00200] Fps is (10 sec: 3685.6, 60 sec: 3344.9, 300 sec: 2990.0). Total num frames: 299008. Throughput: 0: 835.5. Samples: 73092. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:40,729][00200] Avg episode reward: [(0, '4.388')] |
|
[2024-07-27 15:46:45,721][00200] Fps is (10 sec: 2868.2, 60 sec: 3345.1, 300 sec: 2964.7). Total num frames: 311296. Throughput: 0: 855.2. Samples: 77704. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:45,725][00200] Avg episode reward: [(0, '4.387')] |
|
[2024-07-27 15:46:49,542][08838] Updated weights for policy 0, policy_version 80 (0.0015) |
|
[2024-07-27 15:46:50,721][00200] Fps is (10 sec: 3277.6, 60 sec: 3345.1, 300 sec: 3016.1). Total num frames: 331776. Throughput: 0: 833.3. Samples: 82544. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:46:50,723][00200] Avg episode reward: [(0, '4.553')] |
|
[2024-07-27 15:46:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3027.5). Total num frames: 348160. Throughput: 0: 833.3. Samples: 85458. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:46:55,728][00200] Avg episode reward: [(0, '4.494')] |
|
[2024-07-27 15:47:00,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3037.9). Total num frames: 364544. Throughput: 0: 850.3. Samples: 90408. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:00,723][00200] Avg episode reward: [(0, '4.629')] |
|
[2024-07-27 15:47:02,506][08838] Updated weights for policy 0, policy_version 90 (0.0023) |
|
[2024-07-27 15:47:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3047.4). Total num frames: 380928. Throughput: 0: 822.7. Samples: 94660. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:47:05,728][00200] Avg episode reward: [(0, '4.738')] |
|
[2024-07-27 15:47:05,737][08821] Saving new best policy, reward=4.738! |
|
[2024-07-27 15:47:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3056.2). Total num frames: 397312. Throughput: 0: 820.2. Samples: 97480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:47:10,727][00200] Avg episode reward: [(0, '4.461')] |
|
[2024-07-27 15:47:14,072][08838] Updated weights for policy 0, policy_version 100 (0.0019) |
|
[2024-07-27 15:47:15,722][00200] Fps is (10 sec: 2866.8, 60 sec: 3345.0, 300 sec: 3034.0). Total num frames: 409600. Throughput: 0: 838.1. Samples: 102610. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:15,727][00200] Avg episode reward: [(0, '4.429')] |
|
[2024-07-27 15:47:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3042.7). Total num frames: 425984. Throughput: 0: 813.1. Samples: 106734. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:20,727][00200] Avg episode reward: [(0, '4.515')] |
|
[2024-07-27 15:47:25,721][00200] Fps is (10 sec: 3686.9, 60 sec: 3276.8, 300 sec: 3079.1). Total num frames: 446464. Throughput: 0: 811.4. Samples: 109604. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:25,726][00200] Avg episode reward: [(0, '4.396')] |
|
[2024-07-27 15:47:26,286][08838] Updated weights for policy 0, policy_version 110 (0.0032) |
|
[2024-07-27 15:47:30,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3085.7). Total num frames: 462848. Throughput: 0: 834.4. Samples: 115252. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:30,724][00200] Avg episode reward: [(0, '4.518')] |
|
[2024-07-27 15:47:35,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3208.7, 300 sec: 3065.4). Total num frames: 475136. Throughput: 0: 813.1. Samples: 119134. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:35,723][00200] Avg episode reward: [(0, '4.551')] |
|
[2024-07-27 15:47:38,921][08838] Updated weights for policy 0, policy_version 120 (0.0025) |
|
[2024-07-27 15:47:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.9, 300 sec: 3097.6). Total num frames: 495616. Throughput: 0: 816.9. Samples: 122220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:47:40,723][00200] Avg episode reward: [(0, '4.765')] |
|
[2024-07-27 15:47:40,725][08821] Saving new best policy, reward=4.765! |
|
[2024-07-27 15:47:45,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3103.0). Total num frames: 512000. Throughput: 0: 835.1. Samples: 127986. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:47:45,727][00200] Avg episode reward: [(0, '4.665')] |
|
[2024-07-27 15:47:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3108.1). Total num frames: 528384. Throughput: 0: 828.4. Samples: 131936. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:47:50,723][00200] Avg episode reward: [(0, '4.511')] |
|
[2024-07-27 15:47:51,464][08838] Updated weights for policy 0, policy_version 130 (0.0022) |
|
[2024-07-27 15:47:55,723][00200] Fps is (10 sec: 3685.5, 60 sec: 3344.9, 300 sec: 3136.3). Total num frames: 548864. Throughput: 0: 829.6. Samples: 134816. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:47:55,727][00200] Avg episode reward: [(0, '4.656')] |
|
[2024-07-27 15:48:00,723][00200] Fps is (10 sec: 4095.2, 60 sec: 3413.2, 300 sec: 3163.0). Total num frames: 569344. Throughput: 0: 851.8. Samples: 140940. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:48:00,725][00200] Avg episode reward: [(0, '4.799')] |
|
[2024-07-27 15:48:00,728][08821] Saving new best policy, reward=4.799! |
|
[2024-07-27 15:48:02,543][08838] Updated weights for policy 0, policy_version 140 (0.0019) |
|
[2024-07-27 15:48:05,721][00200] Fps is (10 sec: 2867.9, 60 sec: 3276.8, 300 sec: 3121.8). Total num frames: 577536. Throughput: 0: 852.9. Samples: 145116. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:48:05,723][00200] Avg episode reward: [(0, '4.957')] |
|
[2024-07-27 15:48:05,759][08821] Saving new best policy, reward=4.957! |
|
[2024-07-27 15:48:10,721][00200] Fps is (10 sec: 2867.7, 60 sec: 3345.1, 300 sec: 3147.4). Total num frames: 598016. Throughput: 0: 843.9. Samples: 147580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:48:10,729][00200] Avg episode reward: [(0, '5.109')] |
|
[2024-07-27 15:48:10,730][08821] Saving new best policy, reward=5.109! |
|
[2024-07-27 15:48:14,381][08838] Updated weights for policy 0, policy_version 150 (0.0016) |
|
[2024-07-27 15:48:15,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3171.8). Total num frames: 618496. Throughput: 0: 845.5. Samples: 153298. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:48:15,723][00200] Avg episode reward: [(0, '4.809')] |
|
[2024-07-27 15:48:20,721][00200] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3153.9). Total num frames: 630784. Throughput: 0: 863.3. Samples: 157982. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:48:20,724][00200] Avg episode reward: [(0, '4.915')] |
|
[2024-07-27 15:48:25,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3156.9). Total num frames: 647168. Throughput: 0: 842.3. Samples: 160124. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:48:25,723][00200] Avg episode reward: [(0, '4.911')] |
|
[2024-07-27 15:48:26,942][08838] Updated weights for policy 0, policy_version 160 (0.0014) |
|
[2024-07-27 15:48:30,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3179.3). Total num frames: 667648. Throughput: 0: 844.5. Samples: 165988. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:48:30,722][00200] Avg episode reward: [(0, '5.136')] |
|
[2024-07-27 15:48:30,727][08821] Saving new best policy, reward=5.136! |
|
[2024-07-27 15:48:35,722][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3181.5). Total num frames: 684032. Throughput: 0: 864.4. Samples: 170836. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:48:35,724][00200] Avg episode reward: [(0, '5.211')] |
|
[2024-07-27 15:48:35,738][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000167_684032.pth... |
|
[2024-07-27 15:48:35,880][08821] Saving new best policy, reward=5.211! |
|
[2024-07-27 15:48:39,742][08838] Updated weights for policy 0, policy_version 170 (0.0021) |
|
[2024-07-27 15:48:40,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3165.1). Total num frames: 696320. Throughput: 0: 838.8. Samples: 172562. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:48:40,727][00200] Avg episode reward: [(0, '5.094')] |
|
[2024-07-27 15:48:45,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3185.8). Total num frames: 716800. Throughput: 0: 835.2. Samples: 178520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-27 15:48:45,725][00200] Avg episode reward: [(0, '4.711')] |
|
[2024-07-27 15:48:50,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3187.8). Total num frames: 733184. Throughput: 0: 860.6. Samples: 183842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:48:50,724][00200] Avg episode reward: [(0, '4.902')] |
|
[2024-07-27 15:48:51,131][08838] Updated weights for policy 0, policy_version 180 (0.0019) |
|
[2024-07-27 15:48:55,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3189.7). Total num frames: 749568. Throughput: 0: 848.8. Samples: 185774. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:48:55,724][00200] Avg episode reward: [(0, '5.104')] |
|
[2024-07-27 15:49:00,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3208.5). Total num frames: 770048. Throughput: 0: 853.0. Samples: 191682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:49:00,723][00200] Avg episode reward: [(0, '5.078')] |
|
[2024-07-27 15:49:01,855][08838] Updated weights for policy 0, policy_version 190 (0.0021) |
|
[2024-07-27 15:49:05,721][00200] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3209.9). Total num frames: 786432. Throughput: 0: 873.6. Samples: 197294. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:49:05,724][00200] Avg episode reward: [(0, '5.020')] |
|
[2024-07-27 15:49:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.4, 300 sec: 3211.3). Total num frames: 802816. Throughput: 0: 867.6. Samples: 199166. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:49:10,722][00200] Avg episode reward: [(0, '5.076')] |
|
[2024-07-27 15:49:14,321][08838] Updated weights for policy 0, policy_version 200 (0.0021) |
|
[2024-07-27 15:49:15,721][00200] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3228.6). Total num frames: 823296. Throughput: 0: 857.6. Samples: 204582. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:15,723][00200] Avg episode reward: [(0, '5.002')] |
|
[2024-07-27 15:49:20,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3229.5). Total num frames: 839680. Throughput: 0: 877.0. Samples: 210302. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:20,727][00200] Avg episode reward: [(0, '4.925')] |
|
[2024-07-27 15:49:25,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3215.0). Total num frames: 851968. Throughput: 0: 879.7. Samples: 212150. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:25,728][00200] Avg episode reward: [(0, '5.065')] |
|
[2024-07-27 15:49:27,084][08838] Updated weights for policy 0, policy_version 210 (0.0021) |
|
[2024-07-27 15:49:30,721][00200] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3231.3). Total num frames: 872448. Throughput: 0: 858.6. Samples: 217158. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:49:30,723][00200] Avg episode reward: [(0, '5.345')] |
|
[2024-07-27 15:49:30,726][08821] Saving new best policy, reward=5.345! |
|
[2024-07-27 15:49:35,727][00200] Fps is (10 sec: 4093.4, 60 sec: 3481.2, 300 sec: 3246.9). Total num frames: 892928. Throughput: 0: 874.4. Samples: 223194. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:35,729][00200] Avg episode reward: [(0, '5.329')] |
|
[2024-07-27 15:49:38,500][08838] Updated weights for policy 0, policy_version 220 (0.0018) |
|
[2024-07-27 15:49:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3232.9). Total num frames: 905216. Throughput: 0: 876.0. Samples: 225196. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:40,723][00200] Avg episode reward: [(0, '5.036')] |
|
[2024-07-27 15:49:45,721][00200] Fps is (10 sec: 3278.9, 60 sec: 3481.6, 300 sec: 3248.1). Total num frames: 925696. Throughput: 0: 851.1. Samples: 229980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:49:45,723][00200] Avg episode reward: [(0, '5.094')] |
|
[2024-07-27 15:49:49,778][08838] Updated weights for policy 0, policy_version 230 (0.0015) |
|
[2024-07-27 15:49:50,721][00200] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3248.6). Total num frames: 942080. Throughput: 0: 857.8. Samples: 235894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:49:50,723][00200] Avg episode reward: [(0, '5.011')] |
|
[2024-07-27 15:49:55,721][00200] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3249.0). Total num frames: 958464. Throughput: 0: 868.7. Samples: 238258. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:49:55,730][00200] Avg episode reward: [(0, '5.247')] |
|
[2024-07-27 15:50:00,721][00200] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 974848. Throughput: 0: 846.4. Samples: 242670. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:50:00,727][00200] Avg episode reward: [(0, '5.314')] |
|
[2024-07-27 15:50:02,300][08838] Updated weights for policy 0, policy_version 240 (0.0016) |
|
[2024-07-27 15:50:05,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 995328. Throughput: 0: 850.6. Samples: 248580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:50:05,727][00200] Avg episode reward: [(0, '5.226')] |
|
[2024-07-27 15:50:10,725][00200] Fps is (10 sec: 3275.4, 60 sec: 3413.1, 300 sec: 3373.9). Total num frames: 1007616. Throughput: 0: 864.4. Samples: 251050. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:50:10,727][00200] Avg episode reward: [(0, '5.044')] |
|
[2024-07-27 15:50:14,921][08838] Updated weights for policy 0, policy_version 250 (0.0020) |
|
[2024-07-27 15:50:15,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 1024000. Throughput: 0: 846.6. Samples: 255256. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:50:15,723][00200] Avg episode reward: [(0, '4.808')] |
|
[2024-07-27 15:50:20,721][00200] Fps is (10 sec: 3687.9, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 1044480. Throughput: 0: 846.9. Samples: 261300. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:50:20,728][00200] Avg episode reward: [(0, '5.045')] |
|
[2024-07-27 15:50:25,726][00200] Fps is (10 sec: 3684.4, 60 sec: 3481.3, 300 sec: 3387.8). Total num frames: 1060864. Throughput: 0: 863.1. Samples: 264038. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:50:25,731][00200] Avg episode reward: [(0, '5.089')] |
|
[2024-07-27 15:50:26,941][08838] Updated weights for policy 0, policy_version 260 (0.0017) |
|
[2024-07-27 15:50:30,721][00200] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1077248. Throughput: 0: 849.1. Samples: 268190. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:50:30,723][00200] Avg episode reward: [(0, '5.164')] |
|
[2024-07-27 15:50:35,721][00200] Fps is (10 sec: 3688.4, 60 sec: 3413.7, 300 sec: 3387.9). Total num frames: 1097728. Throughput: 0: 848.7. Samples: 274084. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:50:35,723][00200] Avg episode reward: [(0, '5.399')] |
|
[2024-07-27 15:50:35,745][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth... |
|
[2024-07-27 15:50:35,858][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth |
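
Periodic checkpoints follow a keep-last-N rotation: each save (named checkpoint_<policy_version>_<frames>.pth) is followed by removal of the oldest remaining file, as the two lines above show. A generic sketch of that pattern; save_and_rotate is a hypothetical helper, not Sample Factory's API:

import glob
import os
import torch

def save_and_rotate(ckpt_dir, state, policy_version, total_frames, keep_last=2):
    path = os.path.join(ckpt_dir, f"checkpoint_{policy_version:09d}_{total_frames}.pth")
    torch.save(state, path)
    # Zero-padded versions make lexicographic order equal chronological order.
    checkpoints = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in checkpoints[:-keep_last]:
        os.remove(old)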
|
[2024-07-27 15:50:35,879][08821] Saving new best policy, reward=5.399! |
|
[2024-07-27 15:50:37,865][08838] Updated weights for policy 0, policy_version 270 (0.0016) |
|
[2024-07-27 15:50:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 1110016. Throughput: 0: 859.3. Samples: 276928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-27 15:50:40,732][00200] Avg episode reward: [(0, '5.376')] |
|
[2024-07-27 15:50:45,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1126400. Throughput: 0: 840.7. Samples: 280500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:50:45,727][00200] Avg episode reward: [(0, '5.516')] |
|
[2024-07-27 15:50:45,741][08821] Saving new best policy, reward=5.516! |
|
[2024-07-27 15:50:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1142784. Throughput: 0: 835.4. Samples: 286174. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:50:50,723][00200] Avg episode reward: [(0, '5.704')] |
|
[2024-07-27 15:50:50,729][08821] Saving new best policy, reward=5.704! |
|
[2024-07-27 15:50:50,986][08838] Updated weights for policy 0, policy_version 280 (0.0020) |
|
[2024-07-27 15:50:55,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1159168. Throughput: 0: 841.7. Samples: 288922. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:50:55,724][00200] Avg episode reward: [(0, '5.896')] |
|
[2024-07-27 15:50:55,736][08821] Saving new best policy, reward=5.896! |
|
[2024-07-27 15:51:00,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1175552. Throughput: 0: 837.1. Samples: 292924. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:51:00,725][00200] Avg episode reward: [(0, '6.137')] |
|
[2024-07-27 15:51:00,729][08821] Saving new best policy, reward=6.137! |
|
[2024-07-27 15:51:03,689][08838] Updated weights for policy 0, policy_version 290 (0.0026) |
|
[2024-07-27 15:51:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1191936. Throughput: 0: 823.9. Samples: 298376. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:05,724][00200] Avg episode reward: [(0, '6.174')] |
|
[2024-07-27 15:51:05,735][08821] Saving new best policy, reward=6.174! |
|
[2024-07-27 15:51:10,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.6, 300 sec: 3401.8). Total num frames: 1212416. Throughput: 0: 825.8. Samples: 301194. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:10,723][00200] Avg episode reward: [(0, '6.471')] |
|
[2024-07-27 15:51:10,725][08821] Saving new best policy, reward=6.471! |
|
[2024-07-27 15:51:15,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1224704. Throughput: 0: 831.3. Samples: 305598. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:51:15,728][00200] Avg episode reward: [(0, '6.215')] |
|
[2024-07-27 15:51:16,515][08838] Updated weights for policy 0, policy_version 300 (0.0044) |
|
[2024-07-27 15:51:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1241088. Throughput: 0: 812.6. Samples: 310650. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:51:20,723][00200] Avg episode reward: [(0, '6.291')] |
|
[2024-07-27 15:51:25,721][00200] Fps is (10 sec: 3686.5, 60 sec: 3345.4, 300 sec: 3387.9). Total num frames: 1261568. Throughput: 0: 813.6. Samples: 313542. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:25,725][00200] Avg episode reward: [(0, '6.129')] |
|
[2024-07-27 15:51:28,041][08838] Updated weights for policy 0, policy_version 310 (0.0015) |
|
[2024-07-27 15:51:30,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1273856. Throughput: 0: 836.8. Samples: 318154. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:30,727][00200] Avg episode reward: [(0, '6.527')] |
|
[2024-07-27 15:51:30,730][08821] Saving new best policy, reward=6.527! |
|
[2024-07-27 15:51:35,723][00200] Fps is (10 sec: 2866.5, 60 sec: 3208.4, 300 sec: 3360.1). Total num frames: 1290240. Throughput: 0: 814.0. Samples: 322804. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:35,729][00200] Avg episode reward: [(0, '6.593')] |
|
[2024-07-27 15:51:35,738][08821] Saving new best policy, reward=6.593! |
|
[2024-07-27 15:51:40,263][08838] Updated weights for policy 0, policy_version 320 (0.0017) |
|
[2024-07-27 15:51:40,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1310720. Throughput: 0: 815.6. Samples: 325622. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:40,726][00200] Avg episode reward: [(0, '6.674')] |
|
[2024-07-27 15:51:40,729][08821] Saving new best policy, reward=6.674! |
|
[2024-07-27 15:51:45,721][00200] Fps is (10 sec: 3277.5, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1323008. Throughput: 0: 836.1. Samples: 330548. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:51:45,723][00200] Avg episode reward: [(0, '6.399')] |
|
[2024-07-27 15:51:50,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1339392. Throughput: 0: 810.9. Samples: 334868. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:51:50,723][00200] Avg episode reward: [(0, '6.287')] |
|
[2024-07-27 15:51:53,350][08838] Updated weights for policy 0, policy_version 330 (0.0017) |
|
[2024-07-27 15:51:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1359872. Throughput: 0: 813.5. Samples: 337802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:51:55,728][00200] Avg episode reward: [(0, '6.036')] |
|
[2024-07-27 15:52:00,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1372160. Throughput: 0: 833.7. Samples: 343116. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:52:00,728][00200] Avg episode reward: [(0, '6.150')] |
|
[2024-07-27 15:52:05,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1388544. Throughput: 0: 813.8. Samples: 347272. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:52:05,724][00200] Avg episode reward: [(0, '6.377')] |
|
[2024-07-27 15:52:06,097][08838] Updated weights for policy 0, policy_version 340 (0.0020) |
|
[2024-07-27 15:52:10,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1409024. Throughput: 0: 813.3. Samples: 350140. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:10,727][00200] Avg episode reward: [(0, '6.738')] |
|
[2024-07-27 15:52:10,730][08821] Saving new best policy, reward=6.738! |
|
[2024-07-27 15:52:15,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1425408. Throughput: 0: 835.6. Samples: 355756. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:52:15,723][00200] Avg episode reward: [(0, '6.780')] |
|
[2024-07-27 15:52:15,740][08821] Saving new best policy, reward=6.780! |
|
[2024-07-27 15:52:18,856][08838] Updated weights for policy 0, policy_version 350 (0.0020) |
|
[2024-07-27 15:52:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3360.1). Total num frames: 1437696. Throughput: 0: 811.6. Samples: 359322. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:20,727][00200] Avg episode reward: [(0, '6.798')] |
|
[2024-07-27 15:52:20,730][08821] Saving new best policy, reward=6.798! |
|
[2024-07-27 15:52:25,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1458176. Throughput: 0: 811.6. Samples: 362144. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:52:25,725][00200] Avg episode reward: [(0, '7.165')] |
|
[2024-07-27 15:52:25,739][08821] Saving new best policy, reward=7.165! |
|
[2024-07-27 15:52:29,857][08838] Updated weights for policy 0, policy_version 360 (0.0020) |
|
[2024-07-27 15:52:30,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1474560. Throughput: 0: 834.3. Samples: 368090. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:30,724][00200] Avg episode reward: [(0, '7.273')] |
|
[2024-07-27 15:52:30,728][08821] Saving new best policy, reward=7.273! |
|
[2024-07-27 15:52:35,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.9, 300 sec: 3360.1). Total num frames: 1486848. Throughput: 0: 828.8. Samples: 372164. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:35,723][00200] Avg episode reward: [(0, '7.047')] |
|
[2024-07-27 15:52:35,742][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000363_1486848.pth... |
|
[2024-07-27 15:52:35,872][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000167_684032.pth |
|
[2024-07-27 15:52:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1507328. Throughput: 0: 822.9. Samples: 374834. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:40,723][00200] Avg episode reward: [(0, '7.133')] |
|
[2024-07-27 15:52:42,137][08838] Updated weights for policy 0, policy_version 370 (0.0014) |
|
[2024-07-27 15:52:45,722][00200] Fps is (10 sec: 4095.5, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 1527808. Throughput: 0: 838.9. Samples: 380868. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:52:45,729][00200] Avg episode reward: [(0, '7.271')] |
|
[2024-07-27 15:52:50,722][00200] Fps is (10 sec: 3276.3, 60 sec: 3345.0, 300 sec: 3360.1). Total num frames: 1540096. Throughput: 0: 843.3. Samples: 385222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:52:50,726][00200] Avg episode reward: [(0, '7.314')] |
|
[2024-07-27 15:52:50,738][08821] Saving new best policy, reward=7.314! |
|
[2024-07-27 15:52:54,814][08838] Updated weights for policy 0, policy_version 380 (0.0017) |
|
[2024-07-27 15:52:55,721][00200] Fps is (10 sec: 2867.6, 60 sec: 3276.8, 300 sec: 3346.2). Total num frames: 1556480. Throughput: 0: 829.6. Samples: 387470. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:52:55,723][00200] Avg episode reward: [(0, '7.302')] |
|
[2024-07-27 15:53:00,721][00200] Fps is (10 sec: 3686.7, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 1576960. Throughput: 0: 835.6. Samples: 393360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:53:00,727][00200] Avg episode reward: [(0, '7.578')] |
|
[2024-07-27 15:53:00,731][08821] Saving new best policy, reward=7.578! |
|
[2024-07-27 15:53:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 1589248. Throughput: 0: 854.7. Samples: 397782. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:53:05,726][00200] Avg episode reward: [(0, '8.355')] |
|
[2024-07-27 15:53:05,746][08821] Saving new best policy, reward=8.355! |
|
[2024-07-27 15:53:07,902][08838] Updated weights for policy 0, policy_version 390 (0.0014) |
|
[2024-07-27 15:53:10,721][00200] Fps is (10 sec: 2867.4, 60 sec: 3276.8, 300 sec: 3346.2). Total num frames: 1605632. Throughput: 0: 834.9. Samples: 399714. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:53:10,723][00200] Avg episode reward: [(0, '8.539')] |
|
[2024-07-27 15:53:10,728][08821] Saving new best policy, reward=8.539! |
|
[2024-07-27 15:53:15,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1626112. Throughput: 0: 832.9. Samples: 405572. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:15,724][00200] Avg episode reward: [(0, '9.956')] |
|
[2024-07-27 15:53:15,736][08821] Saving new best policy, reward=9.956! |
|
[2024-07-27 15:53:19,268][08838] Updated weights for policy 0, policy_version 400 (0.0016) |
|
[2024-07-27 15:53:20,723][00200] Fps is (10 sec: 3276.2, 60 sec: 3345.0, 300 sec: 3360.1). Total num frames: 1638400. Throughput: 0: 847.4. Samples: 410300. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:20,730][00200] Avg episode reward: [(0, '9.788')] |
|
[2024-07-27 15:53:25,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3346.2). Total num frames: 1654784. Throughput: 0: 828.0. Samples: 412094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:25,723][00200] Avg episode reward: [(0, '10.504')] |
|
[2024-07-27 15:53:25,811][08821] Saving new best policy, reward=10.504! |
|
[2024-07-27 15:53:30,721][00200] Fps is (10 sec: 3687.0, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 1675264. Throughput: 0: 824.0. Samples: 417948. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:30,728][00200] Avg episode reward: [(0, '10.389')] |
|
[2024-07-27 15:53:31,150][08838] Updated weights for policy 0, policy_version 410 (0.0028) |
|
[2024-07-27 15:53:35,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1691648. Throughput: 0: 843.2. Samples: 423164. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:35,726][00200] Avg episode reward: [(0, '9.973')] |
|
[2024-07-27 15:53:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 1708032. Throughput: 0: 834.0. Samples: 424998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:40,723][00200] Avg episode reward: [(0, '9.540')] |
|
[2024-07-27 15:53:43,705][08838] Updated weights for policy 0, policy_version 420 (0.0027) |
|
[2024-07-27 15:53:45,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.9, 300 sec: 3360.1). Total num frames: 1724416. Throughput: 0: 825.3. Samples: 430498. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:45,727][00200] Avg episode reward: [(0, '9.419')] |
|
[2024-07-27 15:53:50,724][00200] Fps is (10 sec: 3275.7, 60 sec: 3345.0, 300 sec: 3360.1). Total num frames: 1740800. Throughput: 0: 845.4. Samples: 435828. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:53:50,730][00200] Avg episode reward: [(0, '9.772')] |
|
[2024-07-27 15:53:55,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 1757184. Throughput: 0: 844.2. Samples: 437704. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:53:55,727][00200] Avg episode reward: [(0, '9.957')] |
|
[2024-07-27 15:53:56,637][08838] Updated weights for policy 0, policy_version 430 (0.0014) |
|
[2024-07-27 15:54:00,721][00200] Fps is (10 sec: 3277.9, 60 sec: 3276.8, 300 sec: 3346.2). Total num frames: 1773568. Throughput: 0: 826.3. Samples: 442754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:54:00,723][00200] Avg episode reward: [(0, '10.298')] |
|
[2024-07-27 15:54:05,723][00200] Fps is (10 sec: 3685.7, 60 sec: 3413.2, 300 sec: 3360.1). Total num frames: 1794048. Throughput: 0: 851.4. Samples: 448612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:05,727][00200] Avg episode reward: [(0, '10.604')] |
|
[2024-07-27 15:54:05,740][08821] Saving new best policy, reward=10.604! |
|
[2024-07-27 15:54:08,647][08838] Updated weights for policy 0, policy_version 440 (0.0031) |
|
[2024-07-27 15:54:10,723][00200] Fps is (10 sec: 3276.1, 60 sec: 3344.9, 300 sec: 3332.3). Total num frames: 1806336. Throughput: 0: 849.2. Samples: 450308. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:54:10,725][00200] Avg episode reward: [(0, '10.358')] |
|
[2024-07-27 15:54:15,721][00200] Fps is (10 sec: 2867.7, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 1822720. Throughput: 0: 825.2. Samples: 455082. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:15,723][00200] Avg episode reward: [(0, '9.671')] |
|
[2024-07-27 15:54:20,177][08838] Updated weights for policy 0, policy_version 450 (0.0020) |
|
[2024-07-27 15:54:20,724][00200] Fps is (10 sec: 3685.9, 60 sec: 3413.2, 300 sec: 3360.1). Total num frames: 1843200. Throughput: 0: 836.6. Samples: 460816. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:20,727][00200] Avg episode reward: [(0, '8.906')] |
|
[2024-07-27 15:54:25,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 1855488. Throughput: 0: 843.8. Samples: 462968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:54:25,726][00200] Avg episode reward: [(0, '9.321')] |
|
[2024-07-27 15:54:30,721][00200] Fps is (10 sec: 2868.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 1871872. Throughput: 0: 819.9. Samples: 467392. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:30,728][00200] Avg episode reward: [(0, '9.040')] |
|
[2024-07-27 15:54:32,828][08838] Updated weights for policy 0, policy_version 460 (0.0019) |
|
[2024-07-27 15:54:35,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 1892352. Throughput: 0: 831.3. Samples: 473236. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:35,728][00200] Avg episode reward: [(0, '10.065')] |
|
[2024-07-27 15:54:35,737][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000462_1892352.pth... |
|
[2024-07-27 15:54:35,738][00200] Components not started: RolloutWorker_w0, RolloutWorker_w3, RolloutWorker_w4, wait_time=600.0 seconds |
|
[2024-07-27 15:54:35,857][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000268_1097728.pth |
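
These save/remove pairs implement rolling checkpoint retention: with keep_checkpoints=2 (see the configuration dump below), each newly saved checkpoint_*.pth triggers deletion of the oldest one. A minimal sketch of that pruning, assuming the filename pattern shown in these lines (prune_checkpoints is an illustrative name, not Sample Factory API):

```python
import os
import re

def prune_checkpoints(checkpoint_dir: str, keep: int = 2) -> None:
    """Keep only the `keep` newest checkpoint_<version>_<frames>.pth files."""
    pattern = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")
    checkpoints = sorted(
        (f for f in os.listdir(checkpoint_dir) if pattern.search(f)),
        key=lambda f: int(pattern.search(f).group(2)),  # order by env frames
    )
    for old in checkpoints[:-keep]:
        os.remove(os.path.join(checkpoint_dir, old))
```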
|
[2024-07-27 15:54:40,722][00200] Fps is (10 sec: 3276.3, 60 sec: 3276.7, 300 sec: 3318.4). Total num frames: 1904640. Throughput: 0: 843.4. Samples: 475656. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:54:40,724][00200] Avg episode reward: [(0, '10.421')] |
|
[2024-07-27 15:54:45,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 1921024. Throughput: 0: 822.8. Samples: 479782. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:54:45,723][00200] Avg episode reward: [(0, '11.339')] |
|
[2024-07-27 15:54:45,734][08821] Saving new best policy, reward=11.339! |
|
[2024-07-27 15:54:45,968][08838] Updated weights for policy 0, policy_version 470 (0.0017) |
|
[2024-07-27 15:54:50,721][00200] Fps is (10 sec: 3686.9, 60 sec: 3345.3, 300 sec: 3332.3). Total num frames: 1941504. Throughput: 0: 816.8. Samples: 485366. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:54:50,723][00200] Avg episode reward: [(0, '11.334')] |
|
[2024-07-27 15:54:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 1957888. Throughput: 0: 841.9. Samples: 488190. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:54:55,725][00200] Avg episode reward: [(0, '11.877')] |
|
[2024-07-27 15:54:55,734][08821] Saving new best policy, reward=11.877! |
|
[2024-07-27 15:54:58,750][08838] Updated weights for policy 0, policy_version 480 (0.0031) |
|
[2024-07-27 15:55:00,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 1970176. Throughput: 0: 818.4. Samples: 491912. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:55:00,723][00200] Avg episode reward: [(0, '12.448')] |
|
[2024-07-27 15:55:00,726][08821] Saving new best policy, reward=12.448! |
|
[2024-07-27 15:55:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.9, 300 sec: 3332.4). Total num frames: 1990656. Throughput: 0: 821.7. Samples: 497788. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:55:05,727][00200] Avg episode reward: [(0, '13.303')] |
|
[2024-07-27 15:55:05,736][08821] Saving new best policy, reward=13.303! |
|
[2024-07-27 15:55:09,469][08838] Updated weights for policy 0, policy_version 490 (0.0027) |
|
[2024-07-27 15:55:10,722][00200] Fps is (10 sec: 3686.0, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2007040. Throughput: 0: 837.8. Samples: 500670. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:55:10,725][00200] Avg episode reward: [(0, '14.388')] |
|
[2024-07-27 15:55:10,730][08821] Saving new best policy, reward=14.388! |
|
[2024-07-27 15:55:15,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2019328. Throughput: 0: 825.2. Samples: 504528. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:55:15,729][00200] Avg episode reward: [(0, '16.084')] |
|
[2024-07-27 15:55:15,743][08821] Saving new best policy, reward=16.084! |
|
[2024-07-27 15:55:20,721][00200] Fps is (10 sec: 3277.2, 60 sec: 3277.0, 300 sec: 3318.5). Total num frames: 2039808. Throughput: 0: 818.0. Samples: 510048. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:55:20,723][00200] Avg episode reward: [(0, '17.021')] |
|
[2024-07-27 15:55:20,728][08821] Saving new best policy, reward=17.021! |
|
[2024-07-27 15:55:22,252][08838] Updated weights for policy 0, policy_version 500 (0.0015) |
|
[2024-07-27 15:55:25,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2056192. Throughput: 0: 828.1. Samples: 512920. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:55:25,727][00200] Avg episode reward: [(0, '16.910')] |
|
[2024-07-27 15:55:30,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 2072576. Throughput: 0: 833.1. Samples: 517272. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:55:30,728][00200] Avg episode reward: [(0, '16.609')] |
|
[2024-07-27 15:55:34,770][08838] Updated weights for policy 0, policy_version 510 (0.0022) |
|
[2024-07-27 15:55:35,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2088960. Throughput: 0: 827.0. Samples: 522582. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:55:35,728][00200] Avg episode reward: [(0, '15.991')] |
|
[2024-07-27 15:55:40,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3332.3). Total num frames: 2109440. Throughput: 0: 830.6. Samples: 525566. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:55:40,723][00200] Avg episode reward: [(0, '14.338')] |
|
[2024-07-27 15:55:45,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2121728. Throughput: 0: 849.3. Samples: 530132. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:55:45,724][00200] Avg episode reward: [(0, '13.252')] |
|
[2024-07-27 15:55:47,522][08838] Updated weights for policy 0, policy_version 520 (0.0016) |
|
[2024-07-27 15:55:50,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2138112. Throughput: 0: 825.4. Samples: 534930. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:55:50,723][00200] Avg episode reward: [(0, '14.809')] |
|
[2024-07-27 15:55:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2158592. Throughput: 0: 827.6. Samples: 537912. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:55:55,723][00200] Avg episode reward: [(0, '15.445')] |
|
[2024-07-27 15:55:59,305][08838] Updated weights for policy 0, policy_version 530 (0.0014) |
|
[2024-07-27 15:56:00,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2170880. Throughput: 0: 850.8. Samples: 542814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-27 15:56:00,724][00200] Avg episode reward: [(0, '15.948')] |
|
[2024-07-27 15:56:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2191360. Throughput: 0: 830.2. Samples: 547408. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:56:05,723][00200] Avg episode reward: [(0, '16.579')] |
|
[2024-07-27 15:56:10,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2207744. Throughput: 0: 830.8. Samples: 550304. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:56:10,723][00200] Avg episode reward: [(0, '17.921')] |
|
[2024-07-27 15:56:10,730][08821] Saving new best policy, reward=17.921! |
|
[2024-07-27 15:56:11,076][08838] Updated weights for policy 0, policy_version 540 (0.0014) |
|
[2024-07-27 15:56:15,725][00200] Fps is (10 sec: 3275.4, 60 sec: 3413.1, 300 sec: 3332.3). Total num frames: 2224128. Throughput: 0: 848.8. Samples: 555470. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:56:15,729][00200] Avg episode reward: [(0, '17.842')] |
|
[2024-07-27 15:56:20,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2240512. Throughput: 0: 823.8. Samples: 559654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:56:20,728][00200] Avg episode reward: [(0, '18.208')] |
|
[2024-07-27 15:56:20,731][08821] Saving new best policy, reward=18.208! |
|
[2024-07-27 15:56:23,830][08838] Updated weights for policy 0, policy_version 550 (0.0015) |
|
[2024-07-27 15:56:25,721][00200] Fps is (10 sec: 3278.3, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2256896. Throughput: 0: 820.9. Samples: 562506. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:56:25,723][00200] Avg episode reward: [(0, '18.287')] |
|
[2024-07-27 15:56:25,738][08821] Saving new best policy, reward=18.287! |
|
[2024-07-27 15:56:30,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 2273280. Throughput: 0: 844.2. Samples: 568120. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:56:30,725][00200] Avg episode reward: [(0, '18.367')] |
|
[2024-07-27 15:56:30,732][08821] Saving new best policy, reward=18.367! |
|
[2024-07-27 15:56:35,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2289664. Throughput: 0: 823.8. Samples: 572000. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:56:35,724][00200] Avg episode reward: [(0, '17.914')] |
|
[2024-07-27 15:56:35,736][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000559_2289664.pth... |
|
[2024-07-27 15:56:35,851][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000363_1486848.pth |
|
[2024-07-27 15:56:36,577][08838] Updated weights for policy 0, policy_version 560 (0.0019) |
|
[2024-07-27 15:56:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 2306048. Throughput: 0: 820.0. Samples: 574812. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:56:40,732][00200] Avg episode reward: [(0, '17.352')] |
|
[2024-07-27 15:56:45,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2322432. Throughput: 0: 841.8. Samples: 580696. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:56:45,728][00200] Avg episode reward: [(0, '16.781')] |
|
[2024-07-27 15:56:49,436][08838] Updated weights for policy 0, policy_version 570 (0.0031) |
|
[2024-07-27 15:56:50,721][00200] Fps is (10 sec: 3276.6, 60 sec: 3345.0, 300 sec: 3318.4). Total num frames: 2338816. Throughput: 0: 820.7. Samples: 584340. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:56:50,726][00200] Avg episode reward: [(0, '17.785')] |
|
[2024-07-27 15:56:55,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 2355200. Throughput: 0: 818.2. Samples: 587122. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:56:55,727][00200] Avg episode reward: [(0, '17.266')] |
|
[2024-07-27 15:57:00,174][08838] Updated weights for policy 0, policy_version 580 (0.0025) |
|
[2024-07-27 15:57:00,721][00200] Fps is (10 sec: 3686.7, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 2375680. Throughput: 0: 833.2. Samples: 592962. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:00,723][00200] Avg episode reward: [(0, '18.674')] |
|
[2024-07-27 15:57:00,726][08821] Saving new best policy, reward=18.674! |
|
[2024-07-27 15:57:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2387968. Throughput: 0: 830.6. Samples: 597032. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:57:05,724][00200] Avg episode reward: [(0, '19.507')] |
|
[2024-07-27 15:57:05,738][08821] Saving new best policy, reward=19.507! |
|
[2024-07-27 15:57:10,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2404352. Throughput: 0: 820.4. Samples: 599426. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:57:10,729][00200] Avg episode reward: [(0, '20.186')] |
|
[2024-07-27 15:57:10,731][08821] Saving new best policy, reward=20.186! |
|
[2024-07-27 15:57:13,214][08838] Updated weights for policy 0, policy_version 590 (0.0014) |
|
[2024-07-27 15:57:15,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.3, 300 sec: 3346.2). Total num frames: 2424832. Throughput: 0: 823.2. Samples: 605162. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:57:15,727][00200] Avg episode reward: [(0, '20.224')] |
|
[2024-07-27 15:57:15,736][08821] Saving new best policy, reward=20.224! |
|
[2024-07-27 15:57:20,721][00200] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2437120. Throughput: 0: 833.4. Samples: 609504. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:57:20,728][00200] Avg episode reward: [(0, '20.587')] |
|
[2024-07-27 15:57:20,730][08821] Saving new best policy, reward=20.587! |
|
[2024-07-27 15:57:25,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2453504. Throughput: 0: 813.2. Samples: 611408. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:25,732][00200] Avg episode reward: [(0, '21.365')] |
|
[2024-07-27 15:57:25,745][08821] Saving new best policy, reward=21.365! |
|
[2024-07-27 15:57:26,314][08838] Updated weights for policy 0, policy_version 600 (0.0022) |
|
[2024-07-27 15:57:30,723][00200] Fps is (10 sec: 3685.7, 60 sec: 3344.9, 300 sec: 3346.2). Total num frames: 2473984. Throughput: 0: 810.3. Samples: 617162. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:30,725][00200] Avg episode reward: [(0, '20.066')] |
|
[2024-07-27 15:57:35,726][00200] Fps is (10 sec: 3275.0, 60 sec: 3276.5, 300 sec: 3318.4). Total num frames: 2486272. Throughput: 0: 835.2. Samples: 621928. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:35,731][00200] Avg episode reward: [(0, '19.001')] |
|
[2024-07-27 15:57:39,352][08838] Updated weights for policy 0, policy_version 610 (0.0030) |
|
[2024-07-27 15:57:40,721][00200] Fps is (10 sec: 2867.9, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2502656. Throughput: 0: 815.2. Samples: 623806. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:40,729][00200] Avg episode reward: [(0, '20.099')] |
|
[2024-07-27 15:57:45,721][00200] Fps is (10 sec: 3688.4, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 2523136. Throughput: 0: 809.7. Samples: 629400. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:57:45,729][00200] Avg episode reward: [(0, '19.880')] |
|
[2024-07-27 15:57:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2535424. Throughput: 0: 838.8. Samples: 634778. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:57:50,723][00200] Avg episode reward: [(0, '19.818')] |
|
[2024-07-27 15:57:50,774][08838] Updated weights for policy 0, policy_version 620 (0.0019) |
|
[2024-07-27 15:57:55,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2551808. Throughput: 0: 825.1. Samples: 636554. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:57:55,723][00200] Avg episode reward: [(0, '20.143')] |
|
[2024-07-27 15:58:00,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 2572288. Throughput: 0: 811.4. Samples: 641676. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:58:00,730][00200] Avg episode reward: [(0, '20.992')] |
|
[2024-07-27 15:58:02,737][08838] Updated weights for policy 0, policy_version 630 (0.0037) |
|
[2024-07-27 15:58:05,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2588672. Throughput: 0: 844.6. Samples: 647512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:58:05,724][00200] Avg episode reward: [(0, '21.183')] |
|
[2024-07-27 15:58:10,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2600960. Throughput: 0: 843.4. Samples: 649360. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:58:10,725][00200] Avg episode reward: [(0, '20.795')] |
|
[2024-07-27 15:58:15,296][08838] Updated weights for policy 0, policy_version 640 (0.0019) |
|
[2024-07-27 15:58:15,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.4). Total num frames: 2621440. Throughput: 0: 822.8. Samples: 654184. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 15:58:15,726][00200] Avg episode reward: [(0, '21.966')] |
|
[2024-07-27 15:58:15,736][08821] Saving new best policy, reward=21.966! |
|
[2024-07-27 15:58:20,721][00200] Fps is (10 sec: 4095.9, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 2641920. Throughput: 0: 846.7. Samples: 660024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-27 15:58:20,726][00200] Avg episode reward: [(0, '21.548')] |
|
[2024-07-27 15:58:25,726][00200] Fps is (10 sec: 2865.8, 60 sec: 3276.5, 300 sec: 3304.5). Total num frames: 2650112. Throughput: 0: 849.7. Samples: 662046. Policy #0 lag: (min: 0.0, avg: 0.2, max: 2.0) |
|
[2024-07-27 15:58:25,733][00200] Avg episode reward: [(0, '21.629')] |
|
[2024-07-27 15:58:28,260][08838] Updated weights for policy 0, policy_version 650 (0.0016) |
|
[2024-07-27 15:58:30,721][00200] Fps is (10 sec: 2867.3, 60 sec: 3276.9, 300 sec: 3318.5). Total num frames: 2670592. Throughput: 0: 823.7. Samples: 666468. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-07-27 15:58:30,724][00200] Avg episode reward: [(0, '20.979')] |
|
[2024-07-27 15:58:35,721][00200] Fps is (10 sec: 4098.0, 60 sec: 3413.6, 300 sec: 3332.3). Total num frames: 2691072. Throughput: 0: 833.3. Samples: 672278. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:58:35,725][00200] Avg episode reward: [(0, '21.562')] |
|
[2024-07-27 15:58:35,736][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000657_2691072.pth... |
|
[2024-07-27 15:58:35,856][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000462_1892352.pth |
|
[2024-07-27 15:58:39,952][08838] Updated weights for policy 0, policy_version 660 (0.0034) |
|
[2024-07-27 15:58:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2703360. Throughput: 0: 848.9. Samples: 674756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:58:40,723][00200] Avg episode reward: [(0, '21.150')] |
|
[2024-07-27 15:58:45,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2719744. Throughput: 0: 828.4. Samples: 678952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:58:45,722][00200] Avg episode reward: [(0, '20.925')] |
|
[2024-07-27 15:58:50,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 2740224. Throughput: 0: 827.5. Samples: 684748. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:58:50,724][00200] Avg episode reward: [(0, '21.323')] |
|
[2024-07-27 15:58:51,735][08838] Updated weights for policy 0, policy_version 670 (0.0030) |
|
[2024-07-27 15:58:55,724][00200] Fps is (10 sec: 3275.9, 60 sec: 3344.9, 300 sec: 3318.4). Total num frames: 2752512. Throughput: 0: 845.7. Samples: 687418. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:58:55,726][00200] Avg episode reward: [(0, '22.297')] |
|
[2024-07-27 15:58:55,740][08821] Saving new best policy, reward=22.297! |
|
[2024-07-27 15:59:00,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2768896. Throughput: 0: 821.4. Samples: 691148. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:59:00,723][00200] Avg episode reward: [(0, '22.586')] |
|
[2024-07-27 15:59:00,729][08821] Saving new best policy, reward=22.586! |
|
[2024-07-27 15:59:04,564][08838] Updated weights for policy 0, policy_version 680 (0.0026) |
|
[2024-07-27 15:59:05,721][00200] Fps is (10 sec: 3687.4, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 2789376. Throughput: 0: 819.6. Samples: 696904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:05,723][00200] Avg episode reward: [(0, '22.607')] |
|
[2024-07-27 15:59:05,733][08821] Saving new best policy, reward=22.607! |
|
[2024-07-27 15:59:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2801664. Throughput: 0: 837.2. Samples: 699716. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:59:10,730][00200] Avg episode reward: [(0, '23.172')] |
|
[2024-07-27 15:59:10,735][08821] Saving new best policy, reward=23.172! |
|
[2024-07-27 15:59:15,721][00200] Fps is (10 sec: 2867.1, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2818048. Throughput: 0: 821.9. Samples: 703452. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:59:15,727][00200] Avg episode reward: [(0, '23.073')] |
|
[2024-07-27 15:59:17,568][08838] Updated weights for policy 0, policy_version 690 (0.0017) |
|
[2024-07-27 15:59:20,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 2838528. Throughput: 0: 818.3. Samples: 709100. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:59:20,723][00200] Avg episode reward: [(0, '22.302')] |
|
[2024-07-27 15:59:25,721][00200] Fps is (10 sec: 3686.6, 60 sec: 3413.6, 300 sec: 3332.3). Total num frames: 2854912. Throughput: 0: 826.2. Samples: 711934. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 15:59:25,723][00200] Avg episode reward: [(0, '21.438')] |
|
[2024-07-27 15:59:30,301][08838] Updated weights for policy 0, policy_version 700 (0.0017) |
|
[2024-07-27 15:59:30,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2867200. Throughput: 0: 824.6. Samples: 716060. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:30,723][00200] Avg episode reward: [(0, '22.143')] |
|
[2024-07-27 15:59:35,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.4). Total num frames: 2887680. Throughput: 0: 817.7. Samples: 721546. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:35,723][00200] Avg episode reward: [(0, '21.154')] |
|
[2024-07-27 15:59:40,724][00200] Fps is (10 sec: 3685.3, 60 sec: 3344.9, 300 sec: 3332.3). Total num frames: 2904064. Throughput: 0: 822.4. Samples: 724428. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 15:59:40,732][00200] Avg episode reward: [(0, '20.889')] |
|
[2024-07-27 15:59:41,416][08838] Updated weights for policy 0, policy_version 710 (0.0015) |
|
[2024-07-27 15:59:45,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2916352. Throughput: 0: 839.2. Samples: 728914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:45,727][00200] Avg episode reward: [(0, '20.791')] |
|
[2024-07-27 15:59:50,721][00200] Fps is (10 sec: 3277.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2936832. Throughput: 0: 825.2. Samples: 734036. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:50,729][00200] Avg episode reward: [(0, '21.446')] |
|
[2024-07-27 15:59:53,615][08838] Updated weights for policy 0, policy_version 720 (0.0018) |
|
[2024-07-27 15:59:55,721][00200] Fps is (10 sec: 4095.8, 60 sec: 3413.5, 300 sec: 3346.2). Total num frames: 2957312. Throughput: 0: 824.9. Samples: 736838. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 15:59:55,724][00200] Avg episode reward: [(0, '22.161')] |
|
[2024-07-27 16:00:00,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 2969600. Throughput: 0: 847.1. Samples: 741570. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:00,724][00200] Avg episode reward: [(0, '20.762')] |
|
[2024-07-27 16:00:05,721][00200] Fps is (10 sec: 2867.3, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2985984. Throughput: 0: 829.3. Samples: 746418. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:00:05,729][00200] Avg episode reward: [(0, '21.100')] |
|
[2024-07-27 16:00:06,313][08838] Updated weights for policy 0, policy_version 730 (0.0020) |
|
[2024-07-27 16:00:10,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 3006464. Throughput: 0: 831.6. Samples: 749358. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:10,729][00200] Avg episode reward: [(0, '21.424')] |
|
[2024-07-27 16:00:15,722][00200] Fps is (10 sec: 3276.4, 60 sec: 3345.0, 300 sec: 3318.4). Total num frames: 3018752. Throughput: 0: 850.9. Samples: 754352. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:15,729][00200] Avg episode reward: [(0, '20.413')] |
|
[2024-07-27 16:00:19,039][08838] Updated weights for policy 0, policy_version 740 (0.0024) |
|
[2024-07-27 16:00:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 3035136. Throughput: 0: 830.4. Samples: 758916. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:00:20,723][00200] Avg episode reward: [(0, '20.442')] |
|
[2024-07-27 16:00:25,721][00200] Fps is (10 sec: 3686.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3055616. Throughput: 0: 831.8. Samples: 761856. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:25,723][00200] Avg episode reward: [(0, '20.638')] |
|
[2024-07-27 16:00:30,425][08838] Updated weights for policy 0, policy_version 750 (0.0013) |
|
[2024-07-27 16:00:30,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 3072000. Throughput: 0: 852.0. Samples: 767254. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:30,723][00200] Avg episode reward: [(0, '21.546')] |
|
[2024-07-27 16:00:35,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3084288. Throughput: 0: 832.2. Samples: 771484. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:00:35,724][00200] Avg episode reward: [(0, '22.728')] |
|
[2024-07-27 16:00:35,735][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000753_3084288.pth... |
|
[2024-07-27 16:00:35,863][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000559_2289664.pth |
|
[2024-07-27 16:00:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3332.3). Total num frames: 3104768. Throughput: 0: 835.3. Samples: 774424. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:00:40,723][00200] Avg episode reward: [(0, '22.283')] |
|
[2024-07-27 16:00:41,918][08838] Updated weights for policy 0, policy_version 760 (0.0022) |
|
[2024-07-27 16:00:45,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 3121152. Throughput: 0: 858.8. Samples: 780218. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:00:45,727][00200] Avg episode reward: [(0, '23.204')] |
|
[2024-07-27 16:00:45,740][08821] Saving new best policy, reward=23.204! |
|
[2024-07-27 16:00:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3137536. Throughput: 0: 836.6. Samples: 784066. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:00:50,727][00200] Avg episode reward: [(0, '22.943')] |
|
[2024-07-27 16:00:54,647][08838] Updated weights for policy 0, policy_version 770 (0.0025) |
|
[2024-07-27 16:00:55,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 3153920. Throughput: 0: 837.2. Samples: 787034. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:00:55,728][00200] Avg episode reward: [(0, '23.297')] |
|
[2024-07-27 16:00:55,767][08821] Saving new best policy, reward=23.297! |
|
[2024-07-27 16:01:00,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 3174400. Throughput: 0: 856.2. Samples: 792878. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:00,728][00200] Avg episode reward: [(0, '23.253')] |
|
[2024-07-27 16:01:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3186688. Throughput: 0: 842.4. Samples: 796824. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:01:05,731][00200] Avg episode reward: [(0, '21.972')] |
|
[2024-07-27 16:01:07,296][08838] Updated weights for policy 0, policy_version 780 (0.0026) |
|
[2024-07-27 16:01:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 3207168. Throughput: 0: 838.3. Samples: 799580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:10,726][00200] Avg episode reward: [(0, '21.644')] |
|
[2024-07-27 16:01:15,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3332.3). Total num frames: 3223552. Throughput: 0: 849.4. Samples: 805476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:01:15,728][00200] Avg episode reward: [(0, '22.699')] |
|
[2024-07-27 16:01:18,994][08838] Updated weights for policy 0, policy_version 790 (0.0025) |
|
[2024-07-27 16:01:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3235840. Throughput: 0: 850.5. Samples: 809756. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:01:20,726][00200] Avg episode reward: [(0, '23.781')] |
|
[2024-07-27 16:01:20,740][08821] Saving new best policy, reward=23.781! |
|
[2024-07-27 16:01:25,721][00200] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3256320. Throughput: 0: 836.7. Samples: 812074. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:25,727][00200] Avg episode reward: [(0, '21.816')] |
|
[2024-07-27 16:01:30,410][08838] Updated weights for policy 0, policy_version 800 (0.0016) |
|
[2024-07-27 16:01:30,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 3276800. Throughput: 0: 836.1. Samples: 817842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:30,733][00200] Avg episode reward: [(0, '22.067')] |
|
[2024-07-27 16:01:35,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 3289088. Throughput: 0: 853.0. Samples: 822452. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:35,726][00200] Avg episode reward: [(0, '22.134')] |
|
[2024-07-27 16:01:40,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3305472. Throughput: 0: 833.4. Samples: 824536. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:40,728][00200] Avg episode reward: [(0, '21.976')] |
|
[2024-07-27 16:01:43,062][08838] Updated weights for policy 0, policy_version 810 (0.0019) |
|
[2024-07-27 16:01:45,721][00200] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 3325952. Throughput: 0: 835.2. Samples: 830464. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:45,722][00200] Avg episode reward: [(0, '21.315')] |
|
[2024-07-27 16:01:50,721][00200] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 3342336. Throughput: 0: 856.8. Samples: 835380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:01:50,725][00200] Avg episode reward: [(0, '20.450')] |
|
[2024-07-27 16:01:55,723][00200] Fps is (10 sec: 2866.5, 60 sec: 3344.9, 300 sec: 3318.4). Total num frames: 3354624. Throughput: 0: 836.2. Samples: 837212. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:01:55,725][00200] Avg episode reward: [(0, '21.045')] |
|
[2024-07-27 16:01:55,746][08838] Updated weights for policy 0, policy_version 820 (0.0018) |
|
[2024-07-27 16:02:00,721][00200] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3375104. Throughput: 0: 835.9. Samples: 843092. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:00,728][00200] Avg episode reward: [(0, '21.329')] |
|
[2024-07-27 16:02:05,723][00200] Fps is (10 sec: 3686.6, 60 sec: 3413.2, 300 sec: 3346.2). Total num frames: 3391488. Throughput: 0: 854.7. Samples: 848218. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:05,725][00200] Avg episode reward: [(0, '20.934')] |
|
[2024-07-27 16:02:08,036][08838] Updated weights for policy 0, policy_version 830 (0.0022) |
|
[2024-07-27 16:02:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3407872. Throughput: 0: 844.9. Samples: 850094. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:10,728][00200] Avg episode reward: [(0, '20.779')] |
|
[2024-07-27 16:02:15,721][00200] Fps is (10 sec: 3277.5, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3424256. Throughput: 0: 838.9. Samples: 855592. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:15,723][00200] Avg episode reward: [(0, '19.571')] |
|
[2024-07-27 16:02:18,670][08838] Updated weights for policy 0, policy_version 840 (0.0015) |
|
[2024-07-27 16:02:20,724][00200] Fps is (10 sec: 3685.1, 60 sec: 3481.4, 300 sec: 3360.1). Total num frames: 3444736. Throughput: 0: 858.2. Samples: 861072. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:20,727][00200] Avg episode reward: [(0, '20.283')] |
|
[2024-07-27 16:02:25,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 3457024. Throughput: 0: 853.0. Samples: 862922. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:02:25,723][00200] Avg episode reward: [(0, '21.287')] |
|
[2024-07-27 16:02:30,721][00200] Fps is (10 sec: 3277.9, 60 sec: 3345.1, 300 sec: 3360.2). Total num frames: 3477504. Throughput: 0: 837.2. Samples: 868136. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:02:30,722][00200] Avg episode reward: [(0, '22.365')] |
|
[2024-07-27 16:02:31,514][08838] Updated weights for policy 0, policy_version 850 (0.0029) |
|
[2024-07-27 16:02:35,722][00200] Fps is (10 sec: 3685.9, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3493888. Throughput: 0: 854.8. Samples: 873848. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:02:35,725][00200] Avg episode reward: [(0, '23.252')] |
|
[2024-07-27 16:02:35,735][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000853_3493888.pth... |
|
[2024-07-27 16:02:35,878][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000657_2691072.pth |
|
[2024-07-27 16:02:40,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3506176. Throughput: 0: 852.4. Samples: 875568. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-07-27 16:02:40,727][00200] Avg episode reward: [(0, '23.642')] |
|
[2024-07-27 16:02:44,447][08838] Updated weights for policy 0, policy_version 860 (0.0017) |
|
[2024-07-27 16:02:45,721][00200] Fps is (10 sec: 3277.3, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3526656. Throughput: 0: 831.4. Samples: 880504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-27 16:02:45,729][00200] Avg episode reward: [(0, '24.515')] |
|
[2024-07-27 16:02:45,741][08821] Saving new best policy, reward=24.515! |
|
[2024-07-27 16:02:50,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3374.0). Total num frames: 3547136. Throughput: 0: 845.2. Samples: 886248. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:02:50,727][00200] Avg episode reward: [(0, '24.939')] |
|
[2024-07-27 16:02:50,732][08821] Saving new best policy, reward=24.939! |
|
[2024-07-27 16:02:55,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.2, 300 sec: 3332.3). Total num frames: 3555328. Throughput: 0: 849.1. Samples: 888302. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:02:55,725][00200] Avg episode reward: [(0, '25.388')] |
|
[2024-07-27 16:02:55,813][08821] Saving new best policy, reward=25.388! |
|
[2024-07-27 16:02:57,418][08838] Updated weights for policy 0, policy_version 870 (0.0019) |
|
[2024-07-27 16:03:00,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3575808. Throughput: 0: 823.4. Samples: 892644. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:03:00,730][00200] Avg episode reward: [(0, '24.949')] |
|
[2024-07-27 16:03:05,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3413.5, 300 sec: 3374.0). Total num frames: 3596288. Throughput: 0: 833.6. Samples: 898582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:03:05,729][00200] Avg episode reward: [(0, '25.303')] |
|
[2024-07-27 16:03:08,665][08838] Updated weights for policy 0, policy_version 880 (0.0030) |
|
[2024-07-27 16:03:10,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3608576. Throughput: 0: 849.3. Samples: 901142. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:03:10,728][00200] Avg episode reward: [(0, '24.970')] |
|
[2024-07-27 16:03:15,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3624960. Throughput: 0: 824.0. Samples: 905216. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:03:15,729][00200] Avg episode reward: [(0, '24.710')] |
|
[2024-07-27 16:03:20,277][08838] Updated weights for policy 0, policy_version 890 (0.0019) |
|
[2024-07-27 16:03:20,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.3, 300 sec: 3374.0). Total num frames: 3645440. Throughput: 0: 829.9. Samples: 911194. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:03:20,723][00200] Avg episode reward: [(0, '26.023')] |
|
[2024-07-27 16:03:20,729][08821] Saving new best policy, reward=26.023! |
|
[2024-07-27 16:03:25,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3657728. Throughput: 0: 854.8. Samples: 914036. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:03:25,726][00200] Avg episode reward: [(0, '25.322')] |
|
[2024-07-27 16:03:30,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 3674112. Throughput: 0: 828.4. Samples: 917782. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:03:30,728][00200] Avg episode reward: [(0, '24.950')] |
|
[2024-07-27 16:03:32,821][08838] Updated weights for policy 0, policy_version 900 (0.0018) |
|
[2024-07-27 16:03:35,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3694592. Throughput: 0: 836.3. Samples: 923882. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:03:35,722][00200] Avg episode reward: [(0, '25.406')] |
|
[2024-07-27 16:03:40,722][00200] Fps is (10 sec: 4095.4, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 3715072. Throughput: 0: 858.7. Samples: 926944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:03:40,725][00200] Avg episode reward: [(0, '24.889')] |
|
[2024-07-27 16:03:45,375][08838] Updated weights for policy 0, policy_version 910 (0.0018) |
|
[2024-07-27 16:03:45,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3727360. Throughput: 0: 851.6. Samples: 930968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:03:45,723][00200] Avg episode reward: [(0, '24.816')] |
|
[2024-07-27 16:03:50,721][00200] Fps is (10 sec: 3277.3, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 3747840. Throughput: 0: 842.5. Samples: 936496. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:03:50,723][00200] Avg episode reward: [(0, '22.878')] |
|
[2024-07-27 16:03:55,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 3764224. Throughput: 0: 850.9. Samples: 939434. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:03:55,727][00200] Avg episode reward: [(0, '21.042')] |
|
[2024-07-27 16:03:56,380][08838] Updated weights for policy 0, policy_version 920 (0.0019) |
|
[2024-07-27 16:04:00,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3776512. Throughput: 0: 858.1. Samples: 943830. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:04:00,722][00200] Avg episode reward: [(0, '22.187')] |
|
[2024-07-27 16:04:05,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 3796992. Throughput: 0: 846.3. Samples: 949276. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:04:05,723][00200] Avg episode reward: [(0, '21.801')] |
|
[2024-07-27 16:04:08,156][08838] Updated weights for policy 0, policy_version 930 (0.0023) |
|
[2024-07-27 16:04:10,721][00200] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3817472. Throughput: 0: 849.1. Samples: 952246. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:04:10,723][00200] Avg episode reward: [(0, '23.185')] |
|
[2024-07-27 16:04:15,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3829760. Throughput: 0: 871.7. Samples: 957010. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:04:15,727][00200] Avg episode reward: [(0, '22.542')] |
|
[2024-07-27 16:04:20,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3846144. Throughput: 0: 845.4. Samples: 961926. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:04:20,728][00200] Avg episode reward: [(0, '24.462')] |
|
[2024-07-27 16:04:20,781][08838] Updated weights for policy 0, policy_version 940 (0.0013) |
|
[2024-07-27 16:04:25,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3866624. Throughput: 0: 843.0. Samples: 964880. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:04:25,722][00200] Avg episode reward: [(0, '25.687')] |
|
[2024-07-27 16:04:30,721][00200] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 3883008. Throughput: 0: 865.5. Samples: 969914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:04:30,723][00200] Avg episode reward: [(0, '26.680')] |
|
[2024-07-27 16:04:30,726][08821] Saving new best policy, reward=26.680! |
|
[2024-07-27 16:04:33,391][08838] Updated weights for policy 0, policy_version 950 (0.0013) |
|
[2024-07-27 16:04:35,721][00200] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3899392. Throughput: 0: 842.4. Samples: 974404. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:04:35,728][00200] Avg episode reward: [(0, '25.653')] |
|
[2024-07-27 16:04:35,737][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000952_3899392.pth... |
|
[2024-07-27 16:04:35,737][00200] Components not started: RolloutWorker_w0, RolloutWorker_w3, RolloutWorker_w4, wait_time=1200.0 seconds |
|
[2024-07-27 16:04:35,858][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000753_3084288.pth |
|
[2024-07-27 16:04:40,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 3915776. Throughput: 0: 841.5. Samples: 977300. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-27 16:04:40,723][00200] Avg episode reward: [(0, '25.346')] |
|
[2024-07-27 16:04:44,746][08838] Updated weights for policy 0, policy_version 960 (0.0016) |
|
[2024-07-27 16:04:45,725][00200] Fps is (10 sec: 3275.4, 60 sec: 3413.1, 300 sec: 3373.9). Total num frames: 3932160. Throughput: 0: 867.4. Samples: 982866. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-07-27 16:04:45,729][00200] Avg episode reward: [(0, '25.844')] |
|
[2024-07-27 16:04:50,721][00200] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3948544. Throughput: 0: 842.3. Samples: 987180. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:04:50,724][00200] Avg episode reward: [(0, '26.201')] |
|
[2024-07-27 16:04:55,721][00200] Fps is (10 sec: 3688.0, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 3969024. Throughput: 0: 842.0. Samples: 990136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-27 16:04:55,723][00200] Avg episode reward: [(0, '25.769')] |
|
[2024-07-27 16:04:56,281][08838] Updated weights for policy 0, policy_version 970 (0.0013) |
|
[2024-07-27 16:05:00,725][00200] Fps is (10 sec: 3684.9, 60 sec: 3481.4, 300 sec: 3387.8). Total num frames: 3985408. Throughput: 0: 861.4. Samples: 995776. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:05:00,727][00200] Avg episode reward: [(0, '24.797')] |
|
[2024-07-27 16:05:05,721][00200] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3997696. Throughput: 0: 842.6. Samples: 999844. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-27 16:05:05,728][00200] Avg episode reward: [(0, '26.062')] |
|
[2024-07-27 16:05:06,796][08821] Stopping Batcher_0... |
|
[2024-07-27 16:05:06,798][08821] Loop batcher_evt_loop terminating... |
|
[2024-07-27 16:05:06,800][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-27 16:05:06,796][00200] Component Batcher_0 stopped! |
|
[2024-07-27 16:05:06,806][00200] Component RolloutWorker_w0 process died already! Don't wait for it. |
|
[2024-07-27 16:05:06,808][00200] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2024-07-27 16:05:06,813][00200] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2024-07-27 16:05:06,880][08845] Stopping RolloutWorker_w6... |
|
[2024-07-27 16:05:06,881][08845] Loop rollout_proc6_evt_loop terminating... |
|
[2024-07-27 16:05:06,881][00200] Component RolloutWorker_w6 stopped! |
|
[2024-07-27 16:05:06,892][08838] Weights refcount: 2 0 |
|
[2024-07-27 16:05:06,894][08841] Stopping RolloutWorker_w2... |
|
[2024-07-27 16:05:06,894][00200] Component RolloutWorker_w2 stopped! |
|
[2024-07-27 16:05:06,897][08841] Loop rollout_proc2_evt_loop terminating... |
|
[2024-07-27 16:05:06,906][00200] Component InferenceWorker_p0-w0 stopped! |
|
[2024-07-27 16:05:06,910][08838] Stopping InferenceWorker_p0-w0... |
|
[2024-07-27 16:05:06,911][08838] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-07-27 16:05:06,917][00200] Component RolloutWorker_w1 stopped! |
|
[2024-07-27 16:05:06,924][08840] Stopping RolloutWorker_w1... |
|
[2024-07-27 16:05:06,925][08840] Loop rollout_proc1_evt_loop terminating... |
|
[2024-07-27 16:05:06,927][08821] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000853_3493888.pth |
|
[2024-07-27 16:05:06,941][08821] Saving new best policy, reward=26.710! |
|
[2024-07-27 16:05:06,942][00200] Component RolloutWorker_w5 stopped! |
|
[2024-07-27 16:05:06,947][08844] Stopping RolloutWorker_w5... |
|
[2024-07-27 16:05:06,955][08844] Loop rollout_proc5_evt_loop terminating... |
|
[2024-07-27 16:05:06,957][00200] Component RolloutWorker_w7 stopped! |
|
[2024-07-27 16:05:06,960][08846] Stopping RolloutWorker_w7... |
|
[2024-07-27 16:05:06,960][08846] Loop rollout_proc7_evt_loop terminating... |
|
[2024-07-27 16:05:07,046][08821] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-27 16:05:07,245][08821] Stopping LearnerWorker_p0... |
|
[2024-07-27 16:05:07,247][08821] Loop learner_proc0_evt_loop terminating... |
|
[2024-07-27 16:05:07,245][00200] Component LearnerWorker_p0 stopped! |
|
[2024-07-27 16:05:07,249][00200] Waiting for process learner_proc0 to stop... |
|
[2024-07-27 16:05:08,502][00200] Waiting for process inference_proc0-0 to join... |
|
[2024-07-27 16:05:08,508][00200] Waiting for process rollout_proc0 to join... |
|
[2024-07-27 16:05:08,511][00200] Waiting for process rollout_proc1 to join... |
|
[2024-07-27 16:05:09,525][00200] Waiting for process rollout_proc2 to join... |
|
[2024-07-27 16:05:09,528][00200] Waiting for process rollout_proc3 to join... |
|
[2024-07-27 16:05:09,531][00200] Waiting for process rollout_proc4 to join... |
|
[2024-07-27 16:05:09,533][00200] Waiting for process rollout_proc5 to join... |
|
[2024-07-27 16:05:09,536][00200] Waiting for process rollout_proc6 to join... |
|
[2024-07-27 16:05:09,541][00200] Waiting for process rollout_proc7 to join... |
|
[2024-07-27 16:05:09,544][00200] Batcher 0 profile tree view:
batching: 24.7615, releasing_batches: 0.0288

[2024-07-27 16:05:09,549][00200] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 487.7886
update_model: 9.9441
  weight_update: 0.0014
one_step: 0.0110
  handle_policy_step: 661.1007
    deserialize: 16.3490, stack: 3.8332, obs_to_device_normalize: 141.7770, forward: 356.5040, send_messages: 24.2416
    prepare_outputs: 84.9542
      to_cpu: 50.8555

[2024-07-27 16:05:09,551][00200] Learner 0 profile tree view:
misc: 0.0061, prepare_batch: 13.9547
train: 70.0783
  epoch_init: 0.0058, minibatch_init: 0.0120, losses_postprocess: 0.6060, kl_divergence: 0.7015, after_optimizer: 33.4337
  calculate_losses: 24.1028
    losses_init: 0.0038, forward_head: 1.1862, bptt_initial: 16.2164, tail: 1.0180, advantages_returns: 0.2476, losses: 3.2075
    bptt: 1.9376
      bptt_forward_core: 1.8164
  update: 10.6468
    clip: 0.9834

[2024-07-27 16:05:09,554][00200] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.4495, enqueue_policy_requests: 127.6148, env_step: 932.7726, overhead: 20.1241, complete_rollouts: 9.6032
save_policy_outputs: 28.8533
  split_output_tensors: 11.5385

[2024-07-27 16:05:09,556][00200] Loop Runner_EvtLoop terminating...

[2024-07-27 16:05:09,558][00200] Runner profile tree view:
main_loop: 1229.3517

[2024-07-27 16:05:09,559][00200] Collected {0: 4005888}, FPS: 3258.5
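
The final throughput is simply total collected frames divided by main-loop wall time, and env_step dominates the rollout-worker profile above (932.77 s of the 1229.35 s run). A quick arithmetic check of the reported numbers:

```python
# Sanity-check the reported FPS against the profile figures above.
total_frames = 4_005_888       # Collected {0: 4005888}
main_loop_seconds = 1229.3517  # Runner profile: main_loop

print(f"{total_frames / main_loop_seconds:.1f} FPS")  # -> 3258.5 FPS
```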
|
[2024-07-27 16:41:08,641][00200] Environment doom_basic already registered, overwriting... |
|
[2024-07-27 16:41:08,644][00200] Environment doom_two_colors_easy already registered, overwriting... |
|
[2024-07-27 16:41:08,660][00200] Environment doom_two_colors_hard already registered, overwriting... |
|
[2024-07-27 16:41:08,663][00200] Environment doom_dm already registered, overwriting... |
|
[2024-07-27 16:41:08,666][00200] Environment doom_dwango5 already registered, overwriting... |
|
[2024-07-27 16:41:08,668][00200] Environment doom_my_way_home_flat_actions already registered, overwriting... |
|
[2024-07-27 16:41:08,670][00200] Environment doom_defend_the_center_flat_actions already registered, overwriting... |
|
[2024-07-27 16:41:08,672][00200] Environment doom_my_way_home already registered, overwriting... |
|
[2024-07-27 16:41:08,676][00200] Environment doom_deadly_corridor already registered, overwriting... |
|
[2024-07-27 16:41:08,679][00200] Environment doom_defend_the_center already registered, overwriting... |
|
[2024-07-27 16:41:08,692][00200] Environment doom_defend_the_line already registered, overwriting... |
|
[2024-07-27 16:41:08,696][00200] Environment doom_health_gathering already registered, overwriting... |
|
[2024-07-27 16:41:08,699][00200] Environment doom_health_gathering_supreme already registered, overwriting... |
|
[2024-07-27 16:41:08,700][00200] Environment doom_battle already registered, overwriting... |
|
[2024-07-27 16:41:08,702][00200] Environment doom_battle2 already registered, overwriting... |
|
[2024-07-27 16:41:08,704][00200] Environment doom_duel_bots already registered, overwriting... |
|
[2024-07-27 16:41:08,706][00200] Environment doom_deathmatch_bots already registered, overwriting... |
|
[2024-07-27 16:41:08,707][00200] Environment doom_duel already registered, overwriting... |
|
[2024-07-27 16:41:08,711][00200] Environment doom_deathmatch_full already registered, overwriting... |
|
[2024-07-27 16:41:08,713][00200] Environment doom_benchmark already registered, overwriting... |
|
[2024-07-27 16:41:08,715][00200] register_encoder_factory: <function make_vizdoom_encoder at 0x7903dd707400> |
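
This line is where the VizDoom example wires its custom convolutional encoder into Sample Factory before the run is created. A sketch of what that registration typically looks like (hedged: make_vizdoom_encoder comes from the sf_examples VizDoom code, and the exact import paths may differ across sample-factory versions):

```python
# Hedged sketch of registering a custom encoder factory with Sample Factory 2.x.
# Import paths are assumptions based on the sample-factory examples.
from sample_factory.algo.utils.context import global_model_factory
from sf_examples.vizdoom.doom.doom_model import make_vizdoom_encoder

def register_vizdoom_models() -> None:
    # make_vizdoom_encoder(cfg, obs_space) -> Encoder is then used for every policy
    global_model_factory().register_encoder_factory(make_vizdoom_encoder)
```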
|
[2024-07-27 16:41:08,748][00200] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-27 16:41:08,757][00200] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2024-07-27 16:41:08,761][00200] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2024-07-27 16:41:08,764][00200] Weights and Biases integration disabled |
|
[2024-07-27 16:41:08,769][00200] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
|
|
[2024-07-27 16:41:11,821][00200] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=8 |
|
num_envs_per_worker=4 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=4000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
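
One sanity check worth doing on the sampling settings above: 8 rollout workers, each running 4 envs, each contributing a 32-step rollout, produce exactly 8 x 4 x 32 = 1024 samples per collection cycle, which equals batch_size. With num_batches_per_epoch=1 and num_epochs=1, the learner therefore performs one optimizer pass per full collection cycle (worker_num_splits=2 only changes how each worker's envs are interleaved for double-buffered sampling, not the totals). A quick check:

    num_workers, num_envs_per_worker, rollout = 8, 4, 32
    print(num_workers * num_envs_per_worker * rollout)  # 1024 == batch_size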
|
[2024-07-27 16:41:11,826][00200] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2024-07-27 16:41:11,831][00200] Rollout worker 0 uses device cpu |
|
[2024-07-27 16:41:11,832][00200] Rollout worker 1 uses device cpu |
|
[2024-07-27 16:41:11,833][00200] Rollout worker 2 uses device cpu |
|
[2024-07-27 16:41:11,835][00200] Rollout worker 3 uses device cpu |
|
[2024-07-27 16:41:11,840][00200] Rollout worker 4 uses device cpu |
|
[2024-07-27 16:41:11,845][00200] Rollout worker 5 uses device cpu |
|
[2024-07-27 16:41:11,847][00200] Rollout worker 6 uses device cpu |
|
[2024-07-27 16:41:11,850][00200] Rollout worker 7 uses device cpu |
|
[2024-07-27 16:41:12,274][00200] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:41:12,281][00200] InferenceWorker_p0-w0: min num requests: 2 |
|
[2024-07-27 16:41:12,431][00200] Starting all processes... |
|
[2024-07-27 16:41:12,434][00200] Starting process learner_proc0 |
|
[2024-07-27 16:41:12,606][00200] Starting all processes... |
|
[2024-07-27 16:41:12,668][00200] Starting process inference_proc0-0 |
|
[2024-07-27 16:41:12,669][00200] Starting process rollout_proc0 |
|
[2024-07-27 16:41:12,679][00200] Starting process rollout_proc1 |
|
[2024-07-27 16:41:12,680][00200] Starting process rollout_proc2 |
|
[2024-07-27 16:41:12,680][00200] Starting process rollout_proc3 |
|
[2024-07-27 16:41:12,680][00200] Starting process rollout_proc4 |
|
[2024-07-27 16:41:12,680][00200] Starting process rollout_proc5 |
|
[2024-07-27 16:41:12,694][00200] Starting process rollout_proc6 |
|
[2024-07-27 16:41:12,705][00200] Starting process rollout_proc7 |
|
[2024-07-27 16:41:30,870][24654] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:41:30,871][24654] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2024-07-27 16:41:30,956][24654] Num visible devices: 1 |
|
[2024-07-27 16:41:30,998][24654] Starting seed is not provided |
|
[2024-07-27 16:41:30,999][24654] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:41:31,000][24654] Initializing actor-critic model on device cuda:0 |
|
[2024-07-27 16:41:31,000][24654] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:41:31,002][24654] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:41:31,103][24667] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:41:31,105][24667] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2024-07-27 16:41:31,118][24670] Worker 2 uses CPU cores [0] |
|
[2024-07-27 16:41:31,126][24672] Worker 4 uses CPU cores [0] |
|
[2024-07-27 16:41:31,139][24667] Num visible devices: 1 |
|
[2024-07-27 16:41:31,160][24654] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:41:31,427][24669] Worker 1 uses CPU cores [1] |
|
[2024-07-27 16:41:31,478][24668] Worker 0 uses CPU cores [0] |
|
[2024-07-27 16:41:31,480][24671] Worker 3 uses CPU cores [1] |
|
[2024-07-27 16:41:31,542][24674] Worker 6 uses CPU cores [0] |
|
[2024-07-27 16:41:31,653][24654] Conv encoder output size: 512 |
|
[2024-07-27 16:41:31,655][24654] Policy head output size: 512 |
|
[2024-07-27 16:41:31,699][24654] Created Actor Critic model with architecture: |
|
[2024-07-27 16:41:31,702][24654] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
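
The TorchScript printout above hides the layer hyperparameters. Below is a hedged PyTorch sketch of the same encoder, assuming the standard convnet_simple layout of (32, 8x8, stride 4), (64, 4x4, stride 2), (128, 3x3, stride 2) filters; with the (3, 72, 128) observations logged above, the conv head flattens to 128 * 3 * 6 = 2304 features ahead of the single 512-unit MLP layer (encoder_conv_mlp_layers=[512]):

    import torch
    from torch import nn

    encoder = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
        nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        nn.Flatten(),
        nn.Linear(128 * 3 * 6, 512), nn.ELU(),  # -> "Conv encoder output size: 512"
    )
    obs = torch.zeros(1, 3, 72, 128)  # matches "RunningMeanStd input shape: (3, 72, 128)"
    print(encoder(obs).shape)         # torch.Size([1, 512])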
|
[2024-07-27 16:41:31,733][24673] Worker 5 uses CPU cores [1] |
|
[2024-07-27 16:41:31,884][24675] Worker 7 uses CPU cores [1] |
|
[2024-07-27 16:41:32,186][24654] Using optimizer <class 'torch.optim.adam.Adam'> |
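
The optimizer hyperparameters come straight from the configuration dump (learning_rate=0.0001, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-06). The equivalent construction, with a hypothetical stand-in module since the real parameters belong to the actor-critic printed above:

    import torch
    from torch import nn

    model = nn.Linear(512, 5)  # hypothetical stand-in for the actor-critic
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-6)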
|
[2024-07-27 16:41:32,210][00200] Heartbeat connected on Batcher_0 |
|
[2024-07-27 16:41:32,274][00200] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2024-07-27 16:41:32,298][00200] Heartbeat connected on RolloutWorker_w0 |
|
[2024-07-27 16:41:32,321][00200] Heartbeat connected on RolloutWorker_w1 |
|
[2024-07-27 16:41:32,328][00200] Heartbeat connected on RolloutWorker_w2 |
|
[2024-07-27 16:41:32,356][00200] Heartbeat connected on RolloutWorker_w3 |
|
[2024-07-27 16:41:32,367][00200] Heartbeat connected on RolloutWorker_w4 |
|
[2024-07-27 16:41:32,376][00200] Heartbeat connected on RolloutWorker_w5 |
|
[2024-07-27 16:41:32,397][00200] Heartbeat connected on RolloutWorker_w6 |
|
[2024-07-27 16:41:32,411][00200] Heartbeat connected on RolloutWorker_w7 |
|
[2024-07-27 16:41:33,857][24654] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-27 16:41:33,903][24654] Loading model from checkpoint |
|
[2024-07-27 16:41:33,912][24654] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
|
[2024-07-27 16:41:33,913][24654] Initialized policy 0 weights for model version 978 |
|
[2024-07-27 16:41:33,929][24654] LearnerWorker_p0 finished initialization! |
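
The checkpoint filename encodes both counters: checkpoint_000000978_4005888.pth means train step 978 at 4,005,888 env frames. The two agree with the configuration above, since each train step consumes batch_size=1024 samples and each sample spans env_frameskip=4 environment frames: 978 x 1024 x 4 = 4,005,888. (keep_checkpoints=2 likewise explains why the oldest checkpoint is removed each time a new one is written later in this log.) A small parsing check:

    import re

    name = "checkpoint_000000978_4005888.pth"
    m = re.match(r"checkpoint_(\d+)_(\d+)\.pth", name)
    train_step, env_steps = map(int, m.groups())
    assert env_steps == train_step * 1024 * 4  # batch_size * env_frameskip
    print(train_step, env_steps)               # 978 4005888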
|
[2024-07-27 16:41:33,930][00200] Heartbeat connected on LearnerWorker_p0 |
|
[2024-07-27 16:41:33,934][24654] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:41:34,160][24667] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:41:34,162][24667] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:41:34,178][24667] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:41:34,282][24667] Conv encoder output size: 512 |
|
[2024-07-27 16:41:34,282][24667] Policy head output size: 512 |
|
[2024-07-27 16:41:34,337][00200] Inference worker 0-0 is ready! |
|
[2024-07-27 16:41:34,339][00200] All inference workers are ready! Signal rollout workers to start! |
|
[2024-07-27 16:41:34,568][24668] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,570][24671] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,570][24673] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,574][24672] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,575][24669] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,580][24674] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,595][24675] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:34,599][24670] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:36,016][24669] Decorrelating experience for 0 frames... |

[2024-07-27 16:41:36,018][24673] Decorrelating experience for 0 frames... |

[2024-07-27 16:41:36,019][24671] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:36,025][24668] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:36,027][24674] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:36,029][24672] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:36,792][24672] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:36,870][24670] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:37,540][24671] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:37,543][24675] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:41:37,547][24673] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:37,652][24669] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:37,838][24670] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:38,091][24672] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:38,695][24674] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:38,769][00200] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 16:41:39,219][24675] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:39,815][24671] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:39,825][24672] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:39,851][24673] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:39,966][24669] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:41,732][24675] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:41,839][24668] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:41:41,842][24671] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:42,719][24670] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:43,769][00200] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 110.4. Samples: 552. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 16:41:43,774][00200] Avg episode reward: [(0, '2.860')] |
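
The recurring "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" lines are throughput estimates over three sliding windows; the very first report shows nan because there is no earlier sample to difference against. A hedged sketch of such a meter (an illustration, not Sample Factory's actual implementation):

    import time
    from collections import deque

    samples = deque(maxlen=1024)  # (timestamp, total_frames) pairs

    def record(total_frames):
        samples.append((time.time(), total_frames))

    def fps(window_sec):
        if len(samples) < 2:
            return float("nan")  # first report -> "Fps is (10 sec: nan, ...)"
        now = samples[-1][0]
        in_window = [(t, f) for t, f in samples if now - t <= window_sec]
        (t0, f0), (t1, f1) = in_window[0], in_window[-1]
        return (f1 - f0) / (t1 - t0) if t1 > t0 else float("nan")

Calling record(total_frames) at every report tick and then fps(10), fps(60), fps(300) reproduces the three columns.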
|
[2024-07-27 16:41:44,246][24674] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:44,902][24669] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:45,009][24668] Decorrelating experience for 64 frames... |
|
[2024-07-27 16:41:46,842][24673] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:47,875][24675] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:48,196][24674] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:48,668][24654] Signal inference workers to stop experience collection... |
|
[2024-07-27 16:41:48,690][24667] InferenceWorker_p0-w0: stopping experience collection |
|
[2024-07-27 16:41:48,769][00200] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 151.2. Samples: 1512. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 16:41:48,771][00200] Avg episode reward: [(0, '6.399')] |
|
[2024-07-27 16:41:48,831][24670] Decorrelating experience for 96 frames... |
|
[2024-07-27 16:41:49,226][24668] Decorrelating experience for 96 frames... |
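
The staggered "Decorrelating experience for 0/32/64/96 frames" lines come from decorrelate_envs_on_one_worker=True: the envs on each worker are warmed up by different multiples of the 32-step rollout so their trajectories start out of phase instead of all resetting in lockstep. A sketch of the offsets, assuming the per-env warm-up is simply env_index * rollout, as the logged values suggest:

    rollout = 32  # from the configuration dump above
    for env_index in range(4):  # num_envs_per_worker = 4
        print(f"Decorrelating experience for {env_index * rollout} frames...")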
|
[2024-07-27 16:41:49,980][24654] Signal inference workers to resume experience collection... |
|
[2024-07-27 16:41:49,981][24667] InferenceWorker_p0-w0: resuming experience collection |
|
[2024-07-27 16:41:49,983][24654] Stopping Batcher_0... |

[2024-07-27 16:41:49,983][00200] Component Batcher_0 stopped! |

[2024-07-27 16:41:49,997][24654] Loop batcher_evt_loop terminating... |
|
[2024-07-27 16:41:50,013][24672] Stopping RolloutWorker_w4... |

[2024-07-27 16:41:50,014][00200] Component RolloutWorker_w4 stopped! |

[2024-07-27 16:41:50,018][24672] Loop rollout_proc4_evt_loop terminating... |

[2024-07-27 16:41:50,042][00200] Component RolloutWorker_w1 stopped! |

[2024-07-27 16:41:50,046][24669] Stopping RolloutWorker_w1... |

[2024-07-27 16:41:50,047][24669] Loop rollout_proc1_evt_loop terminating... |
|
[2024-07-27 16:41:50,055][00200] Component RolloutWorker_w3 stopped! |
|
[2024-07-27 16:41:50,059][24671] Stopping RolloutWorker_w3... |
|
[2024-07-27 16:41:50,066][24671] Loop rollout_proc3_evt_loop terminating... |
|
[2024-07-27 16:41:50,077][24670] Stopping RolloutWorker_w2... |

[2024-07-27 16:41:50,077][00200] Component RolloutWorker_w2 stopped! |

[2024-07-27 16:41:50,079][24670] Loop rollout_proc2_evt_loop terminating... |

[2024-07-27 16:41:50,081][24674] Stopping RolloutWorker_w6... |

[2024-07-27 16:41:50,081][24674] Loop rollout_proc6_evt_loop terminating... |

[2024-07-27 16:41:50,082][00200] Component RolloutWorker_w6 stopped! |

[2024-07-27 16:41:50,091][24668] Stopping RolloutWorker_w0... |

[2024-07-27 16:41:50,091][00200] Component RolloutWorker_w0 stopped! |

[2024-07-27 16:41:50,092][24668] Loop rollout_proc0_evt_loop terminating... |
|
[2024-07-27 16:41:50,100][00200] Component RolloutWorker_w7 stopped! |
|
[2024-07-27 16:41:50,104][24675] Stopping RolloutWorker_w7... |
|
[2024-07-27 16:41:50,104][24675] Loop rollout_proc7_evt_loop terminating... |
|
[2024-07-27 16:41:50,106][00200] Component RolloutWorker_w5 stopped! |
|
[2024-07-27 16:41:50,110][24673] Stopping RolloutWorker_w5... |
|
[2024-07-27 16:41:50,110][24673] Loop rollout_proc5_evt_loop terminating... |
|
[2024-07-27 16:41:50,190][24667] Weights refcount: 2 0 |
|
[2024-07-27 16:41:50,198][00200] Component InferenceWorker_p0-w0 stopped! |
|
[2024-07-27 16:41:50,203][24667] Stopping InferenceWorker_p0-w0... |
|
[2024-07-27 16:41:50,204][24667] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-07-27 16:41:51,256][24654] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2024-07-27 16:41:51,372][24654] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000952_3899392.pth |
|
[2024-07-27 16:41:51,387][24654] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2024-07-27 16:41:51,559][00200] Component LearnerWorker_p0 stopped! |
|
[2024-07-27 16:41:51,562][00200] Waiting for process learner_proc0 to stop... |
|
[2024-07-27 16:41:51,564][24654] Stopping LearnerWorker_p0... |
|
[2024-07-27 16:41:51,564][24654] Loop learner_proc0_evt_loop terminating... |
|
[2024-07-27 16:41:52,976][00200] Waiting for process inference_proc0-0 to join... |
|
[2024-07-27 16:41:52,978][00200] Waiting for process rollout_proc0 to join... |
|
[2024-07-27 16:41:54,129][00200] Waiting for process rollout_proc1 to join... |
|
[2024-07-27 16:41:54,132][00200] Waiting for process rollout_proc2 to join... |
|
[2024-07-27 16:41:54,134][00200] Waiting for process rollout_proc3 to join... |
|
[2024-07-27 16:41:54,136][00200] Waiting for process rollout_proc4 to join... |
|
[2024-07-27 16:41:54,138][00200] Waiting for process rollout_proc5 to join... |
|
[2024-07-27 16:41:54,140][00200] Waiting for process rollout_proc6 to join... |
|
[2024-07-27 16:41:54,142][00200] Waiting for process rollout_proc7 to join... |
|
[2024-07-27 16:41:54,144][00200] Batcher 0 profile tree view: |
|
batching: 0.0473, releasing_batches: 0.0005 |
|
[2024-07-27 16:41:54,145][00200] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0001 |
|
wait_policy_total: 10.8467 |
|
update_model: 0.0534 |
|
weight_update: 0.0202 |
|
one_step: 0.1632 |
|
handle_policy_step: 3.3714 |
|
deserialize: 0.0567, stack: 0.0112, obs_to_device_normalize: 0.6572, forward: 2.2354, send_messages: 0.0736 |
|
prepare_outputs: 0.2333 |
|
to_cpu: 0.1250 |
|
[2024-07-27 16:41:54,147][00200] Learner 0 profile tree view: |
|
misc: 0.0000, prepare_batch: 2.6210 |
|
train: 3.3050 |
|
epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0005, kl_divergence: 0.0253, after_optimizer: 0.0558 |
|
calculate_losses: 1.9682 |
|
losses_init: 0.0000, forward_head: 0.3075, bptt_initial: 1.4801, tail: 0.0812, advantages_returns: 0.0012, losses: 0.0941 |
|
bptt: 0.0036 |
|
bptt_forward_core: 0.0034 |
|
update: 1.2522 |
|
clip: 0.0451 |
|
[2024-07-27 16:41:54,149][00200] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0005 |
|
[2024-07-27 16:41:54,150][00200] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0024, enqueue_policy_requests: 0.0480, env_step: 0.0700, overhead: 0.0010, complete_rollouts: 0.0000 |
|
save_policy_outputs: 0.0149 |
|
split_output_tensors: 0.0005 |
|
[2024-07-27 16:41:54,152][00200] Loop Runner_EvtLoop terminating... |
|
[2024-07-27 16:41:54,160][00200] Runner profile tree view: |
|
main_loop: 41.7294 |
|
[2024-07-27 16:41:54,162][00200] Collected {0: 4014080}, FPS: 196.3 |
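
The closing numbers are consistent with each other: this resumed session ran for main_loop = 41.73 s and advanced the frame counter from 4,005,888 to 4,014,080, i.e. 8,192 frames, exactly two train steps of 1024 x 4 frames. The session is this short because the loaded state had already passed train_for_env_steps=4,000,000, so the run winds down after its first batches. The reported average follows directly:

    frames = 4014080 - 4005888          # 8192 frames collected this session
    print(round(frames / 41.7294, 1))   # 196.3 -- matches "FPS: 196.3"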
|
[2024-07-27 16:41:54,607][00200] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-27 16:41:54,610][00200] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-27 16:41:54,611][00200] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-27 16:41:54,613][00200] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-27 16:41:54,615][00200] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-27 16:41:54,617][00200] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-27 16:41:54,618][00200] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-27 16:41:54,620][00200] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-27 16:41:54,621][00200] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-07-27 16:41:54,622][00200] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-07-27 16:41:54,623][00200] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-27 16:41:54,624][00200] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-27 16:41:54,625][00200] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-27 16:41:54,626][00200] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-07-27 16:41:54,627][00200] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-07-27 16:41:54,672][00200] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:41:54,676][00200] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:41:54,679][00200] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:41:54,699][00200] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:41:54,814][00200] Conv encoder output size: 512 |
|
[2024-07-27 16:41:54,817][00200] Policy head output size: 512 |
|
[2024-07-27 16:41:54,999][00200] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2024-07-27 16:41:56,428][00200] Num frames 100... |
|
[2024-07-27 16:41:56,618][00200] Num frames 200... |
|
[2024-07-27 16:41:56,982][00200] Num frames 300... |
|
[2024-07-27 16:41:57,462][00200] Num frames 400... |
|
[2024-07-27 16:41:57,965][00200] Num frames 500... |
|
[2024-07-27 16:41:58,415][00200] Num frames 600... |
|
[2024-07-27 16:41:58,792][00200] Num frames 700... |
|
[2024-07-27 16:41:59,013][00200] Num frames 800... |
|
[2024-07-27 16:41:59,278][00200] Num frames 900... |
|
[2024-07-27 16:41:59,563][00200] Num frames 1000... |
|
[2024-07-27 16:41:59,818][00200] Num frames 1100... |
|
[2024-07-27 16:42:00,034][00200] Num frames 1200... |
|
[2024-07-27 16:42:00,287][00200] Num frames 1300... |
|
[2024-07-27 16:42:00,492][00200] Num frames 1400... |
|
[2024-07-27 16:42:00,713][00200] Num frames 1500... |
|
[2024-07-27 16:42:00,984][00200] Num frames 1600... |
|
[2024-07-27 16:42:01,214][00200] Num frames 1700... |
|
[2024-07-27 16:42:01,438][00200] Num frames 1800... |
|
[2024-07-27 16:42:01,603][00200] Avg episode rewards: #0: 48.529, true rewards: #0: 18.530 |
|
[2024-07-27 16:42:01,609][00200] Avg episode reward: 48.529, avg true_objective: 18.530 |
|
[2024-07-27 16:42:01,719][00200] Num frames 1900... |
|
[2024-07-27 16:42:01,929][00200] Num frames 2000... |
|
[2024-07-27 16:42:02,126][00200] Num frames 2100... |
|
[2024-07-27 16:42:02,355][00200] Num frames 2200... |
|
[2024-07-27 16:42:02,613][00200] Num frames 2300... |
|
[2024-07-27 16:42:02,859][00200] Num frames 2400... |
|
[2024-07-27 16:42:03,137][00200] Num frames 2500... |
|
[2024-07-27 16:42:03,481][00200] Num frames 2600... |
|
[2024-07-27 16:42:03,836][00200] Num frames 2700... |
|
[2024-07-27 16:42:04,024][00200] Avg episode rewards: #0: 36.245, true rewards: #0: 13.745 |
|
[2024-07-27 16:42:04,031][00200] Avg episode reward: 36.245, avg true_objective: 13.745 |
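
The reward lines are running means over all episodes evaluated so far, not per-episode values: episode 1 ended with a true reward of 18.530, so the 13.745 reported after episode 2 implies episode 2 scored 2 x 13.745 - 18.530 = 8.960. (The "true" reward is the env's underlying objective, which for doom_health_gathering_supreme scales with survival time and is therefore much smaller than the shaped episode reward.) A quick check:

    ep1_true = 18.530
    running_after_ep2 = 13.745
    ep2_true = 2 * running_after_ep2 - ep1_true
    print(round(ep2_true, 3))         # ~8.96
    print((ep1_true + ep2_true) / 2)  # 13.745, matching the line above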
|
[2024-07-27 16:42:04,194][00200] Num frames 2800... |
|
[2024-07-27 16:42:04,393][00200] Num frames 2900... |
|
[2024-07-27 16:42:04,637][00200] Num frames 3000... |
|
[2024-07-27 16:42:05,018][00200] Num frames 3100... |
|
[2024-07-27 16:42:05,151][00200] Num frames 3200... |
|
[2024-07-27 16:42:05,212][00200] Avg episode rewards: #0: 27.343, true rewards: #0: 10.677 |
|
[2024-07-27 16:42:05,214][00200] Avg episode reward: 27.343, avg true_objective: 10.677 |
|
[2024-07-27 16:42:05,346][00200] Num frames 3300... |
|
[2024-07-27 16:42:05,481][00200] Num frames 3400... |
|
[2024-07-27 16:42:05,614][00200] Num frames 3500... |
|
[2024-07-27 16:42:05,758][00200] Num frames 3600... |
|
[2024-07-27 16:42:05,889][00200] Num frames 3700... |
|
[2024-07-27 16:42:06,018][00200] Num frames 3800... |
|
[2024-07-27 16:42:06,149][00200] Num frames 3900... |
|
[2024-07-27 16:42:06,281][00200] Num frames 4000... |
|
[2024-07-27 16:42:06,412][00200] Num frames 4100... |
|
[2024-07-27 16:42:06,550][00200] Num frames 4200... |
|
[2024-07-27 16:42:06,689][00200] Num frames 4300... |
|
[2024-07-27 16:42:06,826][00200] Num frames 4400... |
|
[2024-07-27 16:42:06,957][00200] Num frames 4500... |
|
[2024-07-27 16:42:07,090][00200] Num frames 4600... |
|
[2024-07-27 16:42:07,243][00200] Avg episode rewards: #0: 29.937, true rewards: #0: 11.687 |
|
[2024-07-27 16:42:07,245][00200] Avg episode reward: 29.937, avg true_objective: 11.687 |
|
[2024-07-27 16:42:07,281][00200] Num frames 4700... |
|
[2024-07-27 16:42:07,405][00200] Num frames 4800... |
|
[2024-07-27 16:42:07,535][00200] Num frames 4900... |
|
[2024-07-27 16:42:07,673][00200] Num frames 5000... |
|
[2024-07-27 16:42:07,811][00200] Num frames 5100... |
|
[2024-07-27 16:42:07,939][00200] Num frames 5200... |
|
[2024-07-27 16:42:08,067][00200] Num frames 5300... |
|
[2024-07-27 16:42:08,195][00200] Num frames 5400... |
|
[2024-07-27 16:42:08,327][00200] Num frames 5500... |
|
[2024-07-27 16:42:08,483][00200] Num frames 5600... |
|
[2024-07-27 16:42:08,696][00200] Num frames 5700... |
|
[2024-07-27 16:42:08,901][00200] Num frames 5800... |
|
[2024-07-27 16:42:09,091][00200] Num frames 5900... |
|
[2024-07-27 16:42:09,296][00200] Num frames 6000... |
|
[2024-07-27 16:42:09,482][00200] Num frames 6100... |
|
[2024-07-27 16:42:09,627][00200] Avg episode rewards: #0: 31.294, true rewards: #0: 12.294 |
|
[2024-07-27 16:42:09,629][00200] Avg episode reward: 31.294, avg true_objective: 12.294 |
|
[2024-07-27 16:42:09,724][00200] Num frames 6200... |
|
[2024-07-27 16:42:09,916][00200] Num frames 6300... |
|
[2024-07-27 16:42:10,108][00200] Num frames 6400... |
|
[2024-07-27 16:42:10,299][00200] Num frames 6500... |
|
[2024-07-27 16:42:10,492][00200] Num frames 6600... |
|
[2024-07-27 16:42:10,701][00200] Num frames 6700... |
|
[2024-07-27 16:42:10,905][00200] Num frames 6800... |
|
[2024-07-27 16:42:11,068][00200] Num frames 6900... |
|
[2024-07-27 16:42:11,202][00200] Num frames 7000... |
|
[2024-07-27 16:42:11,335][00200] Num frames 7100... |
|
[2024-07-27 16:42:11,465][00200] Num frames 7200... |
|
[2024-07-27 16:42:11,600][00200] Num frames 7300... |
|
[2024-07-27 16:42:11,738][00200] Avg episode rewards: #0: 30.605, true rewards: #0: 12.272 |
|
[2024-07-27 16:42:11,741][00200] Avg episode reward: 30.605, avg true_objective: 12.272 |
|
[2024-07-27 16:42:11,791][00200] Num frames 7400... |
|
[2024-07-27 16:42:11,924][00200] Num frames 7500... |
|
[2024-07-27 16:42:12,057][00200] Num frames 7600... |
|
[2024-07-27 16:42:12,187][00200] Num frames 7700... |
|
[2024-07-27 16:42:12,313][00200] Num frames 7800... |
|
[2024-07-27 16:42:12,449][00200] Num frames 7900... |
|
[2024-07-27 16:42:12,587][00200] Num frames 8000... |
|
[2024-07-27 16:42:12,724][00200] Num frames 8100... |
|
[2024-07-27 16:42:12,859][00200] Num frames 8200... |
|
[2024-07-27 16:42:12,998][00200] Num frames 8300... |
|
[2024-07-27 16:42:13,128][00200] Num frames 8400... |
|
[2024-07-27 16:42:13,262][00200] Num frames 8500... |
|
[2024-07-27 16:42:13,420][00200] Avg episode rewards: #0: 29.964, true rewards: #0: 12.250 |
|
[2024-07-27 16:42:13,422][00200] Avg episode reward: 29.964, avg true_objective: 12.250 |
|
[2024-07-27 16:42:13,457][00200] Num frames 8600... |
|
[2024-07-27 16:42:13,591][00200] Num frames 8700... |
|
[2024-07-27 16:42:13,716][00200] Num frames 8800... |
|
[2024-07-27 16:42:13,845][00200] Num frames 8900... |
|
[2024-07-27 16:42:13,975][00200] Num frames 9000... |
|
[2024-07-27 16:42:14,107][00200] Num frames 9100... |
|
[2024-07-27 16:42:14,232][00200] Num frames 9200... |
|
[2024-07-27 16:42:14,362][00200] Num frames 9300... |
|
[2024-07-27 16:42:14,494][00200] Num frames 9400... |
|
[2024-07-27 16:42:14,628][00200] Num frames 9500... |
|
[2024-07-27 16:42:14,754][00200] Num frames 9600... |
|
[2024-07-27 16:42:14,898][00200] Avg episode rewards: #0: 29.212, true rewards: #0: 12.087 |
|
[2024-07-27 16:42:14,900][00200] Avg episode reward: 29.212, avg true_objective: 12.087 |
|
[2024-07-27 16:42:14,941][00200] Num frames 9700... |
|
[2024-07-27 16:42:15,073][00200] Num frames 9800... |
|
[2024-07-27 16:42:15,200][00200] Num frames 9900... |
|
[2024-07-27 16:42:15,327][00200] Num frames 10000... |
|
[2024-07-27 16:42:15,456][00200] Num frames 10100... |
|
[2024-07-27 16:42:15,593][00200] Num frames 10200... |
|
[2024-07-27 16:42:15,727][00200] Num frames 10300... |
|
[2024-07-27 16:42:15,858][00200] Num frames 10400... |
|
[2024-07-27 16:42:15,986][00200] Num frames 10500... |
|
[2024-07-27 16:42:16,094][00200] Avg episode rewards: #0: 27.927, true rewards: #0: 11.704 |
|
[2024-07-27 16:42:16,097][00200] Avg episode reward: 27.927, avg true_objective: 11.704 |
|
[2024-07-27 16:42:16,183][00200] Num frames 10600... |
|
[2024-07-27 16:42:16,311][00200] Num frames 10700... |
|
[2024-07-27 16:42:16,441][00200] Num frames 10800... |
|
[2024-07-27 16:42:16,580][00200] Num frames 10900... |
|
[2024-07-27 16:42:16,714][00200] Num frames 11000... |
|
[2024-07-27 16:42:16,851][00200] Num frames 11100... |
|
[2024-07-27 16:42:16,984][00200] Num frames 11200... |
|
[2024-07-27 16:42:17,120][00200] Num frames 11300... |
|
[2024-07-27 16:42:17,253][00200] Num frames 11400... |
|
[2024-07-27 16:42:17,383][00200] Num frames 11500... |
|
[2024-07-27 16:42:17,513][00200] Num frames 11600... |
|
[2024-07-27 16:42:17,649][00200] Num frames 11700... |
|
[2024-07-27 16:42:17,781][00200] Num frames 11800... |
|
[2024-07-27 16:42:17,913][00200] Num frames 11900... |
|
[2024-07-27 16:42:18,041][00200] Num frames 12000... |
|
[2024-07-27 16:42:18,187][00200] Num frames 12100... |
|
[2024-07-27 16:42:18,320][00200] Num frames 12200... |
|
[2024-07-27 16:42:18,451][00200] Num frames 12300... |
|
[2024-07-27 16:42:18,588][00200] Num frames 12400... |
|
[2024-07-27 16:42:18,721][00200] Num frames 12500... |
|
[2024-07-27 16:42:18,849][00200] Num frames 12600... |
|
[2024-07-27 16:42:18,947][00200] Avg episode rewards: #0: 30.934, true rewards: #0: 12.634 |
|
[2024-07-27 16:42:18,949][00200] Avg episode reward: 30.934, avg true_objective: 12.634 |
|
[2024-07-27 16:43:41,887][00200] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2024-07-27 16:43:42,878][00200] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-27 16:43:42,880][00200] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-27 16:43:42,881][00200] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-27 16:43:42,882][00200] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-27 16:43:42,884][00200] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-27 16:43:42,885][00200] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-27 16:43:42,887][00200] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2024-07-27 16:43:42,888][00200] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-27 16:43:42,891][00200] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2024-07-27 16:43:42,897][00200] Adding new argument 'hf_repository'='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2024-07-27 16:43:42,900][00200] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-27 16:43:42,901][00200] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-27 16:43:42,902][00200] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-27 16:43:42,906][00200] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-07-27 16:43:42,907][00200] Using frameskip 1 and render_action_repeat=4 for evaluation |
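
Unlike the first evaluation pass, this one sets push_to_hub=True with hf_repository='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme', so once the replay is rendered the experiment artifacts (checkpoint, config.json, replay.mp4) are uploaded to the Hugging Face Hub. A hedged sketch of roughly what that upload amounts to, calling huggingface_hub directly rather than Sample Factory's own wrapper, and assuming the target repo exists and a token is configured:

    from huggingface_hub import HfApi

    api = HfApi()  # relies on a prior `huggingface-cli login` or HF_TOKEN
    api.upload_folder(
        folder_path="/content/train_dir/default_experiment",
        repo_id="ThomasSimonini/rl_course_vizdoom_health_gathering_supreme",
        repo_type="model",
    )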
|
[2024-07-27 16:43:42,963][00200] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:43:42,971][00200] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:43:42,999][00200] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:43:43,079][00200] Conv encoder output size: 512 |
|
[2024-07-27 16:43:43,081][00200] Policy head output size: 512 |
|
[2024-07-27 16:43:43,109][00200] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2024-07-27 16:43:43,889][00200] Num frames 100... |
|
[2024-07-27 16:43:44,101][00200] Num frames 200... |
|
[2024-07-27 16:43:44,298][00200] Num frames 300... |
|
[2024-07-27 16:43:44,359][00200] Avg episode rewards: #0: 4.010, true rewards: #0: 3.010 |
|
[2024-07-27 16:43:44,361][00200] Avg episode reward: 4.010, avg true_objective: 3.010 |
|
[2024-07-27 16:43:44,582][00200] Num frames 400... |
|
[2024-07-27 16:43:44,822][00200] Num frames 500... |
|
[2024-07-27 16:43:45,041][00200] Num frames 600... |
|
[2024-07-27 16:43:45,215][00200] Num frames 700... |
|
[2024-07-27 16:43:45,280][00200] Avg episode rewards: #0: 5.015, true rewards: #0: 3.515 |
|
[2024-07-27 16:43:45,282][00200] Avg episode reward: 5.015, avg true_objective: 3.515 |
|
[2024-07-27 16:43:45,447][00200] Num frames 800... |
|
[2024-07-27 16:43:45,626][00200] Num frames 900... |
|
[2024-07-27 16:43:45,804][00200] Num frames 1000... |
|
[2024-07-27 16:43:45,977][00200] Num frames 1100... |
|
[2024-07-27 16:43:46,153][00200] Num frames 1200... |
|
[2024-07-27 16:43:46,324][00200] Num frames 1300... |
|
[2024-07-27 16:43:46,499][00200] Num frames 1400... |
|
[2024-07-27 16:43:46,573][00200] Avg episode rewards: #0: 8.023, true rewards: #0: 4.690 |
|
[2024-07-27 16:43:46,575][00200] Avg episode reward: 8.023, avg true_objective: 4.690 |
|
[2024-07-27 16:43:46,736][00200] Num frames 1500... |
|
[2024-07-27 16:43:46,911][00200] Num frames 1600... |
|
[2024-07-27 16:43:47,090][00200] Num frames 1700... |
|
[2024-07-27 16:43:47,271][00200] Num frames 1800... |
|
[2024-07-27 16:43:47,447][00200] Num frames 1900... |
|
[2024-07-27 16:43:47,638][00200] Num frames 2000... |
|
[2024-07-27 16:43:47,829][00200] Num frames 2100... |
|
[2024-07-27 16:43:48,015][00200] Num frames 2200... |
|
[2024-07-27 16:43:48,204][00200] Num frames 2300... |
|
[2024-07-27 16:43:48,389][00200] Num frames 2400... |
|
[2024-07-27 16:43:48,576][00200] Num frames 2500... |
|
[2024-07-27 16:43:48,821][00200] Num frames 2600... |
|
[2024-07-27 16:43:49,052][00200] Num frames 2700... |
|
[2024-07-27 16:43:49,277][00200] Num frames 2800... |
|
[2024-07-27 16:43:49,475][00200] Num frames 2900... |
|
[2024-07-27 16:43:49,657][00200] Num frames 3000... |
|
[2024-07-27 16:43:49,877][00200] Avg episode rewards: #0: 16.677, true rewards: #0: 7.677 |
|
[2024-07-27 16:43:49,879][00200] Avg episode reward: 16.677, avg true_objective: 7.677 |
|
[2024-07-27 16:43:49,937][00200] Num frames 3100... |
|
[2024-07-27 16:43:50,131][00200] Num frames 3200... |
|
[2024-07-27 16:43:50,326][00200] Num frames 3300... |
|
[2024-07-27 16:43:50,524][00200] Num frames 3400... |
|
[2024-07-27 16:43:50,728][00200] Num frames 3500... |
|
[2024-07-27 16:43:50,921][00200] Num frames 3600... |
|
[2024-07-27 16:43:51,127][00200] Num frames 3700... |
|
[2024-07-27 16:43:51,325][00200] Num frames 3800... |
|
[2024-07-27 16:43:51,520][00200] Num frames 3900... |
|
[2024-07-27 16:43:51,702][00200] Num frames 4000... |
|
[2024-07-27 16:43:51,892][00200] Num frames 4100... |
|
[2024-07-27 16:43:52,045][00200] Num frames 4200... |
|
[2024-07-27 16:43:52,177][00200] Num frames 4300... |
|
[2024-07-27 16:43:52,307][00200] Avg episode rewards: #0: 20.302, true rewards: #0: 8.702 |
|
[2024-07-27 16:43:52,309][00200] Avg episode reward: 20.302, avg true_objective: 8.702 |
|
[2024-07-27 16:43:52,372][00200] Num frames 4400... |
|
[2024-07-27 16:43:52,499][00200] Num frames 4500... |
|
[2024-07-27 16:43:52,635][00200] Num frames 4600... |
|
[2024-07-27 16:43:52,759][00200] Num frames 4700... |
|
[2024-07-27 16:43:52,894][00200] Num frames 4800... |
|
[2024-07-27 16:43:53,020][00200] Num frames 4900... |
|
[2024-07-27 16:43:53,146][00200] Num frames 5000... |
|
[2024-07-27 16:43:53,210][00200] Avg episode rewards: #0: 19.010, true rewards: #0: 8.343 |
|
[2024-07-27 16:43:53,211][00200] Avg episode reward: 19.010, avg true_objective: 8.343 |
|
[2024-07-27 16:43:53,336][00200] Num frames 5100... |
|
[2024-07-27 16:43:53,473][00200] Num frames 5200... |
|
[2024-07-27 16:43:53,615][00200] Num frames 5300... |
|
[2024-07-27 16:43:53,745][00200] Num frames 5400... |
|
[2024-07-27 16:43:53,883][00200] Num frames 5500... |
|
[2024-07-27 16:43:54,049][00200] Num frames 5600... |
|
[2024-07-27 16:43:54,182][00200] Num frames 5700... |
|
[2024-07-27 16:43:54,319][00200] Num frames 5800... |
|
[2024-07-27 16:43:54,478][00200] Num frames 5900... |
|
[2024-07-27 16:43:54,628][00200] Num frames 6000... |
|
[2024-07-27 16:43:54,759][00200] Num frames 6100... |
|
[2024-07-27 16:43:54,889][00200] Num frames 6200... |
|
[2024-07-27 16:43:55,030][00200] Num frames 6300... |
|
[2024-07-27 16:43:55,226][00200] Num frames 6400... |
|
[2024-07-27 16:43:55,416][00200] Num frames 6500... |
|
[2024-07-27 16:43:55,600][00200] Num frames 6600... |
|
[2024-07-27 16:43:55,779][00200] Num frames 6700... |
|
[2024-07-27 16:43:55,963][00200] Num frames 6800... |
|
[2024-07-27 16:43:56,138][00200] Num frames 6900... |
|
[2024-07-27 16:43:56,329][00200] Num frames 7000... |
|
[2024-07-27 16:43:56,519][00200] Num frames 7100... |
|
[2024-07-27 16:43:56,592][00200] Avg episode rewards: #0: 24.437, true rewards: #0: 10.151 |
|
[2024-07-27 16:43:56,594][00200] Avg episode reward: 24.437, avg true_objective: 10.151 |
|
[2024-07-27 16:43:56,772][00200] Num frames 7200... |
|
[2024-07-27 16:43:56,960][00200] Num frames 7300... |
|
[2024-07-27 16:43:57,137][00200] Avg episode rewards: #0: 21.702, true rewards: #0: 9.202 |
|
[2024-07-27 16:43:57,138][00200] Avg episode reward: 21.702, avg true_objective: 9.202 |
|
[2024-07-27 16:43:57,215][00200] Num frames 7400... |
|
[2024-07-27 16:43:57,414][00200] Num frames 7500... |
|
[2024-07-27 16:43:57,543][00200] Num frames 7600... |
|
[2024-07-27 16:43:57,677][00200] Num frames 7700... |
|
[2024-07-27 16:43:57,806][00200] Num frames 7800... |
|
[2024-07-27 16:43:57,937][00200] Num frames 7900... |
|
[2024-07-27 16:43:58,076][00200] Num frames 8000... |
|
[2024-07-27 16:43:58,204][00200] Num frames 8100... |
|
[2024-07-27 16:43:58,333][00200] Num frames 8200... |
|
[2024-07-27 16:43:58,464][00200] Num frames 8300... |
|
[2024-07-27 16:43:58,644][00200] Avg episode rewards: #0: 21.877, true rewards: #0: 9.321 |
|
[2024-07-27 16:43:58,646][00200] Avg episode reward: 21.877, avg true_objective: 9.321 |
|
[2024-07-27 16:43:58,666][00200] Num frames 8400... |
|
[2024-07-27 16:43:58,794][00200] Num frames 8500... |
|
[2024-07-27 16:43:58,926][00200] Num frames 8600... |
|
[2024-07-27 16:43:59,063][00200] Num frames 8700... |
|
[2024-07-27 16:43:59,195][00200] Num frames 8800... |
|
[2024-07-27 16:43:59,329][00200] Num frames 8900... |
|
[2024-07-27 16:43:59,468][00200] Num frames 9000... |
|
[2024-07-27 16:43:59,606][00200] Num frames 9100... |
|
[2024-07-27 16:43:59,688][00200] Avg episode rewards: #0: 21.220, true rewards: #0: 9.120 |
|
[2024-07-27 16:43:59,690][00200] Avg episode reward: 21.220, avg true_objective: 9.120 |
|
[2024-07-27 16:44:57,589][00200] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2024-07-27 16:59:36,732][30159] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2024-07-27 16:59:36,736][30159] Rollout worker 0 uses device cpu |
|
[2024-07-27 16:59:36,737][30159] Rollout worker 1 uses device cpu |
|
[2024-07-27 16:59:36,739][30159] Rollout worker 2 uses device cpu |
|
[2024-07-27 16:59:36,740][30159] Rollout worker 3 uses device cpu |
|
[2024-07-27 16:59:36,742][30159] Rollout worker 4 uses device cpu |
|
[2024-07-27 16:59:36,746][30159] Rollout worker 5 uses device cpu |
|
[2024-07-27 16:59:36,747][30159] Rollout worker 6 uses device cpu |
|
[2024-07-27 16:59:36,748][30159] Rollout worker 7 uses device cpu |
|
[2024-07-27 16:59:36,901][30159] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:59:36,903][30159] InferenceWorker_p0-w0: min num requests: 2 |
|
[2024-07-27 16:59:36,940][30159] Starting all processes... |
|
[2024-07-27 16:59:36,941][30159] Starting process learner_proc0 |
|
[2024-07-27 16:59:36,990][30159] Starting all processes... |
|
[2024-07-27 16:59:37,000][30159] Starting process inference_proc0-0 |
|
[2024-07-27 16:59:37,001][30159] Starting process rollout_proc0 |
|
[2024-07-27 16:59:37,001][30159] Starting process rollout_proc1 |
|
[2024-07-27 16:59:37,006][30159] Starting process rollout_proc2 |
|
[2024-07-27 16:59:37,028][30159] Starting process rollout_proc3 |
|
[2024-07-27 16:59:37,028][30159] Starting process rollout_proc4 |
|
[2024-07-27 16:59:37,029][30159] Starting process rollout_proc5 |
|
[2024-07-27 16:59:37,029][30159] Starting process rollout_proc6 |
|
[2024-07-27 16:59:37,029][30159] Starting process rollout_proc7 |
|
[2024-07-27 16:59:48,074][33847] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:59:48,078][33847] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2024-07-27 16:59:48,151][33847] Num visible devices: 1 |
|
[2024-07-27 16:59:48,185][33864] Worker 3 uses CPU cores [1] |
|
[2024-07-27 16:59:48,194][33847] Starting seed is not provided |
|
[2024-07-27 16:59:48,195][33847] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:59:48,195][33847] Initializing actor-critic model on device cuda:0 |
|
[2024-07-27 16:59:48,196][33847] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:59:48,198][33847] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:59:48,282][33847] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:59:48,628][33862] Worker 1 uses CPU cores [1] |
|
[2024-07-27 16:59:48,647][33861] Worker 0 uses CPU cores [0] |
|
[2024-07-27 16:59:48,663][33865] Worker 4 uses CPU cores [0] |
|
[2024-07-27 16:59:48,699][33863] Worker 2 uses CPU cores [0] |
|
[2024-07-27 16:59:48,720][33867] Worker 6 uses CPU cores [0] |
|
[2024-07-27 16:59:48,734][33860] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:59:48,735][33860] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2024-07-27 16:59:48,756][33860] Num visible devices: 1 |
|
[2024-07-27 16:59:48,760][33868] Worker 7 uses CPU cores [1] |
|
[2024-07-27 16:59:48,772][33866] Worker 5 uses CPU cores [1] |
|
[2024-07-27 16:59:48,815][33847] Conv encoder output size: 512 |
|
[2024-07-27 16:59:48,815][33847] Policy head output size: 512 |
|
[2024-07-27 16:59:48,832][33847] Created Actor Critic model with architecture: |
|
[2024-07-27 16:59:48,832][33847] ActorCriticSharedWeights( |
|
(obs_normalizer): ObservationNormalizer( |
|
(running_mean_std): RunningMeanStdDictInPlace( |
|
(running_mean_std): ModuleDict( |
|
(obs): RunningMeanStdInPlace() |
|
) |
|
) |
|
) |
|
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|
(encoder): VizdoomEncoder( |
|
(basic_encoder): ConvEncoder( |
|
(enc): RecursiveScriptModule( |
|
original_name=ConvEncoderImpl |
|
(conv_head): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Conv2d) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
(2): RecursiveScriptModule(original_name=Conv2d) |
|
(3): RecursiveScriptModule(original_name=ELU) |
|
(4): RecursiveScriptModule(original_name=Conv2d) |
|
(5): RecursiveScriptModule(original_name=ELU) |
|
) |
|
(mlp_layers): RecursiveScriptModule( |
|
original_name=Sequential |
|
(0): RecursiveScriptModule(original_name=Linear) |
|
(1): RecursiveScriptModule(original_name=ELU) |
|
) |
|
) |
|
) |
|
) |
|
(core): ModelCoreRNN( |
|
(core): GRU(512, 512) |
|
) |
|
(decoder): MlpDecoder( |
|
(mlp): Identity() |
|
) |
|
(critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|
(action_parameterization): ActionParameterizationDefault( |
|
(distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|
) |
|
) |
|
[2024-07-27 16:59:52,466][33847] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2024-07-27 16:59:52,467][33847] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2024-07-27 16:59:52,505][33847] Loading model from checkpoint |
|
[2024-07-27 16:59:52,509][33847] Loaded experiment state at self.train_step=980, self.env_steps=4014080 |
|
[2024-07-27 16:59:52,510][33847] Initialized policy 0 weights for model version 980 |
|
[2024-07-27 16:59:52,516][33847] LearnerWorker_p0 finished initialization! |
|
[2024-07-27 16:59:52,517][33847] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-07-27 16:59:52,632][33860] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 16:59:52,633][33860] RunningMeanStd input shape: (1,) |
|
[2024-07-27 16:59:52,650][33860] ConvEncoder: input_channels=3 |
|
[2024-07-27 16:59:52,759][33860] Conv encoder output size: 512 |
|
[2024-07-27 16:59:52,760][33860] Policy head output size: 512 |
|
[2024-07-27 16:59:53,123][30159] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4014080. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 16:59:54,581][30159] Inference worker 0-0 is ready! |
|
[2024-07-27 16:59:54,583][30159] All inference workers are ready! Signal rollout workers to start! |
|
[2024-07-27 16:59:54,753][33863] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,766][33866] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,792][33864] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,796][33862] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,804][33868] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,813][33867] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,853][33865] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:54,870][33861] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 16:59:56,286][33861] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:56,292][33863] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:56,856][33866] Decorrelating experience for 0 frames... |

[2024-07-27 16:59:56,858][33864] Decorrelating experience for 0 frames... |

[2024-07-27 16:59:56,862][33868] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:56,869][33862] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:56,899][30159] Heartbeat connected on Batcher_0 |
|
[2024-07-27 16:59:56,917][30159] Heartbeat connected on LearnerWorker_p0 |
|
[2024-07-27 16:59:56,950][30159] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2024-07-27 16:59:58,126][30159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 16:59:58,374][33864] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:59:58,384][33866] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:59:58,409][33862] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:59:58,471][33861] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:59:58,555][33865] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:59,131][33863] Decorrelating experience for 32 frames... |
|
[2024-07-27 16:59:59,144][33867] Decorrelating experience for 0 frames... |
|
[2024-07-27 16:59:59,919][33865] Decorrelating experience for 32 frames... |
|
[2024-07-27 17:00:00,070][33868] Decorrelating experience for 32 frames... |
|
[2024-07-27 17:00:00,499][33862] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:00,502][33866] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:00,526][33867] Decorrelating experience for 32 frames... |
|
[2024-07-27 17:00:01,558][33868] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:01,640][33862] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:01,652][33865] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:01,727][33861] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:01,816][30159] Heartbeat connected on RolloutWorker_w1 |
|
[2024-07-27 17:00:01,941][33863] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:02,639][33865] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:02,789][30159] Heartbeat connected on RolloutWorker_w4 |
|
[2024-07-27 17:00:02,831][33863] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:02,923][30159] Heartbeat connected on RolloutWorker_w2 |
|
[2024-07-27 17:00:02,949][33868] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:03,017][33866] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:03,123][30159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 17:00:03,194][30159] Heartbeat connected on RolloutWorker_w7 |
|
[2024-07-27 17:00:03,281][30159] Heartbeat connected on RolloutWorker_w5 |
|
[2024-07-27 17:00:03,604][33861] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:03,695][33864] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:03,758][30159] Heartbeat connected on RolloutWorker_w0 |
|
[2024-07-27 17:00:04,245][33864] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:04,270][33867] Decorrelating experience for 64 frames... |
|
[2024-07-27 17:00:04,365][30159] Heartbeat connected on RolloutWorker_w3 |
|
[2024-07-27 17:00:04,817][33867] Decorrelating experience for 96 frames... |
|
[2024-07-27 17:00:05,197][30159] Heartbeat connected on RolloutWorker_w6 |
|
[2024-07-27 17:00:08,123][30159] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 124.0. Samples: 1860. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-07-27 17:00:08,126][30159] Avg episode reward: [(0, '2.403')] |
|
[2024-07-27 17:00:08,385][33847] Signal inference workers to stop experience collection... |
|
[2024-07-27 17:00:08,433][33860] InferenceWorker_p0-w0: stopping experience collection |
|
[2024-07-27 17:00:10,481][33847] Signal inference workers to resume experience collection... |
|
[2024-07-27 17:00:10,483][33847] Stopping Batcher_0... |
|
[2024-07-27 17:00:10,483][33860] InferenceWorker_p0-w0: resuming experience collection |
|
[2024-07-27 17:00:10,483][33847] Loop batcher_evt_loop terminating... |
|
[2024-07-27 17:00:10,484][30159] Component Batcher_0 stopped! |
|
[2024-07-27 17:00:10,542][30159] Component RolloutWorker_w1 stopped! |

[2024-07-27 17:00:10,544][33862] Stopping RolloutWorker_w1... |

[2024-07-27 17:00:10,546][30159] Component RolloutWorker_w4 stopped! |

[2024-07-27 17:00:10,552][33862] Loop rollout_proc1_evt_loop terminating... |

[2024-07-27 17:00:10,553][33865] Stopping RolloutWorker_w4... |

[2024-07-27 17:00:10,553][33865] Loop rollout_proc4_evt_loop terminating... |

[2024-07-27 17:00:10,564][33864] Stopping RolloutWorker_w3... |

[2024-07-27 17:00:10,565][33860] Weights refcount: 2 0 |

[2024-07-27 17:00:10,567][33868] Stopping RolloutWorker_w7... |

[2024-07-27 17:00:10,571][30159] Component RolloutWorker_w3 stopped! |

[2024-07-27 17:00:10,572][30159] Component RolloutWorker_w7 stopped! |

[2024-07-27 17:00:10,574][33860] Stopping InferenceWorker_p0-w0... |

[2024-07-27 17:00:10,576][33860] Loop inference_proc0-0_evt_loop terminating... |

[2024-07-27 17:00:10,576][33864] Loop rollout_proc3_evt_loop terminating... |

[2024-07-27 17:00:10,578][30159] Component InferenceWorker_p0-w0 stopped! |

[2024-07-27 17:00:10,582][33868] Loop rollout_proc7_evt_loop terminating... |

[2024-07-27 17:00:10,591][30159] Component RolloutWorker_w0 stopped! |

[2024-07-27 17:00:10,596][33861] Stopping RolloutWorker_w0... |

[2024-07-27 17:00:10,597][33861] Loop rollout_proc0_evt_loop terminating... |
|
[2024-07-27 17:00:10,607][33866] Stopping RolloutWorker_w5... |

[2024-07-27 17:00:10,608][30159] Component RolloutWorker_w5 stopped! |

[2024-07-27 17:00:10,610][33866] Loop rollout_proc5_evt_loop terminating... |
|
[2024-07-27 17:00:10,632][30159] Component RolloutWorker_w2 stopped! |
|
[2024-07-27 17:00:10,638][33863] Stopping RolloutWorker_w2... |
|
[2024-07-27 17:00:10,648][33863] Loop rollout_proc2_evt_loop terminating... |
|
[2024-07-27 17:00:10,698][30159] Component RolloutWorker_w6 stopped! |
|
[2024-07-27 17:00:10,700][33867] Stopping RolloutWorker_w6... |
|
[2024-07-27 17:00:10,703][33867] Loop rollout_proc6_evt_loop terminating... |
|
[2024-07-27 17:00:13,721][33847] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... |
|
[2024-07-27 17:00:13,830][33847] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth |
|
[2024-07-27 17:00:13,832][33847] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... |
|
[2024-07-27 17:00:13,996][33847] Stopping LearnerWorker_p0... |
|
[2024-07-27 17:00:13,997][33847] Loop learner_proc0_evt_loop terminating... |
|
[2024-07-27 17:00:13,998][30159] Component LearnerWorker_p0 stopped! |
|
[2024-07-27 17:00:14,005][30159] Waiting for process learner_proc0 to stop... |
|
[2024-07-27 17:00:15,328][30159] Waiting for process inference_proc0-0 to join... |
|
[2024-07-27 17:00:15,335][30159] Waiting for process rollout_proc0 to join... |
|
[2024-07-27 17:00:15,439][30159] Waiting for process rollout_proc1 to join... |
|
[2024-07-27 17:00:15,445][30159] Waiting for process rollout_proc2 to join... |
|
[2024-07-27 17:00:15,449][30159] Waiting for process rollout_proc3 to join... |
|
[2024-07-27 17:00:15,455][30159] Waiting for process rollout_proc4 to join... |
|
[2024-07-27 17:00:15,459][30159] Waiting for process rollout_proc5 to join... |
|
[2024-07-27 17:00:15,462][30159] Waiting for process rollout_proc6 to join... |
|
[2024-07-27 17:00:15,465][30159] Waiting for process rollout_proc7 to join... |
|
[2024-07-27 17:00:15,469][30159] Batcher 0 profile tree view: |
|
batching: 0.0359, releasing_batches: 0.0005 |
|
[2024-07-27 17:00:15,470][30159] InferenceWorker_p0-w0 profile tree view: |
|
update_model: 0.0144 |
|
wait_policy: 0.0000 |
|
wait_policy_total: 8.8166 |
|
one_step: 0.0034 |
|
handle_policy_step: 4.7231 |
|
deserialize: 0.0611, stack: 0.0073, obs_to_device_normalize: 0.3358, forward: 3.8944, send_messages: 0.0878 |
|
prepare_outputs: 0.2490 |
|
to_cpu: 0.1309 |
|
[2024-07-27 17:00:15,472][30159] Learner 0 profile tree view: |
|
misc: 0.0000, prepare_batch: 5.9911 |
|
train: 1.4930 |
|
epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0004, kl_divergence: 0.0025, after_optimizer: 0.0143 |
|
calculate_losses: 0.3232 |
|
losses_init: 0.0000, forward_head: 0.1888, bptt_initial: 0.1040, tail: 0.0018, advantages_returns: 0.0029, losses: 0.0218 |
|
bptt: 0.0032 |
|
bptt_forward_core: 0.0030 |
|
update: 1.1515 |
|
clip: 0.0058 |
|
[2024-07-27 17:00:15,475][30159] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0008, enqueue_policy_requests: 0.7352, env_step: 2.7217, overhead: 0.0405, complete_rollouts: 0.0006 |
|
save_policy_outputs: 0.0806 |
|
split_output_tensors: 0.0539 |
|
[2024-07-27 17:00:15,477][30159] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0020, enqueue_policy_requests: 0.4856, env_step: 2.8832, overhead: 0.0862, complete_rollouts: 0.0071 |
|
save_policy_outputs: 0.1253 |
|
split_output_tensors: 0.0507 |
|
[2024-07-27 17:00:15,478][30159] Loop Runner_EvtLoop terminating... |
|
[2024-07-27 17:00:15,480][30159] Runner profile tree view: |
|
main_loop: 38.5402 |
|
[2024-07-27 17:00:15,481][30159] Collected {0: 4022272}, FPS: 212.6 |
|
[2024-07-27 17:00:32,613][30159] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-27 17:00:32,615][30159] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-27 17:00:32,617][30159] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-27 17:00:32,619][30159] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-27 17:00:32,621][30159] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-27 17:00:32,623][30159] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-27 17:00:32,626][30159] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-27 17:00:32,627][30159] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-27 17:00:32,629][30159] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-07-27 17:00:32,630][30159] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-07-27 17:00:32,632][30159] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-27 17:00:32,633][30159] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-27 17:00:32,635][30159] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-27 17:00:32,636][30159] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-07-27 17:00:32,638][30159] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-07-27 17:00:32,657][30159] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-27 17:00:32,660][30159] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-27 17:00:32,662][30159] RunningMeanStd input shape: (1,) |
|
[2024-07-27 17:00:32,679][30159] ConvEncoder: input_channels=3 |
|
[2024-07-27 17:00:32,808][30159] Conv encoder output size: 512 |
|
[2024-07-27 17:00:32,810][30159] Policy head output size: 512 |
|
[2024-07-27 17:00:34,418][30159] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... |
[2024-07-27 17:00:35,522][30159] Num frames 100...
[2024-07-27 17:00:35,724][30159] Num frames 200...
[2024-07-27 17:00:35,929][30159] Num frames 300...
[2024-07-27 17:00:36,108][30159] Num frames 400...
[2024-07-27 17:00:36,292][30159] Num frames 500...
[2024-07-27 17:00:36,487][30159] Num frames 600...
[2024-07-27 17:00:36,685][30159] Num frames 700...
[2024-07-27 17:00:36,877][30159] Num frames 800...
[2024-07-27 17:00:37,073][30159] Num frames 900...
[2024-07-27 17:00:37,266][30159] Num frames 1000...
[2024-07-27 17:00:37,440][30159] Num frames 1100...
[2024-07-27 17:00:37,573][30159] Num frames 1200...
[2024-07-27 17:00:37,713][30159] Num frames 1300...
[2024-07-27 17:00:37,846][30159] Num frames 1400...
[2024-07-27 17:00:37,981][30159] Num frames 1500...
[2024-07-27 17:00:38,163][30159] Avg episode rewards: #0: 40.950, true rewards: #0: 15.950
[2024-07-27 17:00:38,164][30159] Avg episode reward: 40.950, avg true_objective: 15.950
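(Note on the paired summary lines: 'Avg episode rewards' is the running mean, over episodes finished so far, of the reward stream the agent was trained on, which may include shaping; 'true rewards' / 'avg true_objective' is the running mean of the environment's unshaped objective. The 'Num frames' counter is cumulative across the whole evaluation, which is why the next episode resumes at 1600 rather than restarting at 100.)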
[2024-07-27 17:00:38,175][30159] Num frames 1600...
[2024-07-27 17:00:38,308][30159] Num frames 1700...
[2024-07-27 17:00:38,438][30159] Num frames 1800...
[2024-07-27 17:00:38,567][30159] Num frames 1900...
[2024-07-27 17:00:38,702][30159] Num frames 2000...
[2024-07-27 17:00:38,840][30159] Num frames 2100...
[2024-07-27 17:00:39,006][30159] Num frames 2200...
[2024-07-27 17:00:39,137][30159] Num frames 2300...
[2024-07-27 17:00:39,268][30159] Num frames 2400...
[2024-07-27 17:00:39,360][30159] Avg episode rewards: #0: 30.630, true rewards: #0: 12.130
[2024-07-27 17:00:39,361][30159] Avg episode reward: 30.630, avg true_objective: 12.130
[2024-07-27 17:00:39,454][30159] Num frames 2500...
[2024-07-27 17:00:39,579][30159] Num frames 2600...
[2024-07-27 17:00:39,712][30159] Num frames 2700...
[2024-07-27 17:00:39,845][30159] Num frames 2800...
[2024-07-27 17:00:39,972][30159] Num frames 2900...
[2024-07-27 17:00:40,108][30159] Num frames 3000...
[2024-07-27 17:00:40,239][30159] Num frames 3100...
[2024-07-27 17:00:40,314][30159] Avg episode rewards: #0: 25.380, true rewards: #0: 10.380
[2024-07-27 17:00:40,315][30159] Avg episode reward: 25.380, avg true_objective: 10.380
[2024-07-27 17:00:40,425][30159] Num frames 3200...
[2024-07-27 17:00:40,550][30159] Num frames 3300...
[2024-07-27 17:00:40,675][30159] Num frames 3400...
[2024-07-27 17:00:40,813][30159] Num frames 3500...
[2024-07-27 17:00:40,948][30159] Num frames 3600...
[2024-07-27 17:00:41,080][30159] Num frames 3700...
[2024-07-27 17:00:41,209][30159] Num frames 3800...
[2024-07-27 17:00:41,337][30159] Num frames 3900...
[2024-07-27 17:00:41,468][30159] Num frames 4000...
[2024-07-27 17:00:41,591][30159] Num frames 4100...
[2024-07-27 17:00:41,716][30159] Num frames 4200...
[2024-07-27 17:00:41,852][30159] Num frames 4300...
[2024-07-27 17:00:41,979][30159] Num frames 4400...
[2024-07-27 17:00:42,108][30159] Num frames 4500...
[2024-07-27 17:00:42,236][30159] Num frames 4600...
[2024-07-27 17:00:42,363][30159] Num frames 4700...
[2024-07-27 17:00:42,492][30159] Num frames 4800...
[2024-07-27 17:00:42,621][30159] Num frames 4900...
[2024-07-27 17:00:42,752][30159] Num frames 5000...
[2024-07-27 17:00:42,893][30159] Num frames 5100...
[2024-07-27 17:00:43,034][30159] Num frames 5200...
[2024-07-27 17:00:43,110][30159] Avg episode rewards: #0: 32.535, true rewards: #0: 13.035
[2024-07-27 17:00:43,113][30159] Avg episode reward: 32.535, avg true_objective: 13.035
[2024-07-27 17:00:43,226][30159] Num frames 5300...
[2024-07-27 17:00:43,357][30159] Num frames 5400...
[2024-07-27 17:00:43,480][30159] Num frames 5500...
[2024-07-27 17:00:43,608][30159] Num frames 5600...
[2024-07-27 17:00:43,735][30159] Num frames 5700...
[2024-07-27 17:00:43,877][30159] Num frames 5800...
[2024-07-27 17:00:44,011][30159] Num frames 5900...
[2024-07-27 17:00:44,154][30159] Num frames 6000...
[2024-07-27 17:00:44,296][30159] Num frames 6100...
[2024-07-27 17:00:44,424][30159] Num frames 6200...
[2024-07-27 17:00:44,552][30159] Num frames 6300...
[2024-07-27 17:00:44,694][30159] Num frames 6400...
[2024-07-27 17:00:44,825][30159] Num frames 6500...
[2024-07-27 17:00:44,961][30159] Num frames 6600...
[2024-07-27 17:00:45,092][30159] Num frames 6700...
[2024-07-27 17:00:45,218][30159] Num frames 6800...
[2024-07-27 17:00:45,348][30159] Num frames 6900...
[2024-07-27 17:00:45,418][30159] Avg episode rewards: #0: 34.820, true rewards: #0: 13.820
[2024-07-27 17:00:45,420][30159] Avg episode reward: 34.820, avg true_objective: 13.820
[2024-07-27 17:00:45,533][30159] Num frames 7000...
[2024-07-27 17:00:45,660][30159] Num frames 7100...
[2024-07-27 17:00:45,791][30159] Num frames 7200...
[2024-07-27 17:00:45,929][30159] Num frames 7300...
[2024-07-27 17:00:46,059][30159] Num frames 7400...
[2024-07-27 17:00:46,183][30159] Num frames 7500...
[2024-07-27 17:00:46,314][30159] Num frames 7600...
[2024-07-27 17:00:46,442][30159] Num frames 7700...
[2024-07-27 17:00:46,592][30159] Avg episode rewards: #0: 31.790, true rewards: #0: 12.957
[2024-07-27 17:00:46,594][30159] Avg episode reward: 31.790, avg true_objective: 12.957
[2024-07-27 17:00:46,628][30159] Num frames 7800...
[2024-07-27 17:00:46,751][30159] Num frames 7900...
[2024-07-27 17:00:46,874][30159] Num frames 8000...
[2024-07-27 17:00:47,016][30159] Num frames 8100...
[2024-07-27 17:00:47,140][30159] Num frames 8200...
[2024-07-27 17:00:47,265][30159] Num frames 8300...
[2024-07-27 17:00:47,393][30159] Num frames 8400...
[2024-07-27 17:00:47,572][30159] Num frames 8500...
[2024-07-27 17:00:47,753][30159] Num frames 8600...
[2024-07-27 17:00:47,945][30159] Num frames 8700...
[2024-07-27 17:00:48,181][30159] Avg episode rewards: #0: 30.711, true rewards: #0: 12.569
[2024-07-27 17:00:48,183][30159] Avg episode reward: 30.711, avg true_objective: 12.569
[2024-07-27 17:00:48,187][30159] Num frames 8800...
[2024-07-27 17:00:48,364][30159] Num frames 8900...
[2024-07-27 17:00:48,543][30159] Num frames 9000...
[2024-07-27 17:00:48,727][30159] Num frames 9100...
[2024-07-27 17:00:48,921][30159] Num frames 9200...
[2024-07-27 17:00:49,111][30159] Num frames 9300...
[2024-07-27 17:00:49,308][30159] Num frames 9400...
[2024-07-27 17:00:49,496][30159] Num frames 9500...
[2024-07-27 17:00:49,681][30159] Num frames 9600...
[2024-07-27 17:00:49,871][30159] Num frames 9700...
[2024-07-27 17:00:50,018][30159] Num frames 9800...
[2024-07-27 17:00:50,093][30159] Avg episode rewards: #0: 29.639, true rewards: #0: 12.264
[2024-07-27 17:00:50,095][30159] Avg episode reward: 29.639, avg true_objective: 12.264
[2024-07-27 17:00:50,207][30159] Num frames 9900...
[2024-07-27 17:00:50,333][30159] Num frames 10000...
[2024-07-27 17:00:50,460][30159] Num frames 10100...
[2024-07-27 17:00:50,587][30159] Num frames 10200...
[2024-07-27 17:00:50,712][30159] Num frames 10300...
[2024-07-27 17:00:50,839][30159] Num frames 10400...
[2024-07-27 17:00:50,964][30159] Num frames 10500...
[2024-07-27 17:00:51,107][30159] Num frames 10600...
[2024-07-27 17:00:51,233][30159] Num frames 10700...
[2024-07-27 17:00:51,358][30159] Num frames 10800...
[2024-07-27 17:00:51,490][30159] Num frames 10900...
[2024-07-27 17:00:51,579][30159] Avg episode rewards: #0: 29.584, true rewards: #0: 12.140
[2024-07-27 17:00:51,580][30159] Avg episode reward: 29.584, avg true_objective: 12.140
[2024-07-27 17:00:51,672][30159] Num frames 11000...
[2024-07-27 17:00:51,803][30159] Num frames 11100...
[2024-07-27 17:00:51,932][30159] Num frames 11200...
[2024-07-27 17:00:52,069][30159] Num frames 11300...
[2024-07-27 17:00:52,201][30159] Num frames 11400...
[2024-07-27 17:00:52,336][30159] Num frames 11500...
[2024-07-27 17:00:52,472][30159] Num frames 11600...
[2024-07-27 17:00:52,604][30159] Num frames 11700...
[2024-07-27 17:00:52,730][30159] Num frames 11800...
[2024-07-27 17:00:52,855][30159] Num frames 11900...
[2024-07-27 17:00:52,986][30159] Num frames 12000...
[2024-07-27 17:00:53,123][30159] Num frames 12100...
[2024-07-27 17:00:53,251][30159] Num frames 12200...
[2024-07-27 17:00:53,385][30159] Num frames 12300...
[2024-07-27 17:00:53,509][30159] Num frames 12400...
[2024-07-27 17:00:53,639][30159] Num frames 12500...
[2024-07-27 17:00:53,768][30159] Num frames 12600...
[2024-07-27 17:00:53,899][30159] Num frames 12700...
[2024-07-27 17:00:54,031][30159] Num frames 12800...
[2024-07-27 17:00:54,169][30159] Num frames 12900...
[2024-07-27 17:00:54,307][30159] Num frames 13000...
[2024-07-27 17:00:54,399][30159] Avg episode rewards: #0: 32.526, true rewards: #0: 13.026
[2024-07-27 17:00:54,401][30159] Avg episode reward: 32.526, avg true_objective: 13.026
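(Since each summary line prints a running mean over all episodes finished so far, the individual episode returns can be recovered by differencing. A short sketch; the running averages are copied from the ten summaries above, while the per-episode values are derived, not logged:

    # Recover per-episode rewards from the cumulative averages of this evaluation.
    running_avg = [40.950, 30.630, 25.380, 32.535, 34.820,
                   31.790, 30.711, 29.639, 29.584, 32.526]
    per_episode = [running_avg[0]]
    for i in range(1, len(running_avg)):
        per_episode.append((i + 1) * running_avg[i] - i * running_avg[i - 1])
    print([round(r, 2) for r in per_episode])
    # approximately [40.95, 20.31, 14.88, 54.0, 43.96, 16.64, 24.24, 22.13, 29.14, 59.0]
)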
[2024-07-27 17:02:16,756][30159] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2024-07-27 17:30:39,523][30159] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-07-27 17:30:39,524][30159] Overriding arg 'num_workers' with value 1 passed from command line
[2024-07-27 17:30:39,527][30159] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-07-27 17:30:39,529][30159] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-07-27 17:30:39,530][30159] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-07-27 17:30:39,532][30159] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-07-27 17:30:39,534][30159] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-07-27 17:30:39,536][30159] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-07-27 17:30:39,536][30159] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-07-27 17:30:39,537][30159] Adding new argument 'hf_repository'='bakermann/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-07-27 17:30:39,539][30159] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-07-27 17:30:39,539][30159] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-07-27 17:30:39,540][30159] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-07-27 17:30:39,541][30159] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-07-27 17:30:39,542][30159] Using frameskip 1 and render_action_repeat=4 for evaluation
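(This second pass repeats the evaluation with push_to_hub enabled and a tighter max_num_frames, so the resulting video and model files can be uploaded to the named Hugging Face repository. A plausible command line matching these overrides, with the same caveats as before: the module path and env name are inferred, not logged:

    python -m sf_examples.vizdoom.enjoy_vizdoom \
        --env=doom_health_gathering_supreme \
        --train_dir=/content/train_dir \
        --experiment=default_experiment \
        --num_workers=1 \
        --no_render \
        --save_video \
        --max_num_frames=100000 \
        --max_num_episodes=10 \
        --push_to_hub \
        --hf_repository=bakermann/rl_course_vizdoom_health_gathering_supreme
)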
[2024-07-27 17:30:39,552][30159] RunningMeanStd input shape: (3, 72, 128)
[2024-07-27 17:30:39,559][30159] RunningMeanStd input shape: (1,)
[2024-07-27 17:30:39,572][30159] ConvEncoder: input_channels=3
[2024-07-27 17:30:39,642][30159] Conv encoder output size: 512
[2024-07-27 17:30:39,645][30159] Policy head output size: 512
[2024-07-27 17:30:39,664][30159] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth...
[2024-07-27 17:30:40,167][30159] Num frames 100...
[2024-07-27 17:30:40,307][30159] Num frames 200...
[2024-07-27 17:30:40,455][30159] Num frames 300...
[2024-07-27 17:30:40,597][30159] Num frames 400...
[2024-07-27 17:30:40,726][30159] Num frames 500...
[2024-07-27 17:30:40,862][30159] Num frames 600...
[2024-07-27 17:30:41,027][30159] Avg episode rewards: #0: 14.730, true rewards: #0: 6.730
[2024-07-27 17:30:41,029][30159] Avg episode reward: 14.730, avg true_objective: 6.730
[2024-07-27 17:30:41,065][30159] Num frames 700...
[2024-07-27 17:30:41,201][30159] Num frames 800...
[2024-07-27 17:30:41,352][30159] Num frames 900...
[2024-07-27 17:30:41,478][30159] Num frames 1000...
[2024-07-27 17:30:41,606][30159] Num frames 1100...
[2024-07-27 17:30:41,730][30159] Num frames 1200...
[2024-07-27 17:30:41,808][30159] Avg episode rewards: #0: 11.585, true rewards: #0: 6.085
[2024-07-27 17:30:41,809][30159] Avg episode reward: 11.585, avg true_objective: 6.085
[2024-07-27 17:30:41,913][30159] Num frames 1300...
[2024-07-27 17:30:42,059][30159] Num frames 1400...
[2024-07-27 17:30:42,183][30159] Num frames 1500...
[2024-07-27 17:30:42,343][30159] Num frames 1600...
[2024-07-27 17:30:42,529][30159] Num frames 1700...
[2024-07-27 17:30:42,708][30159] Num frames 1800...
[2024-07-27 17:30:42,884][30159] Num frames 1900...
[2024-07-27 17:30:43,061][30159] Num frames 2000...
[2024-07-27 17:30:43,234][30159] Num frames 2100...
[2024-07-27 17:30:43,423][30159] Num frames 2200...
[2024-07-27 17:30:43,606][30159] Num frames 2300...
[2024-07-27 17:30:43,790][30159] Num frames 2400...
[2024-07-27 17:30:43,972][30159] Num frames 2500...
[2024-07-27 17:30:44,179][30159] Num frames 2600...
[2024-07-27 17:30:44,357][30159] Num frames 2700...
[2024-07-27 17:30:44,566][30159] Num frames 2800...
[2024-07-27 17:30:44,760][30159] Num frames 2900...
[2024-07-27 17:30:44,898][30159] Num frames 3000...
[2024-07-27 17:30:45,026][30159] Num frames 3100...
[2024-07-27 17:30:45,154][30159] Num frames 3200...
[2024-07-27 17:30:45,279][30159] Num frames 3300...
[2024-07-27 17:30:45,358][30159] Avg episode rewards: #0: 25.056, true rewards: #0: 11.057
[2024-07-27 17:30:45,359][30159] Avg episode reward: 25.056, avg true_objective: 11.057
[2024-07-27 17:30:45,471][30159] Num frames 3400...
[2024-07-27 17:30:45,601][30159] Num frames 3500...
[2024-07-27 17:30:45,733][30159] Num frames 3600...
[2024-07-27 17:30:45,856][30159] Num frames 3700...
[2024-07-27 17:30:45,985][30159] Num frames 3800...
[2024-07-27 17:30:46,122][30159] Num frames 3900...
[2024-07-27 17:30:46,250][30159] Num frames 4000...
[2024-07-27 17:30:46,378][30159] Num frames 4100...
[2024-07-27 17:30:46,515][30159] Num frames 4200...
[2024-07-27 17:30:46,638][30159] Num frames 4300...
[2024-07-27 17:30:46,785][30159] Avg episode rewards: #0: 24.182, true rewards: #0: 10.933
[2024-07-27 17:30:46,787][30159] Avg episode reward: 24.182, avg true_objective: 10.933
[2024-07-27 17:30:46,823][30159] Num frames 4400...
[2024-07-27 17:30:46,948][30159] Num frames 4500...
[2024-07-27 17:30:47,077][30159] Num frames 4600...
[2024-07-27 17:30:47,201][30159] Num frames 4700...
[2024-07-27 17:30:47,328][30159] Num frames 4800...
[2024-07-27 17:30:47,451][30159] Num frames 4900...
[2024-07-27 17:30:47,589][30159] Num frames 5000...
[2024-07-27 17:30:47,721][30159] Num frames 5100...
[2024-07-27 17:30:47,849][30159] Num frames 5200...
[2024-07-27 17:30:47,975][30159] Num frames 5300...
[2024-07-27 17:30:48,107][30159] Num frames 5400...
[2024-07-27 17:30:48,236][30159] Num frames 5500...
[2024-07-27 17:30:48,364][30159] Num frames 5600...
[2024-07-27 17:30:48,498][30159] Num frames 5700...
[2024-07-27 17:30:48,633][30159] Num frames 5800...
[2024-07-27 17:30:48,763][30159] Num frames 5900...
[2024-07-27 17:30:48,849][30159] Avg episode rewards: #0: 26.848, true rewards: #0: 11.848
[2024-07-27 17:30:48,850][30159] Avg episode reward: 26.848, avg true_objective: 11.848
[2024-07-27 17:30:48,951][30159] Num frames 6000...
[2024-07-27 17:30:49,085][30159] Num frames 6100...
[2024-07-27 17:30:49,212][30159] Num frames 6200...
[2024-07-27 17:30:49,340][30159] Num frames 6300...
[2024-07-27 17:30:49,467][30159] Num frames 6400...
[2024-07-27 17:30:49,613][30159] Num frames 6500...
[2024-07-27 17:30:49,738][30159] Num frames 6600...
[2024-07-27 17:30:49,870][30159] Num frames 6700...
[2024-07-27 17:30:50,012][30159] Num frames 6800...
[2024-07-27 17:30:50,142][30159] Num frames 6900...
[2024-07-27 17:30:50,270][30159] Num frames 7000...
[2024-07-27 17:30:50,396][30159] Num frames 7100...
[2024-07-27 17:30:50,525][30159] Num frames 7200...
[2024-07-27 17:30:50,659][30159] Num frames 7300...
[2024-07-27 17:30:50,792][30159] Num frames 7400...
[2024-07-27 17:30:50,926][30159] Num frames 7500...
[2024-07-27 17:30:51,060][30159] Num frames 7600...
[2024-07-27 17:30:51,194][30159] Num frames 7700...
[2024-07-27 17:30:51,324][30159] Num frames 7800...
[2024-07-27 17:30:51,465][30159] Num frames 7900...
[2024-07-27 17:30:51,532][30159] Avg episode rewards: #0: 31.346, true rewards: #0: 13.180
[2024-07-27 17:30:51,533][30159] Avg episode reward: 31.346, avg true_objective: 13.180
[2024-07-27 17:30:51,657][30159] Num frames 8000...
[2024-07-27 17:30:51,788][30159] Num frames 8100...
[2024-07-27 17:30:51,921][30159] Num frames 8200...
[2024-07-27 17:30:52,053][30159] Num frames 8300...
[2024-07-27 17:30:52,186][30159] Num frames 8400...
[2024-07-27 17:30:52,322][30159] Num frames 8500...
[2024-07-27 17:30:52,449][30159] Num frames 8600...
[2024-07-27 17:30:52,581][30159] Num frames 8700...
[2024-07-27 17:30:52,712][30159] Num frames 8800...
[2024-07-27 17:30:52,832][30159] Num frames 8900...
[2024-07-27 17:30:53,015][30159] Avg episode rewards: #0: 30.708, true rewards: #0: 12.851
[2024-07-27 17:30:53,017][30159] Avg episode reward: 30.708, avg true_objective: 12.851
[2024-07-27 17:30:53,025][30159] Num frames 9000...
[2024-07-27 17:30:53,148][30159] Num frames 9100...
[2024-07-27 17:30:53,272][30159] Num frames 9200...
[2024-07-27 17:30:53,397][30159] Num frames 9300...
[2024-07-27 17:30:53,522][30159] Num frames 9400...
[2024-07-27 17:30:53,656][30159] Num frames 9500...
[2024-07-27 17:30:53,804][30159] Avg episode rewards: #0: 28.216, true rewards: #0: 11.966
[2024-07-27 17:30:53,806][30159] Avg episode reward: 28.216, avg true_objective: 11.966
[2024-07-27 17:30:53,846][30159] Num frames 9600...
[2024-07-27 17:30:53,973][30159] Num frames 9700...
[2024-07-27 17:30:54,105][30159] Num frames 9800...
[2024-07-27 17:30:54,227][30159] Num frames 9900...
[2024-07-27 17:30:54,371][30159] Num frames 10000...
[2024-07-27 17:30:54,557][30159] Avg episode rewards: #0: 26.329, true rewards: #0: 11.218
[2024-07-27 17:30:54,559][30159] Avg episode reward: 26.329, avg true_objective: 11.218
[2024-07-27 17:30:54,568][30159] Num frames 10100...
[2024-07-27 17:30:54,701][30159] Num frames 10200...
[2024-07-27 17:30:54,848][30159] Num frames 10300...
[2024-07-27 17:30:55,034][30159] Num frames 10400...
[2024-07-27 17:30:55,240][30159] Num frames 10500...
[2024-07-27 17:30:55,444][30159] Num frames 10600...
[2024-07-27 17:30:55,642][30159] Num frames 10700...
[2024-07-27 17:30:55,824][30159] Num frames 10800...
[2024-07-27 17:30:56,002][30159] Num frames 10900...
[2024-07-27 17:30:56,177][30159] Num frames 11000...
[2024-07-27 17:30:56,368][30159] Avg episode rewards: #0: 26.376, true rewards: #0: 11.076
[2024-07-27 17:30:56,370][30159] Avg episode reward: 26.376, avg true_objective: 11.076
[2024-07-27 17:32:05,961][30159] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
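(With push_to_hub=True and the hf_repository set above, sample-factory should upload the experiment directory, including this replay.mp4 and the loaded checkpoint, to bakermann/rl_course_vizdoom_health_gathering_supreme after evaluation; any upload messages would fall after the end of this excerpt.)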