[2023-02-22 15:55:46,126][11727] Saving configuration to /content/train_dir/default_experiment/config.json... [2023-02-22 15:55:46,128][11727] Rollout worker 0 uses device cpu [2023-02-22 15:55:46,129][11727] Rollout worker 1 uses device cpu [2023-02-22 15:55:46,130][11727] Rollout worker 2 uses device cpu [2023-02-22 15:55:46,132][11727] Rollout worker 3 uses device cpu [2023-02-22 15:55:46,133][11727] Rollout worker 4 uses device cpu [2023-02-22 15:55:46,136][11727] Rollout worker 5 uses device cpu [2023-02-22 15:55:46,137][11727] Rollout worker 6 uses device cpu [2023-02-22 15:55:46,139][11727] Rollout worker 7 uses device cpu [2023-02-22 15:55:46,236][11727] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-22 15:55:46,238][11727] InferenceWorker_p0-w0: min num requests: 2 [2023-02-22 15:55:46,268][11727] Starting all processes... [2023-02-22 15:55:46,270][11727] Starting process learner_proc0 [2023-02-22 15:55:46,326][11727] Starting all processes... [2023-02-22 15:55:46,338][11727] Starting process inference_proc0-0 [2023-02-22 15:55:46,338][11727] Starting process rollout_proc0 [2023-02-22 15:55:46,339][11727] Starting process rollout_proc1 [2023-02-22 15:55:46,340][11727] Starting process rollout_proc2 [2023-02-22 15:55:46,342][11727] Starting process rollout_proc3 [2023-02-22 15:55:46,342][11727] Starting process rollout_proc4 [2023-02-22 15:55:46,357][11727] Starting process rollout_proc5 [2023-02-22 15:55:46,358][11727] Starting process rollout_proc6 [2023-02-22 15:55:46,358][11727] Starting process rollout_proc7 [2023-02-22 15:55:48,129][11948] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-22 15:55:48,129][11948] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-02-22 15:55:48,448][11949] Worker 0 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,552][11934] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-22 15:55:48,553][11934] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-02-22 15:55:48,778][11974] Worker 6 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,778][11953] Worker 3 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,807][11950] Worker 1 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,860][11951] Worker 2 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,864][11973] Worker 7 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,895][11975] Worker 4 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:48,947][11970] Worker 5 uses CPU cores [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] [2023-02-22 15:55:49,003][11948] Num visible devices: 1 [2023-02-22 15:55:49,003][11934] Num visible devices: 1 [2023-02-22 15:55:49,028][11934] Starting seed is not provided [2023-02-22 15:55:49,028][11934] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-22 15:55:49,028][11934] Initializing actor-critic model on device cuda:0 [2023-02-22 15:55:49,028][11934] RunningMeanStd input shape: (3, 72, 128) [2023-02-22 15:55:49,030][11934] RunningMeanStd input shape: (1,) [2023-02-22 15:55:49,044][11934] ConvEncoder: input_channels=3 [2023-02-22 15:55:49,304][11934] Conv encoder output size: 512 [2023-02-22 15:55:49,304][11934] Policy head output size: 512 [2023-02-22 15:55:49,345][11934] Created Actor Critic model with architecture: [2023-02-22 15:55:49,345][11934] 
ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-02-22 15:55:56,154][11934] Using optimizer
[2023-02-22 15:55:56,155][11934] No checkpoints found
[2023-02-22 15:55:56,156][11934] Did not load from checkpoint, starting from scratch!
[2023-02-22 15:55:56,156][11934] Initialized policy 0 weights for model version 0
[2023-02-22 15:55:56,158][11934] LearnerWorker_p0 finished initialization!
[2023-02-22 15:55:56,159][11934] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-22 15:55:56,267][11948] RunningMeanStd input shape: (3, 72, 128)
[2023-02-22 15:55:56,268][11948] RunningMeanStd input shape: (1,)
[2023-02-22 15:55:56,284][11948] ConvEncoder: input_channels=3
[2023-02-22 15:55:56,395][11948] Conv encoder output size: 512
[2023-02-22 15:55:56,396][11948] Policy head output size: 512
[2023-02-22 15:55:57,406][11727] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-22 15:55:59,174][11727] Inference worker 0-0 is ready!
[2023-02-22 15:55:59,176][11727] All inference workers are ready! Signal rollout workers to start!
[2023-02-22 15:55:59,195][11974] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,195][11973] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,201][11950] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,202][11970] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,202][11951] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,202][11975] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,202][11953] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,202][11949] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-22 15:55:59,257][11974] VizDoom game.init() threw an exception ViZDoomUnexpectedExitException('Controlled ViZDoom instance exited unexpectedly.'). Terminate process...
[2023-02-22 15:55:59,258][11974] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=()
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init
    self.game.init()
vizdoom.vizdoom.ViZDoomUnexpectedExitException: Controlled ViZDoom instance exited unexpectedly.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init
    env_runner.init(self.timing)
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init
    self._reset()
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset
    observations, info = e.reset(seed=seed)  # new way of doing seeding since Gym 0.26.0
  File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 379, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 84, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset
    self._ensure_initialized()
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized
    self.initialize()
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize
    self._game_init()
  File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init
    raise EnvCriticalError()
sample_factory.envs.env_utils.EnvCriticalError
[2023-02-22 15:55:59,260][11974] Unhandled exception in evt loop rollout_proc6_evt_loop
[2023-02-22 15:55:59,528][11973] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,528][11953] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,528][11951] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,528][11949] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,599][11950] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,600][11970] Decorrelating experience for 0 frames...
[2023-02-22 15:55:59,776][11951] Decorrelating experience for 32 frames...
[2023-02-22 15:55:59,801][11953] Decorrelating experience for 32 frames...
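For reference, the ActorCriticSharedWeights module tree printed above can be approximated by the minimal PyTorch sketch below (shared conv encoder, GRU core, value head, and a 5-way action-logit head). The conv kernel sizes/strides and the observation/returns normalizers are not recorded in the log, so those hyperparameters are assumptions and the flattened conv size is computed at runtime; class and attribute names here are illustrative, not the library's actual classes.

```python
# Sketch only: mirrors the printed module tree, not Sample Factory's real classes.
import torch
import torch.nn as nn


class SketchActorCritic(nn.Module):
    def __init__(self, obs_shape=(3, 72, 128), num_actions=5, hidden=512):
        super().__init__()
        c, h, w = obs_shape
        # conv_head: three Conv2d+ELU blocks, as in the log; kernel/stride values are assumed.
        self.conv_head = nn.Sequential(
            nn.Conv2d(c, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer flattened conv size instead of hardcoding it
            conv_out = self.conv_head(torch.zeros(1, c, h, w)).flatten(1).shape[1]
        self.mlp_layers = nn.Sequential(nn.Linear(conv_out, hidden), nn.ELU())  # "Conv encoder output size: 512"
        self.core = nn.GRU(hidden, hidden)                     # GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)              # value head
        self.distribution_linear = nn.Linear(hidden, num_actions)  # action logits (out_features=5)

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)    # add a length-1 time dimension
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```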
[2023-02-22 15:55:59,821][11975] Decorrelating experience for 0 frames... [2023-02-22 15:55:59,852][11970] Decorrelating experience for 32 frames... [2023-02-22 15:55:59,871][11950] Decorrelating experience for 32 frames... [2023-02-22 15:55:59,887][11949] Decorrelating experience for 32 frames... [2023-02-22 15:56:00,034][11973] Decorrelating experience for 32 frames... [2023-02-22 15:56:00,121][11951] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,134][11975] Decorrelating experience for 32 frames... [2023-02-22 15:56:00,157][11970] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,175][11950] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,349][11973] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,423][11953] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,427][11949] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,428][11951] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,460][11970] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,695][11950] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,714][11975] Decorrelating experience for 64 frames... [2023-02-22 15:56:00,726][11949] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,736][11953] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,775][11973] Decorrelating experience for 96 frames... [2023-02-22 15:56:00,999][11975] Decorrelating experience for 96 frames... [2023-02-22 15:56:02,406][11727] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-22 15:56:05,026][11934] Signal inference workers to stop experience collection... [2023-02-22 15:56:05,032][11948] InferenceWorker_p0-w0: stopping experience collection [2023-02-22 15:56:06,229][11727] Heartbeat connected on Batcher_0 [2023-02-22 15:56:06,237][11727] Heartbeat connected on InferenceWorker_p0-w0 [2023-02-22 15:56:06,244][11727] Heartbeat connected on RolloutWorker_w0 [2023-02-22 15:56:06,247][11727] Heartbeat connected on RolloutWorker_w1 [2023-02-22 15:56:06,251][11727] Heartbeat connected on RolloutWorker_w2 [2023-02-22 15:56:06,254][11727] Heartbeat connected on RolloutWorker_w3 [2023-02-22 15:56:06,258][11727] Heartbeat connected on RolloutWorker_w4 [2023-02-22 15:56:06,261][11727] Heartbeat connected on RolloutWorker_w5 [2023-02-22 15:56:06,268][11727] Heartbeat connected on RolloutWorker_w7 [2023-02-22 15:56:07,406][11727] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 310.2. Samples: 3102. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-22 15:56:07,408][11727] Avg episode reward: [(0, '2.718')] [2023-02-22 15:56:07,941][11934] Signal inference workers to resume experience collection... [2023-02-22 15:56:07,941][11948] InferenceWorker_p0-w0: resuming experience collection [2023-02-22 15:56:08,828][11727] Heartbeat connected on LearnerWorker_p0 [2023-02-22 15:56:10,521][11948] Updated weights for policy 0, policy_version 10 (0.0011) [2023-02-22 15:56:12,406][11727] Fps is (10 sec: 6963.0, 60 sec: 4642.1, 300 sec: 4642.1). Total num frames: 69632. Throughput: 0: 901.7. Samples: 13526. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-02-22 15:56:12,408][11727] Avg episode reward: [(0, '4.505')] [2023-02-22 15:56:12,915][11948] Updated weights for policy 0, policy_version 20 (0.0011) [2023-02-22 15:56:15,191][11948] Updated weights for policy 0, policy_version 30 (0.0011) [2023-02-22 15:56:17,406][11727] Fps is (10 sec: 15564.7, 60 sec: 7782.4, 300 sec: 7782.4). Total num frames: 155648. Throughput: 0: 1980.7. Samples: 39614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:56:17,408][11727] Avg episode reward: [(0, '4.543')] [2023-02-22 15:56:17,424][11934] Saving new best policy, reward=4.543! [2023-02-22 15:56:17,653][11948] Updated weights for policy 0, policy_version 40 (0.0011) [2023-02-22 15:56:20,028][11948] Updated weights for policy 0, policy_version 50 (0.0011) [2023-02-22 15:56:22,406][11727] Fps is (10 sec: 17203.5, 60 sec: 9666.5, 300 sec: 9666.5). Total num frames: 241664. Throughput: 0: 2092.2. Samples: 52304. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:56:22,409][11727] Avg episode reward: [(0, '4.353')] [2023-02-22 15:56:22,468][11948] Updated weights for policy 0, policy_version 60 (0.0011) [2023-02-22 15:56:24,692][11948] Updated weights for policy 0, policy_version 70 (0.0011) [2023-02-22 15:56:26,917][11948] Updated weights for policy 0, policy_version 80 (0.0011) [2023-02-22 15:56:27,406][11727] Fps is (10 sec: 17612.8, 60 sec: 11059.2, 300 sec: 11059.2). Total num frames: 331776. Throughput: 0: 2634.7. Samples: 79042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:56:27,408][11727] Avg episode reward: [(0, '4.552')] [2023-02-22 15:56:27,412][11934] Saving new best policy, reward=4.552! [2023-02-22 15:56:29,208][11948] Updated weights for policy 0, policy_version 90 (0.0011) [2023-02-22 15:56:31,459][11948] Updated weights for policy 0, policy_version 100 (0.0010) [2023-02-22 15:56:32,406][11727] Fps is (10 sec: 18432.0, 60 sec: 12171.0, 300 sec: 12171.0). Total num frames: 425984. Throughput: 0: 3030.2. Samples: 106058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:56:32,408][11727] Avg episode reward: [(0, '4.755')] [2023-02-22 15:56:32,417][11934] Saving new best policy, reward=4.755! [2023-02-22 15:56:33,894][11948] Updated weights for policy 0, policy_version 110 (0.0011) [2023-02-22 15:56:36,358][11948] Updated weights for policy 0, policy_version 120 (0.0012) [2023-02-22 15:56:37,406][11727] Fps is (10 sec: 17612.8, 60 sec: 12697.6, 300 sec: 12697.6). Total num frames: 507904. Throughput: 0: 2967.0. Samples: 118682. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) [2023-02-22 15:56:37,408][11727] Avg episode reward: [(0, '4.517')] [2023-02-22 15:56:38,646][11948] Updated weights for policy 0, policy_version 130 (0.0011) [2023-02-22 15:56:40,961][11948] Updated weights for policy 0, policy_version 140 (0.0011) [2023-02-22 15:56:42,406][11727] Fps is (10 sec: 17203.3, 60 sec: 13289.2, 300 sec: 13289.2). Total num frames: 598016. Throughput: 0: 3222.5. Samples: 145014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:56:42,408][11727] Avg episode reward: [(0, '4.649')] [2023-02-22 15:56:43,202][11948] Updated weights for policy 0, policy_version 150 (0.0011) [2023-02-22 15:56:45,462][11948] Updated weights for policy 0, policy_version 160 (0.0011) [2023-02-22 15:56:47,406][11727] Fps is (10 sec: 18022.4, 60 sec: 13762.6, 300 sec: 13762.6). Total num frames: 688128. Throughput: 0: 3827.0. Samples: 172214. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:56:47,409][11727] Avg episode reward: [(0, '4.915')] [2023-02-22 15:56:47,413][11934] Saving new best policy, reward=4.915! [2023-02-22 15:56:47,753][11948] Updated weights for policy 0, policy_version 170 (0.0011) [2023-02-22 15:56:50,203][11948] Updated weights for policy 0, policy_version 180 (0.0011) [2023-02-22 15:56:52,406][11727] Fps is (10 sec: 17612.7, 60 sec: 14075.3, 300 sec: 14075.3). Total num frames: 774144. Throughput: 0: 4038.2. Samples: 184820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:56:52,409][11727] Avg episode reward: [(0, '5.185')] [2023-02-22 15:56:52,417][11934] Saving new best policy, reward=5.185! [2023-02-22 15:56:52,622][11948] Updated weights for policy 0, policy_version 190 (0.0011) [2023-02-22 15:56:54,883][11948] Updated weights for policy 0, policy_version 200 (0.0011) [2023-02-22 15:56:57,222][11948] Updated weights for policy 0, policy_version 210 (0.0011) [2023-02-22 15:56:57,406][11727] Fps is (10 sec: 17203.3, 60 sec: 14336.0, 300 sec: 14336.0). Total num frames: 860160. Throughput: 0: 4388.8. Samples: 211022. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:56:57,409][11727] Avg episode reward: [(0, '5.563')] [2023-02-22 15:56:57,425][11934] Saving new best policy, reward=5.563! [2023-02-22 15:56:59,425][11948] Updated weights for policy 0, policy_version 220 (0.0010) [2023-02-22 15:57:01,745][11948] Updated weights for policy 0, policy_version 230 (0.0011) [2023-02-22 15:57:02,406][11727] Fps is (10 sec: 18022.5, 60 sec: 15906.1, 300 sec: 14682.6). Total num frames: 954368. Throughput: 0: 4411.3. Samples: 238122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:57:02,409][11727] Avg episode reward: [(0, '6.131')] [2023-02-22 15:57:02,415][11934] Saving new best policy, reward=6.131! [2023-02-22 15:57:04,074][11948] Updated weights for policy 0, policy_version 240 (0.0011) [2023-02-22 15:57:06,481][11948] Updated weights for policy 0, policy_version 250 (0.0011) [2023-02-22 15:57:07,406][11727] Fps is (10 sec: 17612.8, 60 sec: 17271.5, 300 sec: 14804.1). Total num frames: 1036288. Throughput: 0: 4415.2. Samples: 250988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:57:07,409][11727] Avg episode reward: [(0, '6.780')] [2023-02-22 15:57:07,427][11934] Saving new best policy, reward=6.780! [2023-02-22 15:57:08,893][11948] Updated weights for policy 0, policy_version 260 (0.0011) [2023-02-22 15:57:11,132][11948] Updated weights for policy 0, policy_version 270 (0.0010) [2023-02-22 15:57:12,406][11727] Fps is (10 sec: 17203.3, 60 sec: 17612.9, 300 sec: 15018.7). Total num frames: 1126400. Throughput: 0: 4406.4. Samples: 277330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:57:12,408][11727] Avg episode reward: [(0, '7.336')] [2023-02-22 15:57:12,417][11934] Saving new best policy, reward=7.336! [2023-02-22 15:57:13,391][11948] Updated weights for policy 0, policy_version 280 (0.0012) [2023-02-22 15:57:15,635][11948] Updated weights for policy 0, policy_version 290 (0.0010) [2023-02-22 15:57:17,406][11727] Fps is (10 sec: 18022.3, 60 sec: 17681.1, 300 sec: 15206.4). Total num frames: 1216512. Throughput: 0: 4410.0. Samples: 304508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-22 15:57:17,408][11727] Avg episode reward: [(0, '7.986')] [2023-02-22 15:57:17,419][11934] Saving new best policy, reward=7.986! 
[2023-02-22 15:57:17,940][11948] Updated weights for policy 0, policy_version 300 (0.0011) [2023-02-22 15:57:20,251][11948] Updated weights for policy 0, policy_version 310 (0.0011) [2023-02-22 15:57:22,406][11727] Fps is (10 sec: 17612.6, 60 sec: 17681.1, 300 sec: 15323.9). Total num frames: 1302528. Throughput: 0: 4421.6. Samples: 317652. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) [2023-02-22 15:57:22,408][11727] Avg episode reward: [(0, '9.339')] [2023-02-22 15:57:22,419][11934] Saving new best policy, reward=9.339! [2023-02-22 15:57:22,705][11948] Updated weights for policy 0, policy_version 320 (0.0012) [2023-02-22 15:57:25,079][11948] Updated weights for policy 0, policy_version 330 (0.0011) [2023-02-22 15:57:27,406][11727] Fps is (10 sec: 17203.4, 60 sec: 17612.8, 300 sec: 15428.3). Total num frames: 1388544. Throughput: 0: 4407.8. Samples: 343364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-22 15:57:27,408][11727] Avg episode reward: [(0, '9.500')] [2023-02-22 15:57:27,420][11934] Saving new best policy, reward=9.500! [2023-02-22 15:57:27,422][11948] Updated weights for policy 0, policy_version 340 (0.0011) [2023-02-22 15:57:29,748][11948] Updated weights for policy 0, policy_version 350 (0.0011) [2023-02-22 15:57:32,048][11948] Updated weights for policy 0, policy_version 360 (0.0010) [2023-02-22 15:57:32,406][11727] Fps is (10 sec: 17612.9, 60 sec: 17544.5, 300 sec: 15564.8). Total num frames: 1478656. Throughput: 0: 4389.4. Samples: 369738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:57:32,408][11727] Avg episode reward: [(0, '10.590')] [2023-02-22 15:57:32,416][11934] Saving new best policy, reward=10.590! [2023-02-22 15:57:34,420][11948] Updated weights for policy 0, policy_version 370 (0.0011) [2023-02-22 15:57:36,777][11948] Updated weights for policy 0, policy_version 380 (0.0011) [2023-02-22 15:57:37,406][11727] Fps is (10 sec: 17612.8, 60 sec: 17612.8, 300 sec: 15646.7). Total num frames: 1564672. Throughput: 0: 4400.5. Samples: 382842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-22 15:57:37,408][11727] Avg episode reward: [(0, '11.263')] [2023-02-22 15:57:37,412][11934] Saving new best policy, reward=11.263! [2023-02-22 15:57:39,255][11948] Updated weights for policy 0, policy_version 390 (0.0012) [2023-02-22 15:57:41,601][11948] Updated weights for policy 0, policy_version 400 (0.0011) [2023-02-22 15:57:42,406][11727] Fps is (10 sec: 17203.1, 60 sec: 17544.5, 300 sec: 15720.8). Total num frames: 1650688. Throughput: 0: 4381.9. Samples: 408208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:57:42,408][11727] Avg episode reward: [(0, '11.024')] [2023-02-22 15:57:42,417][11934] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000403_1650688.pth... [2023-02-22 15:57:43,882][11948] Updated weights for policy 0, policy_version 410 (0.0018) [2023-02-22 15:57:46,152][11948] Updated weights for policy 0, policy_version 420 (0.0011) [2023-02-22 15:57:47,406][11727] Fps is (10 sec: 17612.9, 60 sec: 17544.6, 300 sec: 15825.5). Total num frames: 1740800. Throughput: 0: 4378.0. Samples: 435132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:57:47,408][11727] Avg episode reward: [(0, '14.537')] [2023-02-22 15:57:47,410][11934] Saving new best policy, reward=14.537! 
[2023-02-22 15:57:48,472][11948] Updated weights for policy 0, policy_version 430 (0.0011) [2023-02-22 15:57:50,720][11948] Updated weights for policy 0, policy_version 440 (0.0011) [2023-02-22 15:57:52,406][11727] Fps is (10 sec: 17612.9, 60 sec: 17544.6, 300 sec: 15885.4). Total num frames: 1826816. Throughput: 0: 4394.8. Samples: 448752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:57:52,409][11727] Avg episode reward: [(0, '15.374')] [2023-02-22 15:57:52,415][11934] Saving new best policy, reward=15.374! [2023-02-22 15:57:53,133][11948] Updated weights for policy 0, policy_version 450 (0.0011) [2023-02-22 15:57:55,519][11948] Updated weights for policy 0, policy_version 460 (0.0011) [2023-02-22 15:57:57,406][11727] Fps is (10 sec: 17612.7, 60 sec: 17612.8, 300 sec: 15974.4). Total num frames: 1916928. Throughput: 0: 4381.9. Samples: 474514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:57:57,409][11727] Avg episode reward: [(0, '17.365')] [2023-02-22 15:57:57,411][11934] Saving new best policy, reward=17.365! [2023-02-22 15:57:57,797][11948] Updated weights for policy 0, policy_version 470 (0.0011) [2023-02-22 15:58:00,064][11948] Updated weights for policy 0, policy_version 480 (0.0011) [2023-02-22 15:58:02,345][11948] Updated weights for policy 0, policy_version 490 (0.0010) [2023-02-22 15:58:02,406][11727] Fps is (10 sec: 18022.6, 60 sec: 17544.6, 300 sec: 16056.3). Total num frames: 2007040. Throughput: 0: 4381.5. Samples: 501676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:58:02,409][11727] Avg episode reward: [(0, '15.039')] [2023-02-22 15:58:04,589][11948] Updated weights for policy 0, policy_version 500 (0.0011) [2023-02-22 15:58:06,878][11948] Updated weights for policy 0, policy_version 510 (0.0011) [2023-02-22 15:58:07,406][11727] Fps is (10 sec: 17612.8, 60 sec: 17612.8, 300 sec: 16100.4). Total num frames: 2093056. Throughput: 0: 4393.0. Samples: 515336. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:58:07,409][11727] Avg episode reward: [(0, '16.264')] [2023-02-22 15:58:09,322][11948] Updated weights for policy 0, policy_version 520 (0.0011) [2023-02-22 15:58:11,720][11948] Updated weights for policy 0, policy_version 530 (0.0011) [2023-02-22 15:58:12,406][11727] Fps is (10 sec: 17203.1, 60 sec: 17544.5, 300 sec: 16141.3). Total num frames: 2179072. Throughput: 0: 4393.1. Samples: 541052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:58:12,409][11727] Avg episode reward: [(0, '16.818')] [2023-02-22 15:58:14,040][11948] Updated weights for policy 0, policy_version 540 (0.0011) [2023-02-22 15:58:16,254][11948] Updated weights for policy 0, policy_version 550 (0.0011) [2023-02-22 15:58:17,406][11727] Fps is (10 sec: 17612.7, 60 sec: 17544.5, 300 sec: 16208.5). Total num frames: 2269184. Throughput: 0: 4403.9. Samples: 567914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:58:17,409][11727] Avg episode reward: [(0, '17.111')] [2023-02-22 15:58:18,537][11948] Updated weights for policy 0, policy_version 560 (0.0011) [2023-02-22 15:58:20,818][11948] Updated weights for policy 0, policy_version 570 (0.0011) [2023-02-22 15:58:22,406][11727] Fps is (10 sec: 18432.0, 60 sec: 17681.1, 300 sec: 16299.3). Total num frames: 2363392. Throughput: 0: 4414.4. Samples: 581488. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:58:22,408][11727] Avg episode reward: [(0, '18.845')] [2023-02-22 15:58:22,417][11934] Saving new best policy, reward=18.845! 
[2023-02-22 15:58:23,129][11948] Updated weights for policy 0, policy_version 580 (0.0011) [2023-02-22 15:58:25,602][11948] Updated weights for policy 0, policy_version 590 (0.0011) [2023-02-22 15:58:27,406][11727] Fps is (10 sec: 17613.0, 60 sec: 17612.8, 300 sec: 16302.1). Total num frames: 2445312. Throughput: 0: 4420.1. Samples: 607112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:58:27,409][11727] Avg episode reward: [(0, '17.650')] [2023-02-22 15:58:28,042][11948] Updated weights for policy 0, policy_version 600 (0.0012) [2023-02-22 15:58:30,381][11948] Updated weights for policy 0, policy_version 610 (0.0011) [2023-02-22 15:58:32,406][11727] Fps is (10 sec: 16793.4, 60 sec: 17544.5, 300 sec: 16331.1). Total num frames: 2531328. Throughput: 0: 4401.9. Samples: 633220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:58:32,409][11727] Avg episode reward: [(0, '18.158')] [2023-02-22 15:58:32,688][11948] Updated weights for policy 0, policy_version 620 (0.0011) [2023-02-22 15:58:35,144][11948] Updated weights for policy 0, policy_version 630 (0.0011) [2023-02-22 15:58:37,406][11727] Fps is (10 sec: 17203.0, 60 sec: 17544.5, 300 sec: 16358.4). Total num frames: 2617344. Throughput: 0: 4381.1. Samples: 645902. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:58:37,409][11727] Avg episode reward: [(0, '17.190')] [2023-02-22 15:58:37,540][11948] Updated weights for policy 0, policy_version 640 (0.0011) [2023-02-22 15:58:40,020][11948] Updated weights for policy 0, policy_version 650 (0.0012) [2023-02-22 15:58:42,406][11727] Fps is (10 sec: 16793.8, 60 sec: 17476.3, 300 sec: 16359.2). Total num frames: 2699264. Throughput: 0: 4369.1. Samples: 671124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:58:42,410][11727] Avg episode reward: [(0, '20.379')] [2023-02-22 15:58:42,443][11934] Saving new best policy, reward=20.379! [2023-02-22 15:58:42,447][11948] Updated weights for policy 0, policy_version 660 (0.0011) [2023-02-22 15:58:44,798][11948] Updated weights for policy 0, policy_version 670 (0.0012) [2023-02-22 15:58:47,071][11948] Updated weights for policy 0, policy_version 680 (0.0010) [2023-02-22 15:58:47,406][11727] Fps is (10 sec: 17203.2, 60 sec: 17476.2, 300 sec: 16408.1). Total num frames: 2789376. Throughput: 0: 4347.0. Samples: 697292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-22 15:58:47,409][11727] Avg episode reward: [(0, '20.186')] [2023-02-22 15:58:49,407][11948] Updated weights for policy 0, policy_version 690 (0.0010) [2023-02-22 15:58:51,689][11948] Updated weights for policy 0, policy_version 700 (0.0011) [2023-02-22 15:58:52,406][11727] Fps is (10 sec: 18022.2, 60 sec: 17544.5, 300 sec: 16454.2). Total num frames: 2879488. Throughput: 0: 4340.9. Samples: 710678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:58:52,408][11727] Avg episode reward: [(0, '22.052')] [2023-02-22 15:58:52,417][11934] Saving new best policy, reward=22.052! [2023-02-22 15:58:54,011][11948] Updated weights for policy 0, policy_version 710 (0.0011) [2023-02-22 15:58:56,392][11948] Updated weights for policy 0, policy_version 720 (0.0011) [2023-02-22 15:58:57,406][11727] Fps is (10 sec: 17613.0, 60 sec: 17476.3, 300 sec: 16475.0). Total num frames: 2965504. Throughput: 0: 4353.2. Samples: 736946. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:58:57,409][11727] Avg episode reward: [(0, '21.568')] [2023-02-22 15:58:58,776][11948] Updated weights for policy 0, policy_version 730 (0.0011) [2023-02-22 15:59:01,128][11948] Updated weights for policy 0, policy_version 740 (0.0010) [2023-02-22 15:59:02,406][11727] Fps is (10 sec: 17203.3, 60 sec: 17408.0, 300 sec: 16494.7). Total num frames: 3051520. Throughput: 0: 4339.9. Samples: 763210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:59:02,409][11727] Avg episode reward: [(0, '20.665')] [2023-02-22 15:59:03,339][11948] Updated weights for policy 0, policy_version 750 (0.0011) [2023-02-22 15:59:05,685][11948] Updated weights for policy 0, policy_version 760 (0.0011) [2023-02-22 15:59:07,406][11727] Fps is (10 sec: 17612.6, 60 sec: 17476.3, 300 sec: 16534.9). Total num frames: 3141632. Throughput: 0: 4337.4. Samples: 776670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:59:07,409][11727] Avg episode reward: [(0, '21.001')] [2023-02-22 15:59:07,943][11948] Updated weights for policy 0, policy_version 770 (0.0010) [2023-02-22 15:59:10,233][11948] Updated weights for policy 0, policy_version 780 (0.0011) [2023-02-22 15:59:12,406][11727] Fps is (10 sec: 17612.8, 60 sec: 17476.3, 300 sec: 16552.0). Total num frames: 3227648. Throughput: 0: 4356.0. Samples: 803132. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-02-22 15:59:12,408][11727] Avg episode reward: [(0, '21.359')] [2023-02-22 15:59:12,713][11948] Updated weights for policy 0, policy_version 790 (0.0012) [2023-02-22 15:59:15,093][11948] Updated weights for policy 0, policy_version 800 (0.0011) [2023-02-22 15:59:17,407][11948] Updated weights for policy 0, policy_version 810 (0.0011) [2023-02-22 15:59:17,406][11727] Fps is (10 sec: 17613.2, 60 sec: 17476.3, 300 sec: 16588.8). Total num frames: 3317760. Throughput: 0: 4351.6. Samples: 829042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:59:17,409][11727] Avg episode reward: [(0, '19.476')] [2023-02-22 15:59:19,655][11948] Updated weights for policy 0, policy_version 820 (0.0011) [2023-02-22 15:59:21,941][11948] Updated weights for policy 0, policy_version 830 (0.0010) [2023-02-22 15:59:22,409][11727] Fps is (10 sec: 18016.5, 60 sec: 17407.0, 300 sec: 16623.5). Total num frames: 3407872. Throughput: 0: 4370.1. Samples: 842572. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:59:22,411][11727] Avg episode reward: [(0, '19.382')] [2023-02-22 15:59:24,222][11948] Updated weights for policy 0, policy_version 840 (0.0011) [2023-02-22 15:59:26,601][11948] Updated weights for policy 0, policy_version 850 (0.0011) [2023-02-22 15:59:27,406][11727] Fps is (10 sec: 17612.3, 60 sec: 17476.2, 300 sec: 16637.6). Total num frames: 3493888. Throughput: 0: 4403.1. Samples: 869264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-22 15:59:27,409][11727] Avg episode reward: [(0, '19.677')] [2023-02-22 15:59:29,011][11948] Updated weights for policy 0, policy_version 860 (0.0012) [2023-02-22 15:59:31,383][11948] Updated weights for policy 0, policy_version 870 (0.0011) [2023-02-22 15:59:32,406][11727] Fps is (10 sec: 17208.8, 60 sec: 17476.3, 300 sec: 16650.7). Total num frames: 3579904. Throughput: 0: 4391.0. Samples: 894886. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:59:32,409][11727] Avg episode reward: [(0, '20.009')] [2023-02-22 15:59:33,716][11948] Updated weights for policy 0, policy_version 880 (0.0011) [2023-02-22 15:59:35,972][11948] Updated weights for policy 0, policy_version 890 (0.0011) [2023-02-22 15:59:37,406][11727] Fps is (10 sec: 17612.9, 60 sec: 17544.5, 300 sec: 16681.9). Total num frames: 3670016. Throughput: 0: 4391.6. Samples: 908300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-22 15:59:37,408][11727] Avg episode reward: [(0, '19.299')] [2023-02-22 15:59:38,317][11948] Updated weights for policy 0, policy_version 900 (0.0011) [2023-02-22 15:59:40,527][11948] Updated weights for policy 0, policy_version 910 (0.0011) [2023-02-22 15:59:42,406][11727] Fps is (10 sec: 18022.4, 60 sec: 17681.1, 300 sec: 16711.7). Total num frames: 3760128. Throughput: 0: 4407.5. Samples: 935284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-22 15:59:42,409][11727] Avg episode reward: [(0, '21.332')] [2023-02-22 15:59:42,417][11934] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000918_3760128.pth... [2023-02-22 15:59:42,893][11948] Updated weights for policy 0, policy_version 920 (0.0011) [2023-02-22 15:59:45,384][11948] Updated weights for policy 0, policy_version 930 (0.0012) [2023-02-22 15:59:47,406][11727] Fps is (10 sec: 17203.2, 60 sec: 17544.5, 300 sec: 16704.6). Total num frames: 3842048. Throughput: 0: 4387.0. Samples: 960626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-22 15:59:47,409][11727] Avg episode reward: [(0, '19.742')] [2023-02-22 15:59:47,743][11948] Updated weights for policy 0, policy_version 940 (0.0011) [2023-02-22 15:59:50,051][11948] Updated weights for policy 0, policy_version 950 (0.0011) [2023-02-22 15:59:52,331][11948] Updated weights for policy 0, policy_version 960 (0.0011) [2023-02-22 15:59:52,406][11727] Fps is (10 sec: 17203.2, 60 sec: 17544.5, 300 sec: 16732.6). Total num frames: 3932160. Throughput: 0: 4384.1. Samples: 973954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-22 15:59:52,408][11727] Avg episode reward: [(0, '23.771')] [2023-02-22 15:59:52,417][11934] Saving new best policy, reward=23.771! [2023-02-22 15:59:54,622][11948] Updated weights for policy 0, policy_version 970 (0.0011) [2023-02-22 15:59:56,458][11934] Stopping Batcher_0... [2023-02-22 15:59:56,459][11934] Loop batcher_evt_loop terminating... [2023-02-22 15:59:56,459][11934] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-22 15:59:56,459][11727] Component Batcher_0 stopped! [2023-02-22 15:59:56,462][11727] Component RolloutWorker_w6 process died already! Don't wait for it. [2023-02-22 15:59:56,473][11948] Weights refcount: 2 0 [2023-02-22 15:59:56,473][11951] Stopping RolloutWorker_w2... [2023-02-22 15:59:56,473][11950] Stopping RolloutWorker_w1... [2023-02-22 15:59:56,474][11950] Loop rollout_proc1_evt_loop terminating... [2023-02-22 15:59:56,474][11951] Loop rollout_proc2_evt_loop terminating... [2023-02-22 15:59:56,475][11948] Stopping InferenceWorker_p0-w0... [2023-02-22 15:59:56,475][11948] Loop inference_proc0-0_evt_loop terminating... [2023-02-22 15:59:56,473][11727] Component RolloutWorker_w1 stopped! [2023-02-22 15:59:56,476][11970] Stopping RolloutWorker_w5... [2023-02-22 15:59:56,476][11953] Stopping RolloutWorker_w3... [2023-02-22 15:59:56,476][11970] Loop rollout_proc5_evt_loop terminating... 
[2023-02-22 15:59:56,476][11953] Loop rollout_proc3_evt_loop terminating... [2023-02-22 15:59:56,476][11975] Stopping RolloutWorker_w4... [2023-02-22 15:59:56,477][11975] Loop rollout_proc4_evt_loop terminating... [2023-02-22 15:59:56,478][11949] Stopping RolloutWorker_w0... [2023-02-22 15:59:56,479][11949] Loop rollout_proc0_evt_loop terminating... [2023-02-22 15:59:56,477][11727] Component RolloutWorker_w2 stopped! [2023-02-22 15:59:56,480][11727] Component InferenceWorker_p0-w0 stopped! [2023-02-22 15:59:56,481][11727] Component RolloutWorker_w5 stopped! [2023-02-22 15:59:56,482][11727] Component RolloutWorker_w3 stopped! [2023-02-22 15:59:56,484][11727] Component RolloutWorker_w4 stopped! [2023-02-22 15:59:56,484][11973] Stopping RolloutWorker_w7... [2023-02-22 15:59:56,485][11727] Component RolloutWorker_w0 stopped! [2023-02-22 15:59:56,486][11973] Loop rollout_proc7_evt_loop terminating... [2023-02-22 15:59:56,487][11727] Component RolloutWorker_w7 stopped! [2023-02-22 15:59:56,533][11934] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000403_1650688.pth [2023-02-22 15:59:56,542][11934] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-22 15:59:56,658][11934] Stopping LearnerWorker_p0... [2023-02-22 15:59:56,659][11934] Loop learner_proc0_evt_loop terminating... [2023-02-22 15:59:56,658][11727] Component LearnerWorker_p0 stopped! [2023-02-22 15:59:56,660][11727] Waiting for process learner_proc0 to stop... [2023-02-22 15:59:58,235][11727] Waiting for process inference_proc0-0 to join... [2023-02-22 15:59:58,237][11727] Waiting for process rollout_proc0 to join... [2023-02-22 15:59:58,239][11727] Waiting for process rollout_proc1 to join... [2023-02-22 15:59:58,241][11727] Waiting for process rollout_proc2 to join... [2023-02-22 15:59:58,243][11727] Waiting for process rollout_proc3 to join... [2023-02-22 15:59:58,245][11727] Waiting for process rollout_proc4 to join... [2023-02-22 15:59:58,246][11727] Waiting for process rollout_proc5 to join... [2023-02-22 15:59:58,248][11727] Waiting for process rollout_proc6 to join... [2023-02-22 15:59:58,249][11727] Waiting for process rollout_proc7 to join... 
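For context, the training run logged above (8 CPU rollout workers, a single GPU learner, roughly 4M environment frames written to /content/train_dir/default_experiment) corresponds approximately to the launch sketched below. The sf_examples.vizdoom.train_vizdoom entry point and its main() are assumptions based on the sf_examples layout visible in the traceback earlier in this log; the flags mirror values recorded in the log, and the environment name is inferred from the health-gathering reward-shaping wrapper and the repository name.

```python
# Hypothetical relaunch of the run summarized above (entry point assumed).
import sys

from sf_examples.vizdoom.train_vizdoom import main  # assumed module/function

# Flags mirror what the log records: 8 rollout workers, ~4M env steps,
# experiment "default_experiment" under /content/train_dir.
sys.argv = [
    "train_vizdoom",
    "--env=doom_health_gathering_supreme",
    "--num_workers=8",
    "--train_for_env_steps=4000000",
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
]

if __name__ == "__main__":
    sys.exit(main())
```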
[2023-02-22 15:59:58,252][11727] Batcher 0 profile tree view:
batching: 15.6119, releasing_batches: 0.0487
[2023-02-22 15:59:58,253][11727] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0001
  wait_policy_total: 4.2324
update_model: 3.4167
  weight_update: 0.0011
one_step: 0.0029
  handle_policy_step: 214.4727
    deserialize: 8.7479, stack: 1.4146, obs_to_device_normalize: 50.9557, forward: 97.6511, send_messages: 15.7666
    prepare_outputs: 30.0742
      to_cpu: 18.3660
[2023-02-22 15:59:58,254][11727] Learner 0 profile tree view:
misc: 0.0057, prepare_batch: 10.1041
train: 19.8425
  epoch_init: 0.0057, minibatch_init: 0.0062, losses_postprocess: 0.3212, kl_divergence: 0.4617, after_optimizer: 1.0259
  calculate_losses: 7.7728
    losses_init: 0.0032, forward_head: 1.1080, bptt_initial: 3.2464, tail: 0.6356, advantages_returns: 0.1694, losses: 1.0630
    bptt: 1.3701
      bptt_forward_core: 1.3165
  update: 9.9027
    clip: 1.1259
[2023-02-22 15:59:58,257][11727] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.1715, enqueue_policy_requests: 8.7683, env_step: 144.6926, overhead: 11.5534, complete_rollouts: 0.2874
save_policy_outputs: 9.9906
  split_output_tensors: 4.7988
[2023-02-22 15:59:58,258][11727] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.1719, enqueue_policy_requests: 8.7025, env_step: 145.0611, overhead: 11.7300, complete_rollouts: 0.2973
save_policy_outputs: 9.8243
  split_output_tensors: 4.7671
[2023-02-22 15:59:58,260][11727] Loop Runner_EvtLoop terminating...
[2023-02-22 15:59:58,263][11727] Runner profile tree view:
main_loop: 251.9949
[2023-02-22 15:59:58,264][11727] Collected {0: 4005888}, FPS: 15896.7
[2023-02-22 16:11:15,029][11727] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-22 16:11:15,031][11727] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-22 16:11:15,033][11727] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-22 16:11:15,034][11727] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-22 16:11:15,036][11727] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-22 16:11:15,037][11727] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-22 16:11:15,038][11727] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-22 16:11:15,040][11727] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-22 16:11:15,041][11727] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-22 16:11:15,043][11727] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-22 16:11:15,044][11727] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-22 16:11:15,045][11727] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-22 16:11:15,047][11727] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-22 16:11:15,048][11727] Adding new argument 'enjoy_script'=None that is not in the saved config file!
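The argument overrides listed just above (num_workers=1, no_render, save_video, max_num_episodes=10, push_to_hub=False) are what the Sample Factory evaluation ("enjoy") script records when replaying the trained policy locally and saving a video. A hedged sketch of such an invocation follows; the sf_examples.vizdoom.enjoy_vizdoom entry point is an assumption, and the flags simply mirror the arguments the log reports adding.

```python
# Hypothetical local evaluation matching the config overrides above.
import sys

from sf_examples.vizdoom.enjoy_vizdoom import main  # assumed module/function

sys.argv = [
    "enjoy_vizdoom",
    "--env=doom_health_gathering_supreme",
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--num_workers=1",       # overridden from the saved config, per the log
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
]

if __name__ == "__main__":
    sys.exit(main())
```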
[2023-02-22 16:11:15,049][11727] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-22 16:11:15,067][11727] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-22 16:11:15,070][11727] RunningMeanStd input shape: (3, 72, 128) [2023-02-22 16:11:15,073][11727] RunningMeanStd input shape: (1,) [2023-02-22 16:11:15,093][11727] ConvEncoder: input_channels=3 [2023-02-22 16:11:15,967][11727] Conv encoder output size: 512 [2023-02-22 16:11:15,970][11727] Policy head output size: 512 [2023-02-22 16:11:18,855][11727] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-22 16:11:20,688][11727] Num frames 100... [2023-02-22 16:11:20,811][11727] Num frames 200... [2023-02-22 16:11:20,926][11727] Num frames 300... [2023-02-22 16:11:21,038][11727] Num frames 400... [2023-02-22 16:11:21,156][11727] Num frames 500... [2023-02-22 16:11:21,269][11727] Num frames 600... [2023-02-22 16:11:21,382][11727] Num frames 700... [2023-02-22 16:11:21,496][11727] Num frames 800... [2023-02-22 16:11:21,608][11727] Num frames 900... [2023-02-22 16:11:21,720][11727] Num frames 1000... [2023-02-22 16:11:21,833][11727] Num frames 1100... [2023-02-22 16:11:21,947][11727] Avg episode rewards: #0: 24.520, true rewards: #0: 11.520 [2023-02-22 16:11:21,948][11727] Avg episode reward: 24.520, avg true_objective: 11.520 [2023-02-22 16:11:22,005][11727] Num frames 1200... [2023-02-22 16:11:22,118][11727] Num frames 1300... [2023-02-22 16:11:22,232][11727] Num frames 1400... [2023-02-22 16:11:22,346][11727] Num frames 1500... [2023-02-22 16:11:22,457][11727] Num frames 1600... [2023-02-22 16:11:22,568][11727] Num frames 1700... [2023-02-22 16:11:22,681][11727] Num frames 1800... [2023-02-22 16:11:22,794][11727] Num frames 1900... [2023-02-22 16:11:22,904][11727] Num frames 2000... [2023-02-22 16:11:23,020][11727] Num frames 2100... [2023-02-22 16:11:23,138][11727] Num frames 2200... [2023-02-22 16:11:23,252][11727] Num frames 2300... [2023-02-22 16:11:23,365][11727] Num frames 2400... [2023-02-22 16:11:23,484][11727] Num frames 2500... [2023-02-22 16:11:23,601][11727] Num frames 2600... [2023-02-22 16:11:23,718][11727] Num frames 2700... [2023-02-22 16:11:23,803][11727] Avg episode rewards: #0: 34.130, true rewards: #0: 13.630 [2023-02-22 16:11:23,805][11727] Avg episode reward: 34.130, avg true_objective: 13.630 [2023-02-22 16:11:23,891][11727] Num frames 2800... [2023-02-22 16:11:24,003][11727] Num frames 2900... [2023-02-22 16:11:24,121][11727] Num frames 3000... [2023-02-22 16:11:24,234][11727] Num frames 3100... [2023-02-22 16:11:24,312][11727] Avg episode rewards: #0: 25.063, true rewards: #0: 10.397 [2023-02-22 16:11:24,313][11727] Avg episode reward: 25.063, avg true_objective: 10.397 [2023-02-22 16:11:24,406][11727] Num frames 3200... [2023-02-22 16:11:24,518][11727] Num frames 3300... [2023-02-22 16:11:24,680][11727] Avg episode rewards: #0: 19.732, true rewards: #0: 8.482 [2023-02-22 16:11:24,682][11727] Avg episode reward: 19.732, avg true_objective: 8.482 [2023-02-22 16:11:24,692][11727] Num frames 3400... [2023-02-22 16:11:24,818][11727] Num frames 3500... [2023-02-22 16:11:24,939][11727] Num frames 3600... [2023-02-22 16:11:25,055][11727] Num frames 3700... [2023-02-22 16:11:25,171][11727] Num frames 3800... [2023-02-22 16:11:25,285][11727] Num frames 3900... [2023-02-22 16:11:25,401][11727] Num frames 4000... [2023-02-22 16:11:25,517][11727] Num frames 4100... [2023-02-22 16:11:25,635][11727] Num frames 4200... 
[2023-02-22 16:11:25,749][11727] Num frames 4300... [2023-02-22 16:11:25,864][11727] Num frames 4400... [2023-02-22 16:11:25,984][11727] Num frames 4500... [2023-02-22 16:11:26,099][11727] Num frames 4600... [2023-02-22 16:11:26,238][11727] Num frames 4700... [2023-02-22 16:11:26,362][11727] Num frames 4800... [2023-02-22 16:11:26,482][11727] Num frames 4900... [2023-02-22 16:11:26,602][11727] Num frames 5000... [2023-02-22 16:11:26,720][11727] Num frames 5100... [2023-02-22 16:11:26,822][11727] Avg episode rewards: #0: 24.078, true rewards: #0: 10.278 [2023-02-22 16:11:26,824][11727] Avg episode reward: 24.078, avg true_objective: 10.278 [2023-02-22 16:11:26,904][11727] Num frames 5200... [2023-02-22 16:11:27,021][11727] Num frames 5300... [2023-02-22 16:11:27,140][11727] Num frames 5400... [2023-02-22 16:11:27,256][11727] Num frames 5500... [2023-02-22 16:11:27,369][11727] Num frames 5600... [2023-02-22 16:11:27,479][11727] Num frames 5700... [2023-02-22 16:11:27,587][11727] Avg episode rewards: #0: 22.078, true rewards: #0: 9.578 [2023-02-22 16:11:27,589][11727] Avg episode reward: 22.078, avg true_objective: 9.578 [2023-02-22 16:11:27,652][11727] Num frames 5800... [2023-02-22 16:11:27,767][11727] Num frames 5900... [2023-02-22 16:11:27,883][11727] Num frames 6000... [2023-02-22 16:11:27,996][11727] Num frames 6100... [2023-02-22 16:11:28,109][11727] Num frames 6200... [2023-02-22 16:11:28,225][11727] Num frames 6300... [2023-02-22 16:11:28,338][11727] Num frames 6400... [2023-02-22 16:11:28,451][11727] Avg episode rewards: #0: 21.073, true rewards: #0: 9.216 [2023-02-22 16:11:28,453][11727] Avg episode reward: 21.073, avg true_objective: 9.216 [2023-02-22 16:11:28,511][11727] Num frames 6500... [2023-02-22 16:11:28,625][11727] Num frames 6600... [2023-02-22 16:11:28,743][11727] Num frames 6700... [2023-02-22 16:11:28,855][11727] Num frames 6800... [2023-02-22 16:11:28,951][11727] Avg episode rewards: #0: 19.294, true rewards: #0: 8.544 [2023-02-22 16:11:28,953][11727] Avg episode reward: 19.294, avg true_objective: 8.544 [2023-02-22 16:11:29,031][11727] Num frames 6900... [2023-02-22 16:11:29,146][11727] Num frames 7000... [2023-02-22 16:11:29,286][11727] Num frames 7100... [2023-02-22 16:11:29,396][11727] Num frames 7200... [2023-02-22 16:11:29,505][11727] Num frames 7300... [2023-02-22 16:11:29,619][11727] Num frames 7400... [2023-02-22 16:11:29,732][11727] Num frames 7500... [2023-02-22 16:11:29,849][11727] Num frames 7600... [2023-02-22 16:11:29,963][11727] Num frames 7700... [2023-02-22 16:11:30,076][11727] Num frames 7800... [2023-02-22 16:11:30,143][11727] Avg episode rewards: #0: 19.677, true rewards: #0: 8.677 [2023-02-22 16:11:30,145][11727] Avg episode reward: 19.677, avg true_objective: 8.677 [2023-02-22 16:11:30,249][11727] Num frames 7900... [2023-02-22 16:11:30,364][11727] Num frames 8000... [2023-02-22 16:11:30,478][11727] Num frames 8100... [2023-02-22 16:11:30,594][11727] Num frames 8200... [2023-02-22 16:11:30,707][11727] Num frames 8300... [2023-02-22 16:11:30,822][11727] Num frames 8400... [2023-02-22 16:11:30,937][11727] Num frames 8500... [2023-02-22 16:11:31,049][11727] Num frames 8600... [2023-02-22 16:11:31,162][11727] Num frames 8700... [2023-02-22 16:11:31,277][11727] Num frames 8800... 
[2023-02-22 16:11:31,371][11727] Avg episode rewards: #0: 19.533, true rewards: #0: 8.833 [2023-02-22 16:11:31,373][11727] Avg episode reward: 19.533, avg true_objective: 8.833 [2023-02-22 16:11:52,331][11727] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-22 16:12:41,257][11727] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-22 16:12:41,259][11727] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-22 16:12:41,260][11727] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-22 16:12:41,261][11727] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-22 16:12:41,264][11727] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-22 16:12:41,265][11727] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-22 16:12:41,266][11727] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-02-22 16:12:41,268][11727] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-22 16:12:41,270][11727] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-02-22 16:12:41,271][11727] Adding new argument 'hf_repository'='Unterwexi/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-02-22 16:12:41,272][11727] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-22 16:12:41,274][11727] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-22 16:12:41,276][11727] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-22 16:12:41,277][11727] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-22 16:12:41,279][11727] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-22 16:12:41,297][11727] RunningMeanStd input shape: (3, 72, 128) [2023-02-22 16:12:41,300][11727] RunningMeanStd input shape: (1,) [2023-02-22 16:12:41,315][11727] ConvEncoder: input_channels=3 [2023-02-22 16:12:41,358][11727] Conv encoder output size: 512 [2023-02-22 16:12:41,359][11727] Policy head output size: 512 [2023-02-22 16:12:41,382][11727] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-22 16:12:41,857][11727] Num frames 100... [2023-02-22 16:12:41,979][11727] Num frames 200... [2023-02-22 16:12:42,097][11727] Num frames 300... [2023-02-22 16:12:42,233][11727] Avg episode rewards: #0: 7.670, true rewards: #0: 3.670 [2023-02-22 16:12:42,235][11727] Avg episode reward: 7.670, avg true_objective: 3.670 [2023-02-22 16:12:42,276][11727] Num frames 400... [2023-02-22 16:12:42,396][11727] Num frames 500... [2023-02-22 16:12:42,528][11727] Num frames 600... [2023-02-22 16:12:42,653][11727] Num frames 700... [2023-02-22 16:12:42,780][11727] Num frames 800... [2023-02-22 16:12:42,907][11727] Num frames 900... [2023-02-22 16:12:43,034][11727] Num frames 1000... [2023-02-22 16:12:43,181][11727] Avg episode rewards: #0: 12.870, true rewards: #0: 5.370 [2023-02-22 16:12:43,183][11727] Avg episode reward: 12.870, avg true_objective: 5.370 [2023-02-22 16:12:43,215][11727] Num frames 1100... [2023-02-22 16:12:43,331][11727] Num frames 1200... [2023-02-22 16:12:43,441][11727] Num frames 1300... [2023-02-22 16:12:43,554][11727] Num frames 1400... 
[2023-02-22 16:12:43,685][11727] Avg episode rewards: #0: 10.860, true rewards: #0: 4.860 [2023-02-22 16:12:43,687][11727] Avg episode reward: 10.860, avg true_objective: 4.860 [2023-02-22 16:12:43,745][11727] Num frames 1500... [2023-02-22 16:12:43,870][11727] Num frames 1600... [2023-02-22 16:12:43,992][11727] Num frames 1700... [2023-02-22 16:12:44,116][11727] Num frames 1800... [2023-02-22 16:12:44,184][11727] Avg episode rewards: #0: 9.775, true rewards: #0: 4.525 [2023-02-22 16:12:44,187][11727] Avg episode reward: 9.775, avg true_objective: 4.525 [2023-02-22 16:12:44,288][11727] Num frames 1900... [2023-02-22 16:12:44,400][11727] Num frames 2000... [2023-02-22 16:12:44,510][11727] Num frames 2100... [2023-02-22 16:12:44,622][11727] Num frames 2200... [2023-02-22 16:12:44,709][11727] Avg episode rewards: #0: 9.052, true rewards: #0: 4.452 [2023-02-22 16:12:44,711][11727] Avg episode reward: 9.052, avg true_objective: 4.452 [2023-02-22 16:12:44,795][11727] Num frames 2300... [2023-02-22 16:12:44,906][11727] Num frames 2400... [2023-02-22 16:12:45,039][11727] Num frames 2500... [2023-02-22 16:12:45,152][11727] Num frames 2600... [2023-02-22 16:12:45,268][11727] Num frames 2700... [2023-02-22 16:12:45,381][11727] Num frames 2800... [2023-02-22 16:12:45,492][11727] Num frames 2900... [2023-02-22 16:12:45,602][11727] Num frames 3000... [2023-02-22 16:12:45,715][11727] Num frames 3100... [2023-02-22 16:12:45,826][11727] Num frames 3200... [2023-02-22 16:12:45,936][11727] Num frames 3300... [2023-02-22 16:12:46,050][11727] Num frames 3400... [2023-02-22 16:12:46,166][11727] Num frames 3500... [2023-02-22 16:12:46,282][11727] Num frames 3600... [2023-02-22 16:12:46,392][11727] Num frames 3700... [2023-02-22 16:12:46,507][11727] Num frames 3800... [2023-02-22 16:12:46,618][11727] Num frames 3900... [2023-02-22 16:12:46,730][11727] Num frames 4000... [2023-02-22 16:12:46,845][11727] Num frames 4100... [2023-02-22 16:12:46,960][11727] Num frames 4200... [2023-02-22 16:12:47,087][11727] Num frames 4300... [2023-02-22 16:12:47,174][11727] Avg episode rewards: #0: 17.376, true rewards: #0: 7.210 [2023-02-22 16:12:47,176][11727] Avg episode reward: 17.376, avg true_objective: 7.210 [2023-02-22 16:12:47,260][11727] Num frames 4400... [2023-02-22 16:12:47,373][11727] Num frames 4500... [2023-02-22 16:12:47,485][11727] Num frames 4600... [2023-02-22 16:12:47,594][11727] Num frames 4700... [2023-02-22 16:12:47,706][11727] Num frames 4800... [2023-02-22 16:12:47,817][11727] Num frames 4900... [2023-02-22 16:12:47,929][11727] Num frames 5000... [2023-02-22 16:12:48,050][11727] Num frames 5100... [2023-02-22 16:12:48,165][11727] Num frames 5200... [2023-02-22 16:12:48,248][11727] Avg episode rewards: #0: 17.603, true rewards: #0: 7.460 [2023-02-22 16:12:48,250][11727] Avg episode reward: 17.603, avg true_objective: 7.460 [2023-02-22 16:12:48,337][11727] Num frames 5300... [2023-02-22 16:12:48,446][11727] Num frames 5400... [2023-02-22 16:12:48,557][11727] Num frames 5500... [2023-02-22 16:12:48,668][11727] Num frames 5600... [2023-02-22 16:12:48,779][11727] Num frames 5700... [2023-02-22 16:12:48,891][11727] Num frames 5800... [2023-02-22 16:12:49,005][11727] Num frames 5900... [2023-02-22 16:12:49,116][11727] Num frames 6000... [2023-02-22 16:12:49,225][11727] Num frames 6100... [2023-02-22 16:12:49,303][11727] Avg episode rewards: #0: 17.522, true rewards: #0: 7.647 [2023-02-22 16:12:49,306][11727] Avg episode reward: 17.522, avg true_objective: 7.647 [2023-02-22 16:12:49,397][11727] Num frames 6200... 
[2023-02-22 16:12:49,509][11727] Num frames 6300... [2023-02-22 16:12:49,625][11727] Num frames 6400... [2023-02-22 16:12:49,736][11727] Num frames 6500... [2023-02-22 16:12:49,849][11727] Num frames 6600... [2023-02-22 16:12:49,938][11727] Avg episode rewards: #0: 16.478, true rewards: #0: 7.367 [2023-02-22 16:12:49,940][11727] Avg episode reward: 16.478, avg true_objective: 7.367 [2023-02-22 16:12:50,021][11727] Num frames 6700... [2023-02-22 16:12:50,134][11727] Num frames 6800... [2023-02-22 16:12:50,248][11727] Num frames 6900... [2023-02-22 16:12:50,360][11727] Num frames 7000... [2023-02-22 16:12:50,470][11727] Num frames 7100... [2023-02-22 16:12:50,579][11727] Num frames 7200... [2023-02-22 16:12:50,690][11727] Num frames 7300... [2023-02-22 16:12:50,803][11727] Num frames 7400... [2023-02-22 16:12:50,929][11727] Avg episode rewards: #0: 16.461, true rewards: #0: 7.461 [2023-02-22 16:12:50,931][11727] Avg episode reward: 16.461, avg true_objective: 7.461 [2023-02-22 16:13:08,594][11727] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
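The second evaluation pass above repeats the replay with push_to_hub=True and hf_repository='Unterwexi/rl_course_vizdoom_health_gathering_supreme', which uploads the checkpoint, config, and replay.mp4 to the Hugging Face Hub. A hedged sketch of that invocation, under the same entry-point assumption as before; only the repository name and the push flags are taken from the log.

```python
# Hypothetical evaluation + push-to-hub run matching the second config block above.
import sys

from sf_examples.vizdoom.enjoy_vizdoom import main  # assumed module/function

sys.argv = [
    "enjoy_vizdoom",
    "--env=doom_health_gathering_supreme",
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--num_workers=1",
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--push_to_hub",
    "--hf_repository=Unterwexi/rl_course_vizdoom_health_gathering_supreme",
]

if __name__ == "__main__":
    sys.exit(main())
```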