diff --git "a/sf_log.txt" "b/sf_log.txt" new file mode 100644--- /dev/null +++ "b/sf_log.txt" @@ -0,0 +1,1100 @@ +[2023-02-23 02:48:16,521][11306] Saving configuration to /content/train_dir/default_experiment/config.json... +[2023-02-23 02:48:16,529][11306] Rollout worker 0 uses device cpu +[2023-02-23 02:48:16,532][11306] Rollout worker 1 uses device cpu +[2023-02-23 02:48:16,533][11306] Rollout worker 2 uses device cpu +[2023-02-23 02:48:16,536][11306] Rollout worker 3 uses device cpu +[2023-02-23 02:48:16,538][11306] Rollout worker 4 uses device cpu +[2023-02-23 02:48:16,540][11306] Rollout worker 5 uses device cpu +[2023-02-23 02:48:16,542][11306] Rollout worker 6 uses device cpu +[2023-02-23 02:48:16,546][11306] Rollout worker 7 uses device cpu +[2023-02-23 02:48:16,746][11306] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-23 02:48:16,749][11306] InferenceWorker_p0-w0: min num requests: 2 +[2023-02-23 02:48:16,779][11306] Starting all processes... +[2023-02-23 02:48:16,783][11306] Starting process learner_proc0 +[2023-02-23 02:48:16,839][11306] Starting all processes... +[2023-02-23 02:48:16,852][11306] Starting process inference_proc0-0 +[2023-02-23 02:48:16,855][11306] Starting process rollout_proc0 +[2023-02-23 02:48:16,855][11306] Starting process rollout_proc1 +[2023-02-23 02:48:16,855][11306] Starting process rollout_proc2 +[2023-02-23 02:48:16,872][11306] Starting process rollout_proc4 +[2023-02-23 02:48:16,873][11306] Starting process rollout_proc5 +[2023-02-23 02:48:16,873][11306] Starting process rollout_proc6 +[2023-02-23 02:48:16,872][11306] Starting process rollout_proc3 +[2023-02-23 02:48:16,873][11306] Starting process rollout_proc7 +[2023-02-23 02:48:28,407][11625] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-23 02:48:28,407][11625] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-02-23 02:48:28,490][11643] Worker 4 uses CPU cores [0] +[2023-02-23 02:48:28,662][11640] Worker 0 uses CPU cores [0] +[2023-02-23 02:48:28,755][11641] Worker 1 uses CPU cores [1] +[2023-02-23 02:48:28,821][11642] Worker 2 uses CPU cores [0] +[2023-02-23 02:48:28,860][11639] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-23 02:48:28,860][11639] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-02-23 02:48:28,919][11646] Worker 3 uses CPU cores [1] +[2023-02-23 02:48:29,018][11645] Worker 6 uses CPU cores [0] +[2023-02-23 02:48:29,091][11647] Worker 7 uses CPU cores [1] +[2023-02-23 02:48:29,092][11644] Worker 5 uses CPU cores [1] +[2023-02-23 02:48:29,409][11625] Num visible devices: 1 +[2023-02-23 02:48:29,409][11639] Num visible devices: 1 +[2023-02-23 02:48:29,422][11625] Starting seed is not provided +[2023-02-23 02:48:29,422][11625] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-23 02:48:29,422][11625] Initializing actor-critic model on device cuda:0 +[2023-02-23 02:48:29,423][11625] RunningMeanStd input shape: (3, 72, 128) +[2023-02-23 02:48:29,425][11625] RunningMeanStd input shape: (1,) +[2023-02-23 02:48:29,437][11625] ConvEncoder: input_channels=3 +[2023-02-23 02:48:29,727][11625] Conv encoder output size: 512 +[2023-02-23 02:48:29,727][11625] Policy head output size: 512 +[2023-02-23 02:48:29,783][11625] Created Actor Critic model with architecture: +[2023-02-23 02:48:29,783][11625] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + 
(running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): VizdoomEncoder( + (basic_encoder): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ELU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ELU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ELU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ELU) + ) + ) + ) + ) + (core): ModelCoreRNN( + (core): GRU(512, 512) + ) + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=5, bias=True) + ) +) +[2023-02-23 02:48:36,739][11306] Heartbeat connected on Batcher_0 +[2023-02-23 02:48:36,746][11306] Heartbeat connected on InferenceWorker_p0-w0 +[2023-02-23 02:48:36,757][11306] Heartbeat connected on RolloutWorker_w0 +[2023-02-23 02:48:36,760][11306] Heartbeat connected on RolloutWorker_w1 +[2023-02-23 02:48:36,764][11306] Heartbeat connected on RolloutWorker_w2 +[2023-02-23 02:48:36,767][11306] Heartbeat connected on RolloutWorker_w3 +[2023-02-23 02:48:36,770][11306] Heartbeat connected on RolloutWorker_w4 +[2023-02-23 02:48:36,773][11306] Heartbeat connected on RolloutWorker_w5 +[2023-02-23 02:48:36,776][11306] Heartbeat connected on RolloutWorker_w6 +[2023-02-23 02:48:36,779][11306] Heartbeat connected on RolloutWorker_w7 +[2023-02-23 02:48:38,535][11625] Using optimizer +[2023-02-23 02:48:38,536][11625] No checkpoints found +[2023-02-23 02:48:38,536][11625] Did not load from checkpoint, starting from scratch! +[2023-02-23 02:48:38,536][11625] Initialized policy 0 weights for model version 0 +[2023-02-23 02:48:38,548][11625] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-23 02:48:38,558][11625] LearnerWorker_p0 finished initialization! +[2023-02-23 02:48:38,560][11306] Heartbeat connected on LearnerWorker_p0 +[2023-02-23 02:48:38,861][11639] RunningMeanStd input shape: (3, 72, 128) +[2023-02-23 02:48:38,862][11639] RunningMeanStd input shape: (1,) +[2023-02-23 02:48:38,882][11639] ConvEncoder: input_channels=3 +[2023-02-23 02:48:39,042][11639] Conv encoder output size: 512 +[2023-02-23 02:48:39,043][11639] Policy head output size: 512 +[2023-02-23 02:48:42,108][11306] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-23 02:48:42,163][11306] Inference worker 0-0 is ready! +[2023-02-23 02:48:42,166][11306] All inference workers are ready! Signal rollout workers to start! 
+[2023-02-23 02:48:42,312][11640] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,334][11645] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,341][11646] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,343][11647] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,349][11643] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,389][11644] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,395][11641] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:42,393][11642] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 02:48:43,758][11647] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:43,757][11646] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:43,755][11641] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:44,062][11640] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:44,064][11645] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:44,067][11643] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:44,088][11642] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:44,616][11646] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:44,619][11647] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:44,754][11643] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:44,758][11640] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:45,156][11644] Decorrelating experience for 0 frames...
+[2023-02-23 02:48:45,504][11643] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:45,627][11641] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:45,822][11647] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:46,093][11644] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:46,239][11640] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:46,417][11642] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:46,738][11647] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:47,108][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-23 02:48:47,159][11644] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:47,391][11641] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:47,456][11643] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:47,592][11640] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:48,098][11642] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:48,321][11644] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:48,551][11646] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:48,632][11641] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:48,878][11645] Decorrelating experience for 32 frames...
+[2023-02-23 02:48:49,082][11646] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:49,237][11642] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:49,503][11645] Decorrelating experience for 64 frames...
+[2023-02-23 02:48:49,811][11645] Decorrelating experience for 96 frames...
+[2023-02-23 02:48:52,108][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 4.4. Samples: 44. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-23 02:48:52,115][11306] Avg episode reward: [(0, '0.572')]
+[2023-02-23 02:48:54,553][11625] Signal inference workers to stop experience collection...
+[2023-02-23 02:48:54,588][11639] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-23 02:48:57,108][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 138.1. Samples: 2072. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-23 02:48:57,110][11306] Avg episode reward: [(0, '1.419')]
+[2023-02-23 02:48:57,704][11625] Signal inference workers to resume experience collection...
+[2023-02-23 02:48:57,705][11639] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-23 02:49:02,108][11306] Fps is (10 sec: 1638.4, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 16384. Throughput: 0: 160.3. Samples: 3206. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-02-23 02:49:02,110][11306] Avg episode reward: [(0, '3.307')]
+[2023-02-23 02:49:07,061][11639] Updated weights for policy 0, policy_version 10 (0.0025)
+[2023-02-23 02:49:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 40960. Throughput: 0: 389.0. Samples: 9724. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:49:07,110][11306] Avg episode reward: [(0, '3.886')]
+[2023-02-23 02:49:12,108][11306] Fps is (10 sec: 4505.6, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 61440. Throughput: 0: 435.2. Samples: 13056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:49:12,110][11306] Avg episode reward: [(0, '4.295')]
+[2023-02-23 02:49:17,108][11306] Fps is (10 sec: 3276.8, 60 sec: 2106.5, 300 sec: 2106.5). Total num frames: 73728. Throughput: 0: 514.7. Samples: 18016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:49:17,113][11306] Avg episode reward: [(0, '4.416')]
+[2023-02-23 02:49:19,314][11639] Updated weights for policy 0, policy_version 20 (0.0026)
+[2023-02-23 02:49:22,108][11306] Fps is (10 sec: 2867.2, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 90112. Throughput: 0: 562.6. Samples: 22504. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-23 02:49:22,114][11306] Avg episode reward: [(0, '4.328')]
+[2023-02-23 02:49:27,108][11306] Fps is (10 sec: 3686.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 110592. Throughput: 0: 574.4. Samples: 25848. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-23 02:49:27,114][11306] Avg episode reward: [(0, '4.212')]
+[2023-02-23 02:49:27,145][11625] Saving new best policy, reward=4.212!
+[2023-02-23 02:49:29,011][11639] Updated weights for policy 0, policy_version 30 (0.0021)
+[2023-02-23 02:49:32,108][11306] Fps is (10 sec: 4096.1, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 131072. Throughput: 0: 725.5. Samples: 32648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:49:32,113][11306] Avg episode reward: [(0, '4.404')]
+[2023-02-23 02:49:32,156][11625] Saving new best policy, reward=4.404!
+[2023-02-23 02:49:37,108][11306] Fps is (10 sec: 3686.4, 60 sec: 2681.0, 300 sec: 2681.0). Total num frames: 147456. Throughput: 0: 826.0. Samples: 37216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:49:37,110][11306] Avg episode reward: [(0, '4.422')]
+[2023-02-23 02:49:37,122][11625] Saving new best policy, reward=4.422!
+[2023-02-23 02:49:41,788][11639] Updated weights for policy 0, policy_version 40 (0.0017)
+[2023-02-23 02:49:42,108][11306] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 163840. Throughput: 0: 826.4. Samples: 39260. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:49:42,110][11306] Avg episode reward: [(0, '4.422')]
+[2023-02-23 02:49:47,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 2835.7). Total num frames: 184320. Throughput: 0: 937.7. Samples: 45402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:49:47,110][11306] Avg episode reward: [(0, '4.442')]
+[2023-02-23 02:49:47,116][11625] Saving new best policy, reward=4.442!
+[2023-02-23 02:49:50,993][11639] Updated weights for policy 0, policy_version 50 (0.0030)
+[2023-02-23 02:49:52,111][11306] Fps is (10 sec: 4094.6, 60 sec: 3413.1, 300 sec: 2925.6). Total num frames: 204800. Throughput: 0: 940.4. Samples: 52044. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:49:52,117][11306] Avg episode reward: [(0, '4.366')]
+[2023-02-23 02:49:57,109][11306] Fps is (10 sec: 3685.9, 60 sec: 3686.3, 300 sec: 2949.1). Total num frames: 221184. Throughput: 0: 913.0. Samples: 54144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:49:57,115][11306] Avg episode reward: [(0, '4.415')]
+[2023-02-23 02:50:02,108][11306] Fps is (10 sec: 3277.9, 60 sec: 3686.4, 300 sec: 2969.6). Total num frames: 237568. Throughput: 0: 898.5. Samples: 58448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:50:02,110][11306] Avg episode reward: [(0, '4.383')]
+[2023-02-23 02:50:03,577][11639] Updated weights for policy 0, policy_version 60 (0.0014)
+[2023-02-23 02:50:07,108][11306] Fps is (10 sec: 3686.9, 60 sec: 3618.1, 300 sec: 3035.9). Total num frames: 258048. Throughput: 0: 943.6. Samples: 64968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:50:07,110][11306] Avg episode reward: [(0, '4.491')]
+[2023-02-23 02:50:07,117][11625] Saving new best policy, reward=4.491!
+[2023-02-23 02:50:12,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3094.8). Total num frames: 278528. Throughput: 0: 943.0. Samples: 68284. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:50:12,114][11306] Avg episode reward: [(0, '4.372')]
+[2023-02-23 02:50:12,121][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000068_278528.pth...
+[2023-02-23 02:50:14,105][11639] Updated weights for policy 0, policy_version 70 (0.0025)
+[2023-02-23 02:50:17,107][11306] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3104.3). Total num frames: 294912. Throughput: 0: 898.7. Samples: 73088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:50:17,112][11306] Avg episode reward: [(0, '4.434')]
+[2023-02-23 02:50:22,108][11306] Fps is (10 sec: 3276.6, 60 sec: 3686.4, 300 sec: 3112.9). Total num frames: 311296. Throughput: 0: 893.8. Samples: 77438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:50:22,114][11306] Avg episode reward: [(0, '4.438')]
+[2023-02-23 02:50:25,811][11639] Updated weights for policy 0, policy_version 80 (0.0012)
+[2023-02-23 02:50:27,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3159.8). Total num frames: 331776. Throughput: 0: 923.6. Samples: 80824. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:50:27,110][11306] Avg episode reward: [(0, '4.487')]
+[2023-02-23 02:50:32,108][11306] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3202.3). Total num frames: 352256. Throughput: 0: 934.7. Samples: 87466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:50:32,113][11306] Avg episode reward: [(0, '4.324')]
+[2023-02-23 02:50:36,897][11639] Updated weights for policy 0, policy_version 90 (0.0017)
+[2023-02-23 02:50:37,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3205.6). Total num frames: 368640. Throughput: 0: 889.0. Samples: 92044. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:50:37,116][11306] Avg episode reward: [(0, '4.393')]
+[2023-02-23 02:50:42,108][11306] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3208.5). Total num frames: 385024. Throughput: 0: 891.1. Samples: 94242. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:50:42,116][11306] Avg episode reward: [(0, '4.409')]
+[2023-02-23 02:50:47,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3211.3). Total num frames: 401408. Throughput: 0: 904.3. Samples: 99142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:50:47,113][11306] Avg episode reward: [(0, '4.619')]
+[2023-02-23 02:50:47,117][11625] Saving new best policy, reward=4.619!
+[2023-02-23 02:50:48,844][11639] Updated weights for policy 0, policy_version 100 (0.0050)
+[2023-02-23 02:50:52,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3245.3). Total num frames: 421888. Throughput: 0: 904.0. Samples: 105650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:50:52,126][11306] Avg episode reward: [(0, '4.434')]
+[2023-02-23 02:50:57,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3216.1). Total num frames: 434176. Throughput: 0: 877.2. Samples: 107756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:50:57,116][11306] Avg episode reward: [(0, '4.389')]
+[2023-02-23 02:51:01,520][11639] Updated weights for policy 0, policy_version 110 (0.0034)
+[2023-02-23 02:51:02,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3218.3). Total num frames: 450560. Throughput: 0: 866.5. Samples: 112082. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:51:02,113][11306] Avg episode reward: [(0, '4.349')]
+[2023-02-23 02:51:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 475136. Throughput: 0: 917.2. Samples: 118712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:51:07,110][11306] Avg episode reward: [(0, '4.376')]
+[2023-02-23 02:51:10,342][11639] Updated weights for policy 0, policy_version 120 (0.0017)
+[2023-02-23 02:51:12,110][11306] Fps is (10 sec: 4504.5, 60 sec: 3618.0, 300 sec: 3304.1). Total num frames: 495616. Throughput: 0: 919.3. Samples: 122194. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:51:12,112][11306] Avg episode reward: [(0, '4.460')]
+[2023-02-23 02:51:17,109][11306] Fps is (10 sec: 3276.2, 60 sec: 3549.8, 300 sec: 3276.8). Total num frames: 507904. Throughput: 0: 882.3. Samples: 127170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:51:17,116][11306] Avg episode reward: [(0, '4.730')]
+[2023-02-23 02:51:17,120][11625] Saving new best policy, reward=4.730!
+[2023-02-23 02:51:22,108][11306] Fps is (10 sec: 2867.9, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 524288. Throughput: 0: 878.2. Samples: 131564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:51:22,109][11306] Avg episode reward: [(0, '4.602')]
+[2023-02-23 02:51:23,124][11639] Updated weights for policy 0, policy_version 130 (0.0024)
+[2023-02-23 02:51:27,108][11306] Fps is (10 sec: 4096.7, 60 sec: 3618.1, 300 sec: 3326.4). Total num frames: 548864. Throughput: 0: 904.8. Samples: 134958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:51:27,110][11306] Avg episode reward: [(0, '4.424')]
+[2023-02-23 02:51:32,110][11306] Fps is (10 sec: 4504.6, 60 sec: 3618.0, 300 sec: 3349.0). Total num frames: 569344. Throughput: 0: 946.7. Samples: 141744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:51:32,113][11306] Avg episode reward: [(0, '4.378')]
+[2023-02-23 02:51:32,956][11639] Updated weights for policy 0, policy_version 140 (0.0024)
+[2023-02-23 02:51:37,115][11306] Fps is (10 sec: 3274.5, 60 sec: 3549.4, 300 sec: 3323.5). Total num frames: 581632. Throughput: 0: 900.6. Samples: 146184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:51:37,118][11306] Avg episode reward: [(0, '4.339')]
+[2023-02-23 02:51:42,108][11306] Fps is (10 sec: 2867.9, 60 sec: 3549.9, 300 sec: 3322.3). Total num frames: 598016. Throughput: 0: 902.8. Samples: 148382. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:51:42,109][11306] Avg episode reward: [(0, '4.362')]
+[2023-02-23 02:51:44,745][11639] Updated weights for policy 0, policy_version 150 (0.0033)
+[2023-02-23 02:51:47,108][11306] Fps is (10 sec: 4098.9, 60 sec: 3686.4, 300 sec: 3365.4). Total num frames: 622592. Throughput: 0: 944.3. Samples: 154576. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:51:47,116][11306] Avg episode reward: [(0, '4.514')]
+[2023-02-23 02:51:52,108][11306] Fps is (10 sec: 4505.4, 60 sec: 3686.4, 300 sec: 3384.6). Total num frames: 643072. Throughput: 0: 940.5. Samples: 161036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:51:52,111][11306] Avg episode reward: [(0, '4.419')]
+[2023-02-23 02:51:55,864][11639] Updated weights for policy 0, policy_version 160 (0.0026)
+[2023-02-23 02:51:57,112][11306] Fps is (10 sec: 3275.5, 60 sec: 3686.2, 300 sec: 3360.8). Total num frames: 655360. Throughput: 0: 910.3. Samples: 163158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:51:57,115][11306] Avg episode reward: [(0, '4.560')]
+[2023-02-23 02:52:02,108][11306] Fps is (10 sec: 2867.3, 60 sec: 3686.4, 300 sec: 3358.7). Total num frames: 671744. Throughput: 0: 896.7. Samples: 167518. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:52:02,109][11306] Avg episode reward: [(0, '4.480')]
+[2023-02-23 02:52:06,549][11639] Updated weights for policy 0, policy_version 170 (0.0012)
+[2023-02-23 02:52:07,108][11306] Fps is (10 sec: 4097.7, 60 sec: 3686.4, 300 sec: 3396.7). Total num frames: 696320. Throughput: 0: 944.7. Samples: 174076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:52:07,111][11306] Avg episode reward: [(0, '4.279')]
+[2023-02-23 02:52:12,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3686.5, 300 sec: 3413.3). Total num frames: 716800. Throughput: 0: 947.9. Samples: 177612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:52:12,114][11306] Avg episode reward: [(0, '4.479')]
+[2023-02-23 02:52:12,128][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000175_716800.pth...
+[2023-02-23 02:52:17,108][11306] Fps is (10 sec: 3686.2, 60 sec: 3754.7, 300 sec: 3410.2). Total num frames: 733184. Throughput: 0: 908.5. Samples: 182624. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:52:17,113][11306] Avg episode reward: [(0, '4.559')]
+[2023-02-23 02:52:18,152][11639] Updated weights for policy 0, policy_version 180 (0.0018)
+[2023-02-23 02:52:22,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3407.1). Total num frames: 749568. Throughput: 0: 905.0. Samples: 186904. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:52:22,114][11306] Avg episode reward: [(0, '4.479')]
+[2023-02-23 02:52:27,108][11306] Fps is (10 sec: 3686.6, 60 sec: 3686.4, 300 sec: 3422.4). Total num frames: 770048. Throughput: 0: 934.8. Samples: 190450. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:52:27,115][11306] Avg episode reward: [(0, '4.594')]
+[2023-02-23 02:52:28,165][11639] Updated weights for policy 0, policy_version 190 (0.0012)
+[2023-02-23 02:52:32,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3754.8, 300 sec: 3454.9). Total num frames: 794624. Throughput: 0: 953.1. Samples: 197464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:52:32,112][11306] Avg episode reward: [(0, '4.630')]
+[2023-02-23 02:52:37,112][11306] Fps is (10 sec: 3684.9, 60 sec: 3754.9, 300 sec: 3433.6). Total num frames: 806912. Throughput: 0: 914.2. Samples: 202176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:52:37,114][11306] Avg episode reward: [(0, '4.687')]
+[2023-02-23 02:52:40,403][11639] Updated weights for policy 0, policy_version 200 (0.0016)
+[2023-02-23 02:52:42,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3430.4). Total num frames: 823296. Throughput: 0: 914.3. Samples: 204296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:52:42,115][11306] Avg episode reward: [(0, '4.710')]
+[2023-02-23 02:52:47,108][11306] Fps is (10 sec: 4097.6, 60 sec: 3754.7, 300 sec: 3460.7). Total num frames: 847872. Throughput: 0: 954.1. Samples: 210454. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:52:47,115][11306] Avg episode reward: [(0, '4.738')]
+[2023-02-23 02:52:47,119][11625] Saving new best policy, reward=4.738!
+[2023-02-23 02:52:49,552][11639] Updated weights for policy 0, policy_version 210 (0.0012)
+[2023-02-23 02:52:52,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3473.4). Total num frames: 868352. Throughput: 0: 962.2. Samples: 217376. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:52:52,114][11306] Avg episode reward: [(0, '4.855')]
+[2023-02-23 02:52:52,126][11625] Saving new best policy, reward=4.855!
+[2023-02-23 02:52:57,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.9, 300 sec: 3453.5). Total num frames: 880640. Throughput: 0: 929.8. Samples: 219454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:52:57,111][11306] Avg episode reward: [(0, '4.916')]
+[2023-02-23 02:52:57,117][11625] Saving new best policy, reward=4.916!
+[2023-02-23 02:53:02,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3450.1). Total num frames: 897024. Throughput: 0: 916.1. Samples: 223850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:53:02,114][11306] Avg episode reward: [(0, '4.673')]
+[2023-02-23 02:53:02,126][11639] Updated weights for policy 0, policy_version 220 (0.0017)
+[2023-02-23 02:53:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3477.7). Total num frames: 921600. Throughput: 0: 969.7. Samples: 230542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:53:07,114][11306] Avg episode reward: [(0, '4.492')]
+[2023-02-23 02:53:10,992][11639] Updated weights for policy 0, policy_version 230 (0.0013)
+[2023-02-23 02:53:12,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3489.2). Total num frames: 942080. Throughput: 0: 967.7. Samples: 233998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:53:12,114][11306] Avg episode reward: [(0, '4.737')]
+[2023-02-23 02:53:17,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3485.3). Total num frames: 958464. Throughput: 0: 926.2. Samples: 239142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:53:17,116][11306] Avg episode reward: [(0, '5.027')]
+[2023-02-23 02:53:17,117][11625] Saving new best policy, reward=5.027!
+[2023-02-23 02:53:22,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3481.6). Total num frames: 974848. Throughput: 0: 917.6. Samples: 243464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:53:22,109][11306] Avg episode reward: [(0, '4.766')]
+[2023-02-23 02:53:23,636][11639] Updated weights for policy 0, policy_version 240 (0.0024)
+[2023-02-23 02:53:27,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3492.4). Total num frames: 995328. Throughput: 0: 946.7. Samples: 246898. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:53:27,111][11306] Avg episode reward: [(0, '4.749')]
+[2023-02-23 02:53:32,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3516.9). Total num frames: 1019904. Throughput: 0: 965.2. Samples: 253886. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:53:32,113][11306] Avg episode reward: [(0, '4.974')]
+[2023-02-23 02:53:33,265][11639] Updated weights for policy 0, policy_version 250 (0.0016)
+[2023-02-23 02:53:37,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.9, 300 sec: 3499.0). Total num frames: 1032192. Throughput: 0: 921.1. Samples: 258826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:53:37,116][11306] Avg episode reward: [(0, '4.969')]
+[2023-02-23 02:53:42,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3568.4). Total num frames: 1052672. Throughput: 0: 925.4. Samples: 261096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:53:42,109][11306] Avg episode reward: [(0, '4.985')]
+[2023-02-23 02:53:44,634][11639] Updated weights for policy 0, policy_version 260 (0.0020)
+[2023-02-23 02:53:47,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 1073152. Throughput: 0: 967.9. Samples: 267404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:53:47,109][11306] Avg episode reward: [(0, '5.349')]
+[2023-02-23 02:53:47,170][11625] Saving new best policy, reward=5.349!
+[2023-02-23 02:53:52,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1097728. Throughput: 0: 972.1. Samples: 274288. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:53:52,111][11306] Avg episode reward: [(0, '5.819')]
+[2023-02-23 02:53:52,127][11625] Saving new best policy, reward=5.819!
+[2023-02-23 02:53:55,190][11639] Updated weights for policy 0, policy_version 270 (0.0026)
+[2023-02-23 02:53:57,110][11306] Fps is (10 sec: 3685.6, 60 sec: 3822.8, 300 sec: 3707.2). Total num frames: 1110016. Throughput: 0: 940.9. Samples: 276342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:53:57,115][11306] Avg episode reward: [(0, '5.725')]
+[2023-02-23 02:54:02,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 1126400. Throughput: 0: 925.9. Samples: 280806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:54:02,115][11306] Avg episode reward: [(0, '5.517')]
+[2023-02-23 02:54:05,961][11639] Updated weights for policy 0, policy_version 280 (0.0012)
+[2023-02-23 02:54:07,108][11306] Fps is (10 sec: 4096.8, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1150976. Throughput: 0: 979.5. Samples: 287540. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:54:07,116][11306] Avg episode reward: [(0, '5.791')]
+[2023-02-23 02:54:12,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1171456. Throughput: 0: 980.2. Samples: 291006. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:54:12,112][11306] Avg episode reward: [(0, '5.996')]
+[2023-02-23 02:54:12,128][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000286_1171456.pth...
+[2023-02-23 02:54:12,299][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000068_278528.pth
+[2023-02-23 02:54:12,313][11625] Saving new best policy, reward=5.996!
+[2023-02-23 02:54:17,111][11306] Fps is (10 sec: 3275.7, 60 sec: 3754.5, 300 sec: 3707.2). Total num frames: 1183744. Throughput: 0: 936.6. Samples: 296038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:54:17,119][11306] Avg episode reward: [(0, '6.013')]
+[2023-02-23 02:54:17,211][11639] Updated weights for policy 0, policy_version 290 (0.0015)
+[2023-02-23 02:54:17,206][11625] Saving new best policy, reward=6.013!
+[2023-02-23 02:54:22,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1204224. Throughput: 0: 928.8. Samples: 300624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:54:22,112][11306] Avg episode reward: [(0, '5.760')]
+[2023-02-23 02:54:27,108][11306] Fps is (10 sec: 4097.3, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1224704. Throughput: 0: 952.4. Samples: 303954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:54:27,110][11306] Avg episode reward: [(0, '5.788')]
+[2023-02-23 02:54:27,440][11639] Updated weights for policy 0, policy_version 300 (0.0019)
+[2023-02-23 02:54:32,108][11306] Fps is (10 sec: 4095.8, 60 sec: 3754.6, 300 sec: 3721.1). Total num frames: 1245184. Throughput: 0: 965.3. Samples: 310842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:54:32,116][11306] Avg episode reward: [(0, '5.882')]
+[2023-02-23 02:54:37,114][11306] Fps is (10 sec: 3684.2, 60 sec: 3822.5, 300 sec: 3721.0). Total num frames: 1261568. Throughput: 0: 914.5. Samples: 315448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:54:37,116][11306] Avg episode reward: [(0, '6.296')]
+[2023-02-23 02:54:37,122][11625] Saving new best policy, reward=6.296!
+[2023-02-23 02:54:39,796][11639] Updated weights for policy 0, policy_version 310 (0.0029)
+[2023-02-23 02:54:42,108][11306] Fps is (10 sec: 3277.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1277952. Throughput: 0: 914.7. Samples: 317502. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:54:42,117][11306] Avg episode reward: [(0, '6.315')]
+[2023-02-23 02:54:42,130][11625] Saving new best policy, reward=6.315!
+[2023-02-23 02:54:47,108][11306] Fps is (10 sec: 3688.6, 60 sec: 3754.7, 300 sec: 3707.3). Total num frames: 1298432. Throughput: 0: 957.6. Samples: 323896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:54:47,112][11306] Avg episode reward: [(0, '7.032')]
+[2023-02-23 02:54:47,145][11625] Saving new best policy, reward=7.032!
+[2023-02-23 02:54:48,952][11639] Updated weights for policy 0, policy_version 320 (0.0024)
+[2023-02-23 02:54:52,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1323008. Throughput: 0: 957.6. Samples: 330634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:54:52,110][11306] Avg episode reward: [(0, '8.063')]
+[2023-02-23 02:54:52,123][11625] Saving new best policy, reward=8.063!
+[2023-02-23 02:54:57,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3721.1). Total num frames: 1335296. Throughput: 0: 927.6. Samples: 332750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:54:57,116][11306] Avg episode reward: [(0, '8.957')]
+[2023-02-23 02:54:57,122][11625] Saving new best policy, reward=8.957!
+[2023-02-23 02:55:01,715][11639] Updated weights for policy 0, policy_version 330 (0.0012)
+[2023-02-23 02:55:02,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1351680. Throughput: 0: 909.7. Samples: 336972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:55:02,115][11306] Avg episode reward: [(0, '8.806')]
+[2023-02-23 02:55:07,108][11306] Fps is (10 sec: 4095.9, 60 sec: 3754.6, 300 sec: 3721.1). Total num frames: 1376256. Throughput: 0: 961.5. Samples: 343894. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:55:07,110][11306] Avg episode reward: [(0, '8.145')]
+[2023-02-23 02:55:10,451][11639] Updated weights for policy 0, policy_version 340 (0.0017)
+[2023-02-23 02:55:12,109][11306] Fps is (10 sec: 4505.1, 60 sec: 3754.6, 300 sec: 3735.0). Total num frames: 1396736. Throughput: 0: 965.6. Samples: 347406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:55:12,111][11306] Avg episode reward: [(0, '7.740')]
+[2023-02-23 02:55:17,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3721.1). Total num frames: 1409024. Throughput: 0: 921.2. Samples: 352298. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:55:17,112][11306] Avg episode reward: [(0, '7.836')]
+[2023-02-23 02:55:22,108][11306] Fps is (10 sec: 3277.2, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1429504. Throughput: 0: 926.3. Samples: 357124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:55:22,110][11306] Avg episode reward: [(0, '7.880')]
+[2023-02-23 02:55:22,745][11639] Updated weights for policy 0, policy_version 350 (0.0015)
+[2023-02-23 02:55:27,108][11306] Fps is (10 sec: 4505.8, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 1454080. Throughput: 0: 959.2. Samples: 360666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:55:27,117][11306] Avg episode reward: [(0, '7.656')]
+[2023-02-23 02:55:32,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1470464. Throughput: 0: 970.6. Samples: 367574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:55:32,114][11306] Avg episode reward: [(0, '7.889')]
+[2023-02-23 02:55:32,388][11639] Updated weights for policy 0, policy_version 360 (0.0016)
+[2023-02-23 02:55:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3755.1, 300 sec: 3735.0). Total num frames: 1486848. Throughput: 0: 924.0. Samples: 372212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:55:37,111][11306] Avg episode reward: [(0, '8.185')]
+[2023-02-23 02:55:42,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1507328. Throughput: 0: 926.0. Samples: 374422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:55:42,109][11306] Avg episode reward: [(0, '8.603')]
+[2023-02-23 02:55:43,874][11639] Updated weights for policy 0, policy_version 370 (0.0028)
+[2023-02-23 02:55:47,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1527808. Throughput: 0: 978.6. Samples: 381010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:55:47,109][11306] Avg episode reward: [(0, '9.008')]
+[2023-02-23 02:55:47,111][11625] Saving new best policy, reward=9.008!
+[2023-02-23 02:55:52,108][11306] Fps is (10 sec: 4095.8, 60 sec: 3754.6, 300 sec: 3776.6). Total num frames: 1548288. Throughput: 0: 969.2. Samples: 387510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:55:52,112][11306] Avg episode reward: [(0, '9.523')]
+[2023-02-23 02:55:52,136][11625] Saving new best policy, reward=9.523!
+[2023-02-23 02:55:54,131][11639] Updated weights for policy 0, policy_version 380 (0.0017)
+[2023-02-23 02:55:57,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1564672. Throughput: 0: 935.6. Samples: 389508. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-23 02:55:57,113][11306] Avg episode reward: [(0, '10.340')]
+[2023-02-23 02:55:57,120][11625] Saving new best policy, reward=10.340!
+[2023-02-23 02:56:02,108][11306] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1581056. Throughput: 0: 925.4. Samples: 393940. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:56:02,111][11306] Avg episode reward: [(0, '9.776')]
+[2023-02-23 02:56:05,300][11639] Updated weights for policy 0, policy_version 390 (0.0034)
+[2023-02-23 02:56:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 1605632. Throughput: 0: 974.7. Samples: 400984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:56:07,110][11306] Avg episode reward: [(0, '10.056')]
+[2023-02-23 02:56:12,112][11306] Fps is (10 sec: 4094.2, 60 sec: 3754.5, 300 sec: 3776.6). Total num frames: 1622016. Throughput: 0: 975.7. Samples: 404576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:56:12,116][11306] Avg episode reward: [(0, '9.942')]
+[2023-02-23 02:56:12,132][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000396_1622016.pth...
+[2023-02-23 02:56:12,262][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000175_716800.pth
+[2023-02-23 02:56:16,512][11639] Updated weights for policy 0, policy_version 400 (0.0011)
+[2023-02-23 02:56:17,113][11306] Fps is (10 sec: 3275.1, 60 sec: 3822.6, 300 sec: 3776.6). Total num frames: 1638400. Throughput: 0: 922.7. Samples: 409100. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:56:17,120][11306] Avg episode reward: [(0, '10.573')]
+[2023-02-23 02:56:17,128][11625] Saving new best policy, reward=10.573!
+[2023-02-23 02:56:22,108][11306] Fps is (10 sec: 3688.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1658880. Throughput: 0: 934.4. Samples: 414258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:56:22,116][11306] Avg episode reward: [(0, '10.632')]
+[2023-02-23 02:56:22,132][11625] Saving new best policy, reward=10.632!
+[2023-02-23 02:56:26,427][11639] Updated weights for policy 0, policy_version 410 (0.0022)
+[2023-02-23 02:56:27,108][11306] Fps is (10 sec: 4098.2, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1679360. Throughput: 0: 961.0. Samples: 417666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:56:27,110][11306] Avg episode reward: [(0, '10.859')]
+[2023-02-23 02:56:27,115][11625] Saving new best policy, reward=10.859!
+[2023-02-23 02:56:32,109][11306] Fps is (10 sec: 4095.2, 60 sec: 3822.8, 300 sec: 3790.6). Total num frames: 1699840. Throughput: 0: 959.4. Samples: 424186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:56:32,116][11306] Avg episode reward: [(0, '11.006')]
+[2023-02-23 02:56:32,127][11625] Saving new best policy, reward=11.006!
+[2023-02-23 02:56:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 1712128. Throughput: 0: 910.9. Samples: 428500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:56:37,110][11306] Avg episode reward: [(0, '10.988')]
+[2023-02-23 02:56:38,823][11639] Updated weights for policy 0, policy_version 420 (0.0036)
+[2023-02-23 02:56:42,108][11306] Fps is (10 sec: 3277.4, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1732608. Throughput: 0: 916.8. Samples: 430764. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:56:42,110][11306] Avg episode reward: [(0, '10.667')]
+[2023-02-23 02:56:47,107][11306] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1757184. Throughput: 0: 972.8. Samples: 437714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:56:47,115][11306] Avg episode reward: [(0, '11.042')]
+[2023-02-23 02:56:47,118][11625] Saving new best policy, reward=11.042!
+[2023-02-23 02:56:47,966][11639] Updated weights for policy 0, policy_version 430 (0.0025)
+[2023-02-23 02:56:52,110][11306] Fps is (10 sec: 4095.2, 60 sec: 3754.6, 300 sec: 3790.6). Total num frames: 1773568. Throughput: 0: 952.4. Samples: 443846. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:56:52,112][11306] Avg episode reward: [(0, '11.980')]
+[2023-02-23 02:56:52,127][11625] Saving new best policy, reward=11.980!
+[2023-02-23 02:56:57,111][11306] Fps is (10 sec: 3275.7, 60 sec: 3754.5, 300 sec: 3790.5). Total num frames: 1789952. Throughput: 0: 920.3. Samples: 445988. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+[2023-02-23 02:56:57,116][11306] Avg episode reward: [(0, '12.790')]
+[2023-02-23 02:56:57,118][11625] Saving new best policy, reward=12.790!
+[2023-02-23 02:57:00,465][11639] Updated weights for policy 0, policy_version 440 (0.0024)
+[2023-02-23 02:57:02,108][11306] Fps is (10 sec: 3277.5, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1806336. Throughput: 0: 925.4. Samples: 450736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:57:02,115][11306] Avg episode reward: [(0, '13.472')]
+[2023-02-23 02:57:02,126][11625] Saving new best policy, reward=13.472!
+[2023-02-23 02:57:07,108][11306] Fps is (10 sec: 4097.3, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1830912. Throughput: 0: 964.0. Samples: 457636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:57:07,111][11306] Avg episode reward: [(0, '13.247')]
+[2023-02-23 02:57:09,508][11639] Updated weights for policy 0, policy_version 450 (0.0023)
+[2023-02-23 02:57:12,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.9, 300 sec: 3776.7). Total num frames: 1847296. Throughput: 0: 964.5. Samples: 461068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:57:12,113][11306] Avg episode reward: [(0, '13.593')]
+[2023-02-23 02:57:12,125][11625] Saving new best policy, reward=13.593!
+[2023-02-23 02:57:17,108][11306] Fps is (10 sec: 3276.6, 60 sec: 3755.0, 300 sec: 3776.6). Total num frames: 1863680. Throughput: 0: 914.3. Samples: 465330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:57:17,112][11306] Avg episode reward: [(0, '13.732')]
+[2023-02-23 02:57:17,114][11625] Saving new best policy, reward=13.732!
+[2023-02-23 02:57:22,016][11639] Updated weights for policy 0, policy_version 460 (0.0026)
+[2023-02-23 02:57:22,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1884160. Throughput: 0: 936.6. Samples: 470648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:57:22,110][11306] Avg episode reward: [(0, '14.140')]
+[2023-02-23 02:57:22,122][11625] Saving new best policy, reward=14.140!
+[2023-02-23 02:57:27,108][11306] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1904640. Throughput: 0: 965.0. Samples: 474188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:57:27,114][11306] Avg episode reward: [(0, '14.514')]
+[2023-02-23 02:57:27,122][11625] Saving new best policy, reward=14.514!
+[2023-02-23 02:57:31,751][11639] Updated weights for policy 0, policy_version 470 (0.0017)
+[2023-02-23 02:57:32,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3790.6). Total num frames: 1925120. Throughput: 0: 955.6. Samples: 480718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:57:32,112][11306] Avg episode reward: [(0, '13.937')]
+[2023-02-23 02:57:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 1937408. Throughput: 0: 915.9. Samples: 485062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:57:37,112][11306] Avg episode reward: [(0, '14.303')]
+[2023-02-23 02:57:42,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1961984. Throughput: 0: 926.0. Samples: 487656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:57:42,110][11306] Avg episode reward: [(0, '14.038')]
+[2023-02-23 02:57:42,935][11639] Updated weights for policy 0, policy_version 480 (0.0020)
+[2023-02-23 02:57:47,108][11306] Fps is (10 sec: 4505.3, 60 sec: 3754.6, 300 sec: 3776.6). Total num frames: 1982464. Throughput: 0: 976.5. Samples: 494680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:57:47,111][11306] Avg episode reward: [(0, '14.480')]
+[2023-02-23 02:57:52,113][11306] Fps is (10 sec: 4093.8, 60 sec: 3822.7, 300 sec: 3804.4). Total num frames: 2002944. Throughput: 0: 955.0. Samples: 500616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:57:52,117][11306] Avg episode reward: [(0, '14.620')]
+[2023-02-23 02:57:52,135][11625] Saving new best policy, reward=14.620!
+[2023-02-23 02:57:53,483][11639] Updated weights for policy 0, policy_version 490 (0.0021)
+[2023-02-23 02:57:57,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3790.5). Total num frames: 2015232. Throughput: 0: 925.4. Samples: 502714. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 02:57:57,116][11306] Avg episode reward: [(0, '15.227')]
+[2023-02-23 02:57:57,120][11625] Saving new best policy, reward=15.227!
+[2023-02-23 02:58:02,108][11306] Fps is (10 sec: 3278.6, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2035712. Throughput: 0: 943.5. Samples: 507788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:58:02,110][11306] Avg episode reward: [(0, '16.235')]
+[2023-02-23 02:58:02,122][11625] Saving new best policy, reward=16.235!
+[2023-02-23 02:58:04,484][11639] Updated weights for policy 0, policy_version 500 (0.0015)
+[2023-02-23 02:58:07,107][11306] Fps is (10 sec: 4506.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2060288. Throughput: 0: 976.8. Samples: 514604. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:58:07,110][11306] Avg episode reward: [(0, '15.607')]
+[2023-02-23 02:58:12,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2076672. Throughput: 0: 970.1. Samples: 517842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:58:12,116][11306] Avg episode reward: [(0, '15.436')]
+[2023-02-23 02:58:12,128][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000507_2076672.pth...
+[2023-02-23 02:58:12,263][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000286_1171456.pth
+[2023-02-23 02:58:15,967][11639] Updated weights for policy 0, policy_version 510 (0.0015)
+[2023-02-23 02:58:17,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 2088960. Throughput: 0: 923.6. Samples: 522280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:58:17,114][11306] Avg episode reward: [(0, '15.437')]
+[2023-02-23 02:58:22,109][11306] Fps is (10 sec: 3276.5, 60 sec: 3754.6, 300 sec: 3776.6). Total num frames: 2109440. Throughput: 0: 950.4. Samples: 527830. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:58:22,112][11306] Avg episode reward: [(0, '14.702')]
+[2023-02-23 02:58:25,641][11639] Updated weights for policy 0, policy_version 520 (0.0014)
+[2023-02-23 02:58:27,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3776.6). Total num frames: 2134016. Throughput: 0: 971.3. Samples: 531364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:58:27,117][11306] Avg episode reward: [(0, '15.566')]
+[2023-02-23 02:58:32,108][11306] Fps is (10 sec: 4096.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2150400. Throughput: 0: 954.3. Samples: 537624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:58:32,113][11306] Avg episode reward: [(0, '15.666')]
+[2023-02-23 02:58:37,109][11306] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3776.6). Total num frames: 2166784. Throughput: 0: 917.0. Samples: 541878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:58:37,116][11306] Avg episode reward: [(0, '15.810')]
+[2023-02-23 02:58:38,094][11639] Updated weights for policy 0, policy_version 530 (0.0024)
+[2023-02-23 02:58:42,108][11306] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 2187264. Throughput: 0: 929.7. Samples: 544548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:58:42,111][11306] Avg episode reward: [(0, '16.264')]
+[2023-02-23 02:58:42,126][11625] Saving new best policy, reward=16.264!
+[2023-02-23 02:58:46,846][11639] Updated weights for policy 0, policy_version 540 (0.0019)
+[2023-02-23 02:58:47,108][11306] Fps is (10 sec: 4505.8, 60 sec: 3823.0, 300 sec: 3776.6). Total num frames: 2211840. Throughput: 0: 974.1. Samples: 551622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:58:47,111][11306] Avg episode reward: [(0, '16.331')]
+[2023-02-23 02:58:47,117][11625] Saving new best policy, reward=16.331!
+[2023-02-23 02:58:52,108][11306] Fps is (10 sec: 4096.1, 60 sec: 3755.0, 300 sec: 3790.6). Total num frames: 2228224. Throughput: 0: 947.1. Samples: 557222. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:58:52,110][11306] Avg episode reward: [(0, '17.765')]
+[2023-02-23 02:58:52,117][11625] Saving new best policy, reward=17.765!
+[2023-02-23 02:58:57,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2240512. Throughput: 0: 921.2. Samples: 559294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:58:57,117][11306] Avg episode reward: [(0, '17.396')]
+[2023-02-23 02:58:59,454][11639] Updated weights for policy 0, policy_version 550 (0.0027)
+[2023-02-23 02:59:02,116][11306] Fps is (10 sec: 3683.4, 60 sec: 3822.4, 300 sec: 3776.5). Total num frames: 2265088. Throughput: 0: 944.5. Samples: 564792. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:59:02,118][11306] Avg episode reward: [(0, '17.250')]
+[2023-02-23 02:59:07,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 2285568. Throughput: 0: 976.0. Samples: 571748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 02:59:07,110][11306] Avg episode reward: [(0, '18.062')]
+[2023-02-23 02:59:07,117][11625] Saving new best policy, reward=18.062!
+[2023-02-23 02:59:08,493][11639] Updated weights for policy 0, policy_version 560 (0.0020)
+[2023-02-23 02:59:12,108][11306] Fps is (10 sec: 3689.4, 60 sec: 3754.7, 300 sec: 3790.6). Total num frames: 2301952. Throughput: 0: 962.2. Samples: 574662. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:59:12,115][11306] Avg episode reward: [(0, '18.618')]
+[2023-02-23 02:59:12,130][11625] Saving new best policy, reward=18.618!
+[2023-02-23 02:59:17,108][11306] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3776.6). Total num frames: 2318336. Throughput: 0: 917.1. Samples: 578896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:59:17,113][11306] Avg episode reward: [(0, '18.130')]
+[2023-02-23 02:59:20,654][11639] Updated weights for policy 0, policy_version 570 (0.0019)
+[2023-02-23 02:59:22,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3776.7). Total num frames: 2338816. Throughput: 0: 951.7. Samples: 584706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 02:59:22,110][11306] Avg episode reward: [(0, '19.614')]
+[2023-02-23 02:59:22,119][11625] Saving new best policy, reward=19.614!
+[2023-02-23 02:59:27,108][11306] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2359296. Throughput: 0: 966.4. Samples: 588038. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 02:59:27,110][11306] Avg episode reward: [(0, '19.445')]
+[2023-02-23 02:59:30,954][11639] Updated weights for policy 0, policy_version 580 (0.0017)
+[2023-02-23 02:59:32,112][11306] Fps is (10 sec: 3684.9, 60 sec: 3754.4, 300 sec: 3776.7). Total num frames: 2375680. Throughput: 0: 942.3. Samples: 594030. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:59:32,114][11306] Avg episode reward: [(0, '19.741')]
+[2023-02-23 02:59:32,125][11625] Saving new best policy, reward=19.741!
+[2023-02-23 02:59:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2392064. Throughput: 0: 915.7. Samples: 598428. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:59:37,110][11306] Avg episode reward: [(0, '19.654')]
+[2023-02-23 02:59:42,090][11639] Updated weights for policy 0, policy_version 590 (0.0025)
+[2023-02-23 02:59:42,108][11306] Fps is (10 sec: 4097.7, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 2416640. Throughput: 0: 934.8. Samples: 601362. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:59:42,115][11306] Avg episode reward: [(0, '18.304')]
+[2023-02-23 02:59:47,108][11306] Fps is (10 sec: 4915.1, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2441216. Throughput: 0: 975.5. Samples: 608680. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 02:59:47,114][11306] Avg episode reward: [(0, '17.886')]
+[2023-02-23 02:59:51,974][11639] Updated weights for policy 0, policy_version 600 (0.0031)
+[2023-02-23 02:59:52,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2457600. Throughput: 0: 946.4. Samples: 614336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 02:59:52,113][11306] Avg episode reward: [(0, '18.348')]
+[2023-02-23 02:59:57,111][11306] Fps is (10 sec: 2866.3, 60 sec: 3822.7, 300 sec: 3790.5). Total num frames: 2469888. Throughput: 0: 930.5. Samples: 616538. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 02:59:57,114][11306] Avg episode reward: [(0, '17.902')]
+[2023-02-23 03:00:02,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3823.5, 300 sec: 3790.5). Total num frames: 2494464. Throughput: 0: 963.7. Samples: 622264. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 03:00:02,109][11306] Avg episode reward: [(0, '17.193')]
+[2023-02-23 03:00:02,746][11639] Updated weights for policy 0, policy_version 610 (0.0016)
+[2023-02-23 03:00:07,108][11306] Fps is (10 sec: 4916.8, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2519040. Throughput: 0: 996.9. Samples: 629566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 03:00:07,115][11306] Avg episode reward: [(0, '17.746')]
+[2023-02-23 03:00:12,115][11306] Fps is (10 sec: 4092.9, 60 sec: 3890.7, 300 sec: 3818.2). Total num frames: 2535424. Throughput: 0: 984.2. Samples: 632334. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 03:00:12,118][11306] Avg episode reward: [(0, '17.601')]
+[2023-02-23 03:00:12,136][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000619_2535424.pth...
+[2023-02-23 03:00:12,270][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000396_1622016.pth
+[2023-02-23 03:00:13,541][11639] Updated weights for policy 0, policy_version 620 (0.0029)
+[2023-02-23 03:00:17,107][11306] Fps is (10 sec: 2867.3, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 2547712. Throughput: 0: 949.6. Samples: 636756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-23 03:00:17,114][11306] Avg episode reward: [(0, '18.872')]
+[2023-02-23 03:00:22,108][11306] Fps is (10 sec: 3689.2, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 2572288. Throughput: 0: 997.6. Samples: 643322. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-23 03:00:22,110][11306] Avg episode reward: [(0, '18.246')]
+[2023-02-23 03:00:23,046][11639] Updated weights for policy 0, policy_version 630 (0.0020)
+[2023-02-23 03:00:27,108][11306] Fps is (10 sec: 4915.1, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 2596864. Throughput: 0: 1013.1. Samples: 646952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 03:00:27,110][11306] Avg episode reward: [(0, '19.033')]
+[2023-02-23 03:00:32,110][11306] Fps is (10 sec: 4094.8, 60 sec: 3959.6, 300 sec: 3818.3). Total num frames: 2613248. Throughput: 0: 976.1. Samples: 652606. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-23 03:00:32,115][11306] Avg episode reward: [(0, '19.875')]
+[2023-02-23 03:00:32,129][11625] Saving new best policy, reward=19.875!
+[2023-02-23 03:00:34,497][11639] Updated weights for policy 0, policy_version 640 (0.0014)
+[2023-02-23 03:00:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3804.4). Total num frames: 2629632. Throughput: 0: 953.7. Samples: 657252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 03:00:37,114][11306] Avg episode reward: [(0, '19.619')]
+[2023-02-23 03:00:42,108][11306] Fps is (10 sec: 4097.1, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 2654208. Throughput: 0: 981.1. Samples: 660682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 03:00:42,115][11306] Avg episode reward: [(0, '20.792')]
+[2023-02-23 03:00:42,127][11625] Saving new best policy, reward=20.792!
+[2023-02-23 03:00:43,788][11639] Updated weights for policy 0, policy_version 650 (0.0012)
+[2023-02-23 03:00:47,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2674688. Throughput: 0: 1011.7. Samples: 667792. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-23 03:00:47,114][11306] Avg episode reward: [(0, '21.660')]
+[2023-02-23 03:00:47,117][11625] Saving new best policy, reward=21.660!
+[2023-02-23 03:00:52,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2691072. Throughput: 0: 963.9. Samples: 672942. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 03:00:52,111][11306] Avg episode reward: [(0, '21.863')]
+[2023-02-23 03:00:52,120][11625] Saving new best policy, reward=21.863!
+[2023-02-23 03:00:55,635][11639] Updated weights for policy 0, policy_version 660 (0.0030)
+[2023-02-23 03:00:57,108][11306] Fps is (10 sec: 3276.9, 60 sec: 3959.7, 300 sec: 3818.3). Total num frames: 2707456. Throughput: 0: 951.2. Samples: 675130. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-23 03:00:57,109][11306] Avg episode reward: [(0, '21.488')]
+[2023-02-23 03:01:02,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 2732032. Throughput: 0: 988.4. Samples: 681236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 03:01:02,110][11306] Avg episode reward: [(0, '22.613')]
+[2023-02-23 03:01:02,123][11625] Saving new best policy, reward=22.613!
+[2023-02-23 03:01:05,043][11639] Updated weights for policy 0, policy_version 670 (0.0023)
+[2023-02-23 03:01:07,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2752512. Throughput: 0: 992.0. Samples: 687962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 03:01:07,112][11306] Avg episode reward: [(0, '20.567')]
+[2023-02-23 03:01:12,109][11306] Fps is (10 sec: 3276.5, 60 sec: 3823.3, 300 sec: 3818.4). Total num frames: 2764800. Throughput: 0: 959.8. Samples: 690142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 03:01:12,115][11306] Avg episode reward: [(0, '20.493')]
+[2023-02-23 03:01:17,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2781184. Throughput: 0: 929.1. Samples: 694412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 03:01:17,110][11306] Avg episode reward: [(0, '21.776')]
+[2023-02-23 03:01:17,714][11639] Updated weights for policy 0, policy_version 680 (0.0022)
+[2023-02-23 03:01:22,108][11306] Fps is (10 sec: 4096.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2805760. Throughput: 0: 972.5. Samples: 701016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 03:01:22,110][11306] Avg episode reward: [(0, '21.877')]
+[2023-02-23 03:01:26,404][11639] Updated weights for policy 0, policy_version 690 (0.0017)
+[2023-02-23 03:01:27,108][11306] Fps is (10 sec: 4505.3, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2826240. Throughput: 0: 977.0. Samples: 704646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 03:01:27,115][11306] Avg episode reward: [(0, '20.151')]
+[2023-02-23 03:01:32,114][11306] Fps is (10 sec: 3684.0, 60 sec: 3822.7, 300 sec: 3832.1). Total num frames: 2842624. Throughput: 0: 937.7. Samples: 709994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 03:01:32,123][11306] Avg episode reward: [(0, '21.069')]
+[2023-02-23 03:01:37,108][11306] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2859008. Throughput: 0: 932.1. Samples: 714888. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 03:01:37,115][11306] Avg episode reward: [(0, '20.497')]
+[2023-02-23 03:01:38,322][11639] Updated weights for policy 0, policy_version 700 (0.0016)
+[2023-02-23 03:01:42,108][11306] Fps is (10 sec: 4098.6, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2883584. Throughput: 0: 962.8. Samples: 718454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-23 03:01:42,110][11306] Avg episode reward: [(0, '18.757')]
+[2023-02-23 03:01:47,115][11306] Fps is (10 sec: 4502.4, 60 sec: 3822.5, 300 sec: 3832.1). Total num frames: 2904064. Throughput: 0: 986.6. Samples: 725642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 03:01:47,117][11306] Avg episode reward: [(0, '19.347')]
+[2023-02-23 03:01:47,438][11639] Updated weights for policy 0, policy_version 710 (0.0019)
+[2023-02-23 03:01:52,115][11306] Fps is (10 sec: 3683.8, 60 sec: 3822.5, 300 sec: 3832.1). Total num frames: 2920448. Throughput: 0: 943.5. Samples: 730424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 03:01:52,120][11306] Avg episode reward: [(0, '19.637')]
+[2023-02-23 03:01:57,108][11306] Fps is (10 sec: 3279.1, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2936832. Throughput: 0: 946.7. Samples: 732742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-23 03:01:57,116][11306] Avg episode reward: [(0, '19.938')]
+[2023-02-23 03:01:59,051][11639] Updated weights for policy 0, policy_version 720 (0.0013)
+[2023-02-23 03:02:02,108][11306] Fps is (10 sec: 4098.9, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2961408. Throughput: 0: 991.9. Samples: 739048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-23 03:02:02,115][11306] Avg episode reward: [(0, '20.171')]
+[2023-02-23 03:02:07,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2981888. Throughput: 0: 996.8. Samples: 745872.
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-23 03:02:07,116][11306] Avg episode reward: [(0, '21.231')] +[2023-02-23 03:02:09,297][11639] Updated weights for policy 0, policy_version 730 (0.0024) +[2023-02-23 03:02:12,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3846.1). Total num frames: 2998272. Throughput: 0: 963.0. Samples: 747980. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-23 03:02:12,115][11306] Avg episode reward: [(0, '21.599')] +[2023-02-23 03:02:12,126][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000732_2998272.pth... +[2023-02-23 03:02:12,281][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000507_2076672.pth +[2023-02-23 03:02:17,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3014656. Throughput: 0: 942.9. Samples: 752420. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:02:17,113][11306] Avg episode reward: [(0, '21.066')] +[2023-02-23 03:02:20,267][11639] Updated weights for policy 0, policy_version 740 (0.0022) +[2023-02-23 03:02:22,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3039232. Throughput: 0: 990.4. Samples: 759454. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:02:22,113][11306] Avg episode reward: [(0, '20.078')] +[2023-02-23 03:02:27,108][11306] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3059712. Throughput: 0: 990.8. Samples: 763040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-23 03:02:27,111][11306] Avg episode reward: [(0, '19.600')] +[2023-02-23 03:02:31,332][11639] Updated weights for policy 0, policy_version 750 (0.0020) +[2023-02-23 03:02:32,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3823.3, 300 sec: 3846.1). Total num frames: 3072000. Throughput: 0: 935.3. Samples: 767724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:02:32,111][11306] Avg episode reward: [(0, '20.218')] +[2023-02-23 03:02:37,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3092480. Throughput: 0: 946.1. Samples: 772990. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:02:37,116][11306] Avg episode reward: [(0, '19.662')] +[2023-02-23 03:02:41,168][11639] Updated weights for policy 0, policy_version 760 (0.0026) +[2023-02-23 03:02:42,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3117056. Throughput: 0: 974.7. Samples: 776604. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:02:42,115][11306] Avg episode reward: [(0, '19.637')] +[2023-02-23 03:02:47,107][11306] Fps is (10 sec: 4505.7, 60 sec: 3891.7, 300 sec: 3846.1). Total num frames: 3137536. Throughput: 0: 989.5. Samples: 783576. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-23 03:02:47,112][11306] Avg episode reward: [(0, '20.197')] +[2023-02-23 03:02:52,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3823.4, 300 sec: 3846.1). Total num frames: 3149824. Throughput: 0: 939.6. Samples: 788152. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-23 03:02:52,114][11306] Avg episode reward: [(0, '18.991')] +[2023-02-23 03:02:52,510][11639] Updated weights for policy 0, policy_version 770 (0.0021) +[2023-02-23 03:02:57,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3170304. Throughput: 0: 942.4. Samples: 790386. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:02:57,110][11306] Avg episode reward: [(0, '18.942')] +[2023-02-23 03:03:01,632][11639] Updated weights for policy 0, policy_version 780 (0.0016) +[2023-02-23 03:03:02,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3194880. Throughput: 0: 1001.2. Samples: 797476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:03:02,110][11306] Avg episode reward: [(0, '20.298')] +[2023-02-23 03:03:07,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3215360. Throughput: 0: 984.9. Samples: 803774. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-23 03:03:07,117][11306] Avg episode reward: [(0, '20.733')] +[2023-02-23 03:03:12,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3227648. Throughput: 0: 955.6. Samples: 806042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:03:12,112][11306] Avg episode reward: [(0, '21.010')] +[2023-02-23 03:03:13,662][11639] Updated weights for policy 0, policy_version 790 (0.0015) +[2023-02-23 03:03:17,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3248128. Throughput: 0: 962.1. Samples: 811020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:03:17,113][11306] Avg episode reward: [(0, '22.622')] +[2023-02-23 03:03:17,120][11625] Saving new best policy, reward=22.622! +[2023-02-23 03:03:22,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3272704. Throughput: 0: 1002.7. Samples: 818112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:03:22,110][11306] Avg episode reward: [(0, '21.351')] +[2023-02-23 03:03:22,585][11639] Updated weights for policy 0, policy_version 800 (0.0016) +[2023-02-23 03:03:27,107][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3293184. Throughput: 0: 1004.0. Samples: 821784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:03:27,115][11306] Avg episode reward: [(0, '21.023')] +[2023-02-23 03:03:32,109][11306] Fps is (10 sec: 3276.4, 60 sec: 3891.1, 300 sec: 3859.9). Total num frames: 3305472. Throughput: 0: 949.3. Samples: 826294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:03:32,112][11306] Avg episode reward: [(0, '21.154')] +[2023-02-23 03:03:34,546][11639] Updated weights for policy 0, policy_version 810 (0.0015) +[2023-02-23 03:03:37,107][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3330048. Throughput: 0: 973.2. Samples: 831948. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-23 03:03:37,110][11306] Avg episode reward: [(0, '21.427')] +[2023-02-23 03:03:42,108][11306] Fps is (10 sec: 4506.3, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3350528. Throughput: 0: 1005.8. Samples: 835646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:03:42,109][11306] Avg episode reward: [(0, '20.992')] +[2023-02-23 03:03:43,118][11639] Updated weights for policy 0, policy_version 820 (0.0016) +[2023-02-23 03:03:47,108][11306] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 3371008. Throughput: 0: 996.1. Samples: 842302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:03:47,114][11306] Avg episode reward: [(0, '21.712')] +[2023-02-23 03:03:52,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3387392. Throughput: 0: 957.4. Samples: 846858. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-23 03:03:52,116][11306] Avg episode reward: [(0, '24.217')] +[2023-02-23 03:03:52,128][11625] Saving new best policy, reward=24.217! +[2023-02-23 03:03:55,088][11639] Updated weights for policy 0, policy_version 830 (0.0022) +[2023-02-23 03:03:57,107][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3874.0). Total num frames: 3407872. Throughput: 0: 964.0. Samples: 849422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:03:57,114][11306] Avg episode reward: [(0, '25.418')] +[2023-02-23 03:03:57,122][11625] Saving new best policy, reward=25.418! +[2023-02-23 03:04:02,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3432448. Throughput: 0: 1011.8. Samples: 856552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:02,116][11306] Avg episode reward: [(0, '25.409')] +[2023-02-23 03:04:03,786][11639] Updated weights for policy 0, policy_version 840 (0.0014) +[2023-02-23 03:04:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3448832. Throughput: 0: 986.4. Samples: 862500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-23 03:04:07,117][11306] Avg episode reward: [(0, '26.474')] +[2023-02-23 03:04:07,123][11625] Saving new best policy, reward=26.474! +[2023-02-23 03:04:12,108][11306] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3465216. Throughput: 0: 954.3. Samples: 864728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-23 03:04:12,114][11306] Avg episode reward: [(0, '26.589')] +[2023-02-23 03:04:12,135][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000846_3465216.pth... +[2023-02-23 03:04:12,296][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000619_2535424.pth +[2023-02-23 03:04:12,332][11625] Saving new best policy, reward=26.589! +[2023-02-23 03:04:16,032][11639] Updated weights for policy 0, policy_version 850 (0.0026) +[2023-02-23 03:04:17,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3485696. Throughput: 0: 971.1. Samples: 869994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:17,114][11306] Avg episode reward: [(0, '26.143')] +[2023-02-23 03:04:22,108][11306] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3510272. Throughput: 0: 1005.3. Samples: 877188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:22,115][11306] Avg episode reward: [(0, '25.596')] +[2023-02-23 03:04:25,242][11639] Updated weights for policy 0, policy_version 860 (0.0021) +[2023-02-23 03:04:27,107][11306] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3901.7). Total num frames: 3526656. Throughput: 0: 995.4. Samples: 880440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:04:27,117][11306] Avg episode reward: [(0, '25.139')] +[2023-02-23 03:04:32,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3959.6, 300 sec: 3901.6). Total num frames: 3543040. Throughput: 0: 948.0. Samples: 884962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:04:32,113][11306] Avg episode reward: [(0, '25.706')] +[2023-02-23 03:04:36,642][11639] Updated weights for policy 0, policy_version 870 (0.0019) +[2023-02-23 03:04:37,108][11306] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3563520. Throughput: 0: 979.2. Samples: 890924. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:04:37,113][11306] Avg episode reward: [(0, '23.890')] +[2023-02-23 03:04:42,108][11306] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3588096. Throughput: 0: 1004.0. Samples: 894602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:42,113][11306] Avg episode reward: [(0, '23.122')] +[2023-02-23 03:04:46,419][11639] Updated weights for policy 0, policy_version 880 (0.0012) +[2023-02-23 03:04:47,116][11306] Fps is (10 sec: 4092.9, 60 sec: 3890.7, 300 sec: 3887.6). Total num frames: 3604480. Throughput: 0: 983.6. Samples: 900822. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:47,118][11306] Avg episode reward: [(0, '24.426')] +[2023-02-23 03:04:52,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3901.7). Total num frames: 3620864. Throughput: 0: 950.6. Samples: 905278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:04:52,112][11306] Avg episode reward: [(0, '23.705')] +[2023-02-23 03:04:57,107][11306] Fps is (10 sec: 3689.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3641344. Throughput: 0: 966.5. Samples: 908220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:04:57,116][11306] Avg episode reward: [(0, '23.383')] +[2023-02-23 03:04:57,152][11639] Updated weights for policy 0, policy_version 890 (0.0030) +[2023-02-23 03:05:02,108][11306] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3665920. Throughput: 0: 1012.6. Samples: 915562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-23 03:05:02,115][11306] Avg episode reward: [(0, '24.645')] +[2023-02-23 03:05:07,107][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 3682304. Throughput: 0: 973.9. Samples: 921014. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-23 03:05:07,111][11306] Avg episode reward: [(0, '24.293')] +[2023-02-23 03:05:07,538][11639] Updated weights for policy 0, policy_version 900 (0.0012) +[2023-02-23 03:05:12,108][11306] Fps is (10 sec: 3276.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3698688. Throughput: 0: 952.9. Samples: 923322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-23 03:05:12,113][11306] Avg episode reward: [(0, '25.905')] +[2023-02-23 03:05:17,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3723264. Throughput: 0: 982.2. Samples: 929162. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-23 03:05:17,110][11306] Avg episode reward: [(0, '24.567')] +[2023-02-23 03:05:17,822][11639] Updated weights for policy 0, policy_version 910 (0.0026) +[2023-02-23 03:05:22,108][11306] Fps is (10 sec: 4506.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3743744. Throughput: 0: 1009.5. Samples: 936352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:05:22,115][11306] Avg episode reward: [(0, '24.206')] +[2023-02-23 03:05:27,107][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 3760128. Throughput: 0: 989.4. Samples: 939126. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:05:27,110][11306] Avg episode reward: [(0, '26.041')] +[2023-02-23 03:05:28,711][11639] Updated weights for policy 0, policy_version 920 (0.0011) +[2023-02-23 03:05:32,108][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3776512. Throughput: 0: 953.7. Samples: 943732. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:05:32,114][11306] Avg episode reward: [(0, '25.905')] +[2023-02-23 03:05:37,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3801088. Throughput: 0: 995.9. Samples: 950094. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-23 03:05:37,114][11306] Avg episode reward: [(0, '27.600')] +[2023-02-23 03:05:37,117][11625] Saving new best policy, reward=27.600! +[2023-02-23 03:05:38,492][11639] Updated weights for policy 0, policy_version 930 (0.0018) +[2023-02-23 03:05:42,108][11306] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3825664. Throughput: 0: 1009.8. Samples: 953662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:05:42,111][11306] Avg episode reward: [(0, '28.253')] +[2023-02-23 03:05:42,126][11625] Saving new best policy, reward=28.253! +[2023-02-23 03:05:47,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.7, 300 sec: 3887.7). Total num frames: 3837952. Throughput: 0: 975.4. Samples: 959456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:05:47,115][11306] Avg episode reward: [(0, '28.127')] +[2023-02-23 03:05:50,108][11639] Updated weights for policy 0, policy_version 940 (0.0014) +[2023-02-23 03:05:52,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3854336. Throughput: 0: 955.8. Samples: 964026. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-23 03:05:52,115][11306] Avg episode reward: [(0, '29.751')] +[2023-02-23 03:05:52,129][11625] Saving new best policy, reward=29.751! +[2023-02-23 03:05:57,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3878912. Throughput: 0: 977.9. Samples: 967326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:05:57,114][11306] Avg episode reward: [(0, '29.877')] +[2023-02-23 03:05:57,120][11625] Saving new best policy, reward=29.877! +[2023-02-23 03:05:59,080][11639] Updated weights for policy 0, policy_version 950 (0.0018) +[2023-02-23 03:06:02,108][11306] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3903488. Throughput: 0: 1009.5. Samples: 974588. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-23 03:06:02,113][11306] Avg episode reward: [(0, '29.682')] +[2023-02-23 03:06:07,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3919872. Throughput: 0: 967.6. Samples: 979896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:06:07,111][11306] Avg episode reward: [(0, '28.126')] +[2023-02-23 03:06:11,140][11639] Updated weights for policy 0, policy_version 960 (0.0026) +[2023-02-23 03:06:12,108][11306] Fps is (10 sec: 2867.2, 60 sec: 3891.3, 300 sec: 3901.6). Total num frames: 3932160. Throughput: 0: 955.9. Samples: 982140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:06:12,114][11306] Avg episode reward: [(0, '27.667')] +[2023-02-23 03:06:12,124][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000961_3936256.pth... +[2023-02-23 03:06:12,235][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000732_2998272.pth +[2023-02-23 03:06:17,108][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3956736. Throughput: 0: 988.6. Samples: 988220. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:06:17,111][11306] Avg episode reward: [(0, '25.817')] +[2023-02-23 03:06:19,966][11639] Updated weights for policy 0, policy_version 970 (0.0013) +[2023-02-23 03:06:22,108][11306] Fps is (10 sec: 4915.0, 60 sec: 3959.4, 300 sec: 3915.5). Total num frames: 3981312. Throughput: 0: 1007.1. Samples: 995414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:06:22,116][11306] Avg episode reward: [(0, '24.938')] +[2023-02-23 03:06:27,108][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.6). Total num frames: 3997696. Throughput: 0: 982.2. Samples: 997862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-23 03:06:27,114][11306] Avg episode reward: [(0, '25.349')] +[2023-02-23 03:06:29,570][11625] Stopping Batcher_0... +[2023-02-23 03:06:29,570][11625] Loop batcher_evt_loop terminating... +[2023-02-23 03:06:29,572][11306] Component Batcher_0 stopped! +[2023-02-23 03:06:29,590][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2023-02-23 03:06:29,635][11639] Weights refcount: 2 0 +[2023-02-23 03:06:29,646][11306] Component InferenceWorker_p0-w0 stopped! +[2023-02-23 03:06:29,651][11639] Stopping InferenceWorker_p0-w0... +[2023-02-23 03:06:29,652][11639] Loop inference_proc0-0_evt_loop terminating... +[2023-02-23 03:06:29,659][11647] Stopping RolloutWorker_w7... +[2023-02-23 03:06:29,660][11306] Component RolloutWorker_w7 stopped! +[2023-02-23 03:06:29,685][11647] Loop rollout_proc7_evt_loop terminating... +[2023-02-23 03:06:29,706][11306] Component RolloutWorker_w0 stopped! +[2023-02-23 03:06:29,708][11646] Stopping RolloutWorker_w3... +[2023-02-23 03:06:29,711][11306] Component RolloutWorker_w3 stopped! +[2023-02-23 03:06:29,716][11640] Stopping RolloutWorker_w0... +[2023-02-23 03:06:29,717][11640] Loop rollout_proc0_evt_loop terminating... +[2023-02-23 03:06:29,726][11641] Stopping RolloutWorker_w1... +[2023-02-23 03:06:29,726][11306] Component RolloutWorker_w1 stopped! +[2023-02-23 03:06:29,712][11646] Loop rollout_proc3_evt_loop terminating... +[2023-02-23 03:06:29,726][11641] Loop rollout_proc1_evt_loop terminating... +[2023-02-23 03:06:29,758][11306] Component RolloutWorker_w4 stopped! +[2023-02-23 03:06:29,762][11643] Stopping RolloutWorker_w4... +[2023-02-23 03:06:29,763][11643] Loop rollout_proc4_evt_loop terminating... +[2023-02-23 03:06:29,775][11644] Stopping RolloutWorker_w5... +[2023-02-23 03:06:29,775][11644] Loop rollout_proc5_evt_loop terminating... +[2023-02-23 03:06:29,775][11306] Component RolloutWorker_w5 stopped! +[2023-02-23 03:06:29,797][11306] Component RolloutWorker_w6 stopped! +[2023-02-23 03:06:29,801][11645] Stopping RolloutWorker_w6... +[2023-02-23 03:06:29,801][11645] Loop rollout_proc6_evt_loop terminating... +[2023-02-23 03:06:29,830][11625] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000846_3465216.pth +[2023-02-23 03:06:29,832][11306] Component RolloutWorker_w2 stopped! +[2023-02-23 03:06:29,837][11642] Stopping RolloutWorker_w2... +[2023-02-23 03:06:29,838][11642] Loop rollout_proc2_evt_loop terminating... +[2023-02-23 03:06:29,860][11625] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2023-02-23 03:06:30,199][11625] Stopping LearnerWorker_p0... +[2023-02-23 03:06:30,200][11625] Loop learner_proc0_evt_loop terminating... +[2023-02-23 03:06:30,199][11306] Component LearnerWorker_p0 stopped! 
+[2023-02-23 03:06:30,203][11306] Waiting for process learner_proc0 to stop...
+[2023-02-23 03:06:32,312][11306] Waiting for process inference_proc0-0 to join...
+[2023-02-23 03:06:32,772][11306] Waiting for process rollout_proc0 to join...
+[2023-02-23 03:06:32,774][11306] Waiting for process rollout_proc1 to join...
+[2023-02-23 03:06:33,069][11306] Waiting for process rollout_proc2 to join...
+[2023-02-23 03:06:33,071][11306] Waiting for process rollout_proc3 to join...
+[2023-02-23 03:06:33,072][11306] Waiting for process rollout_proc4 to join...
+[2023-02-23 03:06:33,073][11306] Waiting for process rollout_proc5 to join...
+[2023-02-23 03:06:33,076][11306] Waiting for process rollout_proc6 to join...
+[2023-02-23 03:06:33,077][11306] Waiting for process rollout_proc7 to join...
+[2023-02-23 03:06:33,079][11306] Batcher 0 profile tree view:
+batching: 25.9267, releasing_batches: 0.0263
+[2023-02-23 03:06:33,081][11306] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0148
+ wait_policy_total: 519.4592
+update_model: 7.3728
+ weight_update: 0.0016
+one_step: 0.0056
+ handle_policy_step: 498.5726
+ deserialize: 14.8475, stack: 2.8204, obs_to_device_normalize: 112.0643, forward: 237.8206, send_messages: 25.5025
+ prepare_outputs: 81.0395
+ to_cpu: 51.0453
+[2023-02-23 03:06:33,083][11306] Learner 0 profile tree view:
+misc: 0.0062, prepare_batch: 17.4333
+train: 75.4588
+ epoch_init: 0.0065, minibatch_init: 0.0124, losses_postprocess: 0.6580, kl_divergence: 0.5962, after_optimizer: 32.6432
+ calculate_losses: 26.5689
+ losses_init: 0.0074, forward_head: 1.7279, bptt_initial: 17.5859, tail: 1.0301, advantages_returns: 0.2981, losses: 3.4035
+ bptt: 2.2014
+ bptt_forward_core: 2.1301
+ update: 14.2825
+ clip: 1.4519
+[2023-02-23 03:06:33,086][11306] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3066, enqueue_policy_requests: 139.6339, env_step: 803.1146, overhead: 19.8892, complete_rollouts: 6.6275
+save_policy_outputs: 19.2569
+ split_output_tensors: 9.1538
+[2023-02-23 03:06:33,088][11306] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3534, enqueue_policy_requests: 141.0261, env_step: 802.2677, overhead: 19.0204, complete_rollouts: 6.2990
+save_policy_outputs: 19.2686
+ split_output_tensors: 9.4728
+[2023-02-23 03:06:33,093][11306] Loop Runner_EvtLoop terminating...
+[2023-02-23 03:06:33,095][11306] Runner profile tree view:
+main_loop: 1096.3166
+[2023-02-23 03:06:33,097][11306] Collected {0: 4005888}, FPS: 3654.0
+[2023-02-23 03:06:33,272][11306] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-23 03:06:33,273][11306] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-23 03:06:33,276][11306] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-23 03:06:33,278][11306] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-23 03:06:33,279][11306] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-23 03:06:33,283][11306] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-23 03:06:33,285][11306] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-23 03:06:33,286][11306] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-23 03:06:33,287][11306] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-23 03:06:33,288][11306] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-23 03:06:33,289][11306] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-23 03:06:33,290][11306] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-23 03:06:33,297][11306] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-23 03:06:33,298][11306] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-23 03:06:33,299][11306] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-23 03:06:33,326][11306] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-23 03:06:33,329][11306] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-23 03:06:33,331][11306] RunningMeanStd input shape: (1,)
+[2023-02-23 03:06:33,354][11306] ConvEncoder: input_channels=3
+[2023-02-23 03:06:34,071][11306] Conv encoder output size: 512
+[2023-02-23 03:06:34,073][11306] Policy head output size: 512
+[2023-02-23 03:06:36,492][11306] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-23 03:06:37,762][11306] Num frames 100...
+[2023-02-23 03:06:37,874][11306] Num frames 200...
+[2023-02-23 03:06:37,985][11306] Num frames 300...
+[2023-02-23 03:06:38,098][11306] Num frames 400...
+[2023-02-23 03:06:38,245][11306] Avg episode rewards: #0: 7.800, true rewards: #0: 4.800
+[2023-02-23 03:06:38,247][11306] Avg episode reward: 7.800, avg true_objective: 4.800
+[2023-02-23 03:06:38,275][11306] Num frames 500...
+[2023-02-23 03:06:38,392][11306] Num frames 600...
+[2023-02-23 03:06:38,510][11306] Num frames 700...
+[2023-02-23 03:06:38,621][11306] Num frames 800...
+[2023-02-23 03:06:38,736][11306] Num frames 900...
+[2023-02-23 03:06:38,849][11306] Num frames 1000...
+[2023-02-23 03:06:38,964][11306] Num frames 1100...
+[2023-02-23 03:06:39,073][11306] Num frames 1200...
+[2023-02-23 03:06:39,194][11306] Num frames 1300...
+[2023-02-23 03:06:39,263][11306] Avg episode rewards: #0: 14.060, true rewards: #0: 6.560
+[2023-02-23 03:06:39,265][11306] Avg episode reward: 14.060, avg true_objective: 6.560
+[2023-02-23 03:06:39,367][11306] Num frames 1400...
+[2023-02-23 03:06:39,488][11306] Num frames 1500...
+[2023-02-23 03:06:39,614][11306] Num frames 1600...
+[2023-02-23 03:06:39,735][11306] Num frames 1700...
+[2023-02-23 03:06:39,845][11306] Num frames 1800...
+[2023-02-23 03:06:39,956][11306] Num frames 1900...
+[2023-02-23 03:06:40,072][11306] Num frames 2000...
+[2023-02-23 03:06:40,183][11306] Num frames 2100...
+[2023-02-23 03:06:40,303][11306] Num frames 2200...
+[2023-02-23 03:06:40,418][11306] Num frames 2300...
+[2023-02-23 03:06:40,519][11306] Avg episode rewards: #0: 16.787, true rewards: #0: 7.787
+[2023-02-23 03:06:40,522][11306] Avg episode reward: 16.787, avg true_objective: 7.787
+[2023-02-23 03:06:40,602][11306] Num frames 2400...
+[2023-02-23 03:06:40,719][11306] Num frames 2500...
+[2023-02-23 03:06:40,828][11306] Num frames 2600...
+[2023-02-23 03:06:40,938][11306] Num frames 2700...
+[2023-02-23 03:06:41,047][11306] Num frames 2800...
+[2023-02-23 03:06:41,164][11306] Num frames 2900...
+[2023-02-23 03:06:41,277][11306] Num frames 3000...
+[2023-02-23 03:06:41,396][11306] Num frames 3100...
+[2023-02-23 03:06:41,536][11306] Num frames 3200...
+[2023-02-23 03:06:41,710][11306] Num frames 3300...
+[2023-02-23 03:06:41,873][11306] Num frames 3400...
+[2023-02-23 03:06:42,029][11306] Num frames 3500...
+[2023-02-23 03:06:42,194][11306] Num frames 3600...
+[2023-02-23 03:06:42,357][11306] Num frames 3700...
+[2023-02-23 03:06:42,517][11306] Num frames 3800...
+[2023-02-23 03:06:42,681][11306] Num frames 3900...
+[2023-02-23 03:06:42,842][11306] Num frames 4000...
+[2023-02-23 03:06:43,004][11306] Num frames 4100...
+[2023-02-23 03:06:43,168][11306] Num frames 4200...
+[2023-02-23 03:06:43,329][11306] Num frames 4300...
+[2023-02-23 03:06:43,490][11306] Num frames 4400...
+[2023-02-23 03:06:43,601][11306] Avg episode rewards: #0: 28.340, true rewards: #0: 11.090
+[2023-02-23 03:06:43,603][11306] Avg episode reward: 28.340, avg true_objective: 11.090
+[2023-02-23 03:06:43,724][11306] Num frames 4500...
+[2023-02-23 03:06:43,885][11306] Num frames 4600...
+[2023-02-23 03:06:44,051][11306] Num frames 4700...
+[2023-02-23 03:06:44,222][11306] Num frames 4800...
+[2023-02-23 03:06:44,394][11306] Num frames 4900...
+[2023-02-23 03:06:44,558][11306] Num frames 5000...
+[2023-02-23 03:06:44,721][11306] Num frames 5100...
+[2023-02-23 03:06:44,903][11306] Avg episode rewards: #0: 25.944, true rewards: #0: 10.344
+[2023-02-23 03:06:44,905][11306] Avg episode reward: 25.944, avg true_objective: 10.344
+[2023-02-23 03:06:44,955][11306] Num frames 5200...
+[2023-02-23 03:06:45,089][11306] Num frames 5300...
+[2023-02-23 03:06:45,200][11306] Num frames 5400...
+[2023-02-23 03:06:45,311][11306] Num frames 5500...
+[2023-02-23 03:06:45,446][11306] Avg episode rewards: #0: 23.112, true rewards: #0: 9.278
+[2023-02-23 03:06:45,449][11306] Avg episode reward: 23.112, avg true_objective: 9.278
+[2023-02-23 03:06:45,486][11306] Num frames 5600...
+[2023-02-23 03:06:45,596][11306] Num frames 5700...
+[2023-02-23 03:06:45,719][11306] Num frames 5800...
+[2023-02-23 03:06:45,832][11306] Num frames 5900...
+[2023-02-23 03:06:45,942][11306] Num frames 6000...
+[2023-02-23 03:06:46,054][11306] Num frames 6100...
+[2023-02-23 03:06:46,168][11306] Num frames 6200...
+[2023-02-23 03:06:46,245][11306] Avg episode rewards: #0: 21.590, true rewards: #0: 8.876
+[2023-02-23 03:06:46,247][11306] Avg episode reward: 21.590, avg true_objective: 8.876
+[2023-02-23 03:06:46,347][11306] Num frames 6300...
+[2023-02-23 03:06:46,458][11306] Num frames 6400...
+[2023-02-23 03:06:46,569][11306] Num frames 6500...
+[2023-02-23 03:06:46,685][11306] Num frames 6600...
+[2023-02-23 03:06:46,802][11306] Num frames 6700...
+[2023-02-23 03:06:46,914][11306] Num frames 6800...
+[2023-02-23 03:06:47,029][11306] Num frames 6900...
+[2023-02-23 03:06:47,142][11306] Num frames 7000...
+[2023-02-23 03:06:47,253][11306] Num frames 7100...
+[2023-02-23 03:06:47,366][11306] Num frames 7200...
+[2023-02-23 03:06:47,484][11306] Num frames 7300...
+[2023-02-23 03:06:47,603][11306] Num frames 7400...
+[2023-02-23 03:06:47,713][11306] Num frames 7500...
+[2023-02-23 03:06:47,803][11306] Avg episode rewards: #0: 22.781, true rewards: #0: 9.406
+[2023-02-23 03:06:47,806][11306] Avg episode reward: 22.781, avg true_objective: 9.406
+[2023-02-23 03:06:47,899][11306] Num frames 7600...
+[2023-02-23 03:06:48,010][11306] Num frames 7700...
+[2023-02-23 03:06:48,121][11306] Num frames 7800...
+[2023-02-23 03:06:48,233][11306] Num frames 7900...
+[2023-02-23 03:06:48,348][11306] Num frames 8000...
+[2023-02-23 03:06:48,467][11306] Num frames 8100...
+[2023-02-23 03:06:48,574][11306] Num frames 8200...
+[2023-02-23 03:06:48,662][11306] Avg episode rewards: #0: 21.588, true rewards: #0: 9.143
+[2023-02-23 03:06:48,664][11306] Avg episode reward: 21.588, avg true_objective: 9.143
+[2023-02-23 03:06:48,765][11306] Num frames 8300...
+[2023-02-23 03:06:48,876][11306] Num frames 8400...
+[2023-02-23 03:06:48,986][11306] Num frames 8500...
+[2023-02-23 03:06:49,096][11306] Num frames 8600...
+[2023-02-23 03:06:49,208][11306] Num frames 8700...
+[2023-02-23 03:06:49,316][11306] Num frames 8800...
+[2023-02-23 03:06:49,439][11306] Num frames 8900...
+[2023-02-23 03:06:49,548][11306] Num frames 9000...
+[2023-02-23 03:06:49,657][11306] Num frames 9100...
+[2023-02-23 03:06:49,796][11306] Avg episode rewards: #0: 21.376, true rewards: #0: 9.176
+[2023-02-23 03:06:49,798][11306] Avg episode reward: 21.376, avg true_objective: 9.176
+[2023-02-23 03:07:43,727][11306] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-23 03:27:59,235][11306] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-23 03:27:59,237][11306] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-23 03:27:59,239][11306] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-23 03:27:59,242][11306] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-23 03:27:59,245][11306] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-23 03:27:59,249][11306] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-23 03:27:59,250][11306] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-23 03:27:59,253][11306] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-23 03:27:59,255][11306] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-23 03:27:59,256][11306] Adding new argument 'hf_repository'='keshan/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-23 03:27:59,260][11306] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-23 03:27:59,263][11306] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-23 03:27:59,270][11306] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-23 03:27:59,271][11306] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-23 03:27:59,272][11306] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-23 03:27:59,295][11306] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-23 03:27:59,298][11306] RunningMeanStd input shape: (1,)
+[2023-02-23 03:27:59,312][11306] ConvEncoder: input_channels=3
+[2023-02-23 03:27:59,347][11306] Conv encoder output size: 512
+[2023-02-23 03:27:59,349][11306] Policy head output size: 512
+[2023-02-23 03:27:59,371][11306] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-23 03:27:59,802][11306] Num frames 100...
+[2023-02-23 03:27:59,915][11306] Num frames 200...
+[2023-02-23 03:28:00,025][11306] Num frames 300...
+[2023-02-23 03:28:00,156][11306] Num frames 400...
+[2023-02-23 03:28:00,297][11306] Avg episode rewards: #0: 9.750, true rewards: #0: 4.750
+[2023-02-23 03:28:00,300][11306] Avg episode reward: 9.750, avg true_objective: 4.750
+[2023-02-23 03:28:00,331][11306] Num frames 500...
+[2023-02-23 03:28:00,442][11306] Num frames 600...
+[2023-02-23 03:28:00,606][11306] Num frames 700...
+[2023-02-23 03:28:00,766][11306] Num frames 800...
+[2023-02-23 03:28:00,929][11306] Num frames 900...
+[2023-02-23 03:28:01,083][11306] Num frames 1000...
+[2023-02-23 03:28:01,242][11306] Num frames 1100...
+[2023-02-23 03:28:01,396][11306] Num frames 1200...
+[2023-02-23 03:28:01,562][11306] Num frames 1300...
+[2023-02-23 03:28:01,726][11306] Num frames 1400...
+[2023-02-23 03:28:01,887][11306] Avg episode rewards: #0: 14.835, true rewards: #0: 7.335
+[2023-02-23 03:28:01,889][11306] Avg episode reward: 14.835, avg true_objective: 7.335
+[2023-02-23 03:28:01,951][11306] Num frames 1500...
+[2023-02-23 03:28:02,120][11306] Num frames 1600...
+[2023-02-23 03:28:02,275][11306] Num frames 1700...
+[2023-02-23 03:28:02,431][11306] Num frames 1800...
+[2023-02-23 03:28:02,593][11306] Num frames 1900...
+[2023-02-23 03:28:02,753][11306] Num frames 2000...
+[2023-02-23 03:28:02,914][11306] Num frames 2100...
+[2023-02-23 03:28:03,074][11306] Num frames 2200...
+[2023-02-23 03:28:03,232][11306] Num frames 2300...
+[2023-02-23 03:28:03,398][11306] Num frames 2400...
+[2023-02-23 03:28:03,566][11306] Num frames 2500...
+[2023-02-23 03:28:03,731][11306] Num frames 2600...
+[2023-02-23 03:28:03,894][11306] Num frames 2700...
+[2023-02-23 03:28:04,068][11306] Avg episode rewards: #0: 20.227, true rewards: #0: 9.227
+[2023-02-23 03:28:04,069][11306] Avg episode reward: 20.227, avg true_objective: 9.227
+[2023-02-23 03:28:04,112][11306] Num frames 2800...
+[2023-02-23 03:28:04,225][11306] Num frames 2900...
+[2023-02-23 03:28:04,343][11306] Num frames 3000...
+[2023-02-23 03:28:04,459][11306] Num frames 3100...
+[2023-02-23 03:28:04,584][11306] Num frames 3200...
+[2023-02-23 03:28:04,696][11306] Num frames 3300...
+[2023-02-23 03:28:04,809][11306] Num frames 3400...
+[2023-02-23 03:28:04,930][11306] Num frames 3500...
+[2023-02-23 03:28:05,048][11306] Num frames 3600...
+[2023-02-23 03:28:05,165][11306] Num frames 3700...
+[2023-02-23 03:28:05,278][11306] Num frames 3800...
+[2023-02-23 03:28:05,392][11306] Num frames 3900...
+[2023-02-23 03:28:05,510][11306] Num frames 4000...
+[2023-02-23 03:28:05,631][11306] Num frames 4100...
+[2023-02-23 03:28:05,752][11306] Num frames 4200...
+[2023-02-23 03:28:05,867][11306] Num frames 4300...
+[2023-02-23 03:28:05,992][11306] Num frames 4400...
+[2023-02-23 03:28:06,117][11306] Num frames 4500...
+[2023-02-23 03:28:06,238][11306] Num frames 4600...
+[2023-02-23 03:28:06,322][11306] Avg episode rewards: #0: 28.310, true rewards: #0: 11.560
+[2023-02-23 03:28:06,325][11306] Avg episode reward: 28.310, avg true_objective: 11.560
+[2023-02-23 03:28:06,418][11306] Num frames 4700...
+[2023-02-23 03:28:06,532][11306] Num frames 4800...
+[2023-02-23 03:28:06,647][11306] Num frames 4900...
+[2023-02-23 03:28:06,764][11306] Num frames 5000...
+[2023-02-23 03:28:06,878][11306] Num frames 5100...
+[2023-02-23 03:28:07,006][11306] Num frames 5200...
+[2023-02-23 03:28:07,134][11306] Avg episode rewards: #0: 25.928, true rewards: #0: 10.528
+[2023-02-23 03:28:07,137][11306] Avg episode reward: 25.928, avg true_objective: 10.528
+[2023-02-23 03:28:07,188][11306] Num frames 5300...
+[2023-02-23 03:28:07,318][11306] Num frames 5400...
+[2023-02-23 03:28:07,432][11306] Num frames 5500...
+[2023-02-23 03:28:07,548][11306] Num frames 5600...
+[2023-02-23 03:28:07,662][11306] Num frames 5700...
+[2023-02-23 03:28:07,779][11306] Num frames 5800...
+[2023-02-23 03:28:07,898][11306] Num frames 5900...
+[2023-02-23 03:28:08,034][11306] Num frames 6000...
+[2023-02-23 03:28:08,161][11306] Num frames 6100...
+[2023-02-23 03:28:08,277][11306] Num frames 6200...
+[2023-02-23 03:28:08,398][11306] Num frames 6300...
+[2023-02-23 03:28:08,516][11306] Num frames 6400...
+[2023-02-23 03:28:08,639][11306] Avg episode rewards: #0: 25.917, true rewards: #0: 10.750
+[2023-02-23 03:28:08,641][11306] Avg episode reward: 25.917, avg true_objective: 10.750
+[2023-02-23 03:28:08,703][11306] Num frames 6500...
+[2023-02-23 03:28:08,823][11306] Num frames 6600...
+[2023-02-23 03:28:08,942][11306] Num frames 6700...
+[2023-02-23 03:28:09,168][11306] Num frames 6800...
+[2023-02-23 03:28:09,283][11306] Num frames 6900...
+[2023-02-23 03:28:09,398][11306] Num frames 7000...
+[2023-02-23 03:28:09,521][11306] Num frames 7100...
+[2023-02-23 03:28:09,602][11306] Avg episode rewards: #0: 24.031, true rewards: #0: 10.174
+[2023-02-23 03:28:09,603][11306] Avg episode reward: 24.031, avg true_objective: 10.174
+[2023-02-23 03:28:09,784][11306] Num frames 7200...
+[2023-02-23 03:28:09,904][11306] Num frames 7300...
+[2023-02-23 03:28:10,023][11306] Num frames 7400...
+[2023-02-23 03:28:10,146][11306] Num frames 7500...
+[2023-02-23 03:28:10,272][11306] Num frames 7600...
+[2023-02-23 03:28:10,443][11306] Num frames 7700...
+[2023-02-23 03:28:10,605][11306] Num frames 7800...
+[2023-02-23 03:28:10,674][11306] Avg episode rewards: #0: 22.761, true rewards: #0: 9.761
+[2023-02-23 03:28:10,676][11306] Avg episode reward: 22.761, avg true_objective: 9.761
+[2023-02-23 03:28:10,784][11306] Num frames 7900...
+[2023-02-23 03:28:10,907][11306] Num frames 8000...
+[2023-02-23 03:28:11,029][11306] Num frames 8100...
+[2023-02-23 03:28:11,146][11306] Num frames 8200...
+[2023-02-23 03:28:11,307][11306] Avg episode rewards: #0: 21.654, true rewards: #0: 9.210
+[2023-02-23 03:28:11,308][11306] Avg episode reward: 21.654, avg true_objective: 9.210
+[2023-02-23 03:28:11,327][11306] Num frames 8300...
+[2023-02-23 03:28:11,444][11306] Num frames 8400...
+[2023-02-23 03:28:11,570][11306] Num frames 8500...
+[2023-02-23 03:28:11,685][11306] Num frames 8600...
+[2023-02-23 03:28:11,811][11306] Num frames 8700...
+[2023-02-23 03:28:11,926][11306] Num frames 8800...
+[2023-02-23 03:28:12,047][11306] Num frames 8900...
+[2023-02-23 03:28:12,172][11306] Num frames 9000...
+[2023-02-23 03:28:12,258][11306] Avg episode rewards: #0: 20.825, true rewards: #0: 9.025
+[2023-02-23 03:28:12,259][11306] Avg episode reward: 20.825, avg true_objective: 9.025
+[2023-02-23 03:29:06,647][11306] Replay video saved to /content/train_dir/default_experiment/replay.mp4!