diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,983 @@
+[2023-02-26 15:58:46,589][00820] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-26 15:58:46,592][00820] Rollout worker 0 uses device cpu
+[2023-02-26 15:58:46,594][00820] Rollout worker 1 uses device cpu
+[2023-02-26 15:58:46,596][00820] Rollout worker 2 uses device cpu
+[2023-02-26 15:58:46,598][00820] Rollout worker 3 uses device cpu
+[2023-02-26 15:58:46,603][00820] Rollout worker 4 uses device cpu
+[2023-02-26 15:58:46,604][00820] Rollout worker 5 uses device cpu
+[2023-02-26 15:58:46,606][00820] Rollout worker 6 uses device cpu
+[2023-02-26 15:58:46,613][00820] Rollout worker 7 uses device cpu
+[2023-02-26 15:58:46,827][00820] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 15:58:46,829][00820] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-26 15:58:46,870][00820] Starting all processes...
+[2023-02-26 15:58:46,872][00820] Starting process learner_proc0
+[2023-02-26 15:58:46,944][00820] Starting all processes...
+[2023-02-26 15:58:46,953][00820] Starting process inference_proc0-0
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc0
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc1
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc2
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc3
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc4
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc5
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc6
+[2023-02-26 15:58:46,954][00820] Starting process rollout_proc7
+[2023-02-26 15:58:57,190][10763] Worker 1 uses CPU cores [1]
+[2023-02-26 15:58:57,312][10764] Worker 2 uses CPU cores [0]
+[2023-02-26 15:58:57,341][10766] Worker 3 uses CPU cores [1]
+[2023-02-26 15:58:57,348][10767] Worker 5 uses CPU cores [1]
+[2023-02-26 15:58:57,370][10760] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 15:58:57,377][10765] Worker 4 uses CPU cores [0]
+[2023-02-26 15:58:57,377][10760] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-26 15:58:57,655][10761] Worker 0 uses CPU cores [0]
+[2023-02-26 15:58:57,666][10747] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 15:58:57,666][10747] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-26 15:58:57,900][10769] Worker 7 uses CPU cores [1]
+[2023-02-26 15:58:58,018][10768] Worker 6 uses CPU cores [0]
+[2023-02-26 15:58:58,417][10760] Num visible devices: 1
+[2023-02-26 15:58:58,416][10747] Num visible devices: 1
+[2023-02-26 15:58:58,444][10747] Starting seed is not provided
+[2023-02-26 15:58:58,445][10747] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 15:58:58,446][10747] Initializing actor-critic model on device cuda:0
+[2023-02-26 15:58:58,447][10747] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 15:58:58,450][10747] RunningMeanStd input shape: (1,)
+[2023-02-26 15:58:58,502][10747] ConvEncoder: input_channels=3
+[2023-02-26 15:58:59,098][10747] Conv encoder output size: 512
+[2023-02-26 15:58:59,099][10747] Policy head output size: 512
+[2023-02-26 15:58:59,234][10747] Created Actor Critic model with architecture:
+[2023-02-26 15:58:59,234][10747] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-26 15:59:06,818][00820] Heartbeat connected on Batcher_0
+[2023-02-26 15:59:06,828][00820] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-26 15:59:06,839][00820] Heartbeat connected on RolloutWorker_w0
+[2023-02-26 15:59:06,843][00820] Heartbeat connected on RolloutWorker_w1
+[2023-02-26 15:59:06,848][00820] Heartbeat connected on RolloutWorker_w2
+[2023-02-26 15:59:06,852][00820] Heartbeat connected on RolloutWorker_w3
+[2023-02-26 15:59:06,855][00820] Heartbeat connected on RolloutWorker_w4
+[2023-02-26 15:59:06,864][00820] Heartbeat connected on RolloutWorker_w5
+[2023-02-26 15:59:06,867][00820] Heartbeat connected on RolloutWorker_w6
+[2023-02-26 15:59:06,870][00820] Heartbeat connected on RolloutWorker_w7
+[2023-02-26 15:59:08,377][10747] Using optimizer
+[2023-02-26 15:59:08,378][10747] No checkpoints found
+[2023-02-26 15:59:08,379][10747] Did not load from checkpoint, starting from scratch!
+[2023-02-26 15:59:08,380][10747] Initialized policy 0 weights for model version 0
+[2023-02-26 15:59:08,382][10747] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 15:59:08,389][10747] LearnerWorker_p0 finished initialization!
+[2023-02-26 15:59:08,392][00820] Heartbeat connected on LearnerWorker_p0
+[2023-02-26 15:59:08,483][10760] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 15:59:08,484][10760] RunningMeanStd input shape: (1,)
+[2023-02-26 15:59:08,503][10760] ConvEncoder: input_channels=3
+[2023-02-26 15:59:08,599][10760] Conv encoder output size: 512
+[2023-02-26 15:59:08,599][10760] Policy head output size: 512
+[2023-02-26 15:59:10,831][00820] Inference worker 0-0 is ready!
+[2023-02-26 15:59:10,832][00820] All inference workers are ready! Signal rollout workers to start!
+[2023-02-26 15:59:10,945][10767] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:10,959][10766] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:10,964][10763] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:10,976][10768] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:10,975][10765] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:10,990][10769] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:11,005][10761] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:11,014][10764] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 15:59:11,493][00820] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 15:59:11,825][10768] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:11,823][10761] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:12,082][10766] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:12,084][10763] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:12,086][10767] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:12,516][10761] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:12,515][10768] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:13,019][10763] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:13,025][10767] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:13,088][10769] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:13,789][10765] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:13,858][10766] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:13,892][10768] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:14,680][10769] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:14,923][10763] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:15,036][10765] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:15,452][10768] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:15,458][10767] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:16,322][10769] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:16,391][10763] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:16,493][00820] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 15:59:16,735][10761] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:16,824][10766] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:17,711][10767] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:17,764][10769] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:18,041][10766] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:18,216][10765] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:19,154][10764] Decorrelating experience for 0 frames...
+[2023-02-26 15:59:19,301][10761] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:19,773][10765] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:20,056][10764] Decorrelating experience for 32 frames...
+[2023-02-26 15:59:20,597][10764] Decorrelating experience for 64 frames...
+[2023-02-26 15:59:21,318][10764] Decorrelating experience for 96 frames...
+[2023-02-26 15:59:21,496][00820] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 4.0. Samples: 40. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 15:59:21,501][00820] Avg episode reward: [(0, '0.320')]
+[2023-02-26 15:59:23,954][10747] Signal inference workers to stop experience collection...
+[2023-02-26 15:59:23,970][10760] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-26 15:59:26,494][00820] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 165.1. Samples: 2476. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 15:59:26,498][00820] Avg episode reward: [(0, '2.099')]
+[2023-02-26 15:59:26,624][10747] Signal inference workers to resume experience collection...
+[2023-02-26 15:59:26,628][10760] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-26 15:59:31,494][00820] Fps is (10 sec: 2048.4, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 198.2. Samples: 3964. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0)
+[2023-02-26 15:59:31,499][00820] Avg episode reward: [(0, '3.608')]
+[2023-02-26 15:59:36,494][00820] Fps is (10 sec: 3686.3, 60 sec: 1474.5, 300 sec: 1474.5). Total num frames: 36864. Throughput: 0: 364.7. Samples: 9118. Policy #0 lag: (min: 0.0, avg: 0.8, max: 1.0)
+[2023-02-26 15:59:36,503][00820] Avg episode reward: [(0, '4.059')]
+[2023-02-26 15:59:37,301][10760] Updated weights for policy 0, policy_version 10 (0.0384)
+[2023-02-26 15:59:41,496][00820] Fps is (10 sec: 3276.1, 60 sec: 1774.8, 300 sec: 1774.8). Total num frames: 53248. Throughput: 0: 451.4. Samples: 13542. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 15:59:41,503][00820] Avg episode reward: [(0, '4.430')]
+[2023-02-26 15:59:46,493][00820] Fps is (10 sec: 3686.6, 60 sec: 2106.5, 300 sec: 2106.5). Total num frames: 73728. Throughput: 0: 469.1. Samples: 16418. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 15:59:46,498][00820] Avg episode reward: [(0, '4.373')]
+[2023-02-26 15:59:47,892][10760] Updated weights for policy 0, policy_version 20 (0.0022)
+[2023-02-26 15:59:51,494][00820] Fps is (10 sec: 4097.0, 60 sec: 2355.2, 300 sec: 2355.2). Total num frames: 94208. Throughput: 0: 579.2. Samples: 23170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 15:59:51,498][00820] Avg episode reward: [(0, '4.443')]
+[2023-02-26 15:59:56,493][00820] Fps is (10 sec: 3686.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 110592. Throughput: 0: 628.2. Samples: 28270. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 15:59:56,500][00820] Avg episode reward: [(0, '4.347')]
+[2023-02-26 15:59:56,503][10747] Saving new best policy, reward=4.347!
+[2023-02-26 16:00:00,254][10760] Updated weights for policy 0, policy_version 30 (0.0015)
+[2023-02-26 16:00:01,494][00820] Fps is (10 sec: 2867.2, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 122880. Throughput: 0: 673.4. Samples: 30302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:00:01,505][00820] Avg episode reward: [(0, '4.445')]
+[2023-02-26 16:00:01,541][10747] Saving new best policy, reward=4.445!
+[2023-02-26 16:00:06,493][00820] Fps is (10 sec: 3276.8, 60 sec: 2606.5, 300 sec: 2606.5). Total num frames: 143360. Throughput: 0: 774.9. Samples: 34908. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:00:06,500][00820] Avg episode reward: [(0, '4.536')]
+[2023-02-26 16:00:06,503][10747] Saving new best policy, reward=4.536!
+[2023-02-26 16:00:10,697][10760] Updated weights for policy 0, policy_version 40 (0.0023)
+[2023-02-26 16:00:11,493][00820] Fps is (10 sec: 4096.0, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 163840. Throughput: 0: 869.0. Samples: 41582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:00:11,504][00820] Avg episode reward: [(0, '4.452')]
+[2023-02-26 16:00:16,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 900.2. Samples: 44474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:00:16,497][00820] Avg episode reward: [(0, '4.320')]
+[2023-02-26 16:00:21,494][00820] Fps is (10 sec: 3276.6, 60 sec: 3276.9, 300 sec: 2808.7). Total num frames: 196608. Throughput: 0: 878.2. Samples: 48638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:00:21,500][00820] Avg episode reward: [(0, '4.226')]
+[2023-02-26 16:00:23,854][10760] Updated weights for policy 0, policy_version 50 (0.0018)
+[2023-02-26 16:00:26,496][00820] Fps is (10 sec: 3685.3, 60 sec: 3618.0, 300 sec: 2894.4). Total num frames: 217088. Throughput: 0: 900.0. Samples: 54042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:00:26,501][00820] Avg episode reward: [(0, '4.361')]
+[2023-02-26 16:00:31,494][00820] Fps is (10 sec: 4096.3, 60 sec: 3618.1, 300 sec: 2969.6). Total num frames: 237568. Throughput: 0: 909.6. Samples: 57352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:00:31,500][00820] Avg episode reward: [(0, '4.473')]
+[2023-02-26 16:00:33,138][10760] Updated weights for policy 0, policy_version 60 (0.0016)
+[2023-02-26 16:00:36,495][00820] Fps is (10 sec: 3687.3, 60 sec: 3618.1, 300 sec: 2987.7). Total num frames: 253952. Throughput: 0: 888.7. Samples: 63162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:00:36,500][00820] Avg episode reward: [(0, '4.462')]
+[2023-02-26 16:00:41,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3550.0, 300 sec: 2958.2). Total num frames: 266240. Throughput: 0: 866.8. Samples: 67276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:00:41,504][00820] Avg episode reward: [(0, '4.357')]
+[2023-02-26 16:00:41,518][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000065_266240.pth...
+[2023-02-26 16:00:46,137][10760] Updated weights for policy 0, policy_version 70 (0.0022)
+[2023-02-26 16:00:46,494][00820] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3018.1). Total num frames: 286720. Throughput: 0: 869.5. Samples: 69430. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:00:46,498][00820] Avg episode reward: [(0, '4.403')]
+[2023-02-26 16:00:51,494][00820] Fps is (10 sec: 4095.9, 60 sec: 3549.9, 300 sec: 3072.0). Total num frames: 307200. Throughput: 0: 917.8. Samples: 76210. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:00:51,496][00820] Avg episode reward: [(0, '4.568')]
+[2023-02-26 16:00:51,508][10747] Saving new best policy, reward=4.568!
+[2023-02-26 16:00:56,429][10760] Updated weights for policy 0, policy_version 80 (0.0045)
+[2023-02-26 16:00:56,497][00820] Fps is (10 sec: 4094.4, 60 sec: 3617.9, 300 sec: 3120.6). Total num frames: 327680. Throughput: 0: 899.3. Samples: 82054. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:00:56,505][00820] Avg episode reward: [(0, '4.869')]
+[2023-02-26 16:00:56,511][10747] Saving new best policy, reward=4.869!
+[2023-02-26 16:01:01,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3090.6). Total num frames: 339968. Throughput: 0: 881.4. Samples: 84136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:01:01,501][00820] Avg episode reward: [(0, '4.767')]
+[2023-02-26 16:01:06,493][00820] Fps is (10 sec: 2868.3, 60 sec: 3549.9, 300 sec: 3098.7). Total num frames: 356352. Throughput: 0: 887.4. Samples: 88570. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:01:06,501][00820] Avg episode reward: [(0, '4.510')]
+[2023-02-26 16:01:08,549][10760] Updated weights for policy 0, policy_version 90 (0.0030)
+[2023-02-26 16:01:11,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3174.4). Total num frames: 380928. Throughput: 0: 919.4. Samples: 95412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:01:11,498][00820] Avg episode reward: [(0, '4.491')]
+[2023-02-26 16:01:16,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3178.5). Total num frames: 397312. Throughput: 0: 920.0. Samples: 98752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:01:16,499][00820] Avg episode reward: [(0, '4.608')]
+[2023-02-26 16:01:19,550][10760] Updated weights for policy 0, policy_version 100 (0.0016)
+[2023-02-26 16:01:21,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3182.3). Total num frames: 413696. Throughput: 0: 888.5. Samples: 103142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:01:21,502][00820] Avg episode reward: [(0, '4.594')]
+[2023-02-26 16:01:26,495][00820] Fps is (10 sec: 3276.4, 60 sec: 3550.0, 300 sec: 3185.8). Total num frames: 430080. Throughput: 0: 901.2. Samples: 107832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:01:26,499][00820] Avg episode reward: [(0, '4.708')]
+[2023-02-26 16:01:30,839][10760] Updated weights for policy 0, policy_version 110 (0.0029)
+[2023-02-26 16:01:31,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3218.3). Total num frames: 450560. Throughput: 0: 926.4. Samples: 111120. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:01:31,503][00820] Avg episode reward: [(0, '4.449')]
+[2023-02-26 16:01:36,494][00820] Fps is (10 sec: 4096.4, 60 sec: 3618.2, 300 sec: 3248.6). Total num frames: 471040. Throughput: 0: 923.0. Samples: 117746. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:01:36,496][00820] Avg episode reward: [(0, '4.458')]
+[2023-02-26 16:01:41,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3249.5). Total num frames: 487424. Throughput: 0: 887.5. Samples: 121988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:01:41,502][00820] Avg episode reward: [(0, '4.560')]
+[2023-02-26 16:01:43,020][10760] Updated weights for policy 0, policy_version 120 (0.0041)
+[2023-02-26 16:01:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3223.9). Total num frames: 499712. Throughput: 0: 887.3. Samples: 124064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:01:46,499][00820] Avg episode reward: [(0, '4.640')]
+[2023-02-26 16:01:51,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 524288. Throughput: 0: 923.6. Samples: 130130. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:01:51,497][00820] Avg episode reward: [(0, '4.731')]
+[2023-02-26 16:01:53,016][10760] Updated weights for policy 0, policy_version 130 (0.0021)
+[2023-02-26 16:01:56,494][00820] Fps is (10 sec: 4505.6, 60 sec: 3618.4, 300 sec: 3301.6). Total num frames: 544768. Throughput: 0: 915.1. Samples: 136590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:01:56,502][00820] Avg episode reward: [(0, '4.736')]
+[2023-02-26 16:02:01,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 557056. Throughput: 0: 889.5. Samples: 138780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:02:01,500][00820] Avg episode reward: [(0, '4.859')]
+[2023-02-26 16:02:06,167][10760] Updated weights for policy 0, policy_version 140 (0.0013)
+[2023-02-26 16:02:06,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 573440. Throughput: 0: 883.0. Samples: 142876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:02:06,497][00820] Avg episode reward: [(0, '4.655')]
+[2023-02-26 16:02:11,494][00820] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3299.6). Total num frames: 593920. Throughput: 0: 917.9. Samples: 149136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:02:11,502][00820] Avg episode reward: [(0, '4.551')]
+[2023-02-26 16:02:15,384][10760] Updated weights for policy 0, policy_version 150 (0.0021)
+[2023-02-26 16:02:16,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3321.1). Total num frames: 614400. Throughput: 0: 917.3. Samples: 152398. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:02:16,501][00820] Avg episode reward: [(0, '4.584')]
+[2023-02-26 16:02:21,494][00820] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3319.9). Total num frames: 630784. Throughput: 0: 884.4. Samples: 157546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:02:21,501][00820] Avg episode reward: [(0, '4.467')]
+[2023-02-26 16:02:26,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3318.8). Total num frames: 647168. Throughput: 0: 885.4. Samples: 161832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:02:26,499][00820] Avg episode reward: [(0, '4.710')]
+[2023-02-26 16:02:28,288][10760] Updated weights for policy 0, policy_version 160 (0.0032)
+[2023-02-26 16:02:31,494][00820] Fps is (10 sec: 3686.7, 60 sec: 3618.1, 300 sec: 3338.2). Total num frames: 667648. Throughput: 0: 910.4. Samples: 165034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:02:31,503][00820] Avg episode reward: [(0, '4.773')]
+[2023-02-26 16:02:36,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3356.7). Total num frames: 688128. Throughput: 0: 921.7. Samples: 171606. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:02:36,500][00820] Avg episode reward: [(0, '4.626')]
+[2023-02-26 16:02:38,439][10760] Updated weights for policy 0, policy_version 170 (0.0013)
+[2023-02-26 16:02:41,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3354.8). Total num frames: 704512. Throughput: 0: 884.4. Samples: 176386. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:02:41,502][00820] Avg episode reward: [(0, '4.750')]
+[2023-02-26 16:02:41,510][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth...
+[2023-02-26 16:02:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3334.0). Total num frames: 716800. Throughput: 0: 882.9. Samples: 178512. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:02:46,502][00820] Avg episode reward: [(0, '4.580')]
+[2023-02-26 16:02:50,845][10760] Updated weights for policy 0, policy_version 180 (0.0036)
+[2023-02-26 16:02:51,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3351.3). Total num frames: 737280. Throughput: 0: 911.9. Samples: 183912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:02:51,497][00820] Avg episode reward: [(0, '4.355')]
+[2023-02-26 16:02:56,496][00820] Fps is (10 sec: 4504.4, 60 sec: 3618.0, 300 sec: 3386.0). Total num frames: 761856. Throughput: 0: 922.7. Samples: 190662. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:02:56,499][00820] Avg episode reward: [(0, '4.439')]
+[2023-02-26 16:03:01,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3365.8). Total num frames: 774144. Throughput: 0: 904.7. Samples: 193110. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:03:01,500][00820] Avg episode reward: [(0, '4.546')]
+[2023-02-26 16:03:01,817][10760] Updated weights for policy 0, policy_version 190 (0.0019)
+[2023-02-26 16:03:06,494][00820] Fps is (10 sec: 2867.9, 60 sec: 3618.1, 300 sec: 3363.9). Total num frames: 790528. Throughput: 0: 883.0. Samples: 197280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:03:06,504][00820] Avg episode reward: [(0, '4.609')]
+[2023-02-26 16:03:11,494][00820] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3379.2). Total num frames: 811008. Throughput: 0: 911.8. Samples: 202864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:03:11,504][00820] Avg episode reward: [(0, '4.828')]
+[2023-02-26 16:03:13,156][10760] Updated weights for policy 0, policy_version 200 (0.0024)
+[2023-02-26 16:03:16,494][00820] Fps is (10 sec: 4096.1, 60 sec: 3618.1, 300 sec: 3393.8). Total num frames: 831488. Throughput: 0: 914.8. Samples: 206202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:03:16,501][00820] Avg episode reward: [(0, '4.533')]
+[2023-02-26 16:03:21,494][00820] Fps is (10 sec: 3686.6, 60 sec: 3618.2, 300 sec: 3391.5). Total num frames: 847872. Throughput: 0: 893.8. Samples: 211826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:03:21,500][00820] Avg episode reward: [(0, '4.343')]
+[2023-02-26 16:03:25,217][10760] Updated weights for policy 0, policy_version 210 (0.0025)
+[2023-02-26 16:03:26,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3373.2). Total num frames: 860160. Throughput: 0: 882.4. Samples: 216094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:03:26,505][00820] Avg episode reward: [(0, '4.550')]
+[2023-02-26 16:03:31,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3387.1). Total num frames: 880640. Throughput: 0: 893.5. Samples: 218718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:03:31,504][00820] Avg episode reward: [(0, '4.584')]
+[2023-02-26 16:03:35,703][10760] Updated weights for policy 0, policy_version 220 (0.0018)
+[2023-02-26 16:03:36,493][00820] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3400.5). Total num frames: 901120. Throughput: 0: 916.0. Samples: 225130. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:03:36,496][00820] Avg episode reward: [(0, '4.456')]
+[2023-02-26 16:03:41,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3398.2). Total num frames: 917504. Throughput: 0: 880.1. Samples: 230266. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:03:41,497][00820] Avg episode reward: [(0, '4.467')]
+[2023-02-26 16:03:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3381.1). Total num frames: 929792. Throughput: 0: 871.4. Samples: 232324. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:03:46,501][00820] Avg episode reward: [(0, '4.445')]
+[2023-02-26 16:03:49,059][10760] Updated weights for policy 0, policy_version 230 (0.0012)
+[2023-02-26 16:03:51,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3393.8). Total num frames: 950272. Throughput: 0: 887.5. Samples: 237216. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:03:51,496][00820] Avg episode reward: [(0, '4.590')]
+[2023-02-26 16:03:56,494][00820] Fps is (10 sec: 4505.6, 60 sec: 3550.0, 300 sec: 3420.5). Total num frames: 974848. Throughput: 0: 912.5. Samples: 243924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:03:56,499][00820] Avg episode reward: [(0, '4.887')]
+[2023-02-26 16:03:56,502][10747] Saving new best policy, reward=4.887!
+[2023-02-26 16:03:58,609][10760] Updated weights for policy 0, policy_version 240 (0.0022)
+[2023-02-26 16:04:01,494][00820] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3418.0). Total num frames: 991232. Throughput: 0: 899.1. Samples: 246662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:04:01,497][00820] Avg episode reward: [(0, '4.870')]
+[2023-02-26 16:04:06,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 1003520. Throughput: 0: 865.8. Samples: 250788. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:04:06,500][00820] Avg episode reward: [(0, '4.715')]
+[2023-02-26 16:04:11,493][00820] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 1019904. Throughput: 0: 882.8. Samples: 255822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:04:11,502][00820] Avg episode reward: [(0, '4.780')]
+[2023-02-26 16:04:11,548][10760] Updated weights for policy 0, policy_version 250 (0.0020)
+[2023-02-26 16:04:16,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1044480. Throughput: 0: 897.2. Samples: 259090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:04:16,496][00820] Avg episode reward: [(0, '4.657')]
+[2023-02-26 16:04:21,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1060864. Throughput: 0: 885.2. Samples: 264966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:04:21,500][00820] Avg episode reward: [(0, '4.581')]
+[2023-02-26 16:04:22,888][10760] Updated weights for policy 0, policy_version 260 (0.0028)
+[2023-02-26 16:04:26,499][00820] Fps is (10 sec: 2865.5, 60 sec: 3549.5, 300 sec: 3568.3). Total num frames: 1073152. Throughput: 0: 858.6. Samples: 268906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:04:26,502][00820] Avg episode reward: [(0, '4.655')]
+[2023-02-26 16:04:31,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 1089536. Throughput: 0: 859.5. Samples: 271000. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:04:31,498][00820] Avg episode reward: [(0, '4.872')]
+[2023-02-26 16:04:34,422][10760] Updated weights for policy 0, policy_version 270 (0.0020)
+[2023-02-26 16:04:36,494][00820] Fps is (10 sec: 4098.3, 60 sec: 3549.9, 300 sec: 3596.2). Total num frames: 1114112. Throughput: 0: 894.0. Samples: 277446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:04:36,504][00820] Avg episode reward: [(0, '4.710')]
+[2023-02-26 16:04:41,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1130496. Throughput: 0: 874.6. Samples: 283280. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:04:41,497][00820] Avg episode reward: [(0, '4.333')]
+[2023-02-26 16:04:41,511][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000276_1130496.pth...
+[2023-02-26 16:04:41,667][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000065_266240.pth
+[2023-02-26 16:04:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 1142784. Throughput: 0: 858.1. Samples: 285276. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:04:46,505][00820] Avg episode reward: [(0, '4.432')]
+[2023-02-26 16:04:46,609][10760] Updated weights for policy 0, policy_version 280 (0.0020)
+[2023-02-26 16:04:51,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1163264. Throughput: 0: 862.7. Samples: 289608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:04:51,501][00820] Avg episode reward: [(0, '4.492')]
+[2023-02-26 16:04:56,494][00820] Fps is (10 sec: 4095.8, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 1183744. Throughput: 0: 896.7. Samples: 296172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:04:56,498][00820] Avg episode reward: [(0, '4.638')]
+[2023-02-26 16:04:56,985][10760] Updated weights for policy 0, policy_version 290 (0.0018)
+[2023-02-26 16:05:01,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1204224. Throughput: 0: 898.6. Samples: 299528. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:05:01,501][00820] Avg episode reward: [(0, '4.873')]
+[2023-02-26 16:05:06,494][00820] Fps is (10 sec: 3277.0, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1216512. Throughput: 0: 870.4. Samples: 304134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:05:06,502][00820] Avg episode reward: [(0, '4.814')]
+[2023-02-26 16:05:10,132][10760] Updated weights for policy 0, policy_version 300 (0.0027)
+[2023-02-26 16:05:11,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 1232896. Throughput: 0: 881.4. Samples: 308562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:05:11,497][00820] Avg episode reward: [(0, '4.891')]
+[2023-02-26 16:05:11,507][10747] Saving new best policy, reward=4.891!
+[2023-02-26 16:05:16,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1253376. Throughput: 0: 906.1. Samples: 311774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:05:16,497][00820] Avg episode reward: [(0, '4.637')]
+[2023-02-26 16:05:19,670][10760] Updated weights for policy 0, policy_version 310 (0.0016)
+[2023-02-26 16:05:21,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1273856. Throughput: 0: 908.2. Samples: 318314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:05:21,499][00820] Avg episode reward: [(0, '4.652')]
+[2023-02-26 16:05:26,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3618.5, 300 sec: 3568.4). Total num frames: 1290240. Throughput: 0: 877.8. Samples: 322782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:05:26,500][00820] Avg episode reward: [(0, '4.722')]
+[2023-02-26 16:05:31,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 1302528. Throughput: 0: 879.8. Samples: 324866. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:05:31,497][00820] Avg episode reward: [(0, '4.686')]
+[2023-02-26 16:05:32,742][10760] Updated weights for policy 0, policy_version 320 (0.0019)
+[2023-02-26 16:05:36,494][00820] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1327104. Throughput: 0: 910.4. Samples: 330576. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:05:36,495][00820] Avg episode reward: [(0, '4.637')]
+[2023-02-26 16:05:41,494][00820] Fps is (10 sec: 4505.3, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 1347584. Throughput: 0: 911.4. Samples: 337186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:05:41,501][00820] Avg episode reward: [(0, '4.720')]
+[2023-02-26 16:05:42,656][10760] Updated weights for policy 0, policy_version 330 (0.0025)
+[2023-02-26 16:05:46,495][00820] Fps is (10 sec: 3276.2, 60 sec: 3618.0, 300 sec: 3568.4). Total num frames: 1359872. Throughput: 0: 881.7. Samples: 339208. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:05:46,509][00820] Avg episode reward: [(0, '4.659')]
+[2023-02-26 16:05:51,495][00820] Fps is (10 sec: 2457.3, 60 sec: 3481.5, 300 sec: 3540.6). Total num frames: 1372160. Throughput: 0: 871.6. Samples: 343358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:05:51,502][00820] Avg episode reward: [(0, '4.604')]
+[2023-02-26 16:05:55,510][10760] Updated weights for policy 0, policy_version 340 (0.0025)
+[2023-02-26 16:05:56,495][00820] Fps is (10 sec: 3686.7, 60 sec: 3549.8, 300 sec: 3582.3). Total num frames: 1396736. Throughput: 0: 903.3. Samples: 349212. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:05:56,499][00820] Avg episode reward: [(0, '4.487')]
+[2023-02-26 16:06:01,496][00820] Fps is (10 sec: 4095.8, 60 sec: 3481.5, 300 sec: 3582.2). Total num frames: 1413120. Throughput: 0: 904.0. Samples: 352458. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:06:01,498][00820] Avg episode reward: [(0, '4.528')]
+[2023-02-26 16:06:06,494][00820] Fps is (10 sec: 2867.5, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1425408. Throughput: 0: 842.1. Samples: 356210. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:06:06,496][00820] Avg episode reward: [(0, '4.416')]
+[2023-02-26 16:06:09,931][10760] Updated weights for policy 0, policy_version 350 (0.0015)
+[2023-02-26 16:06:11,495][00820] Fps is (10 sec: 2048.1, 60 sec: 3345.0, 300 sec: 3512.8). Total num frames: 1433600. Throughput: 0: 817.3. Samples: 359562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:06:11,498][00820] Avg episode reward: [(0, '4.491')]
+[2023-02-26 16:06:16,494][00820] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 1449984. Throughput: 0: 811.2. Samples: 361370. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:06:16,503][00820] Avg episode reward: [(0, '4.446')]
+[2023-02-26 16:06:21,493][00820] Fps is (10 sec: 3687.1, 60 sec: 3276.8, 300 sec: 3526.7). Total num frames: 1470464. Throughput: 0: 813.1. Samples: 367164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:06:21,499][00820] Avg episode reward: [(0, '4.477')]
+[2023-02-26 16:06:21,691][10760] Updated weights for policy 0, policy_version 360 (0.0024)
+[2023-02-26 16:06:26,496][00820] Fps is (10 sec: 4504.5, 60 sec: 3413.2, 300 sec: 3540.6). Total num frames: 1495040. Throughput: 0: 816.4. Samples: 373926. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:06:26,498][00820] Avg episode reward: [(0, '4.525')]
+[2023-02-26 16:06:31,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 1507328. Throughput: 0: 817.5. Samples: 375994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:06:31,502][00820] Avg episode reward: [(0, '4.620')]
+[2023-02-26 16:06:33,673][10760] Updated weights for policy 0, policy_version 370 (0.0027)
+[2023-02-26 16:06:36,493][00820] Fps is (10 sec: 2867.9, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 1523712. Throughput: 0: 819.9. Samples: 380250. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:06:36,498][00820] Avg episode reward: [(0, '4.486')]
+[2023-02-26 16:06:41,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3540.6). Total num frames: 1544192. Throughput: 0: 820.1. Samples: 386114. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:06:41,505][00820] Avg episode reward: [(0, '4.364')]
+[2023-02-26 16:06:41,518][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000377_1544192.pth...
+[2023-02-26 16:06:41,640][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth
+[2023-02-26 16:06:44,107][10760] Updated weights for policy 0, policy_version 380 (0.0029)
+[2023-02-26 16:06:46,494][00820] Fps is (10 sec: 4095.7, 60 sec: 3413.4, 300 sec: 3526.7). Total num frames: 1564672. Throughput: 0: 819.0. Samples: 389312. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:06:46,500][00820] Avg episode reward: [(0, '4.503')]
+[2023-02-26 16:06:51,495][00820] Fps is (10 sec: 3276.3, 60 sec: 3413.3, 300 sec: 3498.9). Total num frames: 1576960. Throughput: 0: 848.7. Samples: 394404. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:06:51,499][00820] Avg episode reward: [(0, '4.408')]
+[2023-02-26 16:06:56,494][00820] Fps is (10 sec: 2867.3, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 1593344. Throughput: 0: 866.8. Samples: 398566. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:06:56,500][00820] Avg episode reward: [(0, '4.585')]
+[2023-02-26 16:06:57,267][10760] Updated weights for policy 0, policy_version 390 (0.0015)
+[2023-02-26 16:07:01,493][00820] Fps is (10 sec: 3687.0, 60 sec: 3345.2, 300 sec: 3526.7). Total num frames: 1613824. Throughput: 0: 893.6. Samples: 401584. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:07:01,500][00820] Avg episode reward: [(0, '4.619')]
+[2023-02-26 16:07:06,494][00820] Fps is (10 sec: 4096.2, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1634304. Throughput: 0: 906.0. Samples: 407932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:07:06,501][00820] Avg episode reward: [(0, '4.556')]
+[2023-02-26 16:07:07,269][10760] Updated weights for policy 0, policy_version 400 (0.0020)
+[2023-02-26 16:07:11,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3499.0). Total num frames: 1646592. Throughput: 0: 857.5. Samples: 412512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:07:11,496][00820] Avg episode reward: [(0, '4.865')]
+[2023-02-26 16:07:16,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1662976. Throughput: 0: 858.3. Samples: 414618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:07:16,501][00820] Avg episode reward: [(0, '4.868')]
+[2023-02-26 16:07:20,363][10760] Updated weights for policy 0, policy_version 410 (0.0014)
+[2023-02-26 16:07:21,494][00820] Fps is (10 sec: 3686.3, 60 sec: 3549.8, 300 sec: 3512.8). Total num frames: 1683456. Throughput: 0: 882.6. Samples: 419966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:07:21,496][00820] Avg episode reward: [(0, '4.715')]
+[2023-02-26 16:07:26,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3512.8). Total num frames: 1703936. Throughput: 0: 893.8. Samples: 426336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:07:26,502][00820] Avg episode reward: [(0, '4.439')]
+[2023-02-26 16:07:31,494][00820] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1716224. Throughput: 0: 874.5. Samples: 428664. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:07:31,504][00820] Avg episode reward: [(0, '4.556')]
+[2023-02-26 16:07:31,523][10760] Updated weights for policy 0, policy_version 420 (0.0030)
+[2023-02-26 16:07:36,494][00820] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1732608. Throughput: 0: 851.8. Samples: 432734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:07:36,498][00820] Avg episode reward: [(0, '4.716')]
+[2023-02-26 16:07:41,494][00820] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 1748992. Throughput: 0: 879.9. Samples: 438162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:07:41,496][00820] Avg episode reward: [(0, '4.855')]
+[2023-02-26 16:07:43,378][10760] Updated weights for policy 0, policy_version 430 (0.0019)
+[2023-02-26 16:07:46,493][00820] Fps is (10 sec: 4096.3, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 1773568. Throughput: 0: 885.6. Samples: 441434. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:07:46,496][00820] Avg episode reward: [(0, '4.728')]
+[2023-02-26 16:07:51,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3471.2). Total num frames: 1785856. Throughput: 0: 867.6. Samples: 446976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:07:51,501][00820] Avg episode reward: [(0, '4.591')]
+[2023-02-26 16:07:56,121][10760] Updated weights for policy 0, policy_version 440 (0.0023)
+[2023-02-26 16:07:56,494][00820] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1802240. Throughput: 0: 855.4. Samples: 451004. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:07:56,496][00820] Avg episode reward: [(0, '4.565')]
+[2023-02-26 16:08:01,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 1818624. Throughput: 0: 858.6. Samples: 453256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:08:01,499][00820] Avg episode reward: [(0, '4.636')]
+[2023-02-26 16:08:06,358][10760] Updated weights for policy 0, policy_version 450 (0.0024)
+[2023-02-26 16:08:06,494][00820] Fps is (10 sec: 4096.2, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 1843200. Throughput: 0: 884.8. Samples: 459780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:08:06,500][00820] Avg episode reward: [(0, '4.712')]
+[2023-02-26 16:08:11,494][00820] Fps is (10 sec: 4095.8, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 1859584. Throughput: 0: 865.9. Samples: 465304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:08:11,506][00820] Avg episode reward: [(0, '4.697')]
+[2023-02-26 16:08:16,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 1871872. Throughput: 0: 861.4. Samples: 467426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:08:16,496][00820] Avg episode reward: [(0, '4.879')]
+[2023-02-26 16:08:19,502][10760] Updated weights for policy 0, policy_version 460 (0.0022)
+[2023-02-26 16:08:21,493][00820] Fps is (10 sec: 3277.0, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 1892352. Throughput: 0: 871.1. Samples: 471932. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:08:21,502][00820] Avg episode reward: [(0, '4.964')]
+[2023-02-26 16:08:21,511][10747] Saving new best policy, reward=4.964!
+[2023-02-26 16:08:26,494][00820] Fps is (10 sec: 4095.8, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 1912832. Throughput: 0: 895.5. Samples: 478462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:08:26,496][00820] Avg episode reward: [(0, '4.771')]
+[2023-02-26 16:08:28,950][10760] Updated weights for policy 0, policy_version 470 (0.0024)
+[2023-02-26 16:08:31,498][00820] Fps is (10 sec: 3684.7, 60 sec: 3549.6, 300 sec: 3485.0). Total num frames: 1929216. Throughput: 0: 894.3. Samples: 481682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 16:08:31,505][00820] Avg episode reward: [(0, '4.874')]
+[2023-02-26 16:08:36,496][00820] Fps is (10 sec: 3276.1, 60 sec: 3549.7, 300 sec: 3485.0). Total num frames: 1945600. Throughput: 0: 866.8. Samples: 485986. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:08:36,506][00820] Avg episode reward: [(0, '4.934')]
+[2023-02-26 16:08:41,494][00820] Fps is (10 sec: 3278.3, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1961984. Throughput: 0: 883.3. Samples: 490752. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 16:08:41,503][00820] Avg episode reward: [(0, '5.152')]
+[2023-02-26 16:08:41,517][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth...
+[2023-02-26 16:08:41,710][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000276_1130496.pth
+[2023-02-26 16:08:41,718][10747] Saving new best policy, reward=5.152!
+[2023-02-26 16:08:42,429][10760] Updated weights for policy 0, policy_version 480 (0.0017)
+[2023-02-26 16:08:46,493][00820] Fps is (10 sec: 3687.4, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 1982464. Throughput: 0: 900.8. Samples: 493794. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-26 16:08:46,497][00820] Avg episode reward: [(0, '5.055')]
+[2023-02-26 16:08:51,497][00820] Fps is (10 sec: 4094.5, 60 sec: 3617.9, 300 sec: 3485.0). Total num frames: 2002944. Throughput: 0: 901.3. Samples: 500342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:08:51,499][00820] Avg episode reward: [(0, '4.980')]
+[2023-02-26 16:08:52,659][10760] Updated weights for policy 0, policy_version 490 (0.0033)
+[2023-02-26 16:08:56,494][00820] Fps is (10 sec: 3276.6, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 2015232. Throughput: 0: 873.9. Samples: 504628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:08:56,499][00820] Avg episode reward: [(0, '5.008')]
+[2023-02-26 16:09:01,494][00820] Fps is (10 sec: 2868.3, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2031616. Throughput: 0: 876.2. Samples: 506856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:09:01,505][00820] Avg episode reward: [(0, '5.340')]
+[2023-02-26 16:09:01,516][10747] Saving new best policy, reward=5.340!
+[2023-02-26 16:09:04,602][10760] Updated weights for policy 0, policy_version 500 (0.0023)
+[2023-02-26 16:09:06,493][00820] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 2056192. Throughput: 0: 910.4. Samples: 512900. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:09:06,499][00820] Avg episode reward: [(0, '5.267')]
+[2023-02-26 16:09:11,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2072576. Throughput: 0: 905.3. Samples: 519200. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:09:11,495][00820] Avg episode reward: [(0, '4.769')]
+[2023-02-26 16:09:16,244][10760] Updated weights for policy 0, policy_version 510 (0.0020)
+[2023-02-26 16:09:16,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2088960. Throughput: 0: 877.3. Samples: 521156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:09:16,503][00820] Avg episode reward: [(0, '4.855')]
+[2023-02-26 16:09:21,497][00820] Fps is (10 sec: 2866.2, 60 sec: 3481.4, 300 sec: 3485.1). Total num frames: 2101248. Throughput: 0: 873.3. Samples: 525284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:09:21,505][00820] Avg episode reward: [(0, '4.763')]
+[2023-02-26 16:09:26,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 2125824. Throughput: 0: 903.6. Samples: 531414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:09:26,497][00820] Avg episode reward: [(0, '4.606')]
+[2023-02-26 16:09:27,405][10760] Updated weights for policy 0, policy_version 520 (0.0022)
+[2023-02-26 16:09:31,493][00820] Fps is (10 sec: 4507.2, 60 sec: 3618.4, 300 sec: 3499.0). Total num frames: 2146304. Throughput: 0: 907.7. Samples: 534642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:09:31,503][00820] Avg episode reward: [(0, '4.611')]
+[2023-02-26 16:09:36,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3485.1). Total num frames: 2158592. Throughput: 0: 873.4. Samples: 539640. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:09:36,500][00820] Avg episode reward: [(0, '4.819')]
+[2023-02-26 16:09:40,074][10760] Updated weights for policy 0, policy_version 530 (0.0013)
+[2023-02-26 16:09:41,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2174976. Throughput: 0: 871.8. Samples: 543858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:09:41,495][00820] Avg episode reward: [(0, '4.958')]
+[2023-02-26 16:09:46,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2195456. Throughput: 0: 892.3. Samples: 547010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:09:46,500][00820] Avg episode reward: [(0, '4.900')]
+[2023-02-26 16:09:49,830][10760] Updated weights for policy 0, policy_version 540 (0.0022)
+[2023-02-26 16:09:51,494][00820] Fps is (10 sec: 4095.9, 60 sec: 3550.1, 300 sec: 3499.0). Total num frames: 2215936. Throughput: 0: 902.5. Samples: 553512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:09:51,499][00820] Avg episode reward: [(0, '4.784')]
+[2023-02-26 16:09:56,496][00820] Fps is (10 sec: 3685.5, 60 sec: 3618.0, 300 sec: 3485.0). Total num frames: 2232320. Throughput: 0: 873.8. Samples: 558522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:09:56,498][00820] Avg episode reward: [(0, '4.805')]
+[2023-02-26 16:10:01,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2244608. Throughput: 0: 879.0. Samples: 560710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:10:01,504][00820] Avg episode reward: [(0, '5.160')]
+[2023-02-26 16:10:02,588][10760] Updated weights for policy 0, policy_version 550 (0.0015)
+[2023-02-26 16:10:06,493][00820] Fps is (10 sec: 3687.3, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 2269184. Throughput: 0: 914.1. Samples: 566416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:10:06,501][00820] Avg episode reward: [(0, '5.385')]
+[2023-02-26 16:10:06,504][10747] Saving new best policy, reward=5.385!
+[2023-02-26 16:10:11,494][00820] Fps is (10 sec: 4505.3, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2289664. Throughput: 0: 921.0. Samples: 572858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:11,498][00820] Avg episode reward: [(0, '5.265')]
+[2023-02-26 16:10:12,095][10760] Updated weights for policy 0, policy_version 560 (0.0017)
+[2023-02-26 16:10:16,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2301952. Throughput: 0: 902.1. Samples: 575238. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:16,502][00820] Avg episode reward: [(0, '4.991')]
+[2023-02-26 16:10:21,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3618.3, 300 sec: 3485.1). Total num frames: 2318336. Throughput: 0: 882.9. Samples: 579370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:21,503][00820] Avg episode reward: [(0, '4.851')]
+[2023-02-26 16:10:25,112][10760] Updated weights for policy 0, policy_version 570 (0.0017)
+[2023-02-26 16:10:26,494][00820] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 2338816. Throughput: 0: 916.7. Samples: 585108. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:26,503][00820] Avg episode reward: [(0, '5.314')]
+[2023-02-26 16:10:31,493][00820] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2359296. Throughput: 0: 919.4. Samples: 588382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 16:10:31,496][00820] Avg episode reward: [(0, '5.054')]
+[2023-02-26 16:10:35,687][10760] Updated weights for policy 0, policy_version 580 (0.0016)
+[2023-02-26 16:10:36,493][00820] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2375680. Throughput: 0: 898.9. Samples: 593962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:10:36,499][00820] Avg episode reward: [(0, '4.749')]
+[2023-02-26 16:10:41,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2392064. Throughput: 0: 879.9. Samples: 598114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 16:10:41,499][00820] Avg episode reward: [(0, '4.641')]
+[2023-02-26 16:10:41,523][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000584_2392064.pth...
+[2023-02-26 16:10:41,680][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000377_1544192.pth
+[2023-02-26 16:10:46,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3512.9). Total num frames: 2408448. Throughput: 0: 886.7. Samples: 600612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:46,502][00820] Avg episode reward: [(0, '4.762')]
+[2023-02-26 16:10:47,777][10760] Updated weights for policy 0, policy_version 590 (0.0019)
+[2023-02-26 16:10:51,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3512.9). Total num frames: 2433024. Throughput: 0: 905.6. Samples: 607170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:10:51,502][00820] Avg episode reward: [(0, '4.624')]
+[2023-02-26 16:10:56,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.3, 300 sec: 3512.9). Total num frames: 2449408. Throughput: 0: 881.7. Samples: 612536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:10:56,500][00820] Avg episode reward: [(0, '4.761')]
+[2023-02-26 16:10:59,499][10760] Updated weights for policy 0, policy_version 600 (0.0012)
+[2023-02-26 16:11:01,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2461696. Throughput: 0: 872.9. Samples: 614520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:11:01,504][00820] Avg episode reward: [(0, '4.662')]
+[2023-02-26 16:11:06,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2482176. Throughput: 0: 887.1. Samples: 619288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 16:11:06,496][00820] Avg episode reward: [(0, '4.487')]
+[2023-02-26 16:11:10,231][10760] Updated weights for policy 0, policy_version 610 (0.0021)
+[2023-02-26 16:11:11,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2502656. Throughput: 0: 908.4. Samples: 625986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:11:11,497][00820] Avg episode reward: [(0, '4.517')]
+[2023-02-26 16:11:16,494][00820] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2519040. Throughput: 0: 908.7. Samples: 629276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 16:11:16,503][00820] Avg episode reward: [(0, '4.654')]
+[2023-02-26 16:11:21,496][00820] Fps is (10 sec: 3275.9, 60 sec: 3618.0, 300 sec: 3526.7). Total num frames: 2535424. Throughput: 0: 876.5. Samples: 633406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:11:21,504][00820] Avg episode reward: [(0, '4.758')]
+[2023-02-26 16:11:22,800][10760] Updated weights for policy 0, policy_version 620 (0.0021)
+[2023-02-26 16:11:26,493][00820] Fps is (10 sec: 3277.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2551808. Throughput: 0: 892.1. Samples: 638258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:11:26,506][00820] Avg episode reward: [(0, '4.750')]
+[2023-02-26 16:11:31,494][00820] Fps is (10 sec: 3687.3, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2572288. Throughput: 0: 911.3. Samples: 641620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:11:31,498][00820] Avg episode reward: [(0, '4.728')]
+[2023-02-26 16:11:32,716][10760] Updated weights for policy 0, policy_version 630 (0.0014)
+[2023-02-26 16:11:36,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2592768. Throughput: 0: 907.2. Samples: 647996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 16:11:36,496][00820] Avg episode reward: [(0, '4.487')]
+[2023-02-26 16:11:41,494][00820] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 2605056. Throughput: 0: 883.6. Samples: 652298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:11:41,498][00820] Avg episode reward: [(0, '4.475')] +[2023-02-26 16:11:45,703][10760] Updated weights for policy 0, policy_version 640 (0.0027) +[2023-02-26 16:11:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2621440. Throughput: 0: 886.3. Samples: 654402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:11:46,498][00820] Avg episode reward: [(0, '4.658')] +[2023-02-26 16:11:51,494][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2646016. Throughput: 0: 916.7. Samples: 660540. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:11:51,502][00820] Avg episode reward: [(0, '4.889')] +[2023-02-26 16:11:54,927][10760] Updated weights for policy 0, policy_version 650 (0.0015) +[2023-02-26 16:11:56,494][00820] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 2666496. Throughput: 0: 907.3. Samples: 666814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:11:56,496][00820] Avg episode reward: [(0, '4.826')] +[2023-02-26 16:12:01,494][00820] Fps is (10 sec: 3276.5, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 2678784. Throughput: 0: 879.3. Samples: 668844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 16:12:01,506][00820] Avg episode reward: [(0, '4.810')] +[2023-02-26 16:12:06,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2695168. Throughput: 0: 879.3. Samples: 672970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:06,499][00820] Avg episode reward: [(0, '4.772')] +[2023-02-26 16:12:08,314][10760] Updated weights for policy 0, policy_version 660 (0.0025) +[2023-02-26 16:12:11,494][00820] Fps is (10 sec: 3686.7, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2715648. Throughput: 0: 911.0. Samples: 679254. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:12:11,497][00820] Avg episode reward: [(0, '4.916')] +[2023-02-26 16:12:16,496][00820] Fps is (10 sec: 4094.8, 60 sec: 3618.0, 300 sec: 3568.3). Total num frames: 2736128. Throughput: 0: 908.6. Samples: 682508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:16,504][00820] Avg episode reward: [(0, '4.905')] +[2023-02-26 16:12:19,247][10760] Updated weights for policy 0, policy_version 670 (0.0013) +[2023-02-26 16:12:21,499][00820] Fps is (10 sec: 3275.0, 60 sec: 3549.7, 300 sec: 3540.5). Total num frames: 2748416. Throughput: 0: 868.2. Samples: 687068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:12:21,504][00820] Avg episode reward: [(0, '4.880')] +[2023-02-26 16:12:26,494][00820] Fps is (10 sec: 2868.0, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2764800. Throughput: 0: 863.7. Samples: 691166. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 16:12:26,503][00820] Avg episode reward: [(0, '4.532')] +[2023-02-26 16:12:31,196][10760] Updated weights for policy 0, policy_version 680 (0.0013) +[2023-02-26 16:12:31,494][00820] Fps is (10 sec: 3688.4, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2785280. Throughput: 0: 889.0. Samples: 694406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:12:31,497][00820] Avg episode reward: [(0, '4.612')] +[2023-02-26 16:12:36,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2805760. Throughput: 0: 894.8. Samples: 700806. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:36,500][00820] Avg episode reward: [(0, '4.658')] +[2023-02-26 16:12:41,497][00820] Fps is (10 sec: 3275.7, 60 sec: 3549.7, 300 sec: 3540.6). Total num frames: 2818048. Throughput: 0: 855.8. Samples: 705328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:41,503][00820] Avg episode reward: [(0, '4.751')] +[2023-02-26 16:12:41,520][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000688_2818048.pth... +[2023-02-26 16:12:41,675][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth +[2023-02-26 16:12:43,537][10760] Updated weights for policy 0, policy_version 690 (0.0033) +[2023-02-26 16:12:46,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2834432. Throughput: 0: 853.5. Samples: 707250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:46,499][00820] Avg episode reward: [(0, '4.997')] +[2023-02-26 16:12:51,495][00820] Fps is (10 sec: 3687.1, 60 sec: 3481.5, 300 sec: 3568.4). Total num frames: 2854912. Throughput: 0: 883.3. Samples: 712722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 16:12:51,502][00820] Avg episode reward: [(0, '4.668')] +[2023-02-26 16:12:54,222][10760] Updated weights for policy 0, policy_version 700 (0.0021) +[2023-02-26 16:12:56,494][00820] Fps is (10 sec: 4095.7, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 2875392. Throughput: 0: 885.2. Samples: 719090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:12:56,497][00820] Avg episode reward: [(0, '4.335')] +[2023-02-26 16:13:01,494][00820] Fps is (10 sec: 3277.3, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 2887680. Throughput: 0: 861.4. Samples: 721268. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:13:01,501][00820] Avg episode reward: [(0, '4.477')] +[2023-02-26 16:13:06,493][00820] Fps is (10 sec: 2457.8, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 2899968. Throughput: 0: 851.9. Samples: 725398. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:13:06,505][00820] Avg episode reward: [(0, '4.425')] +[2023-02-26 16:13:07,693][10760] Updated weights for policy 0, policy_version 710 (0.0020) +[2023-02-26 16:13:11,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 2924544. Throughput: 0: 891.1. Samples: 731264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:13:11,502][00820] Avg episode reward: [(0, '4.479')] +[2023-02-26 16:13:16,494][00820] Fps is (10 sec: 4505.6, 60 sec: 3481.8, 300 sec: 3568.4). Total num frames: 2945024. Throughput: 0: 891.1. Samples: 734506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:13:16,503][00820] Avg episode reward: [(0, '4.578')] +[2023-02-26 16:13:17,417][10760] Updated weights for policy 0, policy_version 720 (0.0012) +[2023-02-26 16:13:21,495][00820] Fps is (10 sec: 3276.3, 60 sec: 3481.8, 300 sec: 3540.6). Total num frames: 2957312. Throughput: 0: 860.9. Samples: 739546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:13:21,501][00820] Avg episode reward: [(0, '4.586')] +[2023-02-26 16:13:26,494][00820] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3526.8). Total num frames: 2969600. Throughput: 0: 853.0. Samples: 743712. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:13:26,498][00820] Avg episode reward: [(0, '4.487')] +[2023-02-26 16:13:30,405][10760] Updated weights for policy 0, policy_version 730 (0.0023) +[2023-02-26 16:13:31,494][00820] Fps is (10 sec: 3687.0, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 2994176. Throughput: 0: 868.6. Samples: 746336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:13:31,502][00820] Avg episode reward: [(0, '4.648')] +[2023-02-26 16:13:36,494][00820] Fps is (10 sec: 4505.4, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 3014656. Throughput: 0: 895.5. Samples: 753018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:13:36,497][00820] Avg episode reward: [(0, '4.792')] +[2023-02-26 16:13:41,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3540.6). Total num frames: 3026944. Throughput: 0: 837.6. Samples: 756780. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 16:13:41,498][00820] Avg episode reward: [(0, '4.779')] +[2023-02-26 16:13:43,309][10760] Updated weights for policy 0, policy_version 740 (0.0019) +[2023-02-26 16:13:46,493][00820] Fps is (10 sec: 2048.1, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 3035136. Throughput: 0: 824.7. Samples: 758380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-26 16:13:46,496][00820] Avg episode reward: [(0, '4.701')] +[2023-02-26 16:13:51,494][00820] Fps is (10 sec: 2048.0, 60 sec: 3208.6, 300 sec: 3499.0). Total num frames: 3047424. Throughput: 0: 808.1. Samples: 761762. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:13:51,503][00820] Avg episode reward: [(0, '4.810')] +[2023-02-26 16:13:56,494][00820] Fps is (10 sec: 3276.7, 60 sec: 3208.6, 300 sec: 3512.8). Total num frames: 3067904. Throughput: 0: 810.4. Samples: 767734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:13:56,497][00820] Avg episode reward: [(0, '4.872')] +[2023-02-26 16:13:56,558][10760] Updated weights for policy 0, policy_version 750 (0.0027) +[2023-02-26 16:14:01,493][00820] Fps is (10 sec: 4505.7, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3092480. Throughput: 0: 814.1. Samples: 771140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 16:14:01,496][00820] Avg episode reward: [(0, '4.495')] +[2023-02-26 16:14:06,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 3104768. Throughput: 0: 820.7. Samples: 776478. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-26 16:14:06,500][00820] Avg episode reward: [(0, '4.424')] +[2023-02-26 16:14:08,062][10760] Updated weights for policy 0, policy_version 760 (0.0012) +[2023-02-26 16:14:11,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3499.0). Total num frames: 3121152. Throughput: 0: 822.0. Samples: 780704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:14:11,500][00820] Avg episode reward: [(0, '4.360')] +[2023-02-26 16:14:16,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3526.8). Total num frames: 3141632. Throughput: 0: 825.8. Samples: 783498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:14:16,502][00820] Avg episode reward: [(0, '4.577')] +[2023-02-26 16:14:18,855][10760] Updated weights for policy 0, policy_version 770 (0.0020) +[2023-02-26 16:14:21,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3512.8). Total num frames: 3162112. Throughput: 0: 821.2. Samples: 789972. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:14:21,499][00820] Avg episode reward: [(0, '4.713')] +[2023-02-26 16:14:26,497][00820] Fps is (10 sec: 3685.0, 60 sec: 3481.4, 300 sec: 3498.9). Total num frames: 3178496. Throughput: 0: 853.1. Samples: 795172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:14:26,505][00820] Avg episode reward: [(0, '4.748')] +[2023-02-26 16:14:31,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3499.0). Total num frames: 3190784. Throughput: 0: 861.7. Samples: 797158. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:14:31,503][00820] Avg episode reward: [(0, '4.672')] +[2023-02-26 16:14:31,780][10760] Updated weights for policy 0, policy_version 780 (0.0021) +[2023-02-26 16:14:36,493][00820] Fps is (10 sec: 3278.1, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 3211264. Throughput: 0: 900.8. Samples: 802300. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 16:14:36,503][00820] Avg episode reward: [(0, '4.703')] +[2023-02-26 16:14:41,432][10760] Updated weights for policy 0, policy_version 790 (0.0015) +[2023-02-26 16:14:41,494][00820] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3235840. Throughput: 0: 916.7. Samples: 808986. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 16:14:41,497][00820] Avg episode reward: [(0, '4.596')] +[2023-02-26 16:14:41,514][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000790_3235840.pth... +[2023-02-26 16:14:41,649][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000584_2392064.pth +[2023-02-26 16:14:46,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 3252224. Throughput: 0: 905.8. Samples: 811902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:14:46,502][00820] Avg episode reward: [(0, '4.643')] +[2023-02-26 16:14:51,494][00820] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 3264512. Throughput: 0: 879.8. Samples: 816068. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:14:51,500][00820] Avg episode reward: [(0, '4.692')] +[2023-02-26 16:14:54,573][10760] Updated weights for policy 0, policy_version 800 (0.0033) +[2023-02-26 16:14:56,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3284992. Throughput: 0: 902.0. Samples: 821292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:14:56,504][00820] Avg episode reward: [(0, '4.584')] +[2023-02-26 16:15:01,493][00820] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3305472. Throughput: 0: 913.2. Samples: 824592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:01,495][00820] Avg episode reward: [(0, '4.641')] +[2023-02-26 16:15:03,903][10760] Updated weights for policy 0, policy_version 810 (0.0026) +[2023-02-26 16:15:06,495][00820] Fps is (10 sec: 3685.9, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 3321856. Throughput: 0: 904.6. Samples: 830680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:15:06,503][00820] Avg episode reward: [(0, '4.735')] +[2023-02-26 16:15:11,497][00820] Fps is (10 sec: 3275.6, 60 sec: 3617.9, 300 sec: 3512.8). Total num frames: 3338240. Throughput: 0: 883.4. Samples: 834924. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:15:11,504][00820] Avg episode reward: [(0, '4.531')] +[2023-02-26 16:15:16,493][00820] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3354624. Throughput: 0: 885.6. Samples: 837010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:16,496][00820] Avg episode reward: [(0, '4.573')] +[2023-02-26 16:15:16,962][10760] Updated weights for policy 0, policy_version 820 (0.0018) +[2023-02-26 16:15:21,494][00820] Fps is (10 sec: 3687.7, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3375104. Throughput: 0: 911.5. Samples: 843318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:21,503][00820] Avg episode reward: [(0, '4.641')] +[2023-02-26 16:15:26,495][00820] Fps is (10 sec: 4095.4, 60 sec: 3618.3, 300 sec: 3512.8). Total num frames: 3395584. Throughput: 0: 897.0. Samples: 849352. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 16:15:26,500][00820] Avg episode reward: [(0, '4.717')] +[2023-02-26 16:15:27,457][10760] Updated weights for policy 0, policy_version 830 (0.0015) +[2023-02-26 16:15:31,495][00820] Fps is (10 sec: 3276.2, 60 sec: 3618.0, 300 sec: 3498.9). Total num frames: 3407872. Throughput: 0: 879.9. Samples: 851500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:15:31,503][00820] Avg episode reward: [(0, '4.762')] +[2023-02-26 16:15:36,494][00820] Fps is (10 sec: 2867.6, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3424256. Throughput: 0: 882.3. Samples: 855770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:36,501][00820] Avg episode reward: [(0, '4.680')] +[2023-02-26 16:15:39,398][10760] Updated weights for policy 0, policy_version 840 (0.0018) +[2023-02-26 16:15:41,494][00820] Fps is (10 sec: 4096.7, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3448832. Throughput: 0: 912.2. Samples: 862342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:41,497][00820] Avg episode reward: [(0, '4.531')] +[2023-02-26 16:15:46,501][00820] Fps is (10 sec: 4502.2, 60 sec: 3617.7, 300 sec: 3512.8). Total num frames: 3469312. Throughput: 0: 913.8. Samples: 865720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:15:46,506][00820] Avg episode reward: [(0, '4.779')] +[2023-02-26 16:15:50,889][10760] Updated weights for policy 0, policy_version 850 (0.0044) +[2023-02-26 16:15:51,495][00820] Fps is (10 sec: 3276.3, 60 sec: 3618.0, 300 sec: 3498.9). Total num frames: 3481600. Throughput: 0: 879.9. Samples: 870276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:15:51,499][00820] Avg episode reward: [(0, '4.693')] +[2023-02-26 16:15:56,493][00820] Fps is (10 sec: 2869.4, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3497984. Throughput: 0: 887.0. Samples: 874836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:15:56,504][00820] Avg episode reward: [(0, '4.753')] +[2023-02-26 16:16:01,493][00820] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3518464. Throughput: 0: 913.0. Samples: 878094. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:16:01,502][00820] Avg episode reward: [(0, '4.841')] +[2023-02-26 16:16:01,880][10760] Updated weights for policy 0, policy_version 860 (0.0036) +[2023-02-26 16:16:06,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 3538944. Throughput: 0: 916.1. Samples: 884544. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:16:06,500][00820] Avg episode reward: [(0, '4.880')] +[2023-02-26 16:16:11,496][00820] Fps is (10 sec: 3685.4, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 3555328. Throughput: 0: 881.5. Samples: 889022. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:16:11,499][00820] Avg episode reward: [(0, '4.720')] +[2023-02-26 16:16:14,329][10760] Updated weights for policy 0, policy_version 870 (0.0013) +[2023-02-26 16:16:16,493][00820] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3567616. Throughput: 0: 880.0. Samples: 891100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:16:16,497][00820] Avg episode reward: [(0, '4.660')] +[2023-02-26 16:16:21,493][00820] Fps is (10 sec: 3687.4, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3592192. Throughput: 0: 916.6. Samples: 897016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:16:21,497][00820] Avg episode reward: [(0, '4.812')] +[2023-02-26 16:16:24,224][10760] Updated weights for policy 0, policy_version 880 (0.0023) +[2023-02-26 16:16:26,499][00820] Fps is (10 sec: 4503.0, 60 sec: 3617.9, 300 sec: 3526.7). Total num frames: 3612672. Throughput: 0: 913.1. Samples: 903436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:16:26,502][00820] Avg episode reward: [(0, '4.682')] +[2023-02-26 16:16:31,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3499.0). Total num frames: 3624960. Throughput: 0: 885.3. Samples: 905550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:16:31,498][00820] Avg episode reward: [(0, '4.434')] +[2023-02-26 16:16:36,493][00820] Fps is (10 sec: 2868.9, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 3641344. Throughput: 0: 878.7. Samples: 909816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:16:36,500][00820] Avg episode reward: [(0, '4.451')] +[2023-02-26 16:16:37,211][10760] Updated weights for policy 0, policy_version 890 (0.0013) +[2023-02-26 16:16:41,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3661824. Throughput: 0: 910.4. Samples: 915806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:16:41,503][00820] Avg episode reward: [(0, '4.463')] +[2023-02-26 16:16:41,517][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000894_3661824.pth... +[2023-02-26 16:16:41,638][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000688_2818048.pth +[2023-02-26 16:16:46,494][00820] Fps is (10 sec: 4095.9, 60 sec: 3550.3, 300 sec: 3512.8). Total num frames: 3682304. Throughput: 0: 908.1. Samples: 918960. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:16:46,500][00820] Avg episode reward: [(0, '4.277')] +[2023-02-26 16:16:46,857][10760] Updated weights for policy 0, policy_version 900 (0.0014) +[2023-02-26 16:16:51,494][00820] Fps is (10 sec: 3686.3, 60 sec: 3618.2, 300 sec: 3499.0). Total num frames: 3698688. Throughput: 0: 879.6. Samples: 924128. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:16:51,497][00820] Avg episode reward: [(0, '4.390')] +[2023-02-26 16:16:56,493][00820] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3710976. Throughput: 0: 871.3. Samples: 928230. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:16:56,502][00820] Avg episode reward: [(0, '4.508')] +[2023-02-26 16:17:00,284][10760] Updated weights for policy 0, policy_version 910 (0.0015) +[2023-02-26 16:17:01,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3731456. Throughput: 0: 884.8. Samples: 930918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:17:01,504][00820] Avg episode reward: [(0, '4.494')] +[2023-02-26 16:17:06,504][00820] Fps is (10 sec: 4091.6, 60 sec: 3549.2, 300 sec: 3512.7). Total num frames: 3751936. Throughput: 0: 894.9. Samples: 937298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 16:17:06,507][00820] Avg episode reward: [(0, '4.408')] +[2023-02-26 16:17:11,233][10760] Updated weights for policy 0, policy_version 920 (0.0031) +[2023-02-26 16:17:11,493][00820] Fps is (10 sec: 3686.5, 60 sec: 3550.0, 300 sec: 3499.0). Total num frames: 3768320. Throughput: 0: 866.0. Samples: 942402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:17:11,502][00820] Avg episode reward: [(0, '4.542')] +[2023-02-26 16:17:16,495][00820] Fps is (10 sec: 2870.0, 60 sec: 3549.8, 300 sec: 3499.0). Total num frames: 3780608. Throughput: 0: 865.2. Samples: 944486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 16:17:16,504][00820] Avg episode reward: [(0, '4.544')] +[2023-02-26 16:17:21,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3801088. Throughput: 0: 882.1. Samples: 949510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:17:21,497][00820] Avg episode reward: [(0, '4.580')] +[2023-02-26 16:17:23,222][10760] Updated weights for policy 0, policy_version 930 (0.0019) +[2023-02-26 16:17:26,494][00820] Fps is (10 sec: 4096.4, 60 sec: 3481.9, 300 sec: 3512.8). Total num frames: 3821568. Throughput: 0: 889.9. Samples: 955852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:17:26,501][00820] Avg episode reward: [(0, '4.516')] +[2023-02-26 16:17:31,495][00820] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3498.9). Total num frames: 3837952. Throughput: 0: 883.2. Samples: 958706. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:17:31,502][00820] Avg episode reward: [(0, '4.572')] +[2023-02-26 16:17:35,382][10760] Updated weights for policy 0, policy_version 940 (0.0015) +[2023-02-26 16:17:36,493][00820] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3850240. Throughput: 0: 858.4. Samples: 962754. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:17:36,503][00820] Avg episode reward: [(0, '4.562')] +[2023-02-26 16:17:41,493][00820] Fps is (10 sec: 3277.2, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 3870720. Throughput: 0: 882.5. Samples: 967944. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:17:41,500][00820] Avg episode reward: [(0, '4.607')] +[2023-02-26 16:17:46,034][10760] Updated weights for policy 0, policy_version 950 (0.0016) +[2023-02-26 16:17:46,493][00820] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3512.9). Total num frames: 3891200. Throughput: 0: 895.9. Samples: 971234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:17:46,499][00820] Avg episode reward: [(0, '4.642')] +[2023-02-26 16:17:51,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3907584. Throughput: 0: 882.3. Samples: 976990. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:17:51,502][00820] Avg episode reward: [(0, '4.621')] +[2023-02-26 16:17:56,493][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3923968. Throughput: 0: 860.3. Samples: 981116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:17:56,502][00820] Avg episode reward: [(0, '4.676')] +[2023-02-26 16:17:59,328][10760] Updated weights for policy 0, policy_version 960 (0.0036) +[2023-02-26 16:18:01,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3940352. Throughput: 0: 858.1. Samples: 983098. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 16:18:01,497][00820] Avg episode reward: [(0, '4.775')] +[2023-02-26 16:18:06,494][00820] Fps is (10 sec: 3686.4, 60 sec: 3482.2, 300 sec: 3512.8). Total num frames: 3960832. Throughput: 0: 891.3. Samples: 989618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 16:18:06,496][00820] Avg episode reward: [(0, '4.603')] +[2023-02-26 16:18:08,702][10760] Updated weights for policy 0, policy_version 970 (0.0019) +[2023-02-26 16:18:11,493][00820] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3977216. Throughput: 0: 876.0. Samples: 995270. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:18:11,501][00820] Avg episode reward: [(0, '4.441')] +[2023-02-26 16:18:16,494][00820] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3512.9). Total num frames: 3993600. Throughput: 0: 859.7. Samples: 997392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 16:18:16,503][00820] Avg episode reward: [(0, '4.492')] +[2023-02-26 16:18:20,050][10747] Stopping Batcher_0... +[2023-02-26 16:18:20,050][10747] Loop batcher_evt_loop terminating... +[2023-02-26 16:18:20,060][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2023-02-26 16:18:20,051][00820] Component Batcher_0 stopped! +[2023-02-26 16:18:20,135][10760] Weights refcount: 2 0 +[2023-02-26 16:18:20,143][10765] Stopping RolloutWorker_w4... +[2023-02-26 16:18:20,139][00820] Component RolloutWorker_w7 stopped! +[2023-02-26 16:18:20,143][10760] Stopping InferenceWorker_p0-w0... +[2023-02-26 16:18:20,146][10760] Loop inference_proc0-0_evt_loop terminating... +[2023-02-26 16:18:20,141][10764] Stopping RolloutWorker_w2... +[2023-02-26 16:18:20,146][00820] Component RolloutWorker_w2 stopped! +[2023-02-26 16:18:20,152][10769] Stopping RolloutWorker_w7... +[2023-02-26 16:18:20,153][10769] Loop rollout_proc7_evt_loop terminating... +[2023-02-26 16:18:20,144][10765] Loop rollout_proc4_evt_loop terminating... +[2023-02-26 16:18:20,152][00820] Component InferenceWorker_p0-w0 stopped! +[2023-02-26 16:18:20,155][00820] Component RolloutWorker_w4 stopped! +[2023-02-26 16:18:20,156][10764] Loop rollout_proc2_evt_loop terminating... +[2023-02-26 16:18:20,173][10761] Stopping RolloutWorker_w0... +[2023-02-26 16:18:20,174][00820] Component RolloutWorker_w0 stopped! +[2023-02-26 16:18:20,184][10763] Stopping RolloutWorker_w1... +[2023-02-26 16:18:20,185][10768] Stopping RolloutWorker_w6... +[2023-02-26 16:18:20,185][10768] Loop rollout_proc6_evt_loop terminating... +[2023-02-26 16:18:20,186][10763] Loop rollout_proc1_evt_loop terminating... +[2023-02-26 16:18:20,183][10766] Stopping RolloutWorker_w3... +[2023-02-26 16:18:20,188][10767] Stopping RolloutWorker_w5... +[2023-02-26 16:18:20,189][10767] Loop rollout_proc5_evt_loop terminating... 
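
For reference: the checkpoint filenames in the log encode the policy version and the environment-step count, and each "Saving ..." line is paired with a "Removing ..." line for an older file, i.e. checkpoints are rotated rather than accumulated. In this run the two numbers are locked together as env_steps = 4096 * policy_version (e.g. checkpoint_000000978_4005888.pth: 978 * 4096 = 4005888). A minimal Python sketch of the same parse-and-rotate pattern (the helper names here are illustrative, not Sample Factory's own API):

    import re
    from pathlib import Path

    CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth")

    def parse_checkpoint(name: str):
        # "checkpoint_000000978_4005888.pth" -> (978, 4005888)
        m = CKPT_RE.fullmatch(name)
        if m is None:
            raise ValueError(f"not a checkpoint filename: {name}")
        return int(m.group(1)), int(m.group(2))

    def rotate_checkpoints(ckpt_dir: str, keep_last: int = 2) -> None:
        # Keep only the newest `keep_last` checkpoints, mirroring the
        # paired Saving/Removing lines in the log.
        ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"),
                       key=lambda p: parse_checkpoint(p.name))
        for old in ckpts[:-keep_last]:
            old.unlink()

    version, env_steps = parse_checkpoint("checkpoint_000000978_4005888.pth")
    assert env_steps == version * 4096  # holds for every checkpoint in this log
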
+[2023-02-26 16:18:20,192][10766] Loop rollout_proc3_evt_loop terminating...
+[2023-02-26 16:18:20,192][00820] Component RolloutWorker_w6 stopped!
+[2023-02-26 16:18:20,194][00820] Component RolloutWorker_w3 stopped!
+[2023-02-26 16:18:20,195][00820] Component RolloutWorker_w1 stopped!
+[2023-02-26 16:18:20,176][10761] Loop rollout_proc0_evt_loop terminating...
+[2023-02-26 16:18:20,197][00820] Component RolloutWorker_w5 stopped!
+[2023-02-26 16:18:20,257][10747] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000790_3235840.pth
+[2023-02-26 16:18:20,266][10747] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 16:18:20,457][00820] Component LearnerWorker_p0 stopped!
+[2023-02-26 16:18:20,460][00820] Waiting for process learner_proc0 to stop...
+[2023-02-26 16:18:20,467][10747] Stopping LearnerWorker_p0...
+[2023-02-26 16:18:20,468][10747] Loop learner_proc0_evt_loop terminating...
+[2023-02-26 16:18:22,373][00820] Waiting for process inference_proc0-0 to join...
+[2023-02-26 16:18:22,613][00820] Waiting for process rollout_proc0 to join...
+[2023-02-26 16:18:23,125][00820] Waiting for process rollout_proc1 to join...
+[2023-02-26 16:18:23,127][00820] Waiting for process rollout_proc2 to join...
+[2023-02-26 16:18:23,128][00820] Waiting for process rollout_proc3 to join...
+[2023-02-26 16:18:23,129][00820] Waiting for process rollout_proc4 to join...
+[2023-02-26 16:18:23,134][00820] Waiting for process rollout_proc5 to join...
+[2023-02-26 16:18:23,135][00820] Waiting for process rollout_proc6 to join...
+[2023-02-26 16:18:23,146][00820] Waiting for process rollout_proc7 to join...
+[2023-02-26 16:18:23,147][00820] Batcher 0 profile tree view:
+batching: 25.8458, releasing_batches: 0.0279
+[2023-02-26 16:18:23,149][00820] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+ wait_policy_total: 559.0926
+update_model: 8.2708
+ weight_update: 0.0028
+one_step: 0.0134
+ handle_policy_step: 538.5310
+ deserialize: 15.1035, stack: 2.9584, obs_to_device_normalize: 118.4939, forward: 261.5085, send_messages: 25.7866
+ prepare_outputs: 88.1680
+ to_cpu: 55.6878
+[2023-02-26 16:18:23,150][00820] Learner 0 profile tree view:
+misc: 0.0064, prepare_batch: 15.7143
+train: 77.4247
+ epoch_init: 0.0059, minibatch_init: 0.0062, losses_postprocess: 0.6541, kl_divergence: 0.5889, after_optimizer: 33.2373
+ calculate_losses: 27.8392
+ losses_init: 0.0069, forward_head: 1.8539, bptt_initial: 18.5386, tail: 1.0872, advantages_returns: 0.2802, losses: 3.4995
+ bptt: 2.2485
+ bptt_forward_core: 2.1684
+ update: 14.4564
+ clip: 1.3980
+[2023-02-26 16:18:23,151][00820] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.4021, enqueue_policy_requests: 154.5585, env_step: 858.9613, overhead: 22.5351, complete_rollouts: 7.8434
+save_policy_outputs: 21.6133
+ split_output_tensors: 10.7261
+[2023-02-26 16:18:23,154][00820] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3287, enqueue_policy_requests: 149.5213, env_step: 865.7037, overhead: 22.3840, complete_rollouts: 6.5599
+save_policy_outputs: 21.0776
+ split_output_tensors: 10.0857
+[2023-02-26 16:18:23,155][00820] Loop Runner_EvtLoop terminating...
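
The per-component profile trees above show where wall-clock time went: the rollout workers spent most of their time in env_step (~859 s and ~866 s), the inference worker's cost is dominated by forward (261.5 s) and obs_to_device_normalize (118.5 s), while the learner's entire train loop took only 77.4 s, so sampling rather than learning was the bottleneck in this run. The Runner profile that follows reports main_loop: 1176.2878 s for 4005888 collected frames; a quick cross-check of the reported throughput (plain arithmetic on the logged values, not a Sample Factory API call):

    total_frames = 4_005_888        # "Collected {0: 4005888}" below
    main_loop_s = 1176.2878         # Runner profile, main_loop
    print(total_frames / main_loop_s)    # ~3405.5 FPS, matching the log
    env_step_s = 858.9613           # RolloutWorker_w0 profile, env_step
    print(env_step_s / main_loop_s)      # ~0.73: the env dominates worker time
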
+[2023-02-26 16:18:23,157][00820] Runner profile tree view:
+main_loop: 1176.2878
+[2023-02-26 16:18:23,159][00820] Collected {0: 4005888}, FPS: 3405.5
+[2023-02-26 16:18:23,265][00820] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 16:18:23,267][00820] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 16:18:23,269][00820] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 16:18:23,272][00820] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 16:18:23,275][00820] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 16:18:23,277][00820] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 16:18:23,282][00820] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 16:18:23,285][00820] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 16:18:23,287][00820] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-26 16:18:23,288][00820] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-26 16:18:23,290][00820] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-26 16:18:23,293][00820] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-26 16:18:23,296][00820] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-26 16:18:23,297][00820] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-26 16:18:23,298][00820] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-26 16:18:23,317][00820] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 16:18:23,319][00820] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 16:18:23,323][00820] RunningMeanStd input shape: (1,)
+[2023-02-26 16:18:23,342][00820] ConvEncoder: input_channels=3
+[2023-02-26 16:18:23,997][00820] Conv encoder output size: 512
+[2023-02-26 16:18:23,999][00820] Policy head output size: 512
+[2023-02-26 16:18:26,248][00820] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 16:18:27,481][00820] Num frames 100...
+[2023-02-26 16:18:27,589][00820] Num frames 200...
+[2023-02-26 16:18:27,704][00820] Num frames 300...
+[2023-02-26 16:18:27,850][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:27,852][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:27,875][00820] Num frames 400...
+[2023-02-26 16:18:27,992][00820] Num frames 500...
+[2023-02-26 16:18:28,117][00820] Num frames 600...
+[2023-02-26 16:18:28,229][00820] Num frames 700...
+[2023-02-26 16:18:28,360][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:28,362][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:28,402][00820] Num frames 800...
+[2023-02-26 16:18:28,528][00820] Num frames 900...
+[2023-02-26 16:18:28,665][00820] Num frames 1000...
+[2023-02-26 16:18:28,777][00820] Num frames 1100...
+[2023-02-26 16:18:28,889][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:28,890][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:28,947][00820] Num frames 1200...
+[2023-02-26 16:18:29,062][00820] Num frames 1300...
+[2023-02-26 16:18:29,171][00820] Num frames 1400...
+[2023-02-26 16:18:29,286][00820] Num frames 1500...
+[2023-02-26 16:18:29,399][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:29,401][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:29,509][00820] Num frames 1600...
+[2023-02-26 16:18:29,663][00820] Num frames 1700...
+[2023-02-26 16:18:29,817][00820] Num frames 1800...
+[2023-02-26 16:18:29,975][00820] Num frames 1900...
+[2023-02-26 16:18:30,069][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:30,072][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:30,228][00820] Num frames 2000...
+[2023-02-26 16:18:30,389][00820] Num frames 2100...
+[2023-02-26 16:18:30,549][00820] Num frames 2200...
+[2023-02-26 16:18:30,700][00820] Num frames 2300...
+[2023-02-26 16:18:30,762][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:18:30,765][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:18:30,928][00820] Num frames 2400...
+[2023-02-26 16:18:31,108][00820] Num frames 2500...
+[2023-02-26 16:18:31,262][00820] Avg episode rewards: #0: 3.657, true rewards: #0: 3.657
+[2023-02-26 16:18:31,265][00820] Avg episode reward: 3.657, avg true_objective: 3.657
+[2023-02-26 16:18:31,328][00820] Num frames 2600...
+[2023-02-26 16:18:31,487][00820] Num frames 2700...
+[2023-02-26 16:18:31,651][00820] Num frames 2800...
+[2023-02-26 16:18:31,814][00820] Num frames 2900...
+[2023-02-26 16:18:31,947][00820] Avg episode rewards: #0: 3.680, true rewards: #0: 3.680
+[2023-02-26 16:18:31,950][00820] Avg episode reward: 3.680, avg true_objective: 3.680
+[2023-02-26 16:18:32,050][00820] Num frames 3000...
+[2023-02-26 16:18:32,209][00820] Num frames 3100...
+[2023-02-26 16:18:32,373][00820] Num frames 3200...
+[2023-02-26 16:18:32,527][00820] Num frames 3300...
+[2023-02-26 16:18:32,679][00820] Num frames 3400...
+[2023-02-26 16:18:32,826][00820] Avg episode rewards: #0: 4.062, true rewards: #0: 3.840
+[2023-02-26 16:18:32,828][00820] Avg episode reward: 4.062, avg true_objective: 3.840
+[2023-02-26 16:18:32,881][00820] Num frames 3500...
+[2023-02-26 16:18:32,991][00820] Num frames 3600...
+[2023-02-26 16:18:33,108][00820] Num frames 3700...
+[2023-02-26 16:18:33,222][00820] Num frames 3800...
+[2023-02-26 16:18:33,357][00820] Avg episode rewards: #0: 4.172, true rewards: #0: 3.872
+[2023-02-26 16:18:33,360][00820] Avg episode reward: 4.172, avg true_objective: 3.872
+[2023-02-26 16:18:52,212][00820] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-26 16:20:02,793][00820] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 16:20:02,795][00820] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 16:20:02,797][00820] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 16:20:02,798][00820] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 16:20:02,801][00820] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 16:20:02,803][00820] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 16:20:02,805][00820] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-26 16:20:02,806][00820] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 16:20:02,807][00820] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-26 16:20:02,808][00820] Adding new argument 'hf_repository'='mlewand/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-26 16:20:02,809][00820] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-26 16:20:02,810][00820] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-26 16:20:02,812][00820] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-26 16:20:02,814][00820] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-26 16:20:02,815][00820] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-26 16:20:02,842][00820] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 16:20:02,845][00820] RunningMeanStd input shape: (1,)
+[2023-02-26 16:20:02,859][00820] ConvEncoder: input_channels=3
+[2023-02-26 16:20:02,895][00820] Conv encoder output size: 512
+[2023-02-26 16:20:02,896][00820] Policy head output size: 512
+[2023-02-26 16:20:02,916][00820] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 16:20:03,339][00820] Num frames 100...
+[2023-02-26 16:20:03,451][00820] Num frames 200...
+[2023-02-26 16:20:03,569][00820] Num frames 300...
+[2023-02-26 16:20:03,717][00820] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-26 16:20:03,719][00820] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-26 16:20:03,742][00820] Num frames 400...
+[2023-02-26 16:20:03,861][00820] Num frames 500...
+[2023-02-26 16:20:03,973][00820] Num frames 600...
+[2023-02-26 16:20:04,091][00820] Num frames 700...
+[2023-02-26 16:20:04,207][00820] Num frames 800...
+[2023-02-26 16:20:04,384][00820] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
+[2023-02-26 16:20:04,387][00820] Avg episode reward: 5.480, avg true_objective: 4.480
+[2023-02-26 16:20:04,396][00820] Num frames 900...
+[2023-02-26 16:20:04,516][00820] Num frames 1000...
+[2023-02-26 16:20:04,629][00820] Num frames 1100...
+[2023-02-26 16:20:04,743][00820] Num frames 1200...
+[2023-02-26 16:20:04,896][00820] Avg episode rewards: #0: 4.933, true rewards: #0: 4.267
+[2023-02-26 16:20:04,898][00820] Avg episode reward: 4.933, avg true_objective: 4.267
+[2023-02-26 16:20:04,927][00820] Num frames 1300...
+[2023-02-26 16:20:05,049][00820] Num frames 1400...
+[2023-02-26 16:20:05,170][00820] Num frames 1500...
+[2023-02-26 16:20:05,292][00820] Num frames 1600...
+[2023-02-26 16:20:05,455][00820] Avg episode rewards: #0: 4.990, true rewards: #0: 4.240
+[2023-02-26 16:20:05,457][00820] Avg episode reward: 4.990, avg true_objective: 4.240
+[2023-02-26 16:20:05,468][00820] Num frames 1700...
+[2023-02-26 16:20:05,598][00820] Num frames 1800...
+[2023-02-26 16:20:05,719][00820] Num frames 1900...
+[2023-02-26 16:20:05,830][00820] Num frames 2000...
+[2023-02-26 16:20:05,946][00820] Num frames 2100...
+[2023-02-26 16:20:06,016][00820] Avg episode rewards: #0: 5.024, true rewards: #0: 4.224
+[2023-02-26 16:20:06,020][00820] Avg episode reward: 5.024, avg true_objective: 4.224
+[2023-02-26 16:20:06,124][00820] Num frames 2200...
+[2023-02-26 16:20:06,242][00820] Num frames 2300...
+[2023-02-26 16:20:06,359][00820] Num frames 2400...
+[2023-02-26 16:20:06,484][00820] Num frames 2500...
+[2023-02-26 16:20:06,599][00820] Num frames 2600...
+[2023-02-26 16:20:06,716][00820] Avg episode rewards: #0: 5.427, true rewards: #0: 4.427
+[2023-02-26 16:20:06,718][00820] Avg episode reward: 5.427, avg true_objective: 4.427
+[2023-02-26 16:20:06,776][00820] Num frames 2700...
+[2023-02-26 16:20:06,892][00820] Num frames 2800...
+[2023-02-26 16:20:07,012][00820] Num frames 2900...
+[2023-02-26 16:20:07,122][00820] Num frames 3000...
+[2023-02-26 16:20:07,222][00820] Avg episode rewards: #0: 5.200, true rewards: #0: 4.343
+[2023-02-26 16:20:07,224][00820] Avg episode reward: 5.200, avg true_objective: 4.343
+[2023-02-26 16:20:07,299][00820] Num frames 3100...
+[2023-02-26 16:20:07,411][00820] Num frames 3200...
+[2023-02-26 16:20:07,548][00820] Num frames 3300...
+[2023-02-26 16:20:07,660][00820] Num frames 3400...
+[2023-02-26 16:20:07,743][00820] Avg episode rewards: #0: 5.030, true rewards: #0: 4.280
+[2023-02-26 16:20:07,745][00820] Avg episode reward: 5.030, avg true_objective: 4.280
+[2023-02-26 16:20:07,835][00820] Num frames 3500...
+[2023-02-26 16:20:07,949][00820] Num frames 3600...
+[2023-02-26 16:20:08,062][00820] Num frames 3700...
+[2023-02-26 16:20:08,177][00820] Num frames 3800...
+[2023-02-26 16:20:08,277][00820] Avg episode rewards: #0: 5.156, true rewards: #0: 4.267
+[2023-02-26 16:20:08,278][00820] Avg episode reward: 5.156, avg true_objective: 4.267
+[2023-02-26 16:20:08,356][00820] Num frames 3900...
+[2023-02-26 16:20:08,472][00820] Num frames 4000...
+[2023-02-26 16:20:08,607][00820] Num frames 4100...
+[2023-02-26 16:20:08,724][00820] Num frames 4200...
+[2023-02-26 16:20:08,819][00820] Avg episode rewards: #0: 5.024, true rewards: #0: 4.224
+[2023-02-26 16:20:08,822][00820] Avg episode reward: 5.024, avg true_objective: 4.224
+[2023-02-26 16:20:31,279][00820] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
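
This second evaluation pass reuses the saved config, overrides max_num_frames to 100000, and sets push_to_hub=True with hf_repository 'mlewand/rl_course_vizdoom_health_gathering_supreme', finishing at an average reward of 5.024 (true objective 4.224) over 10 episodes before the replay video is written and uploaded. A minimal sketch of an enjoy/push invocation consistent with the arguments logged above; the flag names come straight from the "Adding new argument" lines, but the entry-point module, helper names, and the env id doom_health_gathering_supreme are assumptions about the installed Sample Factory version, not recorded in this log:

    import sys
    from sample_factory.enjoy import enjoy
    # Assumed helpers from the sf_examples.vizdoom entry point; verify the
    # names against the Sample Factory version actually installed.
    from sf_examples.vizdoom.train_vizdoom import (
        parse_vizdoom_args,
        register_vizdoom_components,
    )

    def main():
        register_vizdoom_components()
        # Flag names mirror the overridden/added args in the log above.
        sys.argv = [
            "enjoy",
            "--env=doom_health_gathering_supreme",   # inferred from the repo name
            "--train_dir=/content/train_dir",
            "--experiment=default_experiment",
            "--num_workers=1",
            "--no_render",
            "--save_video",
            "--max_num_episodes=10",
            "--max_num_frames=100000",
            "--push_to_hub",
            "--hf_repository=mlewand/rl_course_vizdoom_health_gathering_supreme",
        ]
        cfg = parse_vizdoom_args(evaluation=True)
        return enjoy(cfg)

    if __name__ == "__main__":
        main()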