[2023-11-20 15:25:07,428][00638] Saving configuration to /content/train_dir/default_experiment/config.json... [2023-11-20 15:25:07,430][00638] Rollout worker 0 uses device cpu [2023-11-20 15:25:07,433][00638] Rollout worker 1 uses device cpu [2023-11-20 15:25:07,435][00638] Rollout worker 2 uses device cpu [2023-11-20 15:25:07,440][00638] Rollout worker 3 uses device cpu [2023-11-20 15:25:07,441][00638] Rollout worker 4 uses device cpu [2023-11-20 15:25:07,442][00638] Rollout worker 5 uses device cpu [2023-11-20 15:25:07,447][00638] Rollout worker 6 uses device cpu [2023-11-20 15:25:07,448][00638] Rollout worker 7 uses device cpu [2023-11-20 15:25:07,622][00638] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-11-20 15:25:07,623][00638] InferenceWorker_p0-w0: min num requests: 2 [2023-11-20 15:25:07,668][00638] Starting all processes... [2023-11-20 15:25:07,672][00638] Starting process learner_proc0 [2023-11-20 15:25:07,746][00638] Starting all processes... [2023-11-20 15:25:07,761][00638] Starting process inference_proc0-0 [2023-11-20 15:25:07,773][00638] Starting process rollout_proc0 [2023-11-20 15:25:07,774][00638] Starting process rollout_proc1 [2023-11-20 15:25:07,774][00638] Starting process rollout_proc2 [2023-11-20 15:25:07,781][00638] Starting process rollout_proc4 [2023-11-20 15:25:07,781][00638] Starting process rollout_proc5 [2023-11-20 15:25:07,781][00638] Starting process rollout_proc6 [2023-11-20 15:25:07,781][00638] Starting process rollout_proc7 [2023-11-20 15:25:07,781][00638] Starting process rollout_proc3 [2023-11-20 15:25:24,510][02578] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-11-20 15:25:24,512][02578] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-11-20 15:25:24,571][02578] Num visible devices: 1 [2023-11-20 15:25:24,608][02578] Starting seed is not provided [2023-11-20 15:25:24,609][02578] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-11-20 15:25:24,610][02578] Initializing actor-critic model on device cuda:0 [2023-11-20 15:25:24,611][02578] RunningMeanStd input shape: (3, 72, 128) [2023-11-20 15:25:24,615][02578] RunningMeanStd input shape: (1,) [2023-11-20 15:25:24,774][02578] ConvEncoder: input_channels=3 [2023-11-20 15:25:24,818][02594] Worker 2 uses CPU cores [0] [2023-11-20 15:25:25,051][02592] Worker 0 uses CPU cores [0] [2023-11-20 15:25:25,075][02595] Worker 4 uses CPU cores [0] [2023-11-20 15:25:25,236][02596] Worker 5 uses CPU cores [1] [2023-11-20 15:25:25,290][02599] Worker 3 uses CPU cores [1] [2023-11-20 15:25:25,342][02591] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-11-20 15:25:25,345][02598] Worker 7 uses CPU cores [1] [2023-11-20 15:25:25,351][02591] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-11-20 15:25:25,400][02591] Num visible devices: 1 [2023-11-20 15:25:25,412][02593] Worker 1 uses CPU cores [1] [2023-11-20 15:25:25,484][02597] Worker 6 uses CPU cores [0] [2023-11-20 15:25:25,486][02578] Conv encoder output size: 512 [2023-11-20 15:25:25,486][02578] Policy head output size: 512 [2023-11-20 15:25:25,550][02578] Created Actor Critic model with architecture: [2023-11-20 15:25:25,550][02578] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): 
VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2023-11-20 15:25:26,052][02578] Using optimizer [2023-11-20 15:25:27,289][02578] No checkpoints found [2023-11-20 15:25:27,289][02578] Did not load from checkpoint, starting from scratch! [2023-11-20 15:25:27,289][02578] Initialized policy 0 weights for model version 0 [2023-11-20 15:25:27,294][02578] LearnerWorker_p0 finished initialization! [2023-11-20 15:25:27,295][02578] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-11-20 15:25:27,485][02591] RunningMeanStd input shape: (3, 72, 128) [2023-11-20 15:25:27,487][02591] RunningMeanStd input shape: (1,) [2023-11-20 15:25:27,499][02591] ConvEncoder: input_channels=3 [2023-11-20 15:25:27,599][02591] Conv encoder output size: 512 [2023-11-20 15:25:27,600][02591] Policy head output size: 512 [2023-11-20 15:25:27,610][00638] Heartbeat connected on Batcher_0 [2023-11-20 15:25:27,618][00638] Heartbeat connected on LearnerWorker_p0 [2023-11-20 15:25:27,635][00638] Heartbeat connected on RolloutWorker_w0 [2023-11-20 15:25:27,644][00638] Heartbeat connected on RolloutWorker_w1 [2023-11-20 15:25:27,648][00638] Heartbeat connected on RolloutWorker_w2 [2023-11-20 15:25:27,651][00638] Heartbeat connected on RolloutWorker_w3 [2023-11-20 15:25:27,655][00638] Heartbeat connected on RolloutWorker_w4 [2023-11-20 15:25:27,659][00638] Heartbeat connected on RolloutWorker_w5 [2023-11-20 15:25:27,665][00638] Heartbeat connected on RolloutWorker_w6 [2023-11-20 15:25:27,669][00638] Heartbeat connected on RolloutWorker_w7 [2023-11-20 15:25:27,685][00638] Inference worker 0-0 is ready! [2023-11-20 15:25:27,686][00638] All inference workers are ready! Signal rollout workers to start! [2023-11-20 15:25:27,689][00638] Heartbeat connected on InferenceWorker_p0-w0 [2023-11-20 15:25:27,887][02599] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,893][02594] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,889][02592] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,891][02595] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,889][02593] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,891][02598] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,896][02597] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:27,898][02596] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:25:29,294][02597] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,294][02592] Decorrelating experience for 0 frames... 
[2023-11-20 15:25:29,296][02594] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,584][02598] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,587][02596] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,589][02593] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,596][02599] Decorrelating experience for 0 frames... [2023-11-20 15:25:29,758][02594] Decorrelating experience for 32 frames... [2023-11-20 15:25:30,185][02597] Decorrelating experience for 32 frames... [2023-11-20 15:25:30,683][02596] Decorrelating experience for 32 frames... [2023-11-20 15:25:30,686][02593] Decorrelating experience for 32 frames... [2023-11-20 15:25:30,690][02598] Decorrelating experience for 32 frames... [2023-11-20 15:25:31,175][00638] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-11-20 15:25:31,303][02595] Decorrelating experience for 0 frames... [2023-11-20 15:25:31,586][02597] Decorrelating experience for 64 frames... [2023-11-20 15:25:31,942][02594] Decorrelating experience for 64 frames... [2023-11-20 15:25:31,968][02592] Decorrelating experience for 32 frames... [2023-11-20 15:25:32,194][02596] Decorrelating experience for 64 frames... [2023-11-20 15:25:32,215][02598] Decorrelating experience for 64 frames... [2023-11-20 15:25:32,227][02593] Decorrelating experience for 64 frames... [2023-11-20 15:25:33,102][02592] Decorrelating experience for 64 frames... [2023-11-20 15:25:33,294][02599] Decorrelating experience for 32 frames... [2023-11-20 15:25:33,345][02594] Decorrelating experience for 96 frames... [2023-11-20 15:25:33,389][02597] Decorrelating experience for 96 frames... [2023-11-20 15:25:33,726][02593] Decorrelating experience for 96 frames... [2023-11-20 15:25:33,744][02598] Decorrelating experience for 96 frames... [2023-11-20 15:25:34,041][02595] Decorrelating experience for 32 frames... [2023-11-20 15:25:34,408][02596] Decorrelating experience for 96 frames... [2023-11-20 15:25:34,684][02599] Decorrelating experience for 64 frames... [2023-11-20 15:25:34,970][02595] Decorrelating experience for 64 frames... [2023-11-20 15:25:35,135][02599] Decorrelating experience for 96 frames... [2023-11-20 15:25:35,475][02592] Decorrelating experience for 96 frames... [2023-11-20 15:25:36,175][00638] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 5.6. Samples: 28. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-11-20 15:25:36,178][00638] Avg episode reward: [(0, '0.747')] [2023-11-20 15:25:36,528][02595] Decorrelating experience for 96 frames... [2023-11-20 15:25:41,175][00638] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 198.4. Samples: 1984. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-11-20 15:25:41,185][00638] Avg episode reward: [(0, '1.833')] [2023-11-20 15:25:42,323][02578] Signal inference workers to stop experience collection... [2023-11-20 15:25:42,363][02591] InferenceWorker_p0-w0: stopping experience collection [2023-11-20 15:25:42,945][02578] Signal inference workers to resume experience collection... [2023-11-20 15:25:42,945][02591] InferenceWorker_p0-w0: resuming experience collection [2023-11-20 15:25:46,175][00638] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 264.8. Samples: 3972. 
Policy #0 lag: (min: 0.0, avg: 1.4, max: 2.0) [2023-11-20 15:25:46,181][00638] Avg episode reward: [(0, '3.013')] [2023-11-20 15:25:51,175][00638] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 259.4. Samples: 5188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:25:51,179][00638] Avg episode reward: [(0, '3.471')] [2023-11-20 15:25:55,425][02591] Updated weights for policy 0, policy_version 10 (0.0314) [2023-11-20 15:25:56,175][00638] Fps is (10 sec: 2867.2, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 40960. Throughput: 0: 402.1. Samples: 10052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:25:56,183][00638] Avg episode reward: [(0, '4.162')] [2023-11-20 15:26:01,175][00638] Fps is (10 sec: 4505.6, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 549.3. Samples: 16480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:26:01,177][00638] Avg episode reward: [(0, '4.481')] [2023-11-20 15:26:06,182][00638] Fps is (10 sec: 3683.8, 60 sec: 2223.1, 300 sec: 2223.1). Total num frames: 77824. Throughput: 0: 539.0. Samples: 18870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:26:06,184][00638] Avg episode reward: [(0, '4.518')] [2023-11-20 15:26:06,673][02591] Updated weights for policy 0, policy_version 20 (0.0025) [2023-11-20 15:26:11,175][00638] Fps is (10 sec: 2457.6, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 90112. Throughput: 0: 572.8. Samples: 22914. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:26:11,177][00638] Avg episode reward: [(0, '4.460')] [2023-11-20 15:26:16,175][00638] Fps is (10 sec: 2459.2, 60 sec: 2275.5, 300 sec: 2275.5). Total num frames: 102400. Throughput: 0: 596.9. Samples: 26862. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) [2023-11-20 15:26:16,178][00638] Avg episode reward: [(0, '4.284')] [2023-11-20 15:26:16,182][02578] Saving new best policy, reward=4.284! [2023-11-20 15:26:20,948][02591] Updated weights for policy 0, policy_version 30 (0.0021) [2023-11-20 15:26:21,175][00638] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 122880. Throughput: 0: 656.6. Samples: 29576. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:21,183][00638] Avg episode reward: [(0, '4.369')] [2023-11-20 15:26:21,193][02578] Saving new best policy, reward=4.369! [2023-11-20 15:26:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 135168. Throughput: 0: 703.0. Samples: 33620. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:26:26,185][00638] Avg episode reward: [(0, '4.416')] [2023-11-20 15:26:26,192][02578] Saving new best policy, reward=4.416! [2023-11-20 15:26:31,176][00638] Fps is (10 sec: 2866.8, 60 sec: 2525.8, 300 sec: 2525.8). Total num frames: 151552. Throughput: 0: 766.6. Samples: 38468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:31,181][00638] Avg episode reward: [(0, '4.338')] [2023-11-20 15:26:35,079][02591] Updated weights for policy 0, policy_version 40 (0.0019) [2023-11-20 15:26:36,175][00638] Fps is (10 sec: 2867.3, 60 sec: 2730.7, 300 sec: 2520.6). Total num frames: 163840. Throughput: 0: 783.4. Samples: 40440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:36,180][00638] Avg episode reward: [(0, '4.349')] [2023-11-20 15:26:41,176][00638] Fps is (10 sec: 2457.6, 60 sec: 2935.4, 300 sec: 2516.1). Total num frames: 176128. Throughput: 0: 766.5. Samples: 44544. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:41,181][00638] Avg episode reward: [(0, '4.481')] [2023-11-20 15:26:41,199][02578] Saving new best policy, reward=4.481! [2023-11-20 15:26:46,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2621.4). Total num frames: 196608. Throughput: 0: 746.4. Samples: 50066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:26:46,180][00638] Avg episode reward: [(0, '4.525')] [2023-11-20 15:26:46,183][02578] Saving new best policy, reward=4.525! [2023-11-20 15:26:47,410][02591] Updated weights for policy 0, policy_version 50 (0.0012) [2023-11-20 15:26:51,175][00638] Fps is (10 sec: 4506.2, 60 sec: 3345.1, 300 sec: 2764.8). Total num frames: 221184. Throughput: 0: 763.6. Samples: 53226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:51,180][00638] Avg episode reward: [(0, '4.394')] [2023-11-20 15:26:56,176][00638] Fps is (10 sec: 3685.9, 60 sec: 3208.5, 300 sec: 2746.7). Total num frames: 233472. Throughput: 0: 790.9. Samples: 58506. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:26:56,186][00638] Avg episode reward: [(0, '4.406')] [2023-11-20 15:26:59,771][02591] Updated weights for policy 0, policy_version 60 (0.0032) [2023-11-20 15:27:01,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3003.7, 300 sec: 2730.7). Total num frames: 245760. Throughput: 0: 794.0. Samples: 62592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:27:01,182][00638] Avg episode reward: [(0, '4.435')] [2023-11-20 15:27:01,258][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth... [2023-11-20 15:27:06,175][00638] Fps is (10 sec: 2867.6, 60 sec: 3072.4, 300 sec: 2759.4). Total num frames: 262144. Throughput: 0: 777.1. Samples: 64544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:27:06,181][00638] Avg episode reward: [(0, '4.559')] [2023-11-20 15:27:06,189][02578] Saving new best policy, reward=4.559! [2023-11-20 15:27:11,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 2826.2). Total num frames: 282624. Throughput: 0: 803.4. Samples: 69772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:27:11,181][00638] Avg episode reward: [(0, '4.843')] [2023-11-20 15:27:11,191][02578] Saving new best policy, reward=4.843! [2023-11-20 15:27:12,266][02591] Updated weights for policy 0, policy_version 70 (0.0033) [2023-11-20 15:27:16,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 2886.7). Total num frames: 303104. Throughput: 0: 837.3. Samples: 76144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:27:16,179][00638] Avg episode reward: [(0, '4.809')] [2023-11-20 15:27:21,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2867.2). Total num frames: 315392. Throughput: 0: 845.9. Samples: 78504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:27:21,177][00638] Avg episode reward: [(0, '4.584')] [2023-11-20 15:27:24,597][02591] Updated weights for policy 0, policy_version 80 (0.0014) [2023-11-20 15:27:26,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 2849.4). Total num frames: 327680. Throughput: 0: 843.6. Samples: 82504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:27:26,177][00638] Avg episode reward: [(0, '4.602')] [2023-11-20 15:27:31,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.6, 300 sec: 2867.2). Total num frames: 344064. Throughput: 0: 806.8. Samples: 86374. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:27:31,177][00638] Avg episode reward: [(0, '4.471')] [2023-11-20 15:27:36,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 2883.6). Total num frames: 360448. Throughput: 0: 796.7. Samples: 89078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:27:36,177][00638] Avg episode reward: [(0, '4.508')] [2023-11-20 15:27:37,215][02591] Updated weights for policy 0, policy_version 90 (0.0013) [2023-11-20 15:27:41,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 2930.2). Total num frames: 380928. Throughput: 0: 820.3. Samples: 95420. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:27:41,177][00638] Avg episode reward: [(0, '4.552')] [2023-11-20 15:27:46,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 2943.1). Total num frames: 397312. Throughput: 0: 838.0. Samples: 100302. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:27:46,181][00638] Avg episode reward: [(0, '4.576')] [2023-11-20 15:27:49,899][02591] Updated weights for policy 0, policy_version 100 (0.0014) [2023-11-20 15:27:51,178][00638] Fps is (10 sec: 2866.3, 60 sec: 3140.1, 300 sec: 2925.6). Total num frames: 409600. Throughput: 0: 839.0. Samples: 102302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:27:51,180][00638] Avg episode reward: [(0, '4.621')] [2023-11-20 15:27:56,176][00638] Fps is (10 sec: 2866.8, 60 sec: 3208.5, 300 sec: 2937.8). Total num frames: 425984. Throughput: 0: 814.4. Samples: 106422. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:27:56,184][00638] Avg episode reward: [(0, '4.743')] [2023-11-20 15:28:01,175][00638] Fps is (10 sec: 3687.6, 60 sec: 3345.1, 300 sec: 2976.4). Total num frames: 446464. Throughput: 0: 794.6. Samples: 111902. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:01,181][00638] Avg episode reward: [(0, '4.639')] [2023-11-20 15:28:01,860][02591] Updated weights for policy 0, policy_version 110 (0.0016) [2023-11-20 15:28:06,175][00638] Fps is (10 sec: 4096.5, 60 sec: 3413.3, 300 sec: 3012.5). Total num frames: 466944. Throughput: 0: 813.9. Samples: 115130. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:28:06,181][00638] Avg episode reward: [(0, '4.632')] [2023-11-20 15:28:11,176][00638] Fps is (10 sec: 3686.1, 60 sec: 3345.0, 300 sec: 3020.8). Total num frames: 483328. Throughput: 0: 842.3. Samples: 120410. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:28:11,183][00638] Avg episode reward: [(0, '4.813')] [2023-11-20 15:28:14,185][02591] Updated weights for policy 0, policy_version 120 (0.0022) [2023-11-20 15:28:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3003.7). Total num frames: 495616. Throughput: 0: 849.0. Samples: 124578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:16,177][00638] Avg episode reward: [(0, '4.630')] [2023-11-20 15:28:21,175][00638] Fps is (10 sec: 2457.8, 60 sec: 3208.5, 300 sec: 2987.7). Total num frames: 507904. Throughput: 0: 833.6. Samples: 126588. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:28:21,181][00638] Avg episode reward: [(0, '4.686')] [2023-11-20 15:28:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3019.3). Total num frames: 528384. Throughput: 0: 806.2. Samples: 131698. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:26,182][00638] Avg episode reward: [(0, '4.498')] [2023-11-20 15:28:26,451][02591] Updated weights for policy 0, policy_version 130 (0.0017) [2023-11-20 15:28:31,175][00638] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3049.2). Total num frames: 548864. Throughput: 0: 841.5. Samples: 138170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:31,179][00638] Avg episode reward: [(0, '4.838')] [2023-11-20 15:28:36,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3055.4). Total num frames: 565248. Throughput: 0: 855.8. Samples: 140810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:28:36,180][00638] Avg episode reward: [(0, '4.943')] [2023-11-20 15:28:36,185][02578] Saving new best policy, reward=4.943! [2023-11-20 15:28:38,323][02591] Updated weights for policy 0, policy_version 140 (0.0032) [2023-11-20 15:28:41,180][00638] Fps is (10 sec: 2865.6, 60 sec: 3276.5, 300 sec: 3039.6). Total num frames: 577536. Throughput: 0: 854.0. Samples: 144854. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:28:41,183][00638] Avg episode reward: [(0, '4.839')] [2023-11-20 15:28:46,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3045.7). Total num frames: 593920. Throughput: 0: 823.3. Samples: 148952. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:46,179][00638] Avg episode reward: [(0, '5.009')] [2023-11-20 15:28:46,183][02578] Saving new best policy, reward=5.009! [2023-11-20 15:28:51,175][00638] Fps is (10 sec: 3278.7, 60 sec: 3345.2, 300 sec: 3051.5). Total num frames: 610304. Throughput: 0: 807.8. Samples: 151482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:28:51,178][00638] Avg episode reward: [(0, '4.964')] [2023-11-20 15:28:51,213][02591] Updated weights for policy 0, policy_version 150 (0.0020) [2023-11-20 15:28:56,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3097.0). Total num frames: 634880. Throughput: 0: 836.6. Samples: 158056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:28:56,181][00638] Avg episode reward: [(0, '5.009')] [2023-11-20 15:29:01,178][00638] Fps is (10 sec: 3685.4, 60 sec: 3344.9, 300 sec: 3081.7). Total num frames: 647168. Throughput: 0: 855.1. Samples: 163058. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:29:01,181][00638] Avg episode reward: [(0, '5.187')] [2023-11-20 15:29:01,198][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000158_647168.pth... [2023-11-20 15:29:01,352][02578] Saving new best policy, reward=5.187! [2023-11-20 15:29:02,724][02591] Updated weights for policy 0, policy_version 160 (0.0013) [2023-11-20 15:29:06,177][00638] Fps is (10 sec: 2866.7, 60 sec: 3276.7, 300 sec: 3086.3). Total num frames: 663552. Throughput: 0: 854.2. Samples: 165028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:06,179][00638] Avg episode reward: [(0, '5.017')] [2023-11-20 15:29:11,175][00638] Fps is (10 sec: 2868.0, 60 sec: 3208.6, 300 sec: 3072.0). Total num frames: 675840. Throughput: 0: 830.3. Samples: 169062. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:29:11,183][00638] Avg episode reward: [(0, '5.065')] [2023-11-20 15:29:15,834][02591] Updated weights for policy 0, policy_version 170 (0.0017) [2023-11-20 15:29:16,175][00638] Fps is (10 sec: 3277.4, 60 sec: 3345.1, 300 sec: 3094.8). Total num frames: 696320. Throughput: 0: 807.7. Samples: 174518. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:29:16,182][00638] Avg episode reward: [(0, '4.770')] [2023-11-20 15:29:21,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3116.5). Total num frames: 716800. Throughput: 0: 819.8. Samples: 177702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:29:21,177][00638] Avg episode reward: [(0, '4.651')] [2023-11-20 15:29:26,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3119.9). Total num frames: 733184. Throughput: 0: 849.0. Samples: 183054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:26,185][00638] Avg episode reward: [(0, '4.595')] [2023-11-20 15:29:27,331][02591] Updated weights for policy 0, policy_version 180 (0.0028) [2023-11-20 15:29:31,177][00638] Fps is (10 sec: 2866.7, 60 sec: 3276.7, 300 sec: 3106.1). Total num frames: 745472. Throughput: 0: 849.0. Samples: 187160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:29:31,183][00638] Avg episode reward: [(0, '4.802')] [2023-11-20 15:29:36,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3092.9). Total num frames: 757760. Throughput: 0: 837.9. Samples: 189188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:36,177][00638] Avg episode reward: [(0, '4.688')] [2023-11-20 15:29:40,573][02591] Updated weights for policy 0, policy_version 190 (0.0013) [2023-11-20 15:29:41,175][00638] Fps is (10 sec: 3277.3, 60 sec: 3345.4, 300 sec: 3113.0). Total num frames: 778240. Throughput: 0: 805.1. Samples: 194284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:41,177][00638] Avg episode reward: [(0, '5.121')] [2023-11-20 15:29:46,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3132.2). Total num frames: 798720. Throughput: 0: 836.6. Samples: 200702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:46,182][00638] Avg episode reward: [(0, '5.400')] [2023-11-20 15:29:46,243][02578] Saving new best policy, reward=5.400! [2023-11-20 15:29:51,178][00638] Fps is (10 sec: 3685.1, 60 sec: 3413.1, 300 sec: 3135.0). Total num frames: 815104. Throughput: 0: 846.1. Samples: 203106. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:29:51,186][00638] Avg episode reward: [(0, '5.414')] [2023-11-20 15:29:51,201][02578] Saving new best policy, reward=5.414! [2023-11-20 15:29:52,102][02591] Updated weights for policy 0, policy_version 200 (0.0040) [2023-11-20 15:29:56,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3122.2). Total num frames: 827392. Throughput: 0: 844.7. Samples: 207072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:29:56,186][00638] Avg episode reward: [(0, '5.141')] [2023-11-20 15:30:01,175][00638] Fps is (10 sec: 2458.5, 60 sec: 3208.7, 300 sec: 3109.9). Total num frames: 839680. Throughput: 0: 811.4. Samples: 211032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:30:01,181][00638] Avg episode reward: [(0, '5.314')] [2023-11-20 15:30:05,621][02591] Updated weights for policy 0, policy_version 210 (0.0019) [2023-11-20 15:30:06,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.9, 300 sec: 3127.9). Total num frames: 860160. Throughput: 0: 797.5. Samples: 213588. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:30:06,177][00638] Avg episode reward: [(0, '5.278')] [2023-11-20 15:30:11,175][00638] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3159.8). Total num frames: 884736. Throughput: 0: 823.1. Samples: 220094. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:30:11,180][00638] Avg episode reward: [(0, '5.650')] [2023-11-20 15:30:11,191][02578] Saving new best policy, reward=5.650! [2023-11-20 15:30:16,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3147.5). Total num frames: 897024. Throughput: 0: 842.6. Samples: 225074. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:30:16,182][00638] Avg episode reward: [(0, '5.696')] [2023-11-20 15:30:16,189][02578] Saving new best policy, reward=5.696! [2023-11-20 15:30:16,685][02591] Updated weights for policy 0, policy_version 220 (0.0016) [2023-11-20 15:30:21,178][00638] Fps is (10 sec: 2866.4, 60 sec: 3276.6, 300 sec: 3149.7). Total num frames: 913408. Throughput: 0: 842.7. Samples: 227110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:30:21,180][00638] Avg episode reward: [(0, '5.806')] [2023-11-20 15:30:21,190][02578] Saving new best policy, reward=5.806! [2023-11-20 15:30:26,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3138.0). Total num frames: 925696. Throughput: 0: 817.6. Samples: 231078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:30:26,182][00638] Avg episode reward: [(0, '5.396')] [2023-11-20 15:30:30,306][02591] Updated weights for policy 0, policy_version 230 (0.0039) [2023-11-20 15:30:31,175][00638] Fps is (10 sec: 2868.0, 60 sec: 3276.9, 300 sec: 3193.5). Total num frames: 942080. Throughput: 0: 792.1. Samples: 236348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:30:31,182][00638] Avg episode reward: [(0, '5.462')] [2023-11-20 15:30:36,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3262.9). Total num frames: 962560. Throughput: 0: 808.4. Samples: 239480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:30:36,180][00638] Avg episode reward: [(0, '5.701')] [2023-11-20 15:30:41,181][00638] Fps is (10 sec: 3684.2, 60 sec: 3344.7, 300 sec: 3276.7). Total num frames: 978944. Throughput: 0: 842.1. Samples: 244972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:30:41,183][00638] Avg episode reward: [(0, '5.559')] [2023-11-20 15:30:41,554][02591] Updated weights for policy 0, policy_version 240 (0.0020) [2023-11-20 15:30:46,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 991232. Throughput: 0: 843.9. Samples: 249006. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:30:46,178][00638] Avg episode reward: [(0, '5.281')] [2023-11-20 15:30:51,175][00638] Fps is (10 sec: 2868.8, 60 sec: 3208.7, 300 sec: 3276.8). Total num frames: 1007616. Throughput: 0: 831.5. Samples: 251004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:30:51,181][00638] Avg episode reward: [(0, '5.213')] [2023-11-20 15:30:55,371][02591] Updated weights for policy 0, policy_version 250 (0.0022) [2023-11-20 15:30:56,175][00638] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 3249.0). Total num frames: 1024000. Throughput: 0: 796.9. Samples: 255954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:30:56,181][00638] Avg episode reward: [(0, '5.444')] [2023-11-20 15:31:01,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3290.8). Total num frames: 1048576. Throughput: 0: 830.4. Samples: 262444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:31:01,177][00638] Avg episode reward: [(0, '5.982')] [2023-11-20 15:31:01,188][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000256_1048576.pth... 
[2023-11-20 15:31:01,290][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth [2023-11-20 15:31:01,307][02578] Saving new best policy, reward=5.982! [2023-11-20 15:31:06,175][00638] Fps is (10 sec: 3686.3, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 1060864. Throughput: 0: 841.4. Samples: 264970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:31:06,178][00638] Avg episode reward: [(0, '6.025')] [2023-11-20 15:31:06,180][02578] Saving new best policy, reward=6.025! [2023-11-20 15:31:06,643][02591] Updated weights for policy 0, policy_version 260 (0.0017) [2023-11-20 15:31:11,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 1077248. Throughput: 0: 841.8. Samples: 268960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:11,177][00638] Avg episode reward: [(0, '6.012')] [2023-11-20 15:31:16,175][00638] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 1089536. Throughput: 0: 813.8. Samples: 272970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:16,182][00638] Avg episode reward: [(0, '5.681')] [2023-11-20 15:31:20,037][02591] Updated weights for policy 0, policy_version 270 (0.0016) [2023-11-20 15:31:21,175][00638] Fps is (10 sec: 3276.7, 60 sec: 3276.9, 300 sec: 3304.6). Total num frames: 1110016. Throughput: 0: 802.4. Samples: 275588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:31:21,177][00638] Avg episode reward: [(0, '5.389')] [2023-11-20 15:31:26,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 1130496. Throughput: 0: 825.8. Samples: 282126. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:26,182][00638] Avg episode reward: [(0, '5.945')] [2023-11-20 15:31:30,723][02591] Updated weights for policy 0, policy_version 280 (0.0012) [2023-11-20 15:31:31,175][00638] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 1146880. Throughput: 0: 850.0. Samples: 287254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:31,186][00638] Avg episode reward: [(0, '5.996')] [2023-11-20 15:31:36,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3332.4). Total num frames: 1159168. Throughput: 0: 850.2. Samples: 289264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:36,177][00638] Avg episode reward: [(0, '6.047')] [2023-11-20 15:31:36,186][02578] Saving new best policy, reward=6.047! [2023-11-20 15:31:41,176][00638] Fps is (10 sec: 2457.3, 60 sec: 3208.8, 300 sec: 3304.6). Total num frames: 1171456. Throughput: 0: 829.3. Samples: 293274. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:31:41,178][00638] Avg episode reward: [(0, '6.705')] [2023-11-20 15:31:41,190][02578] Saving new best policy, reward=6.705! [2023-11-20 15:31:44,737][02591] Updated weights for policy 0, policy_version 290 (0.0024) [2023-11-20 15:31:46,177][00638] Fps is (10 sec: 3275.9, 60 sec: 3344.9, 300 sec: 3290.7). Total num frames: 1191936. Throughput: 0: 803.1. Samples: 298586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:31:46,183][00638] Avg episode reward: [(0, '6.730')] [2023-11-20 15:31:46,186][02578] Saving new best policy, reward=6.730! [2023-11-20 15:31:51,175][00638] Fps is (10 sec: 4096.5, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 1212416. Throughput: 0: 816.3. Samples: 301702. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:31:51,177][00638] Avg episode reward: [(0, '7.292')] [2023-11-20 15:31:51,189][02578] Saving new best policy, reward=7.292! [2023-11-20 15:31:55,438][02591] Updated weights for policy 0, policy_version 300 (0.0024) [2023-11-20 15:31:56,179][00638] Fps is (10 sec: 3685.7, 60 sec: 3413.1, 300 sec: 3332.3). Total num frames: 1228800. Throughput: 0: 850.8. Samples: 307252. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:31:56,187][00638] Avg episode reward: [(0, '7.175')] [2023-11-20 15:32:01,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3318.5). Total num frames: 1241088. Throughput: 0: 852.1. Samples: 311314. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:01,179][00638] Avg episode reward: [(0, '7.403')] [2023-11-20 15:32:01,190][02578] Saving new best policy, reward=7.403! [2023-11-20 15:32:06,175][00638] Fps is (10 sec: 2458.5, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 1253376. Throughput: 0: 836.6. Samples: 313234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:32:06,184][00638] Avg episode reward: [(0, '7.118')] [2023-11-20 15:32:09,503][02591] Updated weights for policy 0, policy_version 310 (0.0013) [2023-11-20 15:32:11,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 1273856. Throughput: 0: 801.7. Samples: 318202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:11,184][00638] Avg episode reward: [(0, '7.302')] [2023-11-20 15:32:16,175][00638] Fps is (10 sec: 4096.3, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 1294336. Throughput: 0: 830.8. Samples: 324638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:32:16,181][00638] Avg episode reward: [(0, '7.261')] [2023-11-20 15:32:20,001][02591] Updated weights for policy 0, policy_version 320 (0.0023) [2023-11-20 15:32:21,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 1310720. Throughput: 0: 844.0. Samples: 327242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:21,180][00638] Avg episode reward: [(0, '7.093')] [2023-11-20 15:32:26,181][00638] Fps is (10 sec: 3274.6, 60 sec: 3276.4, 300 sec: 3332.3). Total num frames: 1327104. Throughput: 0: 846.9. Samples: 331388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:32:26,184][00638] Avg episode reward: [(0, '7.207')] [2023-11-20 15:32:31,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3318.5). Total num frames: 1339392. Throughput: 0: 819.8. Samples: 335474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:32:31,181][00638] Avg episode reward: [(0, '7.311')] [2023-11-20 15:32:34,205][02591] Updated weights for policy 0, policy_version 330 (0.0019) [2023-11-20 15:32:36,175][00638] Fps is (10 sec: 3279.0, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 1359872. Throughput: 0: 805.0. Samples: 337926. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:36,182][00638] Avg episode reward: [(0, '7.697')] [2023-11-20 15:32:36,186][02578] Saving new best policy, reward=7.697! [2023-11-20 15:32:41,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3332.3). Total num frames: 1380352. Throughput: 0: 822.3. Samples: 344254. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:32:41,181][00638] Avg episode reward: [(0, '7.938')] [2023-11-20 15:32:41,191][02578] Saving new best policy, reward=7.938! 
[2023-11-20 15:32:44,645][02591] Updated weights for policy 0, policy_version 340 (0.0028) [2023-11-20 15:32:46,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.5, 300 sec: 3346.3). Total num frames: 1396736. Throughput: 0: 845.3. Samples: 349352. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) [2023-11-20 15:32:46,179][00638] Avg episode reward: [(0, '8.481')] [2023-11-20 15:32:46,185][02578] Saving new best policy, reward=8.481! [2023-11-20 15:32:51,175][00638] Fps is (10 sec: 2867.0, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 1409024. Throughput: 0: 847.1. Samples: 351354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:51,182][00638] Avg episode reward: [(0, '9.194')] [2023-11-20 15:32:51,195][02578] Saving new best policy, reward=9.194! [2023-11-20 15:32:56,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3208.8, 300 sec: 3304.6). Total num frames: 1421312. Throughput: 0: 826.6. Samples: 355400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:32:56,181][00638] Avg episode reward: [(0, '9.641')] [2023-11-20 15:32:56,186][02578] Saving new best policy, reward=9.641! [2023-11-20 15:32:59,131][02591] Updated weights for policy 0, policy_version 350 (0.0047) [2023-11-20 15:33:01,175][00638] Fps is (10 sec: 3277.0, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 1441792. Throughput: 0: 801.2. Samples: 360694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:33:01,180][00638] Avg episode reward: [(0, '9.341')] [2023-11-20 15:33:01,190][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000352_1441792.pth... [2023-11-20 15:33:01,295][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000158_647168.pth [2023-11-20 15:33:06,175][00638] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3318.5). Total num frames: 1462272. Throughput: 0: 813.2. Samples: 363836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:33:06,176][00638] Avg episode reward: [(0, '9.236')] [2023-11-20 15:33:09,574][02591] Updated weights for policy 0, policy_version 360 (0.0018) [2023-11-20 15:33:11,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 1478656. Throughput: 0: 843.8. Samples: 369354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:33:11,177][00638] Avg episode reward: [(0, '9.182')] [2023-11-20 15:33:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 1490944. Throughput: 0: 842.4. Samples: 373384. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:33:16,178][00638] Avg episode reward: [(0, '9.418')] [2023-11-20 15:33:21,176][00638] Fps is (10 sec: 2457.4, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 1503232. Throughput: 0: 832.3. Samples: 375378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:33:21,181][00638] Avg episode reward: [(0, '10.219')] [2023-11-20 15:33:21,193][02578] Saving new best policy, reward=10.219! [2023-11-20 15:33:23,935][02591] Updated weights for policy 0, policy_version 370 (0.0019) [2023-11-20 15:33:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3277.2, 300 sec: 3304.6). Total num frames: 1523712. Throughput: 0: 798.5. Samples: 380186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:33:26,181][00638] Avg episode reward: [(0, '11.128')] [2023-11-20 15:33:26,185][02578] Saving new best policy, reward=11.128! [2023-11-20 15:33:31,175][00638] Fps is (10 sec: 4096.3, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 1544192. 
Throughput: 0: 827.5. Samples: 386588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:33:31,177][00638] Avg episode reward: [(0, '11.676')] [2023-11-20 15:33:31,195][02578] Saving new best policy, reward=11.676! [2023-11-20 15:33:34,038][02591] Updated weights for policy 0, policy_version 380 (0.0019) [2023-11-20 15:33:36,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.4). Total num frames: 1560576. Throughput: 0: 844.2. Samples: 389342. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:33:36,184][00638] Avg episode reward: [(0, '11.762')] [2023-11-20 15:33:36,189][02578] Saving new best policy, reward=11.762! [2023-11-20 15:33:41,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3318.5). Total num frames: 1572864. Throughput: 0: 840.1. Samples: 393204. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:33:41,178][00638] Avg episode reward: [(0, '11.701')] [2023-11-20 15:33:46,175][00638] Fps is (10 sec: 2457.5, 60 sec: 3140.2, 300 sec: 3304.6). Total num frames: 1585152. Throughput: 0: 813.5. Samples: 397300. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) [2023-11-20 15:33:46,177][00638] Avg episode reward: [(0, '11.078')] [2023-11-20 15:33:48,765][02591] Updated weights for policy 0, policy_version 390 (0.0016) [2023-11-20 15:33:51,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 1605632. Throughput: 0: 797.7. Samples: 399732. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:33:51,181][00638] Avg episode reward: [(0, '10.469')] [2023-11-20 15:33:56,175][00638] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 1626112. Throughput: 0: 818.1. Samples: 406170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:33:56,181][00638] Avg episode reward: [(0, '10.900')] [2023-11-20 15:33:58,563][02591] Updated weights for policy 0, policy_version 400 (0.0023) [2023-11-20 15:34:01,178][00638] Fps is (10 sec: 3685.4, 60 sec: 3344.9, 300 sec: 3318.4). Total num frames: 1642496. Throughput: 0: 847.9. Samples: 411542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:34:01,180][00638] Avg episode reward: [(0, '10.604')] [2023-11-20 15:34:06,175][00638] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 1658880. Throughput: 0: 847.2. Samples: 413500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:34:06,183][00638] Avg episode reward: [(0, '11.682')] [2023-11-20 15:34:11,175][00638] Fps is (10 sec: 2868.0, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 1671168. Throughput: 0: 832.0. Samples: 417628. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:34:11,180][00638] Avg episode reward: [(0, '12.291')] [2023-11-20 15:34:11,195][02578] Saving new best policy, reward=12.291! [2023-11-20 15:34:13,413][02591] Updated weights for policy 0, policy_version 410 (0.0026) [2023-11-20 15:34:16,175][00638] Fps is (10 sec: 2867.3, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 1687552. Throughput: 0: 803.2. Samples: 422730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:34:16,177][00638] Avg episode reward: [(0, '12.070')] [2023-11-20 15:34:21,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3318.5). Total num frames: 1712128. Throughput: 0: 813.2. Samples: 425936. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:34:21,177][00638] Avg episode reward: [(0, '12.394')] [2023-11-20 15:34:21,185][02578] Saving new best policy, reward=12.394! 
[2023-11-20 15:34:23,111][02591] Updated weights for policy 0, policy_version 420 (0.0017) [2023-11-20 15:34:26,175][00638] Fps is (10 sec: 3686.2, 60 sec: 3345.0, 300 sec: 3318.5). Total num frames: 1724416. Throughput: 0: 853.2. Samples: 431600. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:34:26,178][00638] Avg episode reward: [(0, '12.073')] [2023-11-20 15:34:31,176][00638] Fps is (10 sec: 2866.8, 60 sec: 3276.7, 300 sec: 3332.3). Total num frames: 1740800. Throughput: 0: 855.6. Samples: 435802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:34:31,182][00638] Avg episode reward: [(0, '12.647')] [2023-11-20 15:34:31,194][02578] Saving new best policy, reward=12.647! [2023-11-20 15:34:36,175][00638] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 1753088. Throughput: 0: 846.7. Samples: 437832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:34:36,182][00638] Avg episode reward: [(0, '12.670')] [2023-11-20 15:34:36,185][02578] Saving new best policy, reward=12.670! [2023-11-20 15:34:38,097][02591] Updated weights for policy 0, policy_version 430 (0.0037) [2023-11-20 15:34:41,175][00638] Fps is (10 sec: 3277.2, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 1773568. Throughput: 0: 806.8. Samples: 442476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:34:41,182][00638] Avg episode reward: [(0, '12.644')] [2023-11-20 15:34:46,177][00638] Fps is (10 sec: 4095.3, 60 sec: 3481.5, 300 sec: 3318.5). Total num frames: 1794048. Throughput: 0: 829.5. Samples: 448868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:34:46,180][00638] Avg episode reward: [(0, '12.483')] [2023-11-20 15:34:47,717][02591] Updated weights for policy 0, policy_version 440 (0.0039) [2023-11-20 15:34:51,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 1810432. Throughput: 0: 853.3. Samples: 451898. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:34:51,177][00638] Avg episode reward: [(0, '13.100')] [2023-11-20 15:34:51,195][02578] Saving new best policy, reward=13.100! [2023-11-20 15:34:56,175][00638] Fps is (10 sec: 2867.7, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 1822720. Throughput: 0: 846.6. Samples: 455724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:34:56,180][00638] Avg episode reward: [(0, '12.655')] [2023-11-20 15:35:01,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3208.7, 300 sec: 3304.6). Total num frames: 1835008. Throughput: 0: 823.0. Samples: 459766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:35:01,180][00638] Avg episode reward: [(0, '14.140')] [2023-11-20 15:35:01,192][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000448_1835008.pth... [2023-11-20 15:35:01,343][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000256_1048576.pth [2023-11-20 15:35:01,362][02578] Saving new best policy, reward=14.140! [2023-11-20 15:35:03,133][02591] Updated weights for policy 0, policy_version 450 (0.0027) [2023-11-20 15:35:06,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 1851392. Throughput: 0: 795.2. Samples: 461718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:35:06,183][00638] Avg episode reward: [(0, '15.481')] [2023-11-20 15:35:06,190][02578] Saving new best policy, reward=15.481! [2023-11-20 15:35:11,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 1871872. 
Throughput: 0: 808.9. Samples: 467998. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:35:11,181][00638] Avg episode reward: [(0, '16.518')] [2023-11-20 15:35:11,278][02578] Saving new best policy, reward=16.518! [2023-11-20 15:35:13,173][02591] Updated weights for policy 0, policy_version 460 (0.0013) [2023-11-20 15:35:16,176][00638] Fps is (10 sec: 4095.3, 60 sec: 3413.2, 300 sec: 3318.5). Total num frames: 1892352. Throughput: 0: 837.8. Samples: 473504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:35:16,184][00638] Avg episode reward: [(0, '16.531')] [2023-11-20 15:35:16,186][02578] Saving new best policy, reward=16.531! [2023-11-20 15:35:21,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3318.5). Total num frames: 1904640. Throughput: 0: 835.7. Samples: 475438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:35:21,181][00638] Avg episode reward: [(0, '17.515')] [2023-11-20 15:35:21,192][02578] Saving new best policy, reward=17.515! [2023-11-20 15:35:26,175][00638] Fps is (10 sec: 2458.0, 60 sec: 3208.6, 300 sec: 3304.6). Total num frames: 1916928. Throughput: 0: 822.0. Samples: 479464. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:35:26,181][00638] Avg episode reward: [(0, '17.105')] [2023-11-20 15:35:28,079][02591] Updated weights for policy 0, policy_version 470 (0.0025) [2023-11-20 15:35:31,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.6, 300 sec: 3290.7). Total num frames: 1933312. Throughput: 0: 784.3. Samples: 484160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:35:31,177][00638] Avg episode reward: [(0, '18.534')] [2023-11-20 15:35:31,187][02578] Saving new best policy, reward=18.534! [2023-11-20 15:35:36,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 1953792. Throughput: 0: 786.4. Samples: 487286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:35:36,182][00638] Avg episode reward: [(0, '19.286')] [2023-11-20 15:35:36,187][02578] Saving new best policy, reward=19.286! [2023-11-20 15:35:38,282][02591] Updated weights for policy 0, policy_version 480 (0.0013) [2023-11-20 15:35:41,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 1974272. Throughput: 0: 831.9. Samples: 493158. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:35:41,182][00638] Avg episode reward: [(0, '19.902')] [2023-11-20 15:35:41,196][02578] Saving new best policy, reward=19.902! [2023-11-20 15:35:46,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3208.6, 300 sec: 3318.5). Total num frames: 1986560. Throughput: 0: 831.5. Samples: 497184. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:35:46,183][00638] Avg episode reward: [(0, '19.702')] [2023-11-20 15:35:51,176][00638] Fps is (10 sec: 2457.3, 60 sec: 3140.2, 300 sec: 3304.6). Total num frames: 1998848. Throughput: 0: 833.5. Samples: 499226. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:35:51,188][00638] Avg episode reward: [(0, '19.899')] [2023-11-20 15:35:53,180][02591] Updated weights for policy 0, policy_version 490 (0.0034) [2023-11-20 15:35:56,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 2015232. Throughput: 0: 792.2. Samples: 503648. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:35:56,183][00638] Avg episode reward: [(0, '19.300')] [2023-11-20 15:36:01,175][00638] Fps is (10 sec: 4096.5, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2039808. Throughput: 0: 813.9. Samples: 510128. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:36:01,177][00638] Avg episode reward: [(0, '16.206')] [2023-11-20 15:36:02,875][02591] Updated weights for policy 0, policy_version 500 (0.0033) [2023-11-20 15:36:06,177][00638] Fps is (10 sec: 4094.9, 60 sec: 3413.2, 300 sec: 3318.4). Total num frames: 2056192. Throughput: 0: 842.3. Samples: 513346. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:36:06,180][00638] Avg episode reward: [(0, '16.536')] [2023-11-20 15:36:11,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2068480. Throughput: 0: 842.1. Samples: 517360. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:36:11,183][00638] Avg episode reward: [(0, '17.222')] [2023-11-20 15:36:16,175][00638] Fps is (10 sec: 2458.3, 60 sec: 3140.4, 300 sec: 3290.7). Total num frames: 2080768. Throughput: 0: 827.9. Samples: 521416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:36:16,179][00638] Avg episode reward: [(0, '17.053')] [2023-11-20 15:36:17,802][02591] Updated weights for policy 0, policy_version 510 (0.0018) [2023-11-20 15:36:21,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 2101248. Throughput: 0: 803.6. Samples: 523448. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:36:21,181][00638] Avg episode reward: [(0, '17.714')] [2023-11-20 15:36:26,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 2121728. Throughput: 0: 811.6. Samples: 529680. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:36:26,177][00638] Avg episode reward: [(0, '18.314')] [2023-11-20 15:36:27,864][02591] Updated weights for policy 0, policy_version 520 (0.0026) [2023-11-20 15:36:31,175][00638] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2138112. Throughput: 0: 850.7. Samples: 535466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:36:31,179][00638] Avg episode reward: [(0, '19.638')] [2023-11-20 15:36:36,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2150400. Throughput: 0: 849.7. Samples: 537460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:36:36,181][00638] Avg episode reward: [(0, '19.635')] [2023-11-20 15:36:41,182][00638] Fps is (10 sec: 2865.3, 60 sec: 3208.2, 300 sec: 3304.5). Total num frames: 2166784. Throughput: 0: 839.8. Samples: 541444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:36:41,184][00638] Avg episode reward: [(0, '20.124')] [2023-11-20 15:36:41,197][02578] Saving new best policy, reward=20.124! [2023-11-20 15:36:42,400][02591] Updated weights for policy 0, policy_version 530 (0.0041) [2023-11-20 15:36:46,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 2183168. Throughput: 0: 797.9. Samples: 546034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:36:46,181][00638] Avg episode reward: [(0, '21.194')] [2023-11-20 15:36:46,187][02578] Saving new best policy, reward=21.194! [2023-11-20 15:36:51,175][00638] Fps is (10 sec: 3688.9, 60 sec: 3413.4, 300 sec: 3304.6). Total num frames: 2203648. Throughput: 0: 796.0. Samples: 549164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:36:51,177][00638] Avg episode reward: [(0, '20.224')] [2023-11-20 15:36:52,804][02591] Updated weights for policy 0, policy_version 540 (0.0020) [2023-11-20 15:36:56,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2220032. 
Throughput: 0: 845.9. Samples: 555424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:36:56,182][00638] Avg episode reward: [(0, '20.370')] [2023-11-20 15:37:01,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3332.3). Total num frames: 2236416. Throughput: 0: 846.6. Samples: 559514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:01,180][00638] Avg episode reward: [(0, '20.547')] [2023-11-20 15:37:01,193][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000546_2236416.pth... [2023-11-20 15:37:01,354][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000352_1441792.pth [2023-11-20 15:37:06,182][00638] Fps is (10 sec: 2865.3, 60 sec: 3208.3, 300 sec: 3304.5). Total num frames: 2248704. Throughput: 0: 845.4. Samples: 561496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:37:06,185][00638] Avg episode reward: [(0, '20.887')] [2023-11-20 15:37:07,046][02591] Updated weights for policy 0, policy_version 550 (0.0016) [2023-11-20 15:37:11,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 2265088. Throughput: 0: 800.4. Samples: 565698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:11,177][00638] Avg episode reward: [(0, '21.234')] [2023-11-20 15:37:11,187][02578] Saving new best policy, reward=21.234! [2023-11-20 15:37:16,175][00638] Fps is (10 sec: 3688.8, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 2285568. Throughput: 0: 812.6. Samples: 572034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:16,180][00638] Avg episode reward: [(0, '22.486')] [2023-11-20 15:37:16,183][02578] Saving new best policy, reward=22.486! [2023-11-20 15:37:17,627][02591] Updated weights for policy 0, policy_version 560 (0.0028) [2023-11-20 15:37:21,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 2301952. Throughput: 0: 837.2. Samples: 575136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:37:21,177][00638] Avg episode reward: [(0, '22.992')] [2023-11-20 15:37:21,192][02578] Saving new best policy, reward=22.992! [2023-11-20 15:37:26,175][00638] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2318336. Throughput: 0: 841.3. Samples: 579298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:26,177][00638] Avg episode reward: [(0, '22.562')] [2023-11-20 15:37:31,179][00638] Fps is (10 sec: 2866.1, 60 sec: 3208.3, 300 sec: 3290.6). Total num frames: 2330624. Throughput: 0: 830.4. Samples: 583406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:31,181][00638] Avg episode reward: [(0, '22.800')] [2023-11-20 15:37:31,959][02591] Updated weights for policy 0, policy_version 570 (0.0022) [2023-11-20 15:37:36,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 2347008. Throughput: 0: 806.4. Samples: 585452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:37:36,177][00638] Avg episode reward: [(0, '22.535')] [2023-11-20 15:37:41,175][00638] Fps is (10 sec: 3687.8, 60 sec: 3345.4, 300 sec: 3290.7). Total num frames: 2367488. Throughput: 0: 803.7. Samples: 591592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:37:41,178][00638] Avg episode reward: [(0, '23.678')] [2023-11-20 15:37:41,193][02578] Saving new best policy, reward=23.678! 
[2023-11-20 15:37:42,493][02591] Updated weights for policy 0, policy_version 580 (0.0031) [2023-11-20 15:37:46,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2387968. Throughput: 0: 842.2. Samples: 597412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:46,182][00638] Avg episode reward: [(0, '21.886')] [2023-11-20 15:37:51,175][00638] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2400256. Throughput: 0: 843.3. Samples: 599438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:37:51,178][00638] Avg episode reward: [(0, '21.930')] [2023-11-20 15:37:56,178][00638] Fps is (10 sec: 2456.9, 60 sec: 3208.4, 300 sec: 3290.7). Total num frames: 2412544. Throughput: 0: 839.5. Samples: 603476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:37:56,180][00638] Avg episode reward: [(0, '21.406')] [2023-11-20 15:37:56,554][02591] Updated weights for policy 0, policy_version 590 (0.0028) [2023-11-20 15:38:01,175][00638] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 2428928. Throughput: 0: 802.4. Samples: 608140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:38:01,183][00638] Avg episode reward: [(0, '20.698')] [2023-11-20 15:38:06,175][00638] Fps is (10 sec: 4097.2, 60 sec: 3413.7, 300 sec: 3304.6). Total num frames: 2453504. Throughput: 0: 804.6. Samples: 611342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:38:06,177][00638] Avg episode reward: [(0, '19.589')] [2023-11-20 15:38:07,160][02591] Updated weights for policy 0, policy_version 600 (0.0017) [2023-11-20 15:38:11,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2469888. Throughput: 0: 851.1. Samples: 617596. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:38:11,180][00638] Avg episode reward: [(0, '20.318')] [2023-11-20 15:38:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2482176. Throughput: 0: 850.7. Samples: 621684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:38:16,178][00638] Avg episode reward: [(0, '20.785')] [2023-11-20 15:38:21,029][02591] Updated weights for policy 0, policy_version 610 (0.0013) [2023-11-20 15:38:21,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 2498560. Throughput: 0: 850.6. Samples: 623728. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:38:21,177][00638] Avg episode reward: [(0, '20.273')] [2023-11-20 15:38:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 2514944. Throughput: 0: 805.2. Samples: 627824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:38:26,177][00638] Avg episode reward: [(0, '22.010')] [2023-11-20 15:38:31,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.5, 300 sec: 3304.6). Total num frames: 2535424. Throughput: 0: 819.9. Samples: 634306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:38:31,177][00638] Avg episode reward: [(0, '24.179')] [2023-11-20 15:38:31,191][02578] Saving new best policy, reward=24.179! [2023-11-20 15:38:31,880][02591] Updated weights for policy 0, policy_version 620 (0.0023) [2023-11-20 15:38:36,175][00638] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2551808. Throughput: 0: 843.8. Samples: 637410. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:38:36,180][00638] Avg episode reward: [(0, '24.702')] [2023-11-20 15:38:36,181][02578] Saving new best policy, reward=24.702! [2023-11-20 15:38:41,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 2568192. Throughput: 0: 850.5. Samples: 641744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:38:41,194][00638] Avg episode reward: [(0, '24.289')] [2023-11-20 15:38:45,460][02591] Updated weights for policy 0, policy_version 630 (0.0019) [2023-11-20 15:38:46,178][00638] Fps is (10 sec: 2866.5, 60 sec: 3208.4, 300 sec: 3304.5). Total num frames: 2580480. Throughput: 0: 836.7. Samples: 645794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:38:46,181][00638] Avg episode reward: [(0, '24.494')] [2023-11-20 15:38:51,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 2596864. Throughput: 0: 810.2. Samples: 647800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:38:51,177][00638] Avg episode reward: [(0, '25.211')] [2023-11-20 15:38:51,188][02578] Saving new best policy, reward=25.211! [2023-11-20 15:38:56,175][00638] Fps is (10 sec: 3687.4, 60 sec: 3413.5, 300 sec: 3304.6). Total num frames: 2617344. Throughput: 0: 805.3. Samples: 653836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:38:56,181][00638] Avg episode reward: [(0, '23.162')] [2023-11-20 15:38:56,586][02591] Updated weights for policy 0, policy_version 640 (0.0031) [2023-11-20 15:39:01,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3318.5). Total num frames: 2637824. Throughput: 0: 849.2. Samples: 659898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:39:01,183][00638] Avg episode reward: [(0, '22.675')] [2023-11-20 15:39:01,193][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000644_2637824.pth... [2023-11-20 15:39:01,338][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000448_1835008.pth [2023-11-20 15:39:06,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2650112. Throughput: 0: 847.5. Samples: 661864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:39:06,182][00638] Avg episode reward: [(0, '23.061')] [2023-11-20 15:39:10,151][02591] Updated weights for policy 0, policy_version 650 (0.0020) [2023-11-20 15:39:11,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 2662400. Throughput: 0: 846.4. Samples: 665914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:39:11,191][00638] Avg episode reward: [(0, '23.148')] [2023-11-20 15:39:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 2678784. Throughput: 0: 800.1. Samples: 670310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:39:16,182][00638] Avg episode reward: [(0, '23.061')] [2023-11-20 15:39:21,175][00638] Fps is (10 sec: 3686.3, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 2699264. Throughput: 0: 802.7. Samples: 673532. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:39:21,177][00638] Avg episode reward: [(0, '23.633')] [2023-11-20 15:39:21,416][02591] Updated weights for policy 0, policy_version 660 (0.0029) [2023-11-20 15:39:26,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 2719744. Throughput: 0: 848.5. Samples: 679926. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:39:26,178][00638] Avg episode reward: [(0, '24.460')] [2023-11-20 15:39:31,175][00638] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2732032. Throughput: 0: 847.6. Samples: 683934. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:39:31,180][00638] Avg episode reward: [(0, '25.390')] [2023-11-20 15:39:31,193][02578] Saving new best policy, reward=25.390! [2023-11-20 15:39:34,764][02591] Updated weights for policy 0, policy_version 670 (0.0024) [2023-11-20 15:39:36,176][00638] Fps is (10 sec: 2457.2, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 2744320. Throughput: 0: 845.2. Samples: 685836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:39:36,179][00638] Avg episode reward: [(0, '24.475')] [2023-11-20 15:39:41,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 2760704. Throughput: 0: 800.6. Samples: 689862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:39:41,182][00638] Avg episode reward: [(0, '24.541')] [2023-11-20 15:39:46,175][00638] Fps is (10 sec: 3687.0, 60 sec: 3345.2, 300 sec: 3290.7). Total num frames: 2781184. Throughput: 0: 807.0. Samples: 696214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:39:46,177][00638] Avg episode reward: [(0, '25.123')] [2023-11-20 15:39:46,542][02591] Updated weights for policy 0, policy_version 680 (0.0023) [2023-11-20 15:39:51,177][00638] Fps is (10 sec: 4095.2, 60 sec: 3413.2, 300 sec: 3318.4). Total num frames: 2801664. Throughput: 0: 832.5. Samples: 699328. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:39:51,183][00638] Avg episode reward: [(0, '24.266')] [2023-11-20 15:39:56,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2813952. Throughput: 0: 840.7. Samples: 703746. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:39:56,183][00638] Avg episode reward: [(0, '24.376')] [2023-11-20 15:40:00,005][02591] Updated weights for policy 0, policy_version 690 (0.0039) [2023-11-20 15:40:01,175][00638] Fps is (10 sec: 2458.1, 60 sec: 3140.3, 300 sec: 3304.6). Total num frames: 2826240. Throughput: 0: 831.9. Samples: 707744. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:01,178][00638] Avg episode reward: [(0, '25.751')] [2023-11-20 15:40:01,195][02578] Saving new best policy, reward=25.751! [2023-11-20 15:40:06,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 2842624. Throughput: 0: 803.1. Samples: 709670. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:06,178][00638] Avg episode reward: [(0, '25.577')] [2023-11-20 15:40:11,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 2863104. Throughput: 0: 790.4. Samples: 715494. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:11,177][00638] Avg episode reward: [(0, '25.891')] [2023-11-20 15:40:11,193][02578] Saving new best policy, reward=25.891! [2023-11-20 15:40:11,740][02591] Updated weights for policy 0, policy_version 700 (0.0018) [2023-11-20 15:40:16,179][00638] Fps is (10 sec: 4094.5, 60 sec: 3413.1, 300 sec: 3318.4). Total num frames: 2883584. Throughput: 0: 835.4. Samples: 721528. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:40:16,181][00638] Avg episode reward: [(0, '25.571')] [2023-11-20 15:40:21,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2895872. Throughput: 0: 836.1. 
Samples: 723458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:40:21,180][00638] Avg episode reward: [(0, '25.414')] [2023-11-20 15:40:25,044][02591] Updated weights for policy 0, policy_version 710 (0.0024) [2023-11-20 15:40:26,175][00638] Fps is (10 sec: 2458.5, 60 sec: 3140.3, 300 sec: 3304.6). Total num frames: 2908160. Throughput: 0: 836.8. Samples: 727520. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:26,181][00638] Avg episode reward: [(0, '25.335')] [2023-11-20 15:40:31,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 2924544. Throughput: 0: 790.4. Samples: 731782. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:31,181][00638] Avg episode reward: [(0, '22.919')] [2023-11-20 15:40:36,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3290.7). Total num frames: 2945024. Throughput: 0: 791.0. Samples: 734922. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:36,183][00638] Avg episode reward: [(0, '22.367')] [2023-11-20 15:40:36,554][02591] Updated weights for policy 0, policy_version 720 (0.0024) [2023-11-20 15:40:41,179][00638] Fps is (10 sec: 4094.4, 60 sec: 3413.1, 300 sec: 3318.4). Total num frames: 2965504. Throughput: 0: 833.2. Samples: 741242. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:41,181][00638] Avg episode reward: [(0, '21.728')] [2023-11-20 15:40:46,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 2977792. Throughput: 0: 837.3. Samples: 745424. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:40:46,177][00638] Avg episode reward: [(0, '20.449')] [2023-11-20 15:40:49,559][02591] Updated weights for policy 0, policy_version 730 (0.0033) [2023-11-20 15:40:51,175][00638] Fps is (10 sec: 2868.3, 60 sec: 3208.6, 300 sec: 3318.5). Total num frames: 2994176. Throughput: 0: 838.8. Samples: 747414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:40:51,184][00638] Avg episode reward: [(0, '19.546')] [2023-11-20 15:40:56,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3006464. Throughput: 0: 799.2. Samples: 751460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:40:56,180][00638] Avg episode reward: [(0, '18.829')] [2023-11-20 15:41:01,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 3026944. Throughput: 0: 802.5. Samples: 757636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:41:01,189][00638] Avg episode reward: [(0, '21.115')] [2023-11-20 15:41:01,267][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth... [2023-11-20 15:41:01,272][02591] Updated weights for policy 0, policy_version 740 (0.0029) [2023-11-20 15:41:01,374][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000546_2236416.pth [2023-11-20 15:41:06,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 3047424. Throughput: 0: 830.5. Samples: 760832. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:41:06,182][00638] Avg episode reward: [(0, '21.003')] [2023-11-20 15:41:11,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3318.5). Total num frames: 3059712. Throughput: 0: 841.9. Samples: 765404. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:41:11,184][00638] Avg episode reward: [(0, '22.385')] [2023-11-20 15:41:14,458][02591] Updated weights for policy 0, policy_version 750 (0.0026) [2023-11-20 15:41:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.7, 300 sec: 3304.6). Total num frames: 3076096. Throughput: 0: 835.7. Samples: 769388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:41:16,179][00638] Avg episode reward: [(0, '21.748')] [2023-11-20 15:41:21,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3088384. Throughput: 0: 810.1. Samples: 771378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:41:21,177][00638] Avg episode reward: [(0, '23.917')] [2023-11-20 15:41:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 3108864. Throughput: 0: 792.0. Samples: 776880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:41:26,185][00638] Avg episode reward: [(0, '24.695')] [2023-11-20 15:41:26,585][02591] Updated weights for policy 0, policy_version 760 (0.0027) [2023-11-20 15:41:31,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3318.5). Total num frames: 3129344. Throughput: 0: 838.3. Samples: 783148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:41:31,182][00638] Avg episode reward: [(0, '23.359')] [2023-11-20 15:41:36,175][00638] Fps is (10 sec: 3276.6, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3141632. Throughput: 0: 837.7. Samples: 785110. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) [2023-11-20 15:41:36,179][00638] Avg episode reward: [(0, '23.752')] [2023-11-20 15:41:40,039][02591] Updated weights for policy 0, policy_version 770 (0.0022) [2023-11-20 15:41:41,178][00638] Fps is (10 sec: 2456.9, 60 sec: 3140.3, 300 sec: 3290.7). Total num frames: 3153920. Throughput: 0: 835.6. Samples: 789066. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:41:41,182][00638] Avg episode reward: [(0, '22.055')] [2023-11-20 15:41:46,175][00638] Fps is (10 sec: 2867.4, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3170304. Throughput: 0: 785.2. Samples: 792970. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) [2023-11-20 15:41:46,181][00638] Avg episode reward: [(0, '22.489')] [2023-11-20 15:41:51,175][00638] Fps is (10 sec: 3687.4, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3190784. Throughput: 0: 780.7. Samples: 795964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:41:51,178][00638] Avg episode reward: [(0, '21.621')] [2023-11-20 15:41:51,833][02591] Updated weights for policy 0, policy_version 780 (0.0022) [2023-11-20 15:41:56,175][00638] Fps is (10 sec: 4095.9, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3211264. Throughput: 0: 820.7. Samples: 802336. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:41:56,181][00638] Avg episode reward: [(0, '22.233')] [2023-11-20 15:42:01,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3223552. Throughput: 0: 831.6. Samples: 806808. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:42:01,177][00638] Avg episode reward: [(0, '22.481')] [2023-11-20 15:42:04,540][02591] Updated weights for policy 0, policy_version 790 (0.0033) [2023-11-20 15:42:06,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3290.7). Total num frames: 3235840. Throughput: 0: 831.8. Samples: 808808. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:42:06,181][00638] Avg episode reward: [(0, '23.027')] [2023-11-20 15:42:11,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3252224. Throughput: 0: 799.8. Samples: 812870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:42:11,182][00638] Avg episode reward: [(0, '23.576')] [2023-11-20 15:42:16,175][00638] Fps is (10 sec: 3686.5, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3272704. Throughput: 0: 788.0. Samples: 818606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:42:16,177][00638] Avg episode reward: [(0, '23.914')] [2023-11-20 15:42:16,700][02591] Updated weights for policy 0, policy_version 800 (0.0023) [2023-11-20 15:42:21,175][00638] Fps is (10 sec: 4095.8, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3293184. Throughput: 0: 814.4. Samples: 821758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:42:21,181][00638] Avg episode reward: [(0, '23.843')] [2023-11-20 15:42:26,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3305472. Throughput: 0: 834.9. Samples: 826634. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:42:26,177][00638] Avg episode reward: [(0, '24.271')] [2023-11-20 15:42:29,743][02591] Updated weights for policy 0, policy_version 810 (0.0013) [2023-11-20 15:42:31,175][00638] Fps is (10 sec: 2457.7, 60 sec: 3140.3, 300 sec: 3290.7). Total num frames: 3317760. Throughput: 0: 835.5. Samples: 830568. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:42:31,182][00638] Avg episode reward: [(0, '23.568')] [2023-11-20 15:42:36,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.6, 300 sec: 3276.8). Total num frames: 3334144. Throughput: 0: 812.6. Samples: 832532. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:42:36,179][00638] Avg episode reward: [(0, '24.231')] [2023-11-20 15:42:41,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3276.8). Total num frames: 3354624. Throughput: 0: 789.0. Samples: 837840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:42:41,177][00638] Avg episode reward: [(0, '24.219')] [2023-11-20 15:42:41,944][02591] Updated weights for policy 0, policy_version 820 (0.0022) [2023-11-20 15:42:46,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3375104. Throughput: 0: 830.7. Samples: 844188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:42:46,177][00638] Avg episode reward: [(0, '24.801')] [2023-11-20 15:42:51,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3387392. Throughput: 0: 834.0. Samples: 846340. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-11-20 15:42:51,177][00638] Avg episode reward: [(0, '25.472')] [2023-11-20 15:42:54,853][02591] Updated weights for policy 0, policy_version 830 (0.0021) [2023-11-20 15:42:56,175][00638] Fps is (10 sec: 2457.5, 60 sec: 3140.3, 300 sec: 3290.7). Total num frames: 3399680. Throughput: 0: 835.9. Samples: 850484. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:42:56,183][00638] Avg episode reward: [(0, '26.822')] [2023-11-20 15:42:56,186][02578] Saving new best policy, reward=26.822! [2023-11-20 15:43:01,175][00638] Fps is (10 sec: 2867.1, 60 sec: 3208.5, 300 sec: 3262.9). Total num frames: 3416064. Throughput: 0: 796.3. Samples: 854438. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:43:01,178][00638] Avg episode reward: [(0, '25.880')] [2023-11-20 15:43:01,203][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000834_3416064.pth... [2023-11-20 15:43:01,340][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000644_2637824.pth [2023-11-20 15:43:06,179][00638] Fps is (10 sec: 3275.5, 60 sec: 3276.6, 300 sec: 3262.9). Total num frames: 3432448. Throughput: 0: 786.2. Samples: 857138. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:43:06,181][00638] Avg episode reward: [(0, '27.229')] [2023-11-20 15:43:06,216][02578] Saving new best policy, reward=27.229! [2023-11-20 15:43:07,261][02591] Updated weights for policy 0, policy_version 840 (0.0032) [2023-11-20 15:43:11,175][00638] Fps is (10 sec: 4096.2, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3457024. Throughput: 0: 816.5. Samples: 863378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:43:11,177][00638] Avg episode reward: [(0, '27.534')] [2023-11-20 15:43:11,197][02578] Saving new best policy, reward=27.534! [2023-11-20 15:43:16,175][00638] Fps is (10 sec: 3687.9, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3469312. Throughput: 0: 834.1. Samples: 868104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:43:16,177][00638] Avg episode reward: [(0, '26.763')] [2023-11-20 15:43:19,940][02591] Updated weights for policy 0, policy_version 850 (0.0016) [2023-11-20 15:43:21,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3276.8). Total num frames: 3481600. Throughput: 0: 836.3. Samples: 870164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:43:21,179][00638] Avg episode reward: [(0, '27.079')] [2023-11-20 15:43:26,175][00638] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3249.0). Total num frames: 3493888. Throughput: 0: 805.4. Samples: 874084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:43:26,184][00638] Avg episode reward: [(0, '26.848')] [2023-11-20 15:43:31,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3262.9). Total num frames: 3514368. Throughput: 0: 784.2. Samples: 879476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:43:31,181][00638] Avg episode reward: [(0, '26.073')] [2023-11-20 15:43:32,373][02591] Updated weights for policy 0, policy_version 860 (0.0043) [2023-11-20 15:43:36,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3534848. Throughput: 0: 807.5. Samples: 882678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:43:36,181][00638] Avg episode reward: [(0, '25.985')] [2023-11-20 15:43:41,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3551232. Throughput: 0: 834.4. Samples: 888030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:43:41,180][00638] Avg episode reward: [(0, '26.029')] [2023-11-20 15:43:44,870][02591] Updated weights for policy 0, policy_version 870 (0.0028) [2023-11-20 15:43:46,175][00638] Fps is (10 sec: 2867.1, 60 sec: 3140.3, 300 sec: 3276.8). Total num frames: 3563520. Throughput: 0: 834.6. Samples: 891996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:43:46,179][00638] Avg episode reward: [(0, '26.872')] [2023-11-20 15:43:51,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3262.9). Total num frames: 3579904. Throughput: 0: 818.5. Samples: 893968. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:43:51,179][00638] Avg episode reward: [(0, '27.244')] [2023-11-20 15:43:56,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3249.0). Total num frames: 3596288. Throughput: 0: 786.4. Samples: 898766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:43:56,178][00638] Avg episode reward: [(0, '26.751')] [2023-11-20 15:43:57,481][02591] Updated weights for policy 0, policy_version 880 (0.0023) [2023-11-20 15:44:01,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3616768. Throughput: 0: 821.0. Samples: 905048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:44:01,181][00638] Avg episode reward: [(0, '26.913')] [2023-11-20 15:44:06,175][00638] Fps is (10 sec: 3686.5, 60 sec: 3345.3, 300 sec: 3290.7). Total num frames: 3633152. Throughput: 0: 837.1. Samples: 907834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:44:06,183][00638] Avg episode reward: [(0, '27.725')] [2023-11-20 15:44:06,185][02578] Saving new best policy, reward=27.725! [2023-11-20 15:44:09,871][02591] Updated weights for policy 0, policy_version 890 (0.0026) [2023-11-20 15:44:11,177][00638] Fps is (10 sec: 2866.5, 60 sec: 3140.1, 300 sec: 3276.8). Total num frames: 3645440. Throughput: 0: 838.9. Samples: 911836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:44:11,184][00638] Avg episode reward: [(0, '26.179')] [2023-11-20 15:44:16,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3262.9). Total num frames: 3661824. Throughput: 0: 808.4. Samples: 915854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:44:16,178][00638] Avg episode reward: [(0, '25.897')] [2023-11-20 15:44:21,175][00638] Fps is (10 sec: 3277.6, 60 sec: 3276.8, 300 sec: 3249.0). Total num frames: 3678208. Throughput: 0: 789.8. Samples: 918220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:44:21,177][00638] Avg episode reward: [(0, '23.579')] [2023-11-20 15:44:22,588][02591] Updated weights for policy 0, policy_version 900 (0.0027) [2023-11-20 15:44:26,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3276.8). Total num frames: 3698688. Throughput: 0: 809.2. Samples: 924446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:44:26,178][00638] Avg episode reward: [(0, '23.551')] [2023-11-20 15:44:31,176][00638] Fps is (10 sec: 3686.1, 60 sec: 3345.0, 300 sec: 3290.7). Total num frames: 3715072. Throughput: 0: 838.3. Samples: 929722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:44:31,182][00638] Avg episode reward: [(0, '22.858')] [2023-11-20 15:44:34,773][02591] Updated weights for policy 0, policy_version 910 (0.0042) [2023-11-20 15:44:36,175][00638] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3731456. Throughput: 0: 838.5. Samples: 931702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:44:36,182][00638] Avg episode reward: [(0, '22.849')] [2023-11-20 15:44:41,175][00638] Fps is (10 sec: 2867.4, 60 sec: 3208.5, 300 sec: 3262.9). Total num frames: 3743744. Throughput: 0: 819.6. Samples: 935650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:44:41,179][00638] Avg episode reward: [(0, '22.830')] [2023-11-20 15:44:46,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3249.1). Total num frames: 3760128. Throughput: 0: 790.6. Samples: 940626. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:44:46,180][00638] Avg episode reward: [(0, '24.207')] [2023-11-20 15:44:47,545][02591] Updated weights for policy 0, policy_version 920 (0.0034) [2023-11-20 15:44:51,175][00638] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3780608. Throughput: 0: 798.7. Samples: 943776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:44:51,177][00638] Avg episode reward: [(0, '24.598')] [2023-11-20 15:44:56,175][00638] Fps is (10 sec: 3686.5, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 3796992. Throughput: 0: 837.1. Samples: 949502. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:44:56,183][00638] Avg episode reward: [(0, '25.376')] [2023-11-20 15:44:59,756][02591] Updated weights for policy 0, policy_version 930 (0.0031) [2023-11-20 15:45:01,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3809280. Throughput: 0: 835.0. Samples: 953430. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:45:01,182][00638] Avg episode reward: [(0, '26.380')] [2023-11-20 15:45:01,195][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000930_3809280.pth... [2023-11-20 15:45:01,335][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth [2023-11-20 15:45:06,180][00638] Fps is (10 sec: 2865.6, 60 sec: 3208.2, 300 sec: 3262.9). Total num frames: 3825664. Throughput: 0: 824.6. Samples: 955330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:45:06,183][00638] Avg episode reward: [(0, '26.670')] [2023-11-20 15:45:11,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.7, 300 sec: 3235.2). Total num frames: 3837952. Throughput: 0: 778.9. Samples: 959498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-11-20 15:45:11,177][00638] Avg episode reward: [(0, '26.750')] [2023-11-20 15:45:13,143][02591] Updated weights for policy 0, policy_version 940 (0.0019) [2023-11-20 15:45:16,175][00638] Fps is (10 sec: 3688.5, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3862528. Throughput: 0: 802.4. Samples: 965828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:45:16,177][00638] Avg episode reward: [(0, '26.057')] [2023-11-20 15:45:21,175][00638] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 3878912. Throughput: 0: 828.7. Samples: 968992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:45:21,177][00638] Avg episode reward: [(0, '25.570')] [2023-11-20 15:45:25,055][02591] Updated weights for policy 0, policy_version 950 (0.0020) [2023-11-20 15:45:26,175][00638] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3276.8). Total num frames: 3891200. Throughput: 0: 834.0. Samples: 973178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-11-20 15:45:26,177][00638] Avg episode reward: [(0, '24.658')] [2023-11-20 15:45:31,175][00638] Fps is (10 sec: 2867.1, 60 sec: 3208.6, 300 sec: 3262.9). Total num frames: 3907584. Throughput: 0: 812.2. Samples: 977176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:45:31,181][00638] Avg episode reward: [(0, '24.026')] [2023-11-20 15:45:36,175][00638] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3249.1). Total num frames: 3923968. Throughput: 0: 787.7. Samples: 979224. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-11-20 15:45:36,182][00638] Avg episode reward: [(0, '23.204')] [2023-11-20 15:45:37,986][02591] Updated weights for policy 0, policy_version 960 (0.0018) [2023-11-20 15:45:41,175][00638] Fps is (10 sec: 3686.5, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3944448. Throughput: 0: 797.2. Samples: 985378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-11-20 15:45:41,180][00638] Avg episode reward: [(0, '23.483')] [2023-11-20 15:45:46,182][00638] Fps is (10 sec: 3683.9, 60 sec: 3344.7, 300 sec: 3276.7). Total num frames: 3960832. Throughput: 0: 839.1. Samples: 991196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:45:46,184][00638] Avg episode reward: [(0, '23.852')] [2023-11-20 15:45:49,783][02591] Updated weights for policy 0, policy_version 970 (0.0017) [2023-11-20 15:45:51,179][00638] Fps is (10 sec: 2865.9, 60 sec: 3208.3, 300 sec: 3276.7). Total num frames: 3973120. Throughput: 0: 839.6. Samples: 993110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-11-20 15:45:51,182][00638] Avg episode reward: [(0, '24.693')] [2023-11-20 15:45:56,175][00638] Fps is (10 sec: 2869.1, 60 sec: 3208.5, 300 sec: 3262.9). Total num frames: 3989504. Throughput: 0: 837.0. Samples: 997164. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-11-20 15:45:56,178][00638] Avg episode reward: [(0, '24.190')] [2023-11-20 15:46:00,940][02578] Stopping Batcher_0... [2023-11-20 15:46:00,941][02578] Loop batcher_evt_loop terminating... [2023-11-20 15:46:00,941][00638] Component Batcher_0 stopped! [2023-11-20 15:46:00,945][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-11-20 15:46:01,012][00638] Component RolloutWorker_w0 stopped! [2023-11-20 15:46:01,015][00638] Component RolloutWorker_w1 stopped! [2023-11-20 15:46:01,017][02592] Stopping RolloutWorker_w0... [2023-11-20 15:46:01,022][02592] Loop rollout_proc0_evt_loop terminating... [2023-11-20 15:46:01,013][02593] Stopping RolloutWorker_w1... [2023-11-20 15:46:01,031][00638] Component RolloutWorker_w5 stopped! [2023-11-20 15:46:01,041][00638] Component RolloutWorker_w7 stopped! [2023-11-20 15:46:01,031][02596] Stopping RolloutWorker_w5... [2023-11-20 15:46:01,032][02593] Loop rollout_proc1_evt_loop terminating... [2023-11-20 15:46:01,041][02598] Stopping RolloutWorker_w7... [2023-11-20 15:46:01,049][02591] Weights refcount: 2 0 [2023-11-20 15:46:01,053][00638] Component RolloutWorker_w2 stopped! [2023-11-20 15:46:01,056][00638] Component RolloutWorker_w6 stopped! [2023-11-20 15:46:01,050][02596] Loop rollout_proc5_evt_loop terminating... [2023-11-20 15:46:01,058][02597] Stopping RolloutWorker_w6... [2023-11-20 15:46:01,059][00638] Component RolloutWorker_w4 stopped! [2023-11-20 15:46:01,053][02598] Loop rollout_proc7_evt_loop terminating... [2023-11-20 15:46:01,055][02594] Stopping RolloutWorker_w2... [2023-11-20 15:46:01,061][02595] Stopping RolloutWorker_w4... [2023-11-20 15:46:01,070][02591] Stopping InferenceWorker_p0-w0... [2023-11-20 15:46:01,070][02591] Loop inference_proc0-0_evt_loop terminating... [2023-11-20 15:46:01,070][00638] Component InferenceWorker_p0-w0 stopped! [2023-11-20 15:46:01,062][02597] Loop rollout_proc6_evt_loop terminating... [2023-11-20 15:46:01,077][02599] Stopping RolloutWorker_w3... [2023-11-20 15:46:01,078][00638] Component RolloutWorker_w3 stopped! [2023-11-20 15:46:01,082][02599] Loop rollout_proc3_evt_loop terminating... 
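At this point the learner has written its final checkpoint at policy version 978 (978 × 4,096 = 4,005,888 frames, consistent with a roughly 4M-frame training budget), and the runner shuts the pipeline down in order: the batcher, the rollout workers, and the inference worker each receive a stop signal, their event loops terminate, and the parent process reports each component as stopped. The fragment below is a hedged, simplified illustration of that pattern (a shared stop event plus process joins), not the actual Runner/EventLoop implementation.

# Hedged sketch of the stop sequence: signal workers, let their loops drain, join them.
import multiprocessing as mp
import time

def worker(name, stop_event):
    while not stop_event.is_set():
        time.sleep(0.01)  # stand-in for collecting rollouts / running inference / training
    print(f"Loop {name}_evt_loop terminating...")

if __name__ == "__main__":
    stop_event = mp.Event()
    names = [f"rollout_proc{i}" for i in range(8)] + ["inference_proc0-0", "learner_proc0"]
    procs = [mp.Process(target=worker, args=(n, stop_event), name=n) for n in names]
    for p in procs:
        p.start()
    time.sleep(0.1)      # pretend training ran until the frame budget was reached
    stop_event.set()     # "Stopping Batcher_0 / RolloutWorker_wN / InferenceWorker..."
    for p in procs:
        p.join()         # wait for each worker process
        print(f"Component {p.name} stopped!")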
[2023-11-20 15:46:01,066][02594] Loop rollout_proc2_evt_loop terminating... [2023-11-20 15:46:01,081][02595] Loop rollout_proc4_evt_loop terminating... [2023-11-20 15:46:01,112][02578] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000834_3416064.pth [2023-11-20 15:46:01,124][02578] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-11-20 15:46:01,290][00638] Component LearnerWorker_p0 stopped! [2023-11-20 15:46:01,297][00638] Waiting for process learner_proc0 to stop... [2023-11-20 15:46:01,303][02578] Stopping LearnerWorker_p0... [2023-11-20 15:46:01,304][02578] Loop learner_proc0_evt_loop terminating... [2023-11-20 15:46:02,774][00638] Waiting for process inference_proc0-0 to join... [2023-11-20 15:46:02,784][00638] Waiting for process rollout_proc0 to join... [2023-11-20 15:46:04,756][00638] Waiting for process rollout_proc1 to join... [2023-11-20 15:46:04,757][00638] Waiting for process rollout_proc2 to join... [2023-11-20 15:46:04,761][00638] Waiting for process rollout_proc3 to join... [2023-11-20 15:46:04,766][00638] Waiting for process rollout_proc4 to join... [2023-11-20 15:46:04,768][00638] Waiting for process rollout_proc5 to join... [2023-11-20 15:46:04,770][00638] Waiting for process rollout_proc6 to join... [2023-11-20 15:46:04,776][00638] Waiting for process rollout_proc7 to join... [2023-11-20 15:46:04,781][00638] Batcher 0 profile tree view: batching: 27.2141, releasing_batches: 0.0292 [2023-11-20 15:46:04,782][00638] InferenceWorker_p0-w0 profile tree view: wait_policy: 0.0025 wait_policy_total: 574.4430 update_model: 9.0142 weight_update: 0.0028 one_step: 0.0025 handle_policy_step: 604.7835 deserialize: 16.3536, stack: 3.1145, obs_to_device_normalize: 119.5590, forward: 328.6223, send_messages: 28.1560 prepare_outputs: 79.5615 to_cpu: 46.4791 [2023-11-20 15:46:04,785][00638] Learner 0 profile tree view: misc: 0.0060, prepare_batch: 13.8220 train: 75.7631 epoch_init: 0.0110, minibatch_init: 0.0108, losses_postprocess: 0.6884, kl_divergence: 0.5871, after_optimizer: 34.4647 calculate_losses: 27.1559 losses_init: 0.0041, forward_head: 1.5290, bptt_initial: 17.7096, tail: 1.1646, advantages_returns: 0.2768, losses: 3.9977 bptt: 2.1062 bptt_forward_core: 1.9626 update: 12.1818 clip: 0.9746 [2023-11-20 15:46:04,788][00638] RolloutWorker_w0 profile tree view: wait_for_trajectories: 0.3380, enqueue_policy_requests: 164.0028, env_step: 917.1142, overhead: 24.1810, complete_rollouts: 7.4181 save_policy_outputs: 22.1856 split_output_tensors: 9.9926 [2023-11-20 15:46:04,789][00638] RolloutWorker_w7 profile tree view: wait_for_trajectories: 0.3630, enqueue_policy_requests: 178.5102, env_step: 902.8770, overhead: 23.7331, complete_rollouts: 8.1042 save_policy_outputs: 22.5141 split_output_tensors: 10.8336 [2023-11-20 15:46:04,793][00638] Loop Runner_EvtLoop terminating... [2023-11-20 15:46:04,794][00638] Runner profile tree view: main_loop: 1257.1266 [2023-11-20 15:46:04,798][00638] Collected {0: 4005888}, FPS: 3186.5 [2023-11-20 15:46:04,827][00638] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-11-20 15:46:04,829][00638] Overriding arg 'num_workers' with value 1 passed from command line [2023-11-20 15:46:04,830][00638] Adding new argument 'no_render'=True that is not in the saved config file! [2023-11-20 15:46:04,832][00638] Adding new argument 'save_video'=True that is not in the saved config file! 
[2023-11-20 15:46:04,834][00638] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-11-20 15:46:04,836][00638] Adding new argument 'video_name'=None that is not in the saved config file! [2023-11-20 15:46:04,838][00638] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! [2023-11-20 15:46:04,840][00638] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-11-20 15:46:04,841][00638] Adding new argument 'push_to_hub'=False that is not in the saved config file! [2023-11-20 15:46:04,842][00638] Adding new argument 'hf_repository'=None that is not in the saved config file! [2023-11-20 15:46:04,845][00638] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-11-20 15:46:04,846][00638] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-11-20 15:46:04,848][00638] Adding new argument 'train_script'=None that is not in the saved config file! [2023-11-20 15:46:04,849][00638] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-11-20 15:46:04,850][00638] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-11-20 15:46:04,888][00638] Doom resolution: 160x120, resize resolution: (128, 72) [2023-11-20 15:46:04,892][00638] RunningMeanStd input shape: (3, 72, 128) [2023-11-20 15:46:04,895][00638] RunningMeanStd input shape: (1,) [2023-11-20 15:46:04,912][00638] ConvEncoder: input_channels=3 [2023-11-20 15:46:05,018][00638] Conv encoder output size: 512 [2023-11-20 15:46:05,020][00638] Policy head output size: 512 [2023-11-20 15:46:05,298][00638] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-11-20 15:46:05,938][00638] Num frames 100... [2023-11-20 15:46:06,073][00638] Num frames 200... [2023-11-20 15:46:06,198][00638] Num frames 300... [2023-11-20 15:46:06,326][00638] Num frames 400... [2023-11-20 15:46:06,451][00638] Num frames 500... [2023-11-20 15:46:06,582][00638] Num frames 600... [2023-11-20 15:46:06,752][00638] Avg episode rewards: #0: 11.850, true rewards: #0: 6.850 [2023-11-20 15:46:06,754][00638] Avg episode reward: 11.850, avg true_objective: 6.850 [2023-11-20 15:46:06,775][00638] Num frames 700... [2023-11-20 15:46:06,906][00638] Num frames 800... [2023-11-20 15:46:07,033][00638] Num frames 900... [2023-11-20 15:46:07,158][00638] Num frames 1000... [2023-11-20 15:46:07,284][00638] Num frames 1100... [2023-11-20 15:46:07,409][00638] Num frames 1200... [2023-11-20 15:46:07,544][00638] Num frames 1300... [2023-11-20 15:46:07,669][00638] Num frames 1400... [2023-11-20 15:46:07,841][00638] Avg episode rewards: #0: 14.925, true rewards: #0: 7.425 [2023-11-20 15:46:07,843][00638] Avg episode reward: 14.925, avg true_objective: 7.425 [2023-11-20 15:46:07,865][00638] Num frames 1500... [2023-11-20 15:46:07,991][00638] Num frames 1600... [2023-11-20 15:46:08,127][00638] Num frames 1700... [2023-11-20 15:46:08,262][00638] Num frames 1800... [2023-11-20 15:46:08,389][00638] Num frames 1900... [2023-11-20 15:46:08,524][00638] Avg episode rewards: #0: 12.217, true rewards: #0: 6.550 [2023-11-20 15:46:08,526][00638] Avg episode reward: 12.217, avg true_objective: 6.550 [2023-11-20 15:46:08,574][00638] Num frames 2000... [2023-11-20 15:46:08,702][00638] Num frames 2100... [2023-11-20 15:46:08,838][00638] Num frames 2200... [2023-11-20 15:46:08,972][00638] Num frames 2300... 
[2023-11-20 15:46:09,099][00638] Num frames 2400... [2023-11-20 15:46:09,230][00638] Num frames 2500... [2023-11-20 15:46:09,361][00638] Num frames 2600... [2023-11-20 15:46:09,493][00638] Num frames 2700... [2023-11-20 15:46:09,666][00638] Num frames 2800... [2023-11-20 15:46:09,858][00638] Num frames 2900... [2023-11-20 15:46:10,045][00638] Num frames 3000... [2023-11-20 15:46:10,240][00638] Num frames 3100... [2023-11-20 15:46:10,426][00638] Num frames 3200... [2023-11-20 15:46:10,619][00638] Num frames 3300... [2023-11-20 15:46:10,808][00638] Num frames 3400... [2023-11-20 15:46:10,997][00638] Num frames 3500... [2023-11-20 15:46:11,181][00638] Num frames 3600... [2023-11-20 15:46:11,362][00638] Num frames 3700... [2023-11-20 15:46:11,596][00638] Avg episode rewards: #0: 20.722, true rewards: #0: 9.472 [2023-11-20 15:46:11,598][00638] Avg episode reward: 20.722, avg true_objective: 9.472 [2023-11-20 15:46:11,624][00638] Num frames 3800... [2023-11-20 15:46:11,809][00638] Num frames 3900... [2023-11-20 15:46:12,009][00638] Num frames 4000... [2023-11-20 15:46:12,193][00638] Num frames 4100... [2023-11-20 15:46:12,370][00638] Num frames 4200... [2023-11-20 15:46:12,553][00638] Num frames 4300... [2023-11-20 15:46:12,744][00638] Num frames 4400... [2023-11-20 15:46:12,941][00638] Num frames 4500... [2023-11-20 15:46:13,127][00638] Num frames 4600... [2023-11-20 15:46:13,318][00638] Num frames 4700... [2023-11-20 15:46:13,516][00638] Num frames 4800... [2023-11-20 15:46:13,703][00638] Num frames 4900... [2023-11-20 15:46:13,904][00638] Num frames 5000... [2023-11-20 15:46:14,102][00638] Num frames 5100... [2023-11-20 15:46:14,298][00638] Num frames 5200... [2023-11-20 15:46:14,495][00638] Num frames 5300... [2023-11-20 15:46:14,702][00638] Num frames 5400... [2023-11-20 15:46:14,898][00638] Num frames 5500... [2023-11-20 15:46:15,072][00638] Num frames 5600... [2023-11-20 15:46:15,202][00638] Num frames 5700... [2023-11-20 15:46:15,327][00638] Num frames 5800... [2023-11-20 15:46:15,494][00638] Avg episode rewards: #0: 28.578, true rewards: #0: 11.778 [2023-11-20 15:46:15,496][00638] Avg episode reward: 28.578, avg true_objective: 11.778 [2023-11-20 15:46:15,514][00638] Num frames 5900... [2023-11-20 15:46:15,649][00638] Num frames 6000... [2023-11-20 15:46:15,780][00638] Num frames 6100... [2023-11-20 15:46:15,913][00638] Num frames 6200... [2023-11-20 15:46:16,062][00638] Num frames 6300... [2023-11-20 15:46:16,199][00638] Num frames 6400... [2023-11-20 15:46:16,339][00638] Num frames 6500... [2023-11-20 15:46:16,475][00638] Num frames 6600... [2023-11-20 15:46:16,608][00638] Num frames 6700... [2023-11-20 15:46:16,744][00638] Num frames 6800... [2023-11-20 15:46:16,871][00638] Num frames 6900... [2023-11-20 15:46:17,006][00638] Num frames 7000... [2023-11-20 15:46:17,170][00638] Avg episode rewards: #0: 29.138, true rewards: #0: 11.805 [2023-11-20 15:46:17,171][00638] Avg episode reward: 29.138, avg true_objective: 11.805 [2023-11-20 15:46:17,198][00638] Num frames 7100... [2023-11-20 15:46:17,325][00638] Num frames 7200... [2023-11-20 15:46:17,465][00638] Num frames 7300... [2023-11-20 15:46:17,594][00638] Num frames 7400... [2023-11-20 15:46:17,726][00638] Num frames 7500... [2023-11-20 15:46:17,853][00638] Num frames 7600... [2023-11-20 15:46:17,984][00638] Num frames 7700... [2023-11-20 15:46:18,123][00638] Num frames 7800... [2023-11-20 15:46:18,253][00638] Num frames 7900... [2023-11-20 15:46:18,381][00638] Num frames 8000... [2023-11-20 15:46:18,506][00638] Num frames 8100... 
[2023-11-20 15:46:18,634][00638] Num frames 8200... [2023-11-20 15:46:18,764][00638] Num frames 8300... [2023-11-20 15:46:18,942][00638] Avg episode rewards: #0: 29.564, true rewards: #0: 11.993 [2023-11-20 15:46:18,943][00638] Avg episode reward: 29.564, avg true_objective: 11.993 [2023-11-20 15:46:18,955][00638] Num frames 8400... [2023-11-20 15:46:19,091][00638] Num frames 8500... [2023-11-20 15:46:19,217][00638] Num frames 8600... [2023-11-20 15:46:19,348][00638] Num frames 8700... [2023-11-20 15:46:19,475][00638] Num frames 8800... [2023-11-20 15:46:19,607][00638] Num frames 8900... [2023-11-20 15:46:19,747][00638] Num frames 9000... [2023-11-20 15:46:19,890][00638] Num frames 9100... [2023-11-20 15:46:20,055][00638] Num frames 9200... [2023-11-20 15:46:20,190][00638] Avg episode rewards: #0: 28.449, true rewards: #0: 11.574 [2023-11-20 15:46:20,192][00638] Avg episode reward: 28.449, avg true_objective: 11.574 [2023-11-20 15:46:20,251][00638] Num frames 9300... [2023-11-20 15:46:20,376][00638] Num frames 9400... [2023-11-20 15:46:20,509][00638] Num frames 9500... [2023-11-20 15:46:20,633][00638] Num frames 9600... [2023-11-20 15:46:20,763][00638] Num frames 9700... [2023-11-20 15:46:20,910][00638] Avg episode rewards: #0: 26.079, true rewards: #0: 10.857 [2023-11-20 15:46:20,912][00638] Avg episode reward: 26.079, avg true_objective: 10.857 [2023-11-20 15:46:20,957][00638] Num frames 9800... [2023-11-20 15:46:21,093][00638] Num frames 9900... [2023-11-20 15:46:21,219][00638] Num frames 10000... [2023-11-20 15:46:21,344][00638] Num frames 10100... [2023-11-20 15:46:21,474][00638] Num frames 10200... [2023-11-20 15:46:21,606][00638] Num frames 10300... [2023-11-20 15:46:21,737][00638] Num frames 10400... [2023-11-20 15:46:21,864][00638] Num frames 10500... [2023-11-20 15:46:21,990][00638] Num frames 10600... [2023-11-20 15:46:22,124][00638] Num frames 10700... [2023-11-20 15:46:22,249][00638] Num frames 10800... [2023-11-20 15:46:22,374][00638] Num frames 10900... [2023-11-20 15:46:22,501][00638] Num frames 11000... [2023-11-20 15:46:22,633][00638] Num frames 11100... [2023-11-20 15:46:22,760][00638] Num frames 11200... [2023-11-20 15:46:22,920][00638] Avg episode rewards: #0: 27.282, true rewards: #0: 11.282 [2023-11-20 15:46:22,922][00638] Avg episode reward: 27.282, avg true_objective: 11.282 [2023-11-20 15:47:36,484][00638] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-11-20 15:47:36,514][00638] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-11-20 15:47:36,515][00638] Overriding arg 'num_workers' with value 1 passed from command line [2023-11-20 15:47:36,518][00638] Adding new argument 'no_render'=True that is not in the saved config file! [2023-11-20 15:47:36,520][00638] Adding new argument 'save_video'=True that is not in the saved config file! [2023-11-20 15:47:36,521][00638] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-11-20 15:47:36,522][00638] Adding new argument 'video_name'=None that is not in the saved config file! [2023-11-20 15:47:36,523][00638] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-11-20 15:47:36,524][00638] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-11-20 15:47:36,525][00638] Adding new argument 'push_to_hub'=True that is not in the saved config file! 
[2023-11-20 15:47:36,526][00638] Adding new argument 'hf_repository'='SamDNX/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-11-20 15:47:36,527][00638] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-11-20 15:47:36,528][00638] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-11-20 15:47:36,529][00638] Adding new argument 'train_script'=None that is not in the saved config file! [2023-11-20 15:47:36,530][00638] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-11-20 15:47:36,531][00638] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-11-20 15:47:36,571][00638] RunningMeanStd input shape: (3, 72, 128) [2023-11-20 15:47:36,573][00638] RunningMeanStd input shape: (1,) [2023-11-20 15:47:36,586][00638] ConvEncoder: input_channels=3 [2023-11-20 15:47:36,625][00638] Conv encoder output size: 512 [2023-11-20 15:47:36,627][00638] Policy head output size: 512 [2023-11-20 15:47:36,649][00638] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-11-20 15:47:37,063][00638] Num frames 100... [2023-11-20 15:47:37,191][00638] Num frames 200... [2023-11-20 15:47:37,318][00638] Num frames 300... [2023-11-20 15:47:37,453][00638] Num frames 400... [2023-11-20 15:47:37,585][00638] Num frames 500... [2023-11-20 15:47:37,722][00638] Num frames 600... [2023-11-20 15:47:37,851][00638] Num frames 700... [2023-11-20 15:47:37,984][00638] Num frames 800... [2023-11-20 15:47:38,081][00638] Avg episode rewards: #0: 17.320, true rewards: #0: 8.320 [2023-11-20 15:47:38,083][00638] Avg episode reward: 17.320, avg true_objective: 8.320 [2023-11-20 15:47:38,169][00638] Num frames 900... [2023-11-20 15:47:38,290][00638] Num frames 1000... [2023-11-20 15:47:38,419][00638] Num frames 1100... [2023-11-20 15:47:38,558][00638] Num frames 1200... [2023-11-20 15:47:38,686][00638] Num frames 1300... [2023-11-20 15:47:38,815][00638] Num frames 1400... [2023-11-20 15:47:38,940][00638] Num frames 1500... [2023-11-20 15:47:39,069][00638] Num frames 1600... [2023-11-20 15:47:39,195][00638] Num frames 1700... [2023-11-20 15:47:39,331][00638] Num frames 1800... [2023-11-20 15:47:39,459][00638] Num frames 1900... [2023-11-20 15:47:39,602][00638] Num frames 2000... [2023-11-20 15:47:39,744][00638] Num frames 2100... [2023-11-20 15:47:39,874][00638] Num frames 2200... [2023-11-20 15:47:40,005][00638] Num frames 2300... [2023-11-20 15:47:40,139][00638] Num frames 2400... [2023-11-20 15:47:40,282][00638] Num frames 2500... [2023-11-20 15:47:40,415][00638] Num frames 2600... [2023-11-20 15:47:40,557][00638] Num frames 2700... [2023-11-20 15:47:40,687][00638] Num frames 2800... [2023-11-20 15:47:40,816][00638] Num frames 2900... [2023-11-20 15:47:40,916][00638] Avg episode rewards: #0: 35.659, true rewards: #0: 14.660 [2023-11-20 15:47:40,917][00638] Avg episode reward: 35.659, avg true_objective: 14.660 [2023-11-20 15:47:41,009][00638] Num frames 3000... [2023-11-20 15:47:41,134][00638] Num frames 3100... [2023-11-20 15:47:41,266][00638] Num frames 3200... [2023-11-20 15:47:41,395][00638] Num frames 3300... [2023-11-20 15:47:41,532][00638] Num frames 3400... [2023-11-20 15:47:41,658][00638] Num frames 3500... [2023-11-20 15:47:41,786][00638] Num frames 3600... [2023-11-20 15:47:41,911][00638] Num frames 3700... [2023-11-20 15:47:42,035][00638] Num frames 3800... [2023-11-20 15:47:42,165][00638] Num frames 3900... 
[2023-11-20 15:47:42,291][00638] Num frames 4000... [2023-11-20 15:47:42,414][00638] Num frames 4100... [2023-11-20 15:47:42,555][00638] Num frames 4200... [2023-11-20 15:47:42,684][00638] Num frames 4300... [2023-11-20 15:47:42,808][00638] Num frames 4400... [2023-11-20 15:47:42,939][00638] Num frames 4500... [2023-11-20 15:47:43,063][00638] Num frames 4600... [2023-11-20 15:47:43,190][00638] Num frames 4700... [2023-11-20 15:47:43,316][00638] Avg episode rewards: #0: 39.853, true rewards: #0: 15.853 [2023-11-20 15:47:43,318][00638] Avg episode reward: 39.853, avg true_objective: 15.853 [2023-11-20 15:47:43,376][00638] Num frames 4800... [2023-11-20 15:47:43,504][00638] Num frames 4900... [2023-11-20 15:47:43,636][00638] Num frames 5000... [2023-11-20 15:47:43,762][00638] Num frames 5100... [2023-11-20 15:47:43,830][00638] Avg episode rewards: #0: 31.020, true rewards: #0: 12.770 [2023-11-20 15:47:43,831][00638] Avg episode reward: 31.020, avg true_objective: 12.770 [2023-11-20 15:47:43,952][00638] Num frames 5200... [2023-11-20 15:47:44,072][00638] Num frames 5300... [2023-11-20 15:47:44,197][00638] Num frames 5400... [2023-11-20 15:47:44,328][00638] Num frames 5500... [2023-11-20 15:47:44,460][00638] Num frames 5600... [2023-11-20 15:47:44,597][00638] Num frames 5700... [2023-11-20 15:47:44,728][00638] Num frames 5800... [2023-11-20 15:47:44,857][00638] Num frames 5900... [2023-11-20 15:47:44,985][00638] Num frames 6000... [2023-11-20 15:47:45,109][00638] Num frames 6100... [2023-11-20 15:47:45,231][00638] Num frames 6200... [2023-11-20 15:47:45,358][00638] Num frames 6300... [2023-11-20 15:47:45,485][00638] Num frames 6400... [2023-11-20 15:47:45,635][00638] Avg episode rewards: #0: 30.704, true rewards: #0: 12.904 [2023-11-20 15:47:45,637][00638] Avg episode reward: 30.704, avg true_objective: 12.904 [2023-11-20 15:47:45,728][00638] Num frames 6500... [2023-11-20 15:47:45,908][00638] Num frames 6600... [2023-11-20 15:47:46,095][00638] Num frames 6700... [2023-11-20 15:47:46,281][00638] Num frames 6800... [2023-11-20 15:47:46,473][00638] Num frames 6900... [2023-11-20 15:47:46,665][00638] Num frames 7000... [2023-11-20 15:47:46,855][00638] Num frames 7100... [2023-11-20 15:47:47,036][00638] Num frames 7200... [2023-11-20 15:47:47,225][00638] Num frames 7300... [2023-11-20 15:47:47,414][00638] Num frames 7400... [2023-11-20 15:47:47,608][00638] Num frames 7500... [2023-11-20 15:47:47,816][00638] Num frames 7600... [2023-11-20 15:47:48,001][00638] Num frames 7700... [2023-11-20 15:47:48,189][00638] Num frames 7800... [2023-11-20 15:47:48,375][00638] Num frames 7900... [2023-11-20 15:47:48,558][00638] Num frames 8000... [2023-11-20 15:47:48,749][00638] Num frames 8100... [2023-11-20 15:47:48,936][00638] Num frames 8200... [2023-11-20 15:47:49,125][00638] Num frames 8300... [2023-11-20 15:47:49,313][00638] Avg episode rewards: #0: 34.286, true rewards: #0: 13.953 [2023-11-20 15:47:49,315][00638] Avg episode reward: 34.286, avg true_objective: 13.953 [2023-11-20 15:47:49,379][00638] Num frames 8400... [2023-11-20 15:47:49,574][00638] Num frames 8500... [2023-11-20 15:47:49,785][00638] Num frames 8600... [2023-11-20 15:47:49,991][00638] Num frames 8700... [2023-11-20 15:47:50,194][00638] Num frames 8800... [2023-11-20 15:47:50,393][00638] Num frames 8900... [2023-11-20 15:47:50,594][00638] Num frames 9000... [2023-11-20 15:47:50,780][00638] Num frames 9100... [2023-11-20 15:47:50,984][00638] Num frames 9200... 
[2023-11-20 15:47:51,050][00638] Avg episode rewards: #0: 32.148, true rewards: #0: 13.149 [2023-11-20 15:47:51,052][00638] Avg episode reward: 32.148, avg true_objective: 13.149 [2023-11-20 15:47:51,174][00638] Num frames 9300... [2023-11-20 15:47:51,302][00638] Num frames 9400... [2023-11-20 15:47:51,429][00638] Num frames 9500... [2023-11-20 15:47:51,564][00638] Num frames 9600... [2023-11-20 15:47:51,703][00638] Num frames 9700... [2023-11-20 15:47:51,835][00638] Num frames 9800... [2023-11-20 15:47:51,974][00638] Num frames 9900... [2023-11-20 15:47:52,103][00638] Num frames 10000... [2023-11-20 15:47:52,233][00638] Num frames 10100... [2023-11-20 15:47:52,367][00638] Num frames 10200... [2023-11-20 15:47:52,499][00638] Num frames 10300... [2023-11-20 15:47:52,631][00638] Num frames 10400... [2023-11-20 15:47:52,756][00638] Avg episode rewards: #0: 32.065, true rewards: #0: 13.065 [2023-11-20 15:47:52,757][00638] Avg episode reward: 32.065, avg true_objective: 13.065 [2023-11-20 15:47:52,826][00638] Num frames 10500... [2023-11-20 15:47:52,961][00638] Num frames 10600... [2023-11-20 15:47:53,091][00638] Num frames 10700... [2023-11-20 15:47:53,213][00638] Num frames 10800... [2023-11-20 15:47:53,334][00638] Num frames 10900... [2023-11-20 15:47:53,468][00638] Num frames 11000... [2023-11-20 15:47:53,603][00638] Num frames 11100... [2023-11-20 15:47:53,772][00638] Avg episode rewards: #0: 30.320, true rewards: #0: 12.431 [2023-11-20 15:47:53,774][00638] Avg episode reward: 30.320, avg true_objective: 12.431 [2023-11-20 15:47:53,793][00638] Num frames 11200... [2023-11-20 15:47:53,922][00638] Num frames 11300... [2023-11-20 15:47:54,052][00638] Num frames 11400... [2023-11-20 15:47:54,176][00638] Num frames 11500... [2023-11-20 15:47:54,305][00638] Num frames 11600... [2023-11-20 15:47:54,430][00638] Num frames 11700... [2023-11-20 15:47:54,564][00638] Num frames 11800... [2023-11-20 15:47:54,692][00638] Num frames 11900... [2023-11-20 15:47:54,819][00638] Num frames 12000... [2023-11-20 15:47:54,956][00638] Num frames 12100... [2023-11-20 15:47:55,117][00638] Avg episode rewards: #0: 29.780, true rewards: #0: 12.180 [2023-11-20 15:47:55,119][00638] Avg episode reward: 29.780, avg true_objective: 12.180 [2023-11-20 15:49:13,581][00638] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
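This second evaluation pass was launched with push_to_hub=True and hf_repository='SamDNX/rl_course_vizdoom_health_gathering_supreme', so after the ten episodes are rolled out and replay.mp4 is written, the enjoy script uploads the experiment artifacts (config, latest checkpoint, replay video) to the Hugging Face Hub. As a hedged reference, a manual equivalent using the public huggingface_hub API would look roughly like the sketch below; push_experiment is an illustrative helper, and it assumes you are already authenticated (for example via huggingface-cli login).

# Hedged sketch: manually uploading the experiment folder that the enjoy script
# pushes automatically when push_to_hub=True.
from huggingface_hub import HfApi

def push_experiment(train_dir, repo_id):
    api = HfApi()
    api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)
    api.upload_folder(
        folder_path=train_dir,   # e.g. /content/train_dir/default_experiment
        repo_id=repo_id,         # e.g. SamDNX/rl_course_vizdoom_health_gathering_supreme
        repo_type="model",
    )

# push_experiment("/content/train_dir/default_experiment",
#                 "SamDNX/rl_course_vizdoom_health_gathering_supreme")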