diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,1172 @@
+[2023-02-26 06:14:57,471][06480] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-26 06:14:57,475][06480] Rollout worker 0 uses device cpu
+[2023-02-26 06:14:57,476][06480] Rollout worker 1 uses device cpu
+[2023-02-26 06:14:57,478][06480] Rollout worker 2 uses device cpu
+[2023-02-26 06:14:57,479][06480] Rollout worker 3 uses device cpu
+[2023-02-26 06:14:57,480][06480] Rollout worker 4 uses device cpu
+[2023-02-26 06:14:57,481][06480] Rollout worker 5 uses device cpu
+[2023-02-26 06:14:57,484][06480] Rollout worker 6 uses device cpu
+[2023-02-26 06:14:57,485][06480] Rollout worker 7 uses device cpu
+[2023-02-26 06:14:57,682][06480] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 06:14:57,685][06480] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-26 06:14:57,717][06480] Starting all processes...
+[2023-02-26 06:14:57,718][06480] Starting process learner_proc0
+[2023-02-26 06:14:57,774][06480] Starting all processes...
+[2023-02-26 06:14:57,787][06480] Starting process inference_proc0-0
+[2023-02-26 06:14:57,790][06480] Starting process rollout_proc0
+[2023-02-26 06:14:57,790][06480] Starting process rollout_proc1
+[2023-02-26 06:14:57,790][06480] Starting process rollout_proc2
+[2023-02-26 06:14:57,797][06480] Starting process rollout_proc3
+[2023-02-26 06:14:57,797][06480] Starting process rollout_proc4
+[2023-02-26 06:14:57,797][06480] Starting process rollout_proc5
+[2023-02-26 06:14:57,797][06480] Starting process rollout_proc6
+[2023-02-26 06:14:57,797][06480] Starting process rollout_proc7
+[2023-02-26 06:15:09,485][13238] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 06:15:09,485][13238] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-26 06:15:09,499][13252] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 06:15:09,503][13252] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-26 06:15:09,550][13253] Worker 0 uses CPU cores [0]
+[2023-02-26 06:15:09,609][13254] Worker 1 uses CPU cores [1]
+[2023-02-26 06:15:09,624][13256] Worker 2 uses CPU cores [0]
+[2023-02-26 06:15:09,784][13257] Worker 5 uses CPU cores [1]
+[2023-02-26 06:15:09,849][13258] Worker 4 uses CPU cores [0]
+[2023-02-26 06:15:09,913][13259] Worker 7 uses CPU cores [1]
+[2023-02-26 06:15:09,933][13260] Worker 6 uses CPU cores [0]
+[2023-02-26 06:15:10,078][13255] Worker 3 uses CPU cores [1]
+[2023-02-26 06:15:10,410][13252] Num visible devices: 1
+[2023-02-26 06:15:10,410][13238] Num visible devices: 1
+[2023-02-26 06:15:10,431][13238] Starting seed is not provided
+[2023-02-26 06:15:10,432][13238] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 06:15:10,433][13238] Initializing actor-critic model on device cuda:0
+[2023-02-26 06:15:10,434][13238] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 06:15:10,437][13238] RunningMeanStd input shape: (1,)
+[2023-02-26 06:15:10,457][13238] ConvEncoder: input_channels=3
+[2023-02-26 06:15:10,752][13238] Conv encoder output size: 512
+[2023-02-26 06:15:10,752][13238] Policy head output size: 512
+[2023-02-26 06:15:10,805][13238] Created Actor Critic model with architecture:
+[2023-02-26 06:15:10,805][13238] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-26 06:15:17,675][06480] Heartbeat connected on Batcher_0
+[2023-02-26 06:15:17,683][06480] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-26 06:15:17,692][06480] Heartbeat connected on RolloutWorker_w0
+[2023-02-26 06:15:17,696][06480] Heartbeat connected on RolloutWorker_w1
+[2023-02-26 06:15:17,699][06480] Heartbeat connected on RolloutWorker_w2
+[2023-02-26 06:15:17,703][06480] Heartbeat connected on RolloutWorker_w3
+[2023-02-26 06:15:17,706][06480] Heartbeat connected on RolloutWorker_w4
+[2023-02-26 06:15:17,709][06480] Heartbeat connected on RolloutWorker_w5
+[2023-02-26 06:15:17,713][06480] Heartbeat connected on RolloutWorker_w6
+[2023-02-26 06:15:17,716][06480] Heartbeat connected on RolloutWorker_w7
+[2023-02-26 06:15:18,520][13238] Using optimizer
+[2023-02-26 06:15:18,522][13238] No checkpoints found
+[2023-02-26 06:15:18,522][13238] Did not load from checkpoint, starting from scratch!
+[2023-02-26 06:15:18,522][13238] Initialized policy 0 weights for model version 0
+[2023-02-26 06:15:18,525][13238] LearnerWorker_p0 finished initialization!
+[2023-02-26 06:15:18,526][06480] Heartbeat connected on LearnerWorker_p0
+[2023-02-26 06:15:18,526][13238] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 06:15:18,759][13252] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 06:15:18,761][13252] RunningMeanStd input shape: (1,)
+[2023-02-26 06:15:18,781][13252] ConvEncoder: input_channels=3
+[2023-02-26 06:15:18,933][13252] Conv encoder output size: 512
+[2023-02-26 06:15:18,934][13252] Policy head output size: 512
+[2023-02-26 06:15:22,133][06480] Inference worker 0-0 is ready!
+[2023-02-26 06:15:22,135][06480] All inference workers are ready! Signal rollout workers to start!
+[2023-02-26 06:15:22,259][13254] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,262][13259] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,311][13257] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,314][13255] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,388][13256] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,380][13253] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,405][13258] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:22,392][13260] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 06:15:23,272][06480] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 06:15:23,671][13259] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:23,672][13254] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:23,673][13257] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:24,303][13258] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:24,306][13253] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:24,309][13256] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:24,322][13260] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:24,722][13259] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:24,724][13254] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:24,733][13257] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:25,361][13258] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:25,367][13256] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:25,378][13260] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:25,915][13256] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:26,083][13255] Decorrelating experience for 0 frames...
+[2023-02-26 06:15:26,295][13257] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:26,299][13259] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:26,327][13256] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:26,835][13260] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:26,875][13254] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:27,469][13255] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:27,748][13259] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:27,753][13257] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:28,088][13253] Decorrelating experience for 32 frames...
+[2023-02-26 06:15:28,191][13260] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:28,272][06480] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 06:15:28,590][13255] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:28,681][13254] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:29,081][13258] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:29,172][13253] Decorrelating experience for 64 frames...
+[2023-02-26 06:15:29,652][13255] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:29,901][13258] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:29,986][13253] Decorrelating experience for 96 frames...
+[2023-02-26 06:15:33,157][13238] Signal inference workers to stop experience collection...
+[2023-02-26 06:15:33,169][13252] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-26 06:15:33,272][06480] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 46.2. Samples: 462. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 06:15:33,275][06480] Avg episode reward: [(0, '1.582')]
+[2023-02-26 06:15:36,192][13238] Signal inference workers to resume experience collection...
+[2023-02-26 06:15:36,193][13252] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-26 06:15:38,276][06480] Fps is (10 sec: 409.4, 60 sec: 273.0, 300 sec: 273.0). Total num frames: 4096. Throughput: 0: 170.4. Samples: 2556. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-02-26 06:15:38,279][06480] Avg episode reward: [(0, '2.078')]
+[2023-02-26 06:15:43,272][06480] Fps is (10 sec: 2457.6, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 24576. Throughput: 0: 324.4. Samples: 6488. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
+[2023-02-26 06:15:43,274][06480] Avg episode reward: [(0, '3.576')]
+[2023-02-26 06:15:46,847][13252] Updated weights for policy 0, policy_version 10 (0.0017)
+[2023-02-26 06:15:48,272][06480] Fps is (10 sec: 4097.8, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 388.8. Samples: 9720. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
+[2023-02-26 06:15:48,274][06480] Avg episode reward: [(0, '4.253')]
+[2023-02-26 06:15:53,276][06480] Fps is (10 sec: 3684.9, 60 sec: 2047.7, 300 sec: 2047.7). Total num frames: 61440. Throughput: 0: 539.8. Samples: 16196. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:15:53,278][06480] Avg episode reward: [(0, '4.511')]
+[2023-02-26 06:15:58,272][06480] Fps is (10 sec: 3276.8, 60 sec: 2223.5, 300 sec: 2223.5). Total num frames: 77824. Throughput: 0: 582.1. Samples: 20372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:15:58,278][06480] Avg episode reward: [(0, '4.533')]
+[2023-02-26 06:15:58,948][13252] Updated weights for policy 0, policy_version 20 (0.0015)
+[2023-02-26 06:16:03,272][06480] Fps is (10 sec: 3278.1, 60 sec: 2355.2, 300 sec: 2355.2). Total num frames: 94208. Throughput: 0: 560.0. Samples: 22398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:16:03,274][06480] Avg episode reward: [(0, '4.462')]
+[2023-02-26 06:16:08,272][06480] Fps is (10 sec: 3686.4, 60 sec: 2548.6, 300 sec: 2548.6). Total num frames: 114688. Throughput: 0: 639.0. Samples: 28754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:16:08,274][06480] Avg episode reward: [(0, '4.427')]
+[2023-02-26 06:16:08,277][13238] Saving new best policy, reward=4.427!
+[2023-02-26 06:16:09,601][13252] Updated weights for policy 0, policy_version 30 (0.0013)
+[2023-02-26 06:16:13,275][06480] Fps is (10 sec: 4094.7, 60 sec: 2703.2, 300 sec: 2703.2). Total num frames: 135168. Throughput: 0: 764.4. Samples: 34402. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:16:13,281][06480] Avg episode reward: [(0, '4.640')]
+[2023-02-26 06:16:13,295][13238] Saving new best policy, reward=4.640!
+[2023-02-26 06:16:18,274][06480] Fps is (10 sec: 3276.1, 60 sec: 2680.9, 300 sec: 2680.9). Total num frames: 147456. Throughput: 0: 799.8. Samples: 36456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:16:18,278][06480] Avg episode reward: [(0, '4.596')]
+[2023-02-26 06:16:22,752][13252] Updated weights for policy 0, policy_version 40 (0.0018)
+[2023-02-26 06:16:23,272][06480] Fps is (10 sec: 2868.1, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 163840. Throughput: 0: 850.7. Samples: 40834. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:16:23,275][06480] Avg episode reward: [(0, '4.562')]
+[2023-02-26 06:16:28,272][06480] Fps is (10 sec: 4096.8, 60 sec: 3140.3, 300 sec: 2898.7). Total num frames: 188416. Throughput: 0: 912.8. Samples: 47564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:16:28,274][06480] Avg episode reward: [(0, '4.379')]
+[2023-02-26 06:16:32,261][13252] Updated weights for policy 0, policy_version 50 (0.0012)
+[2023-02-26 06:16:33,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 2925.7). Total num frames: 204800. Throughput: 0: 913.0. Samples: 50806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:16:33,277][06480] Avg episode reward: [(0, '4.403')]
+[2023-02-26 06:16:38,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3550.1, 300 sec: 2894.5). Total num frames: 217088. Throughput: 0: 865.8. Samples: 55152. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:16:38,274][06480] Avg episode reward: [(0, '4.397')]
+[2023-02-26 06:16:43,272][06480] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 2969.6). Total num frames: 237568. Throughput: 0: 879.8. Samples: 59964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:16:43,274][06480] Avg episode reward: [(0, '4.449')]
+[2023-02-26 06:16:45,062][13252] Updated weights for policy 0, policy_version 60 (0.0018)
+[2023-02-26 06:16:48,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3035.9). Total num frames: 258048. Throughput: 0: 906.3. Samples: 63180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:16:48,274][06480] Avg episode reward: [(0, '4.486')]
+[2023-02-26 06:16:53,276][06480] Fps is (10 sec: 3275.5, 60 sec: 3481.6, 300 sec: 3003.6). Total num frames: 270336. Throughput: 0: 878.3. Samples: 68280. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:16:53,278][06480] Avg episode reward: [(0, '4.500')]
+[2023-02-26 06:16:53,374][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000067_274432.pth...
+[2023-02-26 06:16:58,272][06480] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 2975.0). Total num frames: 282624. Throughput: 0: 822.2. Samples: 71398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:16:58,281][06480] Avg episode reward: [(0, '4.548')]
+[2023-02-26 06:16:59,198][13252] Updated weights for policy 0, policy_version 70 (0.0019)
+[2023-02-26 06:17:03,273][06480] Fps is (10 sec: 2458.3, 60 sec: 3345.0, 300 sec: 2949.1). Total num frames: 294912. Throughput: 0: 813.7. Samples: 73074. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:17:03,281][06480] Avg episode reward: [(0, '4.567')]
+[2023-02-26 06:17:08,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3003.7). Total num frames: 315392. Throughput: 0: 832.1. Samples: 78278. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:17:08,274][06480] Avg episode reward: [(0, '4.676')]
+[2023-02-26 06:17:08,282][13238] Saving new best policy, reward=4.676!
+[2023-02-26 06:17:10,769][13252] Updated weights for policy 0, policy_version 80 (0.0022)
+[2023-02-26 06:17:13,272][06480] Fps is (10 sec: 4096.5, 60 sec: 3345.2, 300 sec: 3053.4). Total num frames: 335872. Throughput: 0: 830.6. Samples: 84942. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:17:13,274][06480] Avg episode reward: [(0, '4.641')]
+[2023-02-26 06:17:18,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3413.5, 300 sec: 3063.1). Total num frames: 352256. Throughput: 0: 819.9. Samples: 87700. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:17:18,275][06480] Avg episode reward: [(0, '4.437')]
+[2023-02-26 06:17:23,109][13252] Updated weights for policy 0, policy_version 90 (0.0013)
+[2023-02-26 06:17:23,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3072.0). Total num frames: 368640. Throughput: 0: 818.2. Samples: 91972. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:17:23,274][06480] Avg episode reward: [(0, '4.440')]
+[2023-02-26 06:17:28,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3113.0). Total num frames: 389120. Throughput: 0: 844.0. Samples: 97946. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:17:28,274][06480] Avg episode reward: [(0, '4.569')]
+[2023-02-26 06:17:32,564][13252] Updated weights for policy 0, policy_version 100 (0.0028)
+[2023-02-26 06:17:33,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3150.8). Total num frames: 409600. Throughput: 0: 848.7. Samples: 101372. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:17:33,274][06480] Avg episode reward: [(0, '4.773')]
+[2023-02-26 06:17:33,285][13238] Saving new best policy, reward=4.773!
+[2023-02-26 06:17:38,275][06480] Fps is (10 sec: 3685.3, 60 sec: 3481.4, 300 sec: 3155.4). Total num frames: 425984. Throughput: 0: 858.4. Samples: 106906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:17:38,278][06480] Avg episode reward: [(0, '4.602')]
+[2023-02-26 06:17:43,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3130.5). Total num frames: 438272. Throughput: 0: 882.7. Samples: 111120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:17:43,281][06480] Avg episode reward: [(0, '4.502')]
+[2023-02-26 06:17:45,552][13252] Updated weights for policy 0, policy_version 110 (0.0031)
+[2023-02-26 06:17:48,272][06480] Fps is (10 sec: 3687.5, 60 sec: 3413.3, 300 sec: 3192.1). Total num frames: 462848. Throughput: 0: 910.4. Samples: 114040. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:17:48,281][06480] Avg episode reward: [(0, '4.371')]
+[2023-02-26 06:17:53,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3550.1, 300 sec: 3222.2). Total num frames: 483328. Throughput: 0: 943.4. Samples: 120732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:17:53,279][06480] Avg episode reward: [(0, '4.391')]
+[2023-02-26 06:17:54,717][13252] Updated weights for policy 0, policy_version 120 (0.0018)
+[2023-02-26 06:17:58,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3223.9). Total num frames: 499712. Throughput: 0: 908.8. Samples: 125838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:17:58,278][06480] Avg episode reward: [(0, '4.418')]
+[2023-02-26 06:18:03,272][06480] Fps is (10 sec: 2867.1, 60 sec: 3618.2, 300 sec: 3200.0). Total num frames: 512000. Throughput: 0: 893.1. Samples: 127892. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:18:03,277][06480] Avg episode reward: [(0, '4.457')]
+[2023-02-26 06:18:07,300][13252] Updated weights for policy 0, policy_version 130 (0.0019)
+[2023-02-26 06:18:08,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3252.0). Total num frames: 536576. Throughput: 0: 923.6. Samples: 133532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:18:08,275][06480] Avg episode reward: [(0, '4.595')]
+[2023-02-26 06:18:13,272][06480] Fps is (10 sec: 4505.7, 60 sec: 3686.4, 300 sec: 3276.8). Total num frames: 557056. Throughput: 0: 939.5. Samples: 140224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:18:13,274][06480] Avg episode reward: [(0, '4.613')]
+[2023-02-26 06:18:18,185][13252] Updated weights for policy 0, policy_version 140 (0.0016)
+[2023-02-26 06:18:18,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3276.8). Total num frames: 573440. Throughput: 0: 915.6. Samples: 142574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:18:18,274][06480] Avg episode reward: [(0, '4.654')]
+[2023-02-26 06:18:23,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3254.0). Total num frames: 585728. Throughput: 0: 884.4. Samples: 146702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:18:23,275][06480] Avg episode reward: [(0, '4.567')]
+[2023-02-26 06:18:28,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 606208. Throughput: 0: 927.3. Samples: 152848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:18:28,274][06480] Avg episode reward: [(0, '4.478')]
+[2023-02-26 06:18:29,345][13252] Updated weights for policy 0, policy_version 150 (0.0023)
+[2023-02-26 06:18:33,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3319.9). Total num frames: 630784. Throughput: 0: 934.3. Samples: 156084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:18:33,274][06480] Avg episode reward: [(0, '4.445')]
+[2023-02-26 06:18:38,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3297.8). Total num frames: 643072. Throughput: 0: 900.6. Samples: 161258. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:18:38,274][06480] Avg episode reward: [(0, '4.318')]
+[2023-02-26 06:18:41,892][13252] Updated weights for policy 0, policy_version 160 (0.0023)
+[2023-02-26 06:18:43,272][06480] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 655360. Throughput: 0: 879.7. Samples: 165426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:18:43,280][06480] Avg episode reward: [(0, '4.402')]
+[2023-02-26 06:18:48,272][06480] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3316.8). Total num frames: 679936. Throughput: 0: 899.9. Samples: 168388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:18:48,281][06480] Avg episode reward: [(0, '4.531')]
+[2023-02-26 06:18:51,801][13252] Updated weights for policy 0, policy_version 170 (0.0019)
+[2023-02-26 06:18:53,272][06480] Fps is (10 sec: 4505.7, 60 sec: 3618.1, 300 sec: 3335.3). Total num frames: 700416. Throughput: 0: 924.1. Samples: 175118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:18:53,278][06480] Avg episode reward: [(0, '4.701')]
+[2023-02-26 06:18:53,288][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth...
+[2023-02-26 06:18:58,272][06480] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3334.0). Total num frames: 716800. Throughput: 0: 882.2. Samples: 179924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:18:58,280][06480] Avg episode reward: [(0, '4.791')]
+[2023-02-26 06:18:58,285][13238] Saving new best policy, reward=4.791!
+[2023-02-26 06:19:03,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3618.2, 300 sec: 3314.0). Total num frames: 729088. Throughput: 0: 875.9. Samples: 181988. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:19:03,279][06480] Avg episode reward: [(0, '4.633')]
+[2023-02-26 06:19:04,634][13252] Updated weights for policy 0, policy_version 180 (0.0017)
+[2023-02-26 06:19:08,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3349.6). Total num frames: 753664. Throughput: 0: 913.0. Samples: 187788. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:19:08,274][06480] Avg episode reward: [(0, '4.923')]
+[2023-02-26 06:19:08,277][13238] Saving new best policy, reward=4.923!
+[2023-02-26 06:19:13,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3365.8). Total num frames: 774144. Throughput: 0: 926.6. Samples: 194544. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:19:13,275][06480] Avg episode reward: [(0, '5.036')]
+[2023-02-26 06:19:13,284][13238] Saving new best policy, reward=5.036!
+[2023-02-26 06:19:13,694][13252] Updated weights for policy 0, policy_version 190 (0.0018)
+[2023-02-26 06:19:18,274][06480] Fps is (10 sec: 3685.7, 60 sec: 3618.0, 300 sec: 3363.9). Total num frames: 790528. Throughput: 0: 903.4. Samples: 196738. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:19:18,278][06480] Avg episode reward: [(0, '5.137')]
+[2023-02-26 06:19:18,285][13238] Saving new best policy, reward=5.137!
+[2023-02-26 06:19:23,272][06480] Fps is (10 sec: 2457.6, 60 sec: 3549.9, 300 sec: 3328.0). Total num frames: 798720. Throughput: 0: 868.2. Samples: 200326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:19:23,280][06480] Avg episode reward: [(0, '5.077')]
+[2023-02-26 06:19:28,272][06480] Fps is (10 sec: 2048.4, 60 sec: 3413.3, 300 sec: 3310.2). Total num frames: 811008. Throughput: 0: 858.9. Samples: 204078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:19:28,274][06480] Avg episode reward: [(0, '5.387')]
+[2023-02-26 06:19:28,276][13238] Saving new best policy, reward=5.387!
+[2023-02-26 06:19:30,007][13252] Updated weights for policy 0, policy_version 200 (0.0030)
+[2023-02-26 06:19:33,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3326.0). Total num frames: 831488. Throughput: 0: 846.0. Samples: 206458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:19:33,274][06480] Avg episode reward: [(0, '5.289')]
+[2023-02-26 06:19:38,272][06480] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3341.0). Total num frames: 851968. Throughput: 0: 843.8. Samples: 213088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:19:38,279][06480] Avg episode reward: [(0, '5.230')]
+[2023-02-26 06:19:40,660][13252] Updated weights for policy 0, policy_version 210 (0.0019)
+[2023-02-26 06:19:43,273][06480] Fps is (10 sec: 3276.5, 60 sec: 3481.5, 300 sec: 3324.0). Total num frames: 864256. Throughput: 0: 831.7. Samples: 217350. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:19:43,277][06480] Avg episode reward: [(0, '4.980')]
+[2023-02-26 06:19:48,272][06480] Fps is (10 sec: 2867.3, 60 sec: 3345.1, 300 sec: 3323.2). Total num frames: 880640. Throughput: 0: 830.9. Samples: 219378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:19:48,279][06480] Avg episode reward: [(0, '4.760')]
+[2023-02-26 06:19:51,964][13252] Updated weights for policy 0, policy_version 220 (0.0021)
+[2023-02-26 06:19:53,272][06480] Fps is (10 sec: 4096.4, 60 sec: 3413.3, 300 sec: 3352.7). Total num frames: 905216. Throughput: 0: 849.7. Samples: 226024. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:19:53,278][06480] Avg episode reward: [(0, '4.862')]
+[2023-02-26 06:19:58,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3366.2). Total num frames: 925696. Throughput: 0: 830.7. Samples: 231926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:19:58,278][06480] Avg episode reward: [(0, '4.922')]
+[2023-02-26 06:20:03,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3349.9). Total num frames: 937984. Throughput: 0: 827.9. Samples: 233992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:20:03,278][06480] Avg episode reward: [(0, '4.859')]
+[2023-02-26 06:20:04,249][13252] Updated weights for policy 0, policy_version 230 (0.0019)
+[2023-02-26 06:20:08,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3348.7). Total num frames: 954368. Throughput: 0: 848.8. Samples: 238520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:20:08,277][06480] Avg episode reward: [(0, '4.907')]
+[2023-02-26 06:20:13,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3375.7). Total num frames: 978944. Throughput: 0: 909.6. Samples: 245008. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:20:13,279][06480] Avg episode reward: [(0, '4.996')]
+[2023-02-26 06:20:14,129][13252] Updated weights for policy 0, policy_version 240 (0.0018)
+[2023-02-26 06:20:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3374.0). Total num frames: 995328. Throughput: 0: 929.9. Samples: 248302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:20:18,279][06480] Avg episode reward: [(0, '5.029')]
+[2023-02-26 06:20:23,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1007616. Throughput: 0: 873.8. Samples: 252408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:20:23,278][06480] Avg episode reward: [(0, '5.137')]
+[2023-02-26 06:20:27,511][13252] Updated weights for policy 0, policy_version 250 (0.0014)
+[2023-02-26 06:20:28,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1024000. Throughput: 0: 887.7. Samples: 257294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:20:28,279][06480] Avg episode reward: [(0, '5.182')]
+[2023-02-26 06:20:33,272][06480] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3540.7). Total num frames: 1048576. Throughput: 0: 915.1. Samples: 260556. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:20:33,275][06480] Avg episode reward: [(0, '5.287')]
+[2023-02-26 06:20:36,996][13252] Updated weights for policy 0, policy_version 260 (0.0013)
+[2023-02-26 06:20:38,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1064960. Throughput: 0: 906.8. Samples: 266832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:20:38,274][06480] Avg episode reward: [(0, '5.402')]
+[2023-02-26 06:20:38,277][13238] Saving new best policy, reward=5.402!
+[2023-02-26 06:20:43,277][06480] Fps is (10 sec: 3275.2, 60 sec: 3617.9, 300 sec: 3512.8). Total num frames: 1081344. Throughput: 0: 866.0. Samples: 270900. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:20:43,279][06480] Avg episode reward: [(0, '5.303')]
+[2023-02-26 06:20:48,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3512.9). Total num frames: 1097728. Throughput: 0: 866.7. Samples: 272994. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:20:48,274][06480] Avg episode reward: [(0, '5.488')]
+[2023-02-26 06:20:48,278][13238] Saving new best policy, reward=5.488!
+[2023-02-26 06:20:50,192][13252] Updated weights for policy 0, policy_version 270 (0.0043)
+[2023-02-26 06:20:53,272][06480] Fps is (10 sec: 3688.3, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1118208. Throughput: 0: 904.3. Samples: 279214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:20:53,288][06480] Avg episode reward: [(0, '5.539')]
+[2023-02-26 06:20:53,304][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000273_1118208.pth...
+[2023-02-26 06:20:53,479][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000067_274432.pth
+[2023-02-26 06:20:53,506][13238] Saving new best policy, reward=5.539!
+[2023-02-26 06:20:58,272][06480] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1134592. Throughput: 0: 886.7. Samples: 284910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:20:58,276][06480] Avg episode reward: [(0, '5.594')]
+[2023-02-26 06:20:58,284][13238] Saving new best policy, reward=5.594!
+[2023-02-26 06:21:01,821][13252] Updated weights for policy 0, policy_version 280 (0.0012)
+[2023-02-26 06:21:03,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 1146880. Throughput: 0: 855.7. Samples: 286810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:03,276][06480] Avg episode reward: [(0, '5.562')]
+[2023-02-26 06:21:08,272][06480] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1163264. Throughput: 0: 861.7. Samples: 291186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:21:08,279][06480] Avg episode reward: [(0, '5.819')]
+[2023-02-26 06:21:08,287][13238] Saving new best policy, reward=5.819!
+[2023-02-26 06:21:13,026][13252] Updated weights for policy 0, policy_version 290 (0.0019)
+[2023-02-26 06:21:13,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1187840. Throughput: 0: 896.5. Samples: 297638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:21:13,274][06480] Avg episode reward: [(0, '5.861')]
+[2023-02-26 06:21:13,292][13238] Saving new best policy, reward=5.861!
+[2023-02-26 06:21:18,273][06480] Fps is (10 sec: 4095.3, 60 sec: 3481.5, 300 sec: 3526.7). Total num frames: 1204224. Throughput: 0: 894.1. Samples: 300792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:18,275][06480] Avg episode reward: [(0, '5.671')]
+[2023-02-26 06:21:23,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1220608. Throughput: 0: 846.7. Samples: 304932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:23,282][06480] Avg episode reward: [(0, '5.812')]
+[2023-02-26 06:21:25,992][13252] Updated weights for policy 0, policy_version 300 (0.0039)
+[2023-02-26 06:21:28,272][06480] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1236992. Throughput: 0: 866.5. Samples: 309890. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:21:28,277][06480] Avg episode reward: [(0, '6.061')]
+[2023-02-26 06:21:28,282][13238] Saving new best policy, reward=6.061!
+[2023-02-26 06:21:33,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1257472. Throughput: 0: 889.6. Samples: 313028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:21:33,278][06480] Avg episode reward: [(0, '6.405')]
+[2023-02-26 06:21:33,291][13238] Saving new best policy, reward=6.405!
+[2023-02-26 06:21:35,717][13252] Updated weights for policy 0, policy_version 310 (0.0012)
+[2023-02-26 06:21:38,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1277952. Throughput: 0: 888.4. Samples: 319194. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-26 06:21:38,276][06480] Avg episode reward: [(0, '6.488')]
+[2023-02-26 06:21:38,283][13238] Saving new best policy, reward=6.488!
+[2023-02-26 06:21:43,273][06480] Fps is (10 sec: 3276.4, 60 sec: 3481.8, 300 sec: 3498.9). Total num frames: 1290240. Throughput: 0: 853.5. Samples: 323316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:43,277][06480] Avg episode reward: [(0, '6.629')]
+[2023-02-26 06:21:43,294][13238] Saving new best policy, reward=6.629!
+[2023-02-26 06:21:48,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3512.9). Total num frames: 1306624. Throughput: 0: 857.9. Samples: 325416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:48,276][06480] Avg episode reward: [(0, '6.930')]
+[2023-02-26 06:21:48,279][13238] Saving new best policy, reward=6.930!
+[2023-02-26 06:21:49,158][13252] Updated weights for policy 0, policy_version 320 (0.0011)
+[2023-02-26 06:21:53,282][06480] Fps is (10 sec: 2864.7, 60 sec: 3344.5, 300 sec: 3512.7). Total num frames: 1318912. Throughput: 0: 866.4. Samples: 330184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:21:53,288][06480] Avg episode reward: [(0, '7.178')]
+[2023-02-26 06:21:53,299][13238] Saving new best policy, reward=7.178!
+[2023-02-26 06:21:58,272][06480] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3512.9). Total num frames: 1331200. Throughput: 0: 806.2. Samples: 333918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:21:58,275][06480] Avg episode reward: [(0, '7.361')]
+[2023-02-26 06:21:58,280][13238] Saving new best policy, reward=7.361!
+[2023-02-26 06:22:03,272][06480] Fps is (10 sec: 2460.0, 60 sec: 3276.8, 300 sec: 3485.1). Total num frames: 1343488. Throughput: 0: 780.5. Samples: 335914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:22:03,276][06480] Avg episode reward: [(0, '7.781')]
+[2023-02-26 06:22:03,352][13238] Saving new best policy, reward=7.781!
+[2023-02-26 06:22:04,984][13252] Updated weights for policy 0, policy_version 330 (0.0026)
+[2023-02-26 06:22:08,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3471.2). Total num frames: 1359872. Throughput: 0: 780.1. Samples: 340036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:22:08,275][06480] Avg episode reward: [(0, '8.176')]
+[2023-02-26 06:22:08,279][13238] Saving new best policy, reward=8.176!
+[2023-02-26 06:22:13,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3485.1). Total num frames: 1380352. Throughput: 0: 814.7. Samples: 346552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:22:13,274][06480] Avg episode reward: [(0, '8.337')]
+[2023-02-26 06:22:13,285][13238] Saving new best policy, reward=8.337!
+[2023-02-26 06:22:15,203][13252] Updated weights for policy 0, policy_version 340 (0.0024)
+[2023-02-26 06:22:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3276.9, 300 sec: 3499.0). Total num frames: 1400832. Throughput: 0: 815.9. Samples: 349744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:22:18,276][06480] Avg episode reward: [(0, '8.028')]
+[2023-02-26 06:22:23,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3471.2). Total num frames: 1413120. Throughput: 0: 775.3. Samples: 354084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:22:23,277][06480] Avg episode reward: [(0, '8.052')]
+[2023-02-26 06:22:28,270][13252] Updated weights for policy 0, policy_version 350 (0.0017)
+[2023-02-26 06:22:28,272][06480] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3471.2). Total num frames: 1433600. Throughput: 0: 790.6. Samples: 358894. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:22:28,281][06480] Avg episode reward: [(0, '7.913')]
+[2023-02-26 06:22:33,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3485.1). Total num frames: 1454080. Throughput: 0: 816.5. Samples: 362160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:22:33,274][06480] Avg episode reward: [(0, '7.815')]
+[2023-02-26 06:22:37,605][13252] Updated weights for policy 0, policy_version 360 (0.0017)
+[2023-02-26 06:22:38,272][06480] Fps is (10 sec: 4096.2, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 1474560. Throughput: 0: 855.7. Samples: 368682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:22:38,280][06480] Avg episode reward: [(0, '8.477')]
+[2023-02-26 06:22:38,286][13238] Saving new best policy, reward=8.477!
+[2023-02-26 06:22:43,275][06480] Fps is (10 sec: 3275.8, 60 sec: 3276.7, 300 sec: 3471.2). Total num frames: 1486848. Throughput: 0: 863.5. Samples: 372778. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+[2023-02-26 06:22:43,278][06480] Avg episode reward: [(0, '8.631')]
+[2023-02-26 06:22:43,300][13238] Saving new best policy, reward=8.631!
+[2023-02-26 06:22:48,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3457.3). Total num frames: 1503232. Throughput: 0: 862.9. Samples: 374744. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+[2023-02-26 06:22:48,279][06480] Avg episode reward: [(0, '8.606')]
+[2023-02-26 06:22:50,953][13252] Updated weights for policy 0, policy_version 370 (0.0017)
+[2023-02-26 06:22:53,272][06480] Fps is (10 sec: 3687.5, 60 sec: 3413.9, 300 sec: 3471.2). Total num frames: 1523712. Throughput: 0: 906.8. Samples: 380844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:22:53,278][06480] Avg episode reward: [(0, '8.821')]
+[2023-02-26 06:22:53,288][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000372_1523712.pth...
+[2023-02-26 06:22:53,411][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth
+[2023-02-26 06:22:53,421][13238] Saving new best policy, reward=8.821!
+[2023-02-26 06:22:58,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1544192. Throughput: 0: 894.0. Samples: 386780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:22:58,274][06480] Avg episode reward: [(0, '8.741')]
+[2023-02-26 06:23:02,355][13252] Updated weights for policy 0, policy_version 380 (0.0011)
+[2023-02-26 06:23:03,276][06480] Fps is (10 sec: 3275.5, 60 sec: 3549.6, 300 sec: 3457.3). Total num frames: 1556480. Throughput: 0: 867.5. Samples: 388784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:23:03,287][06480] Avg episode reward: [(0, '9.003')]
+[2023-02-26 06:23:03,309][13238] Saving new best policy, reward=9.003!
+[2023-02-26 06:23:08,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1572864. Throughput: 0: 865.2. Samples: 393020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:23:08,281][06480] Avg episode reward: [(0, '9.082')]
+[2023-02-26 06:23:08,285][13238] Saving new best policy, reward=9.082!
+[2023-02-26 06:23:13,272][06480] Fps is (10 sec: 3687.9, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1593344. Throughput: 0: 902.2. Samples: 399494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:23:13,276][06480] Avg episode reward: [(0, '10.086')]
+[2023-02-26 06:23:13,291][13238] Saving new best policy, reward=10.086!
+[2023-02-26 06:23:13,709][13252] Updated weights for policy 0, policy_version 390 (0.0017)
+[2023-02-26 06:23:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1613824. Throughput: 0: 900.0. Samples: 402662. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:23:18,276][06480] Avg episode reward: [(0, '10.959')]
+[2023-02-26 06:23:18,281][13238] Saving new best policy, reward=10.959!
+[2023-02-26 06:23:23,273][06480] Fps is (10 sec: 3276.4, 60 sec: 3549.8, 300 sec: 3457.3). Total num frames: 1626112. Throughput: 0: 848.8. Samples: 406878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:23:23,280][06480] Avg episode reward: [(0, '10.642')]
+[2023-02-26 06:23:26,935][13252] Updated weights for policy 0, policy_version 400 (0.0019)
+[2023-02-26 06:23:28,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1642496. Throughput: 0: 864.4. Samples: 411674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:23:28,274][06480] Avg episode reward: [(0, '9.966')]
+[2023-02-26 06:23:33,272][06480] Fps is (10 sec: 3686.6, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 1662976. Throughput: 0: 893.3. Samples: 414944. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:23:33,279][06480] Avg episode reward: [(0, '9.625')]
+[2023-02-26 06:23:36,157][13252] Updated weights for policy 0, policy_version 410 (0.0012)
+[2023-02-26 06:23:38,272][06480] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1683456. Throughput: 0: 898.7. Samples: 421286. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:23:38,277][06480] Avg episode reward: [(0, '9.815')]
+[2023-02-26 06:23:43,275][06480] Fps is (10 sec: 3275.8, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1695744. Throughput: 0: 855.6. Samples: 425286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:23:43,279][06480] Avg episode reward: [(0, '10.248')]
+[2023-02-26 06:23:48,272][06480] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1716224. Throughput: 0: 858.2. Samples: 427400. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:23:48,277][06480] Avg episode reward: [(0, '10.448')]
+[2023-02-26 06:23:49,177][13252] Updated weights for policy 0, policy_version 420 (0.0022)
+[2023-02-26 06:23:53,272][06480] Fps is (10 sec: 4097.5, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1736704. Throughput: 0: 904.1. Samples: 433704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:23:53,274][06480] Avg episode reward: [(0, '9.990')]
+[2023-02-26 06:23:58,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 1753088. Throughput: 0: 887.1. Samples: 439414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:23:58,276][06480] Avg episode reward: [(0, '10.032')]
+[2023-02-26 06:24:00,330][13252] Updated weights for policy 0, policy_version 430 (0.0022)
+[2023-02-26 06:24:03,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.8, 300 sec: 3429.5). Total num frames: 1765376. Throughput: 0: 861.3. Samples: 441422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:03,277][06480] Avg episode reward: [(0, '10.741')]
+[2023-02-26 06:24:08,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1781760. Throughput: 0: 863.0. Samples: 445710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:08,278][06480] Avg episode reward: [(0, '11.157')]
+[2023-02-26 06:24:08,283][13238] Saving new best policy, reward=11.157!
+[2023-02-26 06:24:12,034][13252] Updated weights for policy 0, policy_version 440 (0.0019)
+[2023-02-26 06:24:13,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1806336. Throughput: 0: 901.1. Samples: 452222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:24:13,274][06480] Avg episode reward: [(0, '11.962')]
+[2023-02-26 06:24:13,288][13238] Saving new best policy, reward=11.962!
+[2023-02-26 06:24:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 1822720. Throughput: 0: 898.1. Samples: 455360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:24:18,274][06480] Avg episode reward: [(0, '11.934')]
+[2023-02-26 06:24:23,274][06480] Fps is (10 sec: 2866.6, 60 sec: 3481.5, 300 sec: 3471.2). Total num frames: 1835008. Throughput: 0: 837.8. Samples: 458990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:24:23,276][06480] Avg episode reward: [(0, '11.035')]
+[2023-02-26 06:24:26,674][13252] Updated weights for policy 0, policy_version 450 (0.0020)
+[2023-02-26 06:24:28,273][06480] Fps is (10 sec: 2047.8, 60 sec: 3345.0, 300 sec: 3429.5). Total num frames: 1843200. Throughput: 0: 819.9. Samples: 462178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:24:28,277][06480] Avg episode reward: [(0, '11.075')]
+[2023-02-26 06:24:33,272][06480] Fps is (10 sec: 2458.1, 60 sec: 3276.8, 300 sec: 3415.7). Total num frames: 1859584. Throughput: 0: 812.7. Samples: 463972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:33,278][06480] Avg episode reward: [(0, '11.480')]
+[2023-02-26 06:24:38,272][06480] Fps is (10 sec: 3686.8, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 1880064. Throughput: 0: 809.7. Samples: 470140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:38,274][06480] Avg episode reward: [(0, '12.761')]
+[2023-02-26 06:24:38,281][13238] Saving new best policy, reward=12.761!
+[2023-02-26 06:24:38,610][13252] Updated weights for policy 0, policy_version 460 (0.0027)
+[2023-02-26 06:24:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.5, 300 sec: 3457.3). Total num frames: 1900544. Throughput: 0: 809.2. Samples: 475828. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:43,274][06480] Avg episode reward: [(0, '12.489')]
+[2023-02-26 06:24:48,274][06480] Fps is (10 sec: 3276.2, 60 sec: 3276.7, 300 sec: 3415.6). Total num frames: 1912832. Throughput: 0: 810.4. Samples: 477890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:24:48,276][06480] Avg episode reward: [(0, '12.721')]
+[2023-02-26 06:24:51,589][13252] Updated weights for policy 0, policy_version 470 (0.0011)
+[2023-02-26 06:24:53,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 1929216. Throughput: 0: 818.0. Samples: 482520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:24:53,274][06480] Avg episode reward: [(0, '12.100')]
+[2023-02-26 06:24:53,381][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000472_1933312.pth...
+[2023-02-26 06:24:53,496][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000273_1118208.pth
+[2023-02-26 06:24:58,272][06480] Fps is (10 sec: 4096.8, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 1953792. Throughput: 0: 814.4. Samples: 488870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 06:24:58,279][06480] Avg episode reward: [(0, '12.147')]
+[2023-02-26 06:25:01,100][13252] Updated weights for policy 0, policy_version 480 (0.0037)
+[2023-02-26 06:25:03,274][06480] Fps is (10 sec: 4095.2, 60 sec: 3413.2, 300 sec: 3443.4). Total num frames: 1970176. Throughput: 0: 813.6. Samples: 491972. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:25:03,278][06480] Avg episode reward: [(0, '12.727')]
+[2023-02-26 06:25:08,274][06480] Fps is (10 sec: 2866.5, 60 sec: 3344.9, 300 sec: 3401.7). Total num frames: 1982464. Throughput: 0: 823.4. Samples: 496044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:08,281][06480] Avg episode reward: [(0, '12.665')]
+[2023-02-26 06:25:13,272][06480] Fps is (10 sec: 2867.8, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 1998848. Throughput: 0: 866.7. Samples: 501178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:13,282][06480] Avg episode reward: [(0, '12.999')]
+[2023-02-26 06:25:13,332][13238] Saving new best policy, reward=12.999!
+[2023-02-26 06:25:14,465][13252] Updated weights for policy 0, policy_version 490 (0.0015)
+[2023-02-26 06:25:18,272][06480] Fps is (10 sec: 4097.0, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 2023424. Throughput: 0: 896.4. Samples: 504312. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:25:18,274][06480] Avg episode reward: [(0, '12.398')]
+[2023-02-26 06:25:23,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.5, 300 sec: 3443.4). Total num frames: 2039808. Throughput: 0: 891.8. Samples: 510270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:25:23,274][06480] Avg episode reward: [(0, '12.724')]
+[2023-02-26 06:25:25,642][13252] Updated weights for policy 0, policy_version 500 (0.0024)
+[2023-02-26 06:25:28,272][06480] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 2052096. Throughput: 0: 856.3. Samples: 514362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:28,275][06480] Avg episode reward: [(0, '12.522')]
+[2023-02-26 06:25:33,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 2072576. Throughput: 0: 859.4. Samples: 516562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:33,275][06480] Avg episode reward: [(0, '13.707')]
+[2023-02-26 06:25:33,291][13238] Saving new best policy, reward=13.707!
+[2023-02-26 06:25:36,922][13252] Updated weights for policy 0, policy_version 510 (0.0015)
+[2023-02-26 06:25:38,272][06480] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3429.6). Total num frames: 2093056. Throughput: 0: 900.2. Samples: 523028. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:25:38,278][06480] Avg episode reward: [(0, '14.946')]
+[2023-02-26 06:25:38,284][13238] Saving new best policy, reward=14.946!
+[2023-02-26 06:25:43,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2109440. Throughput: 0: 880.2. Samples: 528480. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:25:43,277][06480] Avg episode reward: [(0, '16.676')]
+[2023-02-26 06:25:43,286][13238] Saving new best policy, reward=16.676!
+[2023-02-26 06:25:48,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3481.7, 300 sec: 3401.8). Total num frames: 2121728. Throughput: 0: 855.8. Samples: 530480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:48,277][06480] Avg episode reward: [(0, '16.354')]
+[2023-02-26 06:25:50,013][13252] Updated weights for policy 0, policy_version 520 (0.0025)
+[2023-02-26 06:25:53,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3415.7). Total num frames: 2142208. Throughput: 0: 870.2. Samples: 535200. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:25:53,275][06480] Avg episode reward: [(0, '16.321')]
+[2023-02-26 06:25:58,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2162688. Throughput: 0: 903.0. Samples: 541814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:25:58,274][06480] Avg episode reward: [(0, '15.866')]
+[2023-02-26 06:25:59,521][13252] Updated weights for policy 0, policy_version 530 (0.0014)
+[2023-02-26 06:26:03,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3457.3). Total num frames: 2183168. Throughput: 0: 904.0. Samples: 544992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:26:03,274][06480] Avg episode reward: [(0, '14.541')]
+[2023-02-26 06:26:08,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3415.6). Total num frames: 2195456. Throughput: 0: 864.4. Samples: 549166. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:26:08,277][06480] Avg episode reward: [(0, '14.902')]
+[2023-02-26 06:26:12,365][13252] Updated weights for policy 0, policy_version 540 (0.0019)
+[2023-02-26 06:26:13,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3415.7). Total num frames: 2211840. Throughput: 0: 892.6. Samples: 554528. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:26:13,278][06480] Avg episode reward: [(0, '15.436')]
+[2023-02-26 06:26:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2236416. Throughput: 0: 916.6. Samples: 557810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:26:18,274][06480] Avg episode reward: [(0, '16.401')]
+[2023-02-26 06:26:21,845][13252] Updated weights for policy 0, policy_version 550 (0.0021)
+[2023-02-26 06:26:23,272][06480] Fps is (10 sec: 4095.8, 60 sec: 3549.8, 300 sec: 3443.4). Total num frames: 2252800. Throughput: 0: 907.0. Samples: 563844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:26:23,278][06480] Avg episode reward: [(0, '16.750')]
+[2023-02-26 06:26:23,386][13238] Saving new best policy, reward=16.750!
+[2023-02-26 06:26:28,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3429.5). Total num frames: 2269184. Throughput: 0: 877.2. Samples: 567956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:26:28,280][06480] Avg episode reward: [(0, '17.033')]
+[2023-02-26 06:26:28,284][13238] Saving new best policy, reward=17.033!
+[2023-02-26 06:26:33,272][06480] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 2285568. Throughput: 0: 888.6. Samples: 570466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:26:33,277][06480] Avg episode reward: [(0, '17.097')]
+[2023-02-26 06:26:33,286][13238] Saving new best policy, reward=17.097!
+[2023-02-26 06:26:34,319][13252] Updated weights for policy 0, policy_version 560 (0.0018)
+[2023-02-26 06:26:38,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2310144. Throughput: 0: 932.1. Samples: 577144. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:26:38,277][06480] Avg episode reward: [(0, '18.273')]
+[2023-02-26 06:26:38,283][13238] Saving new best policy, reward=18.273!
+[2023-02-26 06:26:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2326528. Throughput: 0: 908.1. Samples: 582680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:26:43,282][06480] Avg episode reward: [(0, '18.638')]
+[2023-02-26 06:26:43,301][13238] Saving new best policy, reward=18.638!
+[2023-02-26 06:26:45,227][13252] Updated weights for policy 0, policy_version 570 (0.0012)
+[2023-02-26 06:26:48,272][06480] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3457.4). Total num frames: 2338816. Throughput: 0: 882.9. Samples: 584722. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:26:48,281][06480] Avg episode reward: [(0, '17.486')]
+[2023-02-26 06:26:53,272][06480] Fps is (10 sec: 2457.5, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2351104. Throughput: 0: 871.3. Samples: 588376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:26:53,275][06480] Avg episode reward: [(0, '17.578')]
+[2023-02-26 06:26:53,286][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000574_2351104.pth...
+[2023-02-26 06:26:53,451][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000372_1523712.pth
+[2023-02-26 06:26:58,272][06480] Fps is (10 sec: 2457.7, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 2363392. Throughput: 0: 843.7. Samples: 592496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:26:58,281][06480] Avg episode reward: [(0, '16.942')]
+[2023-02-26 06:27:00,305][13252] Updated weights for policy 0, policy_version 580 (0.0047)
+[2023-02-26 06:27:03,272][06480] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2383872. Throughput: 0: 840.2. Samples: 595618. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:27:03,279][06480] Avg episode reward: [(0, '16.733')]
+[2023-02-26 06:27:08,276][06480] Fps is (10 sec: 3684.9, 60 sec: 3413.1, 300 sec: 3457.3). Total num frames: 2400256. Throughput: 0: 800.2. Samples: 599858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:27:08,280][06480] Avg episode reward: [(0, '16.715')]
+[2023-02-26 06:27:12,919][13252] Updated weights for policy 0, policy_version 590 (0.0018)
+[2023-02-26 06:27:13,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2416640. Throughput: 0: 823.2. Samples: 604998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:27:13,281][06480] Avg episode reward: [(0, '16.404')]
+[2023-02-26 06:27:18,272][06480] Fps is (10 sec: 3687.9, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2437120. Throughput: 0: 842.5. Samples: 608380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:27:18,274][06480] Avg episode reward: [(0, '18.634')]
+[2023-02-26 06:27:22,218][13252] Updated weights for policy 0, policy_version 600 (0.0023)
+[2023-02-26 06:27:23,274][06480] Fps is (10 sec: 4095.1, 60 sec: 3413.2, 300 sec: 3471.2). Total num frames: 2457600. Throughput: 0: 837.2. Samples: 614818. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:27:23,278][06480] Avg episode reward: [(0, '19.932')]
+[2023-02-26 06:27:23,289][13238] Saving new best policy, reward=19.932!
+[2023-02-26 06:27:28,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 2469888. Throughput: 0: 804.6. Samples: 618886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:27:28,276][06480] Avg episode reward: [(0, '19.055')]
+[2023-02-26 06:27:33,272][06480] Fps is (10 sec: 3277.5, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2490368. Throughput: 0: 806.1. Samples: 620994. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:27:33,279][06480] Avg episode reward: [(0, '19.446')]
+[2023-02-26 06:27:35,019][13252] Updated weights for policy 0, policy_version 610 (0.0018)
+[2023-02-26 06:27:38,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2510848. Throughput: 0: 873.2. Samples: 627668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:27:38,274][06480] Avg episode reward: [(0, '20.452')]
+[2023-02-26 06:27:38,276][13238] Saving new best policy, reward=20.452!
+[2023-02-26 06:27:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2531328. Throughput: 0: 913.1. Samples: 633586. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:27:43,280][06480] Avg episode reward: [(0, '20.137')]
+[2023-02-26 06:27:45,965][13252] Updated weights for policy 0, policy_version 620 (0.0030)
+[2023-02-26 06:27:48,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3413.4, 300 sec: 3457.3). Total num frames: 2543616. Throughput: 0: 889.3. Samples: 635636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 06:27:48,276][06480] Avg episode reward: [(0, '19.549')]
+[2023-02-26 06:27:53,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2564096. Throughput: 0: 898.8. Samples: 640298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:27:53,278][06480] Avg episode reward: [(0, '19.501')]
+[2023-02-26 06:27:57,010][13252] Updated weights for policy 0, policy_version 630 (0.0044)
+[2023-02-26 06:27:58,273][06480] Fps is (10 sec: 4095.6, 60 sec: 3686.3, 300 sec: 3485.1).
Total num frames: 2584576. Throughput: 0: 931.8. Samples: 646930. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:27:58,275][06480] Avg episode reward: [(0, '20.517')] +[2023-02-26 06:27:58,277][13238] Saving new best policy, reward=20.517! +[2023-02-26 06:28:03,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2600960. Throughput: 0: 927.3. Samples: 650108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:28:03,276][06480] Avg episode reward: [(0, '18.537')] +[2023-02-26 06:28:08,272][06480] Fps is (10 sec: 3277.1, 60 sec: 3618.4, 300 sec: 3471.2). Total num frames: 2617344. Throughput: 0: 877.6. Samples: 654310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:28:08,279][06480] Avg episode reward: [(0, '18.562')] +[2023-02-26 06:28:09,296][13252] Updated weights for policy 0, policy_version 640 (0.0014) +[2023-02-26 06:28:13,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2633728. Throughput: 0: 901.7. Samples: 659462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:28:13,277][06480] Avg episode reward: [(0, '18.507')] +[2023-02-26 06:28:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 2658304. Throughput: 0: 928.4. Samples: 662770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:28:18,274][06480] Avg episode reward: [(0, '17.729')] +[2023-02-26 06:28:19,227][13252] Updated weights for policy 0, policy_version 650 (0.0027) +[2023-02-26 06:28:23,276][06480] Fps is (10 sec: 4094.3, 60 sec: 3618.0, 300 sec: 3498.9). Total num frames: 2674688. Throughput: 0: 919.0. Samples: 669028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:28:23,278][06480] Avg episode reward: [(0, '17.950')] +[2023-02-26 06:28:28,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2686976. Throughput: 0: 875.7. Samples: 672994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:28:28,282][06480] Avg episode reward: [(0, '17.807')] +[2023-02-26 06:28:32,225][13252] Updated weights for policy 0, policy_version 660 (0.0032) +[2023-02-26 06:28:33,272][06480] Fps is (10 sec: 3278.2, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2707456. Throughput: 0: 876.9. Samples: 675098. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:28:33,279][06480] Avg episode reward: [(0, '18.766')] +[2023-02-26 06:28:38,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2727936. Throughput: 0: 922.4. Samples: 681806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:28:38,278][06480] Avg episode reward: [(0, '18.210')] +[2023-02-26 06:28:41,744][13252] Updated weights for policy 0, policy_version 670 (0.0013) +[2023-02-26 06:28:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2748416. Throughput: 0: 906.4. Samples: 687716. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:28:43,274][06480] Avg episode reward: [(0, '20.171')] +[2023-02-26 06:28:48,273][06480] Fps is (10 sec: 3276.3, 60 sec: 3618.0, 300 sec: 3471.2). Total num frames: 2760704. Throughput: 0: 883.5. Samples: 689868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:28:48,278][06480] Avg episode reward: [(0, '21.242')] +[2023-02-26 06:28:48,281][13238] Saving new best policy, reward=21.242! +[2023-02-26 06:28:53,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2781184. 
Throughput: 0: 894.6. Samples: 694566. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:28:53,274][06480] Avg episode reward: [(0, '22.411')] +[2023-02-26 06:28:53,288][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000679_2781184.pth... +[2023-02-26 06:28:53,401][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000472_1933312.pth +[2023-02-26 06:28:53,408][13238] Saving new best policy, reward=22.411! +[2023-02-26 06:28:54,305][13252] Updated weights for policy 0, policy_version 680 (0.0014) +[2023-02-26 06:28:58,272][06480] Fps is (10 sec: 4096.5, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 2801664. Throughput: 0: 923.8. Samples: 701032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:28:58,275][06480] Avg episode reward: [(0, '22.438')] +[2023-02-26 06:28:58,281][13238] Saving new best policy, reward=22.438! +[2023-02-26 06:29:03,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2818048. Throughput: 0: 922.2. Samples: 704268. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:29:03,277][06480] Avg episode reward: [(0, '22.316')] +[2023-02-26 06:29:05,404][13252] Updated weights for policy 0, policy_version 690 (0.0025) +[2023-02-26 06:29:08,272][06480] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2830336. Throughput: 0: 873.0. Samples: 708308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:29:08,279][06480] Avg episode reward: [(0, '20.347')] +[2023-02-26 06:29:13,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2850816. Throughput: 0: 895.9. Samples: 713310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 06:29:13,275][06480] Avg episode reward: [(0, '18.684')] +[2023-02-26 06:29:16,786][13252] Updated weights for policy 0, policy_version 700 (0.0019) +[2023-02-26 06:29:18,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3512.9). Total num frames: 2871296. Throughput: 0: 919.6. Samples: 716482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:29:18,274][06480] Avg episode reward: [(0, '18.632')] +[2023-02-26 06:29:23,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3550.1, 300 sec: 3540.6). Total num frames: 2887680. Throughput: 0: 890.2. Samples: 721864. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:29:23,276][06480] Avg episode reward: [(0, '18.087')] +[2023-02-26 06:29:28,272][06480] Fps is (10 sec: 2457.5, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 2895872. Throughput: 0: 830.8. Samples: 725104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:29:28,280][06480] Avg episode reward: [(0, '17.589')] +[2023-02-26 06:29:32,583][13252] Updated weights for policy 0, policy_version 710 (0.0027) +[2023-02-26 06:29:33,272][06480] Fps is (10 sec: 2048.0, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 2908160. Throughput: 0: 819.1. Samples: 726728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:29:33,275][06480] Avg episode reward: [(0, '17.534')] +[2023-02-26 06:29:38,272][06480] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 2928640. Throughput: 0: 826.4. Samples: 731752. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:29:38,280][06480] Avg episode reward: [(0, '19.361')] +[2023-02-26 06:29:42,605][13252] Updated weights for policy 0, policy_version 720 (0.0012) +[2023-02-26 06:29:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3512.9). Total num frames: 2949120. Throughput: 0: 830.1. Samples: 738386. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:29:43,277][06480] Avg episode reward: [(0, '20.260')] +[2023-02-26 06:29:48,273][06480] Fps is (10 sec: 3686.1, 60 sec: 3413.4, 300 sec: 3512.8). Total num frames: 2965504. Throughput: 0: 818.7. Samples: 741112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:29:48,280][06480] Avg episode reward: [(0, '19.681')] +[2023-02-26 06:29:53,273][06480] Fps is (10 sec: 2866.9, 60 sec: 3276.7, 300 sec: 3471.2). Total num frames: 2977792. Throughput: 0: 819.5. Samples: 745186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:29:53,277][06480] Avg episode reward: [(0, '19.725')] +[2023-02-26 06:29:55,662][13252] Updated weights for policy 0, policy_version 730 (0.0019) +[2023-02-26 06:29:58,272][06480] Fps is (10 sec: 3277.0, 60 sec: 3276.8, 300 sec: 3485.1). Total num frames: 2998272. Throughput: 0: 828.9. Samples: 750612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:29:58,274][06480] Avg episode reward: [(0, '20.222')] +[2023-02-26 06:30:03,272][06480] Fps is (10 sec: 4096.4, 60 sec: 3345.1, 300 sec: 3512.9). Total num frames: 3018752. Throughput: 0: 829.9. Samples: 753826. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:30:03,278][06480] Avg episode reward: [(0, '19.873')] +[2023-02-26 06:30:06,057][13252] Updated weights for policy 0, policy_version 740 (0.0012) +[2023-02-26 06:30:08,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3035136. Throughput: 0: 829.6. Samples: 759196. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-26 06:30:08,279][06480] Avg episode reward: [(0, '20.334')] +[2023-02-26 06:30:13,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3471.2). Total num frames: 3047424. Throughput: 0: 848.6. Samples: 763292. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:30:13,275][06480] Avg episode reward: [(0, '21.425')] +[2023-02-26 06:30:18,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3485.1). Total num frames: 3067904. Throughput: 0: 870.6. Samples: 765906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:30:18,278][06480] Avg episode reward: [(0, '21.415')] +[2023-02-26 06:30:18,452][13252] Updated weights for policy 0, policy_version 750 (0.0048) +[2023-02-26 06:30:23,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 3092480. Throughput: 0: 908.8. Samples: 772646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:30:23,274][06480] Avg episode reward: [(0, '22.097')] +[2023-02-26 06:30:28,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3108864. Throughput: 0: 877.2. Samples: 777860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 06:30:28,279][06480] Avg episode reward: [(0, '23.036')] +[2023-02-26 06:30:28,285][13238] Saving new best policy, reward=23.036! +[2023-02-26 06:30:29,827][13252] Updated weights for policy 0, policy_version 760 (0.0021) +[2023-02-26 06:30:33,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3121152. Throughput: 0: 859.7. Samples: 779798. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:30:33,280][06480] Avg episode reward: [(0, '23.180')] +[2023-02-26 06:30:33,303][13238] Saving new best policy, reward=23.180! +[2023-02-26 06:30:38,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3141632. Throughput: 0: 882.2. Samples: 784884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:30:38,281][06480] Avg episode reward: [(0, '22.062')] +[2023-02-26 06:30:40,913][13252] Updated weights for policy 0, policy_version 770 (0.0024) +[2023-02-26 06:30:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3162112. Throughput: 0: 907.3. Samples: 791440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:30:43,274][06480] Avg episode reward: [(0, '21.140')] +[2023-02-26 06:30:48,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3178496. Throughput: 0: 895.6. Samples: 794126. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:30:48,277][06480] Avg episode reward: [(0, '20.583')] +[2023-02-26 06:30:53,272][06480] Fps is (10 sec: 2867.1, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3190784. Throughput: 0: 868.8. Samples: 798292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 06:30:53,275][06480] Avg episode reward: [(0, '19.434')] +[2023-02-26 06:30:53,287][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000779_3190784.pth... +[2023-02-26 06:30:53,475][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000574_2351104.pth +[2023-02-26 06:30:53,592][13252] Updated weights for policy 0, policy_version 780 (0.0022) +[2023-02-26 06:30:58,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3211264. Throughput: 0: 900.6. Samples: 803820. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:30:58,279][06480] Avg episode reward: [(0, '19.900')] +[2023-02-26 06:31:03,272][06480] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3231744. Throughput: 0: 914.4. Samples: 807056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:31:03,274][06480] Avg episode reward: [(0, '20.001')] +[2023-02-26 06:31:03,605][13252] Updated weights for policy 0, policy_version 790 (0.0013) +[2023-02-26 06:31:08,274][06480] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3512.8). Total num frames: 3248128. Throughput: 0: 885.6. Samples: 812502. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-26 06:31:08,277][06480] Avg episode reward: [(0, '20.948')] +[2023-02-26 06:31:13,272][06480] Fps is (10 sec: 2867.1, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 3260416. Throughput: 0: 859.9. Samples: 816556. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:31:13,277][06480] Avg episode reward: [(0, '22.050')] +[2023-02-26 06:31:16,737][13252] Updated weights for policy 0, policy_version 800 (0.0025) +[2023-02-26 06:31:18,272][06480] Fps is (10 sec: 3277.4, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 3280896. Throughput: 0: 875.8. Samples: 819210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:31:18,275][06480] Avg episode reward: [(0, '22.213')] +[2023-02-26 06:31:23,272][06480] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3301376. Throughput: 0: 906.2. Samples: 825662. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:31:23,277][06480] Avg episode reward: [(0, '21.573')] +[2023-02-26 06:31:27,064][13252] Updated weights for policy 0, policy_version 810 (0.0012) +[2023-02-26 06:31:28,274][06480] Fps is (10 sec: 3685.8, 60 sec: 3481.5, 300 sec: 3498.9). Total num frames: 3317760. Throughput: 0: 875.1. Samples: 830820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:31:28,276][06480] Avg episode reward: [(0, '21.304')] +[2023-02-26 06:31:33,273][06480] Fps is (10 sec: 3276.4, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 3334144. Throughput: 0: 858.9. Samples: 832776. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:31:33,276][06480] Avg episode reward: [(0, '21.973')] +[2023-02-26 06:31:38,272][06480] Fps is (10 sec: 3277.3, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3350528. Throughput: 0: 882.8. Samples: 838016. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:31:38,277][06480] Avg episode reward: [(0, '19.576')] +[2023-02-26 06:31:39,254][13252] Updated weights for policy 0, policy_version 820 (0.0016) +[2023-02-26 06:31:43,272][06480] Fps is (10 sec: 4096.4, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3375104. Throughput: 0: 908.1. Samples: 844684. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:31:43,275][06480] Avg episode reward: [(0, '19.675')] +[2023-02-26 06:31:48,276][06480] Fps is (10 sec: 4094.4, 60 sec: 3549.6, 300 sec: 3526.7). Total num frames: 3391488. Throughput: 0: 895.7. Samples: 847366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:31:48,279][06480] Avg episode reward: [(0, '20.379')] +[2023-02-26 06:31:50,889][13252] Updated weights for policy 0, policy_version 830 (0.0015) +[2023-02-26 06:31:53,272][06480] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3403776. Throughput: 0: 865.5. Samples: 851446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:31:53,274][06480] Avg episode reward: [(0, '19.535')] +[2023-02-26 06:31:58,272][06480] Fps is (10 sec: 2458.7, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 3416064. Throughput: 0: 853.7. Samples: 854974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 06:31:58,274][06480] Avg episode reward: [(0, '19.374')] +[2023-02-26 06:32:03,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 3432448. Throughput: 0: 839.3. Samples: 856980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 06:32:03,277][06480] Avg episode reward: [(0, '20.583')] +[2023-02-26 06:32:05,233][13252] Updated weights for policy 0, policy_version 840 (0.0018) +[2023-02-26 06:32:08,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3499.0). Total num frames: 3448832. Throughput: 0: 810.3. Samples: 862126. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:32:08,279][06480] Avg episode reward: [(0, '20.284')] +[2023-02-26 06:32:13,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 3461120. Throughput: 0: 786.4. Samples: 866206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:32:13,278][06480] Avg episode reward: [(0, '19.823')] +[2023-02-26 06:32:18,232][13252] Updated weights for policy 0, policy_version 850 (0.0016) +[2023-02-26 06:32:18,272][06480] Fps is (10 sec: 3276.7, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 3481600. Throughput: 0: 799.8. Samples: 868766. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:32:18,274][06480] Avg episode reward: [(0, '20.445')] +[2023-02-26 06:32:23,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 3502080. Throughput: 0: 829.4. Samples: 875338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:32:23,280][06480] Avg episode reward: [(0, '21.473')] +[2023-02-26 06:32:28,272][06480] Fps is (10 sec: 3686.5, 60 sec: 3345.2, 300 sec: 3485.1). Total num frames: 3518464. Throughput: 0: 802.3. Samples: 880786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:32:28,274][06480] Avg episode reward: [(0, '21.928')] +[2023-02-26 06:32:28,796][13252] Updated weights for policy 0, policy_version 860 (0.0017) +[2023-02-26 06:32:33,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3276.9, 300 sec: 3457.3). Total num frames: 3530752. Throughput: 0: 787.8. Samples: 882812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:32:33,275][06480] Avg episode reward: [(0, '22.125')] +[2023-02-26 06:32:38,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3551232. Throughput: 0: 809.2. Samples: 887862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 06:32:38,274][06480] Avg episode reward: [(0, '23.461')] +[2023-02-26 06:32:38,281][13238] Saving new best policy, reward=23.461! +[2023-02-26 06:32:40,460][13252] Updated weights for policy 0, policy_version 870 (0.0017) +[2023-02-26 06:32:43,272][06480] Fps is (10 sec: 4505.6, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 3575808. Throughput: 0: 875.0. Samples: 894348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:32:43,280][06480] Avg episode reward: [(0, '24.011')] +[2023-02-26 06:32:43,290][13238] Saving new best policy, reward=24.011! +[2023-02-26 06:32:48,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3277.1, 300 sec: 3471.2). Total num frames: 3588096. Throughput: 0: 890.1. Samples: 897034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:32:48,277][06480] Avg episode reward: [(0, '25.016')] +[2023-02-26 06:32:48,278][13238] Saving new best policy, reward=25.016! +[2023-02-26 06:32:52,831][13252] Updated weights for policy 0, policy_version 880 (0.0014) +[2023-02-26 06:32:53,273][06480] Fps is (10 sec: 2866.9, 60 sec: 3345.0, 300 sec: 3457.3). Total num frames: 3604480. Throughput: 0: 865.9. Samples: 901092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:32:53,279][06480] Avg episode reward: [(0, '24.956')] +[2023-02-26 06:32:53,298][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth... +[2023-02-26 06:32:53,437][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000679_2781184.pth +[2023-02-26 06:32:58,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3624960. Throughput: 0: 899.6. Samples: 906688. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 06:32:58,275][06480] Avg episode reward: [(0, '24.288')] +[2023-02-26 06:33:03,073][13252] Updated weights for policy 0, policy_version 890 (0.0020) +[2023-02-26 06:33:03,272][06480] Fps is (10 sec: 4096.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3645440. Throughput: 0: 914.7. Samples: 909928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:03,274][06480] Avg episode reward: [(0, '23.134')] +[2023-02-26 06:33:08,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3661824. 
Throughput: 0: 891.6. Samples: 915462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:08,279][06480] Avg episode reward: [(0, '21.925')] +[2023-02-26 06:33:13,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3674112. Throughput: 0: 862.4. Samples: 919596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:33:13,277][06480] Avg episode reward: [(0, '21.501')] +[2023-02-26 06:33:16,185][13252] Updated weights for policy 0, policy_version 900 (0.0039) +[2023-02-26 06:33:18,272][06480] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3694592. Throughput: 0: 877.3. Samples: 922292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 06:33:18,274][06480] Avg episode reward: [(0, '20.837')] +[2023-02-26 06:33:23,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3715072. Throughput: 0: 911.5. Samples: 928878. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:33:23,274][06480] Avg episode reward: [(0, '21.001')] +[2023-02-26 06:33:25,819][13252] Updated weights for policy 0, policy_version 910 (0.0015) +[2023-02-26 06:33:28,273][06480] Fps is (10 sec: 3686.1, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 3731456. Throughput: 0: 879.9. Samples: 933944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:28,285][06480] Avg episode reward: [(0, '22.389')] +[2023-02-26 06:33:33,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3743744. Throughput: 0: 863.8. Samples: 935906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:33,284][06480] Avg episode reward: [(0, '23.375')] +[2023-02-26 06:33:38,272][06480] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3764224. Throughput: 0: 887.0. Samples: 941004. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 06:33:38,274][06480] Avg episode reward: [(0, '24.938')] +[2023-02-26 06:33:38,708][13252] Updated weights for policy 0, policy_version 920 (0.0014) +[2023-02-26 06:33:43,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3784704. Throughput: 0: 909.8. Samples: 947628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:43,274][06480] Avg episode reward: [(0, '25.481')] +[2023-02-26 06:33:43,290][13238] Saving new best policy, reward=25.481! +[2023-02-26 06:33:48,273][06480] Fps is (10 sec: 3686.1, 60 sec: 3549.8, 300 sec: 3457.3). Total num frames: 3801088. Throughput: 0: 893.7. Samples: 950146. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:48,275][06480] Avg episode reward: [(0, '25.422')] +[2023-02-26 06:33:50,045][13252] Updated weights for policy 0, policy_version 930 (0.0015) +[2023-02-26 06:33:53,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3817472. Throughput: 0: 864.0. Samples: 954342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 06:33:53,274][06480] Avg episode reward: [(0, '25.514')] +[2023-02-26 06:33:53,289][13238] Saving new best policy, reward=25.514! +[2023-02-26 06:33:58,272][06480] Fps is (10 sec: 3686.7, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3837952. Throughput: 0: 897.1. Samples: 959964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 06:33:58,274][06480] Avg episode reward: [(0, '26.394')] +[2023-02-26 06:33:58,281][13238] Saving new best policy, reward=26.394! 
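The paired `Saving .../checkpoint_*.pth` / `Removing .../checkpoint_*.pth` entries above show the learner's checkpoint rotation: files are named `checkpoint_<policy_version>_<env_frames>.pth` (e.g. version 880 at 3,604,480 frames), older checkpoints are deleted as new ones land, and the best-reward policy is tracked separately via the `Saving new best policy, reward=...` entries. A minimal sketch of inspecting one of these files with plain PyTorch; the path is copied from this log, and nothing is assumed about the checkpoint's internal layout, which varies by Sample Factory version:

```python
import torch

# Path copied from the log; any checkpoint_*.pth under checkpoint_p0/ works.
ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth"

# Sample Factory checkpoints are ordinary torch pickles; map to CPU so no GPU is needed.
checkpoint = torch.load(ckpt_path, map_location="cpu")

# The top-level keys (model weights, optimizer state, step counters, ...) depend on
# the library version, so print them rather than assuming a fixed schema.
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
```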
+[2023-02-26 06:34:01,161][13252] Updated weights for policy 0, policy_version 940 (0.0015)
+[2023-02-26 06:34:03,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3858432. Throughput: 0: 906.3. Samples: 963074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:34:03,280][06480] Avg episode reward: [(0, '25.180')]
+[2023-02-26 06:34:08,272][06480] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3870720. Throughput: 0: 876.3. Samples: 968312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 06:34:08,275][06480] Avg episode reward: [(0, '24.195')]
+[2023-02-26 06:34:13,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3887104. Throughput: 0: 856.0. Samples: 972464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 06:34:13,282][06480] Avg episode reward: [(0, '25.719')]
+[2023-02-26 06:34:14,409][13252] Updated weights for policy 0, policy_version 950 (0.0031)
+[2023-02-26 06:34:18,272][06480] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3907584. Throughput: 0: 876.4. Samples: 975346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:34:18,274][06480] Avg episode reward: [(0, '26.194')]
+[2023-02-26 06:34:23,272][06480] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 3928064. Throughput: 0: 904.1. Samples: 981688. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 06:34:23,274][06480] Avg episode reward: [(0, '24.475')]
+[2023-02-26 06:34:23,847][13252] Updated weights for policy 0, policy_version 960 (0.0027)
+[2023-02-26 06:34:28,273][06480] Fps is (10 sec: 3686.0, 60 sec: 3549.9, 300 sec: 3512.8). Total num frames: 3944448. Throughput: 0: 865.8. Samples: 986592. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:34:28,284][06480] Avg episode reward: [(0, '24.273')]
+[2023-02-26 06:34:33,272][06480] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3952640. Throughput: 0: 845.2. Samples: 988178. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:34:33,277][06480] Avg episode reward: [(0, '24.760')]
+[2023-02-26 06:34:38,275][06480] Fps is (10 sec: 2047.5, 60 sec: 3344.9, 300 sec: 3443.4). Total num frames: 3964928. Throughput: 0: 821.9. Samples: 991330. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 06:34:38,278][06480] Avg episode reward: [(0, '25.315')]
+[2023-02-26 06:34:40,519][13252] Updated weights for policy 0, policy_version 970 (0.0017)
+[2023-02-26 06:34:43,272][06480] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 3981312. Throughput: 0: 816.6. Samples: 996712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 06:34:43,278][06480] Avg episode reward: [(0, '24.845')]
+[2023-02-26 06:34:48,123][13238] Stopping Batcher_0...
+[2023-02-26 06:34:48,124][13238] Loop batcher_evt_loop terminating...
+[2023-02-26 06:34:48,124][06480] Component Batcher_0 stopped!
+[2023-02-26 06:34:48,141][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 06:34:48,208][13252] Weights refcount: 2 0
+[2023-02-26 06:34:48,218][06480] Component InferenceWorker_p0-w0 stopped!
+[2023-02-26 06:34:48,233][13252] Stopping InferenceWorker_p0-w0...
+[2023-02-26 06:34:48,234][13252] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-26 06:34:48,258][06480] Component RolloutWorker_w3 stopped!
+[2023-02-26 06:34:48,265][06480] Component RolloutWorker_w5 stopped!
+[2023-02-26 06:34:48,267][13257] Stopping RolloutWorker_w5...
+[2023-02-26 06:34:48,268][13257] Loop rollout_proc5_evt_loop terminating...
+[2023-02-26 06:34:48,260][13255] Stopping RolloutWorker_w3...
+[2023-02-26 06:34:48,272][13255] Loop rollout_proc3_evt_loop terminating...
+[2023-02-26 06:34:48,292][06480] Component RolloutWorker_w1 stopped!
+[2023-02-26 06:34:48,294][13254] Stopping RolloutWorker_w1...
+[2023-02-26 06:34:48,305][13254] Loop rollout_proc1_evt_loop terminating...
+[2023-02-26 06:34:48,315][06480] Component RolloutWorker_w7 stopped!
+[2023-02-26 06:34:48,319][13259] Stopping RolloutWorker_w7...
+[2023-02-26 06:34:48,320][13259] Loop rollout_proc7_evt_loop terminating...
+[2023-02-26 06:34:48,325][13258] Stopping RolloutWorker_w4...
+[2023-02-26 06:34:48,325][06480] Component RolloutWorker_w4 stopped!
+[2023-02-26 06:34:48,346][13258] Loop rollout_proc4_evt_loop terminating...
+[2023-02-26 06:34:48,360][13238] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000779_3190784.pth
+[2023-02-26 06:34:48,371][13260] Stopping RolloutWorker_w6...
+[2023-02-26 06:34:48,371][13260] Loop rollout_proc6_evt_loop terminating...
+[2023-02-26 06:34:48,371][06480] Component RolloutWorker_w6 stopped!
+[2023-02-26 06:34:48,381][13238] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 06:34:48,384][13256] Stopping RolloutWorker_w2...
+[2023-02-26 06:34:48,384][13256] Loop rollout_proc2_evt_loop terminating...
+[2023-02-26 06:34:48,384][06480] Component RolloutWorker_w2 stopped!
+[2023-02-26 06:34:48,439][13253] Stopping RolloutWorker_w0...
+[2023-02-26 06:34:48,440][13253] Loop rollout_proc0_evt_loop terminating...
+[2023-02-26 06:34:48,439][06480] Component RolloutWorker_w0 stopped!
+[2023-02-26 06:34:48,683][13238] Stopping LearnerWorker_p0...
+[2023-02-26 06:34:48,683][06480] Component LearnerWorker_p0 stopped!
+[2023-02-26 06:34:48,685][06480] Waiting for process learner_proc0 to stop...
+[2023-02-26 06:34:48,699][13238] Loop learner_proc0_evt_loop terminating...
+[2023-02-26 06:34:50,928][06480] Waiting for process inference_proc0-0 to join...
+[2023-02-26 06:34:51,761][06480] Waiting for process rollout_proc0 to join...
+[2023-02-26 06:34:52,689][06480] Waiting for process rollout_proc1 to join...
+[2023-02-26 06:34:52,692][06480] Waiting for process rollout_proc2 to join...
+[2023-02-26 06:34:52,699][06480] Waiting for process rollout_proc3 to join...
+[2023-02-26 06:34:52,700][06480] Waiting for process rollout_proc4 to join...
+[2023-02-26 06:34:52,703][06480] Waiting for process rollout_proc5 to join...
+[2023-02-26 06:34:52,705][06480] Waiting for process rollout_proc6 to join...
+[2023-02-26 06:34:52,708][06480] Waiting for process rollout_proc7 to join...
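The orderly teardown above mirrors the process layout from the start of the run: one GPU learner, one GPU inference worker, and eight single-CPU-core rollout workers, coordinated by a batcher and an event-loop runner. A rough sketch of the kind of launch that produces this topology, assuming the Sample Factory v2 entry points used by the Hugging Face Deep RL course; `parse_sf_args`, `parse_full_cfg`, `run_rl`, and the `register_vizdoom_components` helper are assumed names whose module paths may differ between versions, the env name is inferred from the Hub repository logged later, and the 4M-step budget from the run stopping just past 4,005,888 frames:

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
# Assumed helper that registers the Doom envs and encoder with Sample Factory:
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components


def main() -> int:
    register_vizdoom_components()
    # 8 rollout workers and the ~4M-frame budget can be read off this log;
    # everything else is left at Sample Factory defaults.
    argv = [
        "--env=doom_health_gathering_supreme",
        "--num_workers=8",
        "--train_for_env_steps=4000000",
    ]
    parser, _ = parse_sf_args(argv=argv)
    cfg = parse_full_cfg(parser, argv=argv)
    return run_rl(cfg)


if __name__ == "__main__":
    main()
```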
+[2023-02-26 06:34:52,714][06480] Batcher 0 profile tree view:
+batching: 25.3344, releasing_batches: 0.0277
+[2023-02-26 06:34:52,716][06480] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0001
+  wait_policy_total: 582.9391
+update_model: 7.9647
+  weight_update: 0.0014
+one_step: 0.0113
+  handle_policy_step: 529.2481
+    deserialize: 15.6504, stack: 3.1020, obs_to_device_normalize: 117.3064, forward: 254.7965, send_messages: 27.2844
+    prepare_outputs: 84.1286
+      to_cpu: 51.2895
+[2023-02-26 06:34:52,718][06480] Learner 0 profile tree view:
+misc: 0.0078, prepare_batch: 17.2218
+train: 76.6284
+  epoch_init: 0.0271, minibatch_init: 0.0107, losses_postprocess: 0.5949, kl_divergence: 0.5760, after_optimizer: 33.3189
+  calculate_losses: 27.2007
+    losses_init: 0.0035, forward_head: 1.9252, bptt_initial: 17.7151, tail: 1.1068, advantages_returns: 0.3227, losses: 3.5600
+    bptt: 2.2171
+      bptt_forward_core: 2.1199
+  update: 14.2121
+    clip: 1.4362
+[2023-02-26 06:34:52,720][06480] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3924, enqueue_policy_requests: 163.6997, env_step: 867.2261, overhead: 23.9751, complete_rollouts: 7.0514
+save_policy_outputs: 22.1973
+  split_output_tensors: 10.7631
+[2023-02-26 06:34:52,723][06480] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3423, enqueue_policy_requests: 169.2092, env_step: 861.8023, overhead: 24.6130, complete_rollouts: 7.1703
+save_policy_outputs: 22.0299
+  split_output_tensors: 10.8844
+[2023-02-26 06:34:52,725][06480] Loop Runner_EvtLoop terminating...
+[2023-02-26 06:34:52,731][06480] Runner profile tree view:
+main_loop: 1195.0150
+[2023-02-26 06:34:52,733][06480] Collected {0: 4005888}, FPS: 3352.2
+[2023-02-26 07:01:04,608][06480] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 07:01:04,610][06480] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 07:01:04,612][06480] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 07:01:04,613][06480] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 07:01:04,615][06480] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 07:01:04,617][06480] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 07:01:04,619][06480] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 07:01:04,621][06480] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 07:01:04,623][06480] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-26 07:01:04,625][06480] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-26 07:01:04,628][06480] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-26 07:01:04,631][06480] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-26 07:01:04,634][06480] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-26 07:01:04,637][06480] Adding new argument 'enjoy_script'=None that is not in the saved config file!
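Everything up to `Collected {0: 4005888}, FPS: 3352.2` is the end-of-training summary; the throughput figure is simply total frames over the runner's main loop, 4005888 / 1195.0150 ≈ 3352.2. The entries that follow come from a separate evaluation ("enjoy") run that reloads config.json and overrides or extends it with the evaluation-only arguments listed above. A sketch of that invocation, assuming the `sample_factory.enjoy` entry point and the same registration helper as in the training sketch; the flag names are taken from the logged overrides:

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.enjoy import enjoy  # assumed evaluation entry point
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed helper

register_vizdoom_components()
argv = [
    "--env=doom_health_gathering_supreme",
    "--num_workers=1",        # 'Overriding arg num_workers with value 1' above
    "--no_render",            # no on-screen rendering in a Colab session
    "--save_video",           # produces replay.mp4 under the experiment dir
    "--max_num_episodes=10",  # the ten episodes evaluated below
]
parser, _ = parse_sf_args(argv=argv, evaluation=True)
status = enjoy(parse_full_cfg(parser, argv=argv))
```

In the per-episode output that follows, `Avg episode rewards` is a running mean over completed episodes (with `true rewards` reporting the unshaped objective), so per-episode scores can be recovered: after two episodes the mean falls from 19.200 to 12.340, implying the second episode scored 2 × 12.340 − 19.200 = 5.48.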
+[2023-02-26 07:01:04,640][06480] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-26 07:01:04,670][06480] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 07:01:04,673][06480] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 07:01:04,679][06480] RunningMeanStd input shape: (1,)
+[2023-02-26 07:01:04,697][06480] ConvEncoder: input_channels=3
+[2023-02-26 07:01:05,374][06480] Conv encoder output size: 512
+[2023-02-26 07:01:05,377][06480] Policy head output size: 512
+[2023-02-26 07:01:07,835][06480] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 07:01:09,126][06480] Num frames 100...
+[2023-02-26 07:01:09,246][06480] Num frames 200...
+[2023-02-26 07:01:09,369][06480] Num frames 300...
+[2023-02-26 07:01:09,487][06480] Num frames 400...
+[2023-02-26 07:01:09,606][06480] Num frames 500...
+[2023-02-26 07:01:09,737][06480] Num frames 600...
+[2023-02-26 07:01:09,861][06480] Num frames 700...
+[2023-02-26 07:01:09,995][06480] Num frames 800...
+[2023-02-26 07:01:10,112][06480] Num frames 900...
+[2023-02-26 07:01:10,191][06480] Avg episode rewards: #0: 19.200, true rewards: #0: 9.200
+[2023-02-26 07:01:10,193][06480] Avg episode reward: 19.200, avg true_objective: 9.200
+[2023-02-26 07:01:10,289][06480] Num frames 1000...
+[2023-02-26 07:01:10,404][06480] Num frames 1100...
+[2023-02-26 07:01:10,518][06480] Num frames 1200...
+[2023-02-26 07:01:10,642][06480] Num frames 1300...
+[2023-02-26 07:01:10,787][06480] Avg episode rewards: #0: 12.340, true rewards: #0: 6.840
+[2023-02-26 07:01:10,790][06480] Avg episode reward: 12.340, avg true_objective: 6.840
+[2023-02-26 07:01:10,829][06480] Num frames 1400...
+[2023-02-26 07:01:10,960][06480] Num frames 1500...
+[2023-02-26 07:01:11,073][06480] Num frames 1600...
+[2023-02-26 07:01:11,185][06480] Num frames 1700...
+[2023-02-26 07:01:11,304][06480] Num frames 1800...
+[2023-02-26 07:01:11,422][06480] Num frames 1900...
+[2023-02-26 07:01:11,538][06480] Num frames 2000...
+[2023-02-26 07:01:11,652][06480] Num frames 2100...
+[2023-02-26 07:01:11,773][06480] Num frames 2200...
+[2023-02-26 07:01:11,889][06480] Num frames 2300...
+[2023-02-26 07:01:12,005][06480] Num frames 2400...
+[2023-02-26 07:01:12,122][06480] Num frames 2500...
+[2023-02-26 07:01:12,233][06480] Num frames 2600...
+[2023-02-26 07:01:12,356][06480] Num frames 2700...
+[2023-02-26 07:01:12,474][06480] Num frames 2800...
+[2023-02-26 07:01:12,573][06480] Avg episode rewards: #0: 19.800, true rewards: #0: 9.467
+[2023-02-26 07:01:12,574][06480] Avg episode reward: 19.800, avg true_objective: 9.467
+[2023-02-26 07:01:12,645][06480] Num frames 2900...
+[2023-02-26 07:01:12,766][06480] Num frames 3000...
+[2023-02-26 07:01:12,880][06480] Num frames 3100...
+[2023-02-26 07:01:13,000][06480] Num frames 3200...
+[2023-02-26 07:01:13,116][06480] Num frames 3300...
+[2023-02-26 07:01:13,255][06480] Num frames 3400...
+[2023-02-26 07:01:13,408][06480] Num frames 3500...
+[2023-02-26 07:01:13,583][06480] Avg episode rewards: #0: 18.190, true rewards: #0: 8.940
+[2023-02-26 07:01:13,586][06480] Avg episode reward: 18.190, avg true_objective: 8.940
+[2023-02-26 07:01:13,629][06480] Num frames 3600...
+[2023-02-26 07:01:13,788][06480] Num frames 3700...
+[2023-02-26 07:01:13,960][06480] Num frames 3800...
+[2023-02-26 07:01:14,119][06480] Num frames 3900...
+[2023-02-26 07:01:14,281][06480] Num frames 4000...
+[2023-02-26 07:01:14,455][06480] Num frames 4100...
+[2023-02-26 07:01:14,619][06480] Num frames 4200...
+[2023-02-26 07:01:14,777][06480] Num frames 4300...
+[2023-02-26 07:01:14,949][06480] Num frames 4400...
+[2023-02-26 07:01:15,124][06480] Num frames 4500...
+[2023-02-26 07:01:15,290][06480] Num frames 4600...
+[2023-02-26 07:01:15,458][06480] Num frames 4700...
+[2023-02-26 07:01:15,631][06480] Num frames 4800...
+[2023-02-26 07:01:15,795][06480] Num frames 4900...
+[2023-02-26 07:01:15,960][06480] Num frames 5000...
+[2023-02-26 07:01:16,126][06480] Num frames 5100...
+[2023-02-26 07:01:16,289][06480] Num frames 5200...
+[2023-02-26 07:01:16,454][06480] Num frames 5300...
+[2023-02-26 07:01:16,614][06480] Num frames 5400...
+[2023-02-26 07:01:16,765][06480] Num frames 5500...
+[2023-02-26 07:01:16,853][06480] Avg episode rewards: #0: 24.456, true rewards: #0: 11.056
+[2023-02-26 07:01:16,854][06480] Avg episode reward: 24.456, avg true_objective: 11.056
+[2023-02-26 07:01:16,953][06480] Num frames 5600...
+[2023-02-26 07:01:17,073][06480] Num frames 5700...
+[2023-02-26 07:01:17,201][06480] Num frames 5800...
+[2023-02-26 07:01:17,324][06480] Num frames 5900...
+[2023-02-26 07:01:17,436][06480] Num frames 6000...
+[2023-02-26 07:01:17,549][06480] Num frames 6100...
+[2023-02-26 07:01:17,667][06480] Num frames 6200...
+[2023-02-26 07:01:17,787][06480] Num frames 6300...
+[2023-02-26 07:01:17,900][06480] Num frames 6400...
+[2023-02-26 07:01:18,015][06480] Num frames 6500...
+[2023-02-26 07:01:18,127][06480] Num frames 6600...
+[2023-02-26 07:01:18,243][06480] Num frames 6700...
+[2023-02-26 07:01:18,359][06480] Num frames 6800...
+[2023-02-26 07:01:18,469][06480] Num frames 6900...
+[2023-02-26 07:01:18,541][06480] Avg episode rewards: #0: 26.022, true rewards: #0: 11.522
+[2023-02-26 07:01:18,542][06480] Avg episode reward: 26.022, avg true_objective: 11.522
+[2023-02-26 07:01:18,642][06480] Num frames 7000...
+[2023-02-26 07:01:18,751][06480] Num frames 7100...
+[2023-02-26 07:01:18,868][06480] Num frames 7200...
+[2023-02-26 07:01:18,995][06480] Num frames 7300...
+[2023-02-26 07:01:19,106][06480] Num frames 7400...
+[2023-02-26 07:01:19,217][06480] Num frames 7500...
+[2023-02-26 07:01:19,335][06480] Num frames 7600...
+[2023-02-26 07:01:19,453][06480] Num frames 7700...
+[2023-02-26 07:01:19,565][06480] Num frames 7800...
+[2023-02-26 07:01:19,679][06480] Num frames 7900...
+[2023-02-26 07:01:19,797][06480] Avg episode rewards: #0: 26.081, true rewards: #0: 11.367
+[2023-02-26 07:01:19,799][06480] Avg episode reward: 26.081, avg true_objective: 11.367
+[2023-02-26 07:01:19,850][06480] Num frames 8000...
+[2023-02-26 07:01:19,969][06480] Num frames 8100...
+[2023-02-26 07:01:20,082][06480] Num frames 8200...
+[2023-02-26 07:01:20,201][06480] Num frames 8300...
+[2023-02-26 07:01:20,313][06480] Num frames 8400...
+[2023-02-26 07:01:20,430][06480] Num frames 8500...
+[2023-02-26 07:01:20,544][06480] Num frames 8600...
+[2023-02-26 07:01:20,661][06480] Num frames 8700...
+[2023-02-26 07:01:20,777][06480] Num frames 8800...
+[2023-02-26 07:01:20,893][06480] Num frames 8900...
+[2023-02-26 07:01:21,016][06480] Num frames 9000...
+[2023-02-26 07:01:21,129][06480] Num frames 9100...
+[2023-02-26 07:01:21,248][06480] Num frames 9200...
+[2023-02-26 07:01:21,364][06480] Num frames 9300...
+[2023-02-26 07:01:21,483][06480] Num frames 9400...
+[2023-02-26 07:01:21,577][06480] Avg episode rewards: #0: 27.536, true rewards: #0: 11.786
+[2023-02-26 07:01:21,578][06480] Avg episode reward: 27.536, avg true_objective: 11.786
+[2023-02-26 07:01:21,667][06480] Num frames 9500...
+[2023-02-26 07:01:21,781][06480] Num frames 9600...
+[2023-02-26 07:01:21,900][06480] Num frames 9700...
+[2023-02-26 07:01:22,023][06480] Num frames 9800...
+[2023-02-26 07:01:22,141][06480] Num frames 9900...
+[2023-02-26 07:01:22,255][06480] Num frames 10000...
+[2023-02-26 07:01:22,370][06480] Num frames 10100...
+[2023-02-26 07:01:22,487][06480] Num frames 10200...
+[2023-02-26 07:01:22,603][06480] Num frames 10300...
+[2023-02-26 07:01:22,716][06480] Num frames 10400...
+[2023-02-26 07:01:22,827][06480] Num frames 10500...
+[2023-02-26 07:01:22,944][06480] Avg episode rewards: #0: 27.165, true rewards: #0: 11.721
+[2023-02-26 07:01:22,946][06480] Avg episode reward: 27.165, avg true_objective: 11.721
+[2023-02-26 07:01:23,014][06480] Num frames 10600...
+[2023-02-26 07:01:23,126][06480] Num frames 10700...
+[2023-02-26 07:01:23,239][06480] Num frames 10800...
+[2023-02-26 07:01:23,349][06480] Num frames 10900...
+[2023-02-26 07:01:23,457][06480] Num frames 11000...
+[2023-02-26 07:01:23,573][06480] Num frames 11100...
+[2023-02-26 07:01:23,692][06480] Avg episode rewards: #0: 25.357, true rewards: #0: 11.157
+[2023-02-26 07:01:23,694][06480] Avg episode reward: 25.357, avg true_objective: 11.157
+[2023-02-26 07:02:30,978][06480] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-26 07:14:21,650][06480] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 07:14:21,652][06480] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 07:14:21,654][06480] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 07:14:21,656][06480] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 07:14:21,658][06480] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 07:14:21,661][06480] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 07:14:21,662][06480] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-26 07:14:21,664][06480] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 07:14:21,666][06480] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-26 07:14:21,667][06480] Adding new argument 'hf_repository'='sd99/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-26 07:14:21,668][06480] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-26 07:14:21,670][06480] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-26 07:14:21,671][06480] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-26 07:14:21,672][06480] Adding new argument 'enjoy_script'=None that is not in the saved config file!
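This second run at 07:14 repeats the evaluation with only three differences visible in the config dump above: 'max_num_frames'=100000, 'push_to_hub'=True, and a target 'hf_repository'; after the ten episodes below it saves replay.mp4 again and uploads the experiment to the Hugging Face Hub. Under the same assumptions as the previous sketch, only the argument list changes:

```python
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.enjoy import enjoy  # assumed entry point, as above
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed helper

register_vizdoom_components()
argv = [
    "--env=doom_health_gathering_supreme",
    "--num_workers=1",
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--max_num_frames=100000",  # the tighter frame budget logged above
    "--push_to_hub",            # enables the Hub upload after evaluation
    "--hf_repository=sd99/rl_course_vizdoom_health_gathering_supreme",
]
parser, _ = parse_sf_args(argv=argv, evaluation=True)
status = enjoy(parse_full_cfg(parser, argv=argv))
```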
+[2023-02-26 07:14:21,673][06480] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-26 07:14:21,702][06480] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 07:14:21,704][06480] RunningMeanStd input shape: (1,)
+[2023-02-26 07:14:21,719][06480] ConvEncoder: input_channels=3
+[2023-02-26 07:14:21,755][06480] Conv encoder output size: 512
+[2023-02-26 07:14:21,757][06480] Policy head output size: 512
+[2023-02-26 07:14:21,778][06480] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 07:14:22,441][06480] Num frames 100...
+[2023-02-26 07:14:22,600][06480] Num frames 200...
+[2023-02-26 07:14:22,752][06480] Num frames 300...
+[2023-02-26 07:14:22,907][06480] Num frames 400...
+[2023-02-26 07:14:23,059][06480] Num frames 500...
+[2023-02-26 07:14:23,237][06480] Avg episode rewards: #0: 12.760, true rewards: #0: 5.760
+[2023-02-26 07:14:23,239][06480] Avg episode reward: 12.760, avg true_objective: 5.760
+[2023-02-26 07:14:23,281][06480] Num frames 600...
+[2023-02-26 07:14:23,432][06480] Num frames 700...
+[2023-02-26 07:14:23,585][06480] Num frames 800...
+[2023-02-26 07:14:23,745][06480] Num frames 900...
+[2023-02-26 07:14:23,900][06480] Num frames 1000...
+[2023-02-26 07:14:24,054][06480] Num frames 1100...
+[2023-02-26 07:14:24,161][06480] Avg episode rewards: #0: 12.160, true rewards: #0: 5.660
+[2023-02-26 07:14:24,164][06480] Avg episode reward: 12.160, avg true_objective: 5.660
+[2023-02-26 07:14:24,276][06480] Num frames 1200...
+[2023-02-26 07:14:24,443][06480] Num frames 1300...
+[2023-02-26 07:14:24,610][06480] Num frames 1400...
+[2023-02-26 07:14:24,775][06480] Num frames 1500...
+[2023-02-26 07:14:24,937][06480] Num frames 1600...
+[2023-02-26 07:14:25,095][06480] Num frames 1700...
+[2023-02-26 07:14:25,254][06480] Num frames 1800...
+[2023-02-26 07:14:25,418][06480] Num frames 1900...
+[2023-02-26 07:14:25,530][06480] Num frames 2000...
+[2023-02-26 07:14:25,639][06480] Num frames 2100...
+[2023-02-26 07:14:25,751][06480] Num frames 2200...
+[2023-02-26 07:14:25,871][06480] Num frames 2300...
+[2023-02-26 07:14:25,987][06480] Num frames 2400...
+[2023-02-26 07:14:26,120][06480] Avg episode rewards: #0: 17.897, true rewards: #0: 8.230
+[2023-02-26 07:14:26,121][06480] Avg episode reward: 17.897, avg true_objective: 8.230
+[2023-02-26 07:14:26,159][06480] Num frames 2500...
+[2023-02-26 07:14:26,275][06480] Num frames 2600...
+[2023-02-26 07:14:26,384][06480] Num frames 2700...
+[2023-02-26 07:14:26,502][06480] Num frames 2800...
+[2023-02-26 07:14:26,612][06480] Num frames 2900...
+[2023-02-26 07:14:26,720][06480] Num frames 3000...
+[2023-02-26 07:14:26,833][06480] Num frames 3100...
+[2023-02-26 07:14:26,945][06480] Num frames 3200...
+[2023-02-26 07:14:27,054][06480] Num frames 3300...
+[2023-02-26 07:14:27,167][06480] Num frames 3400...
+[2023-02-26 07:14:27,281][06480] Num frames 3500...
+[2023-02-26 07:14:27,394][06480] Num frames 3600...
+[2023-02-26 07:14:27,504][06480] Num frames 3700...
+[2023-02-26 07:14:27,617][06480] Num frames 3800...
+[2023-02-26 07:14:27,722][06480] Avg episode rewards: #0: 21.613, true rewards: #0: 9.612
+[2023-02-26 07:14:27,725][06480] Avg episode reward: 21.613, avg true_objective: 9.612
+[2023-02-26 07:14:27,791][06480] Num frames 3900...
+[2023-02-26 07:14:27,899][06480] Num frames 4000...
+[2023-02-26 07:14:28,019][06480] Num frames 4100...
+[2023-02-26 07:14:28,141][06480] Num frames 4200...
+[2023-02-26 07:14:28,257][06480] Num frames 4300...
+[2023-02-26 07:14:28,369][06480] Num frames 4400...
+[2023-02-26 07:14:28,481][06480] Num frames 4500...
+[2023-02-26 07:14:28,593][06480] Num frames 4600...
+[2023-02-26 07:14:28,715][06480] Num frames 4700...
+[2023-02-26 07:14:28,830][06480] Num frames 4800...
+[2023-02-26 07:14:28,949][06480] Num frames 4900...
+[2023-02-26 07:14:29,082][06480] Avg episode rewards: #0: 22.340, true rewards: #0: 9.940
+[2023-02-26 07:14:29,084][06480] Avg episode reward: 22.340, avg true_objective: 9.940
+[2023-02-26 07:14:29,122][06480] Num frames 5000...
+[2023-02-26 07:14:29,241][06480] Num frames 5100...
+[2023-02-26 07:14:29,352][06480] Num frames 5200...
+[2023-02-26 07:14:29,465][06480] Num frames 5300...
+[2023-02-26 07:14:29,581][06480] Num frames 5400...
+[2023-02-26 07:14:29,691][06480] Num frames 5500...
+[2023-02-26 07:14:29,809][06480] Num frames 5600...
+[2023-02-26 07:14:29,923][06480] Num frames 5700...
+[2023-02-26 07:14:30,021][06480] Avg episode rewards: #0: 21.230, true rewards: #0: 9.563
+[2023-02-26 07:14:30,022][06480] Avg episode reward: 21.230, avg true_objective: 9.563
+[2023-02-26 07:14:30,096][06480] Num frames 5800...
+[2023-02-26 07:14:30,210][06480] Num frames 5900...
+[2023-02-26 07:14:30,326][06480] Num frames 6000...
+[2023-02-26 07:14:30,440][06480] Num frames 6100...
+[2023-02-26 07:14:30,552][06480] Num frames 6200...
+[2023-02-26 07:14:30,665][06480] Num frames 6300...
+[2023-02-26 07:14:30,775][06480] Num frames 6400...
+[2023-02-26 07:14:30,889][06480] Num frames 6500...
+[2023-02-26 07:14:31,006][06480] Num frames 6600...
+[2023-02-26 07:14:31,123][06480] Num frames 6700...
+[2023-02-26 07:14:31,238][06480] Num frames 6800...
+[2023-02-26 07:14:31,358][06480] Num frames 6900...
+[2023-02-26 07:14:31,468][06480] Num frames 7000...
+[2023-02-26 07:14:31,581][06480] Num frames 7100...
+[2023-02-26 07:14:31,693][06480] Num frames 7200...
+[2023-02-26 07:14:31,807][06480] Num frames 7300...
+[2023-02-26 07:14:31,925][06480] Num frames 7400...
+[2023-02-26 07:14:32,036][06480] Num frames 7500...
+[2023-02-26 07:14:32,151][06480] Num frames 7600...
+[2023-02-26 07:14:32,266][06480] Num frames 7700...
+[2023-02-26 07:14:32,384][06480] Num frames 7800...
+[2023-02-26 07:14:32,484][06480] Avg episode rewards: #0: 25.483, true rewards: #0: 11.197
+[2023-02-26 07:14:32,486][06480] Avg episode reward: 25.483, avg true_objective: 11.197
+[2023-02-26 07:14:32,563][06480] Num frames 7900...
+[2023-02-26 07:14:32,675][06480] Num frames 8000...
+[2023-02-26 07:14:32,785][06480] Num frames 8100...
+[2023-02-26 07:14:32,910][06480] Num frames 8200...
+[2023-02-26 07:14:32,990][06480] Avg episode rewards: #0: 23.152, true rewards: #0: 10.277
+[2023-02-26 07:14:32,992][06480] Avg episode reward: 23.152, avg true_objective: 10.277
+[2023-02-26 07:14:33,079][06480] Num frames 8300...
+[2023-02-26 07:14:33,189][06480] Num frames 8400...
+[2023-02-26 07:14:33,307][06480] Num frames 8500...
+[2023-02-26 07:14:33,425][06480] Num frames 8600...
+[2023-02-26 07:14:33,536][06480] Num frames 8700...
+[2023-02-26 07:14:33,647][06480] Num frames 8800...
+[2023-02-26 07:14:33,758][06480] Num frames 8900...
+[2023-02-26 07:14:33,867][06480] Num frames 9000...
+[2023-02-26 07:14:33,982][06480] Num frames 9100...
+[2023-02-26 07:14:34,104][06480] Num frames 9200...
+[2023-02-26 07:14:34,217][06480] Num frames 9300...
+[2023-02-26 07:14:34,339][06480] Num frames 9400...
+[2023-02-26 07:14:34,451][06480] Num frames 9500...
+[2023-02-26 07:14:34,565][06480] Num frames 9600...
+[2023-02-26 07:14:34,678][06480] Num frames 9700...
+[2023-02-26 07:14:34,796][06480] Num frames 9800...
+[2023-02-26 07:14:34,909][06480] Num frames 9900...
+[2023-02-26 07:14:35,025][06480] Num frames 10000...
+[2023-02-26 07:14:35,155][06480] Num frames 10100...
+[2023-02-26 07:14:35,223][06480] Avg episode rewards: #0: 25.455, true rewards: #0: 11.233
+[2023-02-26 07:14:35,225][06480] Avg episode reward: 25.455, avg true_objective: 11.233
+[2023-02-26 07:14:35,330][06480] Num frames 10200...
+[2023-02-26 07:14:35,465][06480] Num frames 10300...
+[2023-02-26 07:14:35,623][06480] Num frames 10400...
+[2023-02-26 07:14:35,775][06480] Num frames 10500...
+[2023-02-26 07:14:35,913][06480] Avg episode rewards: #0: 23.750, true rewards: #0: 10.550
+[2023-02-26 07:14:35,919][06480] Avg episode reward: 23.750, avg true_objective: 10.550
+[2023-02-26 07:15:38,696][06480] Replay video saved to /content/train_dir/default_experiment/replay.mp4!