diff --git "a/sf_log.txt" "b/sf_log.txt" --- "a/sf_log.txt" +++ "b/sf_log.txt" @@ -1,50 +1,50 @@ -[2023-02-25 09:54:34,751][00973] Saving configuration to /content/train_dir/default_experiment/config.json... -[2023-02-25 09:54:34,754][00973] Rollout worker 0 uses device cpu -[2023-02-25 09:54:34,756][00973] Rollout worker 1 uses device cpu -[2023-02-25 09:54:34,760][00973] Rollout worker 2 uses device cpu -[2023-02-25 09:54:34,765][00973] Rollout worker 3 uses device cpu -[2023-02-25 09:54:34,766][00973] Rollout worker 4 uses device cpu -[2023-02-25 09:54:34,769][00973] Rollout worker 5 uses device cpu -[2023-02-25 09:54:34,771][00973] Rollout worker 6 uses device cpu -[2023-02-25 09:54:34,772][00973] Rollout worker 7 uses device cpu -[2023-02-25 09:54:34,967][00973] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 09:54:34,969][00973] InferenceWorker_p0-w0: min num requests: 2 -[2023-02-25 09:54:35,007][00973] Starting all processes... -[2023-02-25 09:54:35,013][00973] Starting process learner_proc0 -[2023-02-25 09:54:35,070][00973] Starting all processes... -[2023-02-25 09:54:35,081][00973] Starting process inference_proc0-0 -[2023-02-25 09:54:35,082][00973] Starting process rollout_proc0 -[2023-02-25 09:54:35,085][00973] Starting process rollout_proc1 -[2023-02-25 09:54:35,085][00973] Starting process rollout_proc2 -[2023-02-25 09:54:35,085][00973] Starting process rollout_proc3 -[2023-02-25 09:54:35,085][00973] Starting process rollout_proc4 -[2023-02-25 09:54:35,085][00973] Starting process rollout_proc5 -[2023-02-25 09:54:35,086][00973] Starting process rollout_proc6 -[2023-02-25 09:54:35,086][00973] Starting process rollout_proc7 -[2023-02-25 09:54:47,653][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 09:54:47,660][11765] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-02-25 09:54:47,908][11782] Worker 2 uses CPU cores [0] -[2023-02-25 09:54:47,997][11780] Worker 0 uses CPU cores [0] -[2023-02-25 09:54:48,027][11779] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 09:54:48,028][11779] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-02-25 09:54:48,056][11781] Worker 1 uses CPU cores [1] -[2023-02-25 09:54:48,098][11784] Worker 3 uses CPU cores [1] -[2023-02-25 09:54:48,458][11783] Worker 4 uses CPU cores [0] -[2023-02-25 09:54:48,482][11786] Worker 7 uses CPU cores [1] -[2023-02-25 09:54:48,511][11787] Worker 6 uses CPU cores [0] -[2023-02-25 09:54:48,539][11785] Worker 5 uses CPU cores [1] -[2023-02-25 09:54:48,861][11765] Num visible devices: 1 -[2023-02-25 09:54:48,861][11779] Num visible devices: 1 -[2023-02-25 09:54:48,877][11765] Starting seed is not provided -[2023-02-25 09:54:48,877][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 09:54:48,878][11765] Initializing actor-critic model on device cuda:0 -[2023-02-25 09:54:48,878][11765] RunningMeanStd input shape: (3, 72, 128) -[2023-02-25 09:54:48,886][11765] RunningMeanStd input shape: (1,) -[2023-02-25 09:54:48,905][11765] ConvEncoder: input_channels=3 -[2023-02-25 09:54:49,383][11765] Conv encoder output size: 512 -[2023-02-25 09:54:49,384][11765] Policy head output size: 512 -[2023-02-25 09:54:49,456][11765] Created Actor Critic model with architecture: -[2023-02-25 09:54:49,456][11765] ActorCriticSharedWeights( +[2023-02-25 13:36:50,795][00699] Saving configuration to /content/train_dir/default_experiment/config.json... 
+[2023-02-25 13:36:50,798][00699] Rollout worker 0 uses device cpu +[2023-02-25 13:36:50,799][00699] Rollout worker 1 uses device cpu +[2023-02-25 13:36:50,803][00699] Rollout worker 2 uses device cpu +[2023-02-25 13:36:50,804][00699] Rollout worker 3 uses device cpu +[2023-02-25 13:36:50,806][00699] Rollout worker 4 uses device cpu +[2023-02-25 13:36:50,807][00699] Rollout worker 5 uses device cpu +[2023-02-25 13:36:50,809][00699] Rollout worker 6 uses device cpu +[2023-02-25 13:36:50,810][00699] Rollout worker 7 uses device cpu +[2023-02-25 13:36:51,020][00699] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:36:51,025][00699] InferenceWorker_p0-w0: min num requests: 2 +[2023-02-25 13:36:51,055][00699] Starting all processes... +[2023-02-25 13:36:51,057][00699] Starting process learner_proc0 +[2023-02-25 13:36:51,111][00699] Starting all processes... +[2023-02-25 13:36:51,129][00699] Starting process inference_proc0-0 +[2023-02-25 13:36:51,133][00699] Starting process rollout_proc0 +[2023-02-25 13:36:51,133][00699] Starting process rollout_proc1 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc3 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc4 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc5 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc6 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc7 +[2023-02-25 13:36:51,141][00699] Starting process rollout_proc2 +[2023-02-25 13:37:01,955][10893] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:37:01,963][10893] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-02-25 13:37:02,647][10911] Worker 5 uses CPU cores [1] +[2023-02-25 13:37:03,151][10907] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:37:03,160][10907] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-02-25 13:37:03,356][10909] Worker 1 uses CPU cores [1] +[2023-02-25 13:37:03,421][10908] Worker 0 uses CPU cores [0] +[2023-02-25 13:37:03,684][10912] Worker 4 uses CPU cores [0] +[2023-02-25 13:37:03,765][10910] Worker 3 uses CPU cores [1] +[2023-02-25 13:37:03,771][10914] Worker 7 uses CPU cores [1] +[2023-02-25 13:37:03,932][10915] Worker 2 uses CPU cores [0] +[2023-02-25 13:37:03,933][10913] Worker 6 uses CPU cores [0] +[2023-02-25 13:37:04,054][10907] Num visible devices: 1 +[2023-02-25 13:37:04,057][10893] Num visible devices: 1 +[2023-02-25 13:37:04,078][10893] Starting seed is not provided +[2023-02-25 13:37:04,079][10893] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:37:04,079][10893] Initializing actor-critic model on device cuda:0 +[2023-02-25 13:37:04,079][10893] RunningMeanStd input shape: (3, 72, 128) +[2023-02-25 13:37:04,081][10893] RunningMeanStd input shape: (1,) +[2023-02-25 13:37:04,135][10893] ConvEncoder: input_channels=3 +[2023-02-25 13:37:04,617][10893] Conv encoder output size: 512 +[2023-02-25 13:37:04,618][10893] Policy head output size: 512 +[2023-02-25 13:37:04,697][10893] Created Actor Critic model with architecture: +[2023-02-25 13:37:04,697][10893] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( @@ -85,992 +85,505 @@ (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) -[2023-02-25 09:54:54,960][00973] Heartbeat connected on Batcher_0 -[2023-02-25 09:54:54,968][00973] 
Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-25 09:54:54,978][00973] Heartbeat connected on RolloutWorker_w0 -[2023-02-25 09:54:54,982][00973] Heartbeat connected on RolloutWorker_w1 -[2023-02-25 09:54:54,986][00973] Heartbeat connected on RolloutWorker_w2 -[2023-02-25 09:54:54,993][00973] Heartbeat connected on RolloutWorker_w3 -[2023-02-25 09:54:54,995][00973] Heartbeat connected on RolloutWorker_w4 -[2023-02-25 09:54:54,998][00973] Heartbeat connected on RolloutWorker_w5 -[2023-02-25 09:54:55,002][00973] Heartbeat connected on RolloutWorker_w6 -[2023-02-25 09:54:55,010][00973] Heartbeat connected on RolloutWorker_w7 -[2023-02-25 09:54:57,371][11765] Using optimizer -[2023-02-25 09:54:57,372][11765] No checkpoints found -[2023-02-25 09:54:57,372][11765] Did not load from checkpoint, starting from scratch! -[2023-02-25 09:54:57,373][11765] Initialized policy 0 weights for model version 0 -[2023-02-25 09:54:57,376][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 09:54:57,383][11765] LearnerWorker_p0 finished initialization! -[2023-02-25 09:54:57,384][00973] Heartbeat connected on LearnerWorker_p0 -[2023-02-25 09:54:57,476][11779] RunningMeanStd input shape: (3, 72, 128) -[2023-02-25 09:54:57,478][11779] RunningMeanStd input shape: (1,) -[2023-02-25 09:54:57,495][11779] ConvEncoder: input_channels=3 -[2023-02-25 09:54:57,603][11779] Conv encoder output size: 512 -[2023-02-25 09:54:57,603][11779] Policy head output size: 512 -[2023-02-25 09:54:59,340][00973] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 09:55:00,514][00973] Inference worker 0-0 is ready! -[2023-02-25 09:55:00,516][00973] All inference workers are ready! Signal rollout workers to start! -[2023-02-25 09:55:00,699][11786] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,713][11785] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,719][11784] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,723][11781] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,725][11780] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,764][11783] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,804][11782] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:00,821][11787] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 09:55:02,156][11784] Decorrelating experience for 0 frames... -[2023-02-25 09:55:02,157][11786] Decorrelating experience for 0 frames... -[2023-02-25 09:55:02,159][11785] Decorrelating experience for 0 frames... -[2023-02-25 09:55:02,161][11780] Decorrelating experience for 0 frames... -[2023-02-25 09:55:02,158][11782] Decorrelating experience for 0 frames... -[2023-02-25 09:55:03,830][11786] Decorrelating experience for 32 frames... -[2023-02-25 09:55:03,835][11785] Decorrelating experience for 32 frames... -[2023-02-25 09:55:03,842][11784] Decorrelating experience for 32 frames... -[2023-02-25 09:55:03,867][11781] Decorrelating experience for 0 frames... -[2023-02-25 09:55:03,942][11787] Decorrelating experience for 0 frames... -[2023-02-25 09:55:03,947][11783] Decorrelating experience for 0 frames... -[2023-02-25 09:55:03,961][11782] Decorrelating experience for 32 frames... -[2023-02-25 09:55:04,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). 
Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 09:55:04,800][11781] Decorrelating experience for 32 frames... -[2023-02-25 09:55:04,970][11786] Decorrelating experience for 64 frames... -[2023-02-25 09:55:05,077][11787] Decorrelating experience for 32 frames... -[2023-02-25 09:55:05,085][11783] Decorrelating experience for 32 frames... -[2023-02-25 09:55:05,132][11780] Decorrelating experience for 32 frames... -[2023-02-25 09:55:06,019][11787] Decorrelating experience for 64 frames... -[2023-02-25 09:55:06,022][11783] Decorrelating experience for 64 frames... -[2023-02-25 09:55:06,217][11781] Decorrelating experience for 64 frames... -[2023-02-25 09:55:06,430][11786] Decorrelating experience for 96 frames... -[2023-02-25 09:55:06,496][11785] Decorrelating experience for 64 frames... -[2023-02-25 09:55:06,559][11784] Decorrelating experience for 64 frames... -[2023-02-25 09:55:06,778][11780] Decorrelating experience for 64 frames... -[2023-02-25 09:55:07,168][11783] Decorrelating experience for 96 frames... -[2023-02-25 09:55:07,419][11785] Decorrelating experience for 96 frames... -[2023-02-25 09:55:07,546][11787] Decorrelating experience for 96 frames... -[2023-02-25 09:55:07,732][11781] Decorrelating experience for 96 frames... -[2023-02-25 09:55:08,082][11784] Decorrelating experience for 96 frames... -[2023-02-25 09:55:08,300][11782] Decorrelating experience for 64 frames... -[2023-02-25 09:55:08,690][11780] Decorrelating experience for 96 frames... -[2023-02-25 09:55:08,851][11782] Decorrelating experience for 96 frames... -[2023-02-25 09:55:09,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 09:55:13,352][11765] Signal inference workers to stop experience collection... -[2023-02-25 09:55:13,366][11779] InferenceWorker_p0-w0: stopping experience collection -[2023-02-25 09:55:14,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 121.9. Samples: 1828. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 09:55:14,345][00973] Avg episode reward: [(0, '1.870')] -[2023-02-25 09:55:16,208][11765] Signal inference workers to resume experience collection... -[2023-02-25 09:55:16,208][11779] InferenceWorker_p0-w0: resuming experience collection -[2023-02-25 09:55:19,340][00973] Fps is (10 sec: 1228.8, 60 sec: 614.4, 300 sec: 614.4). Total num frames: 12288. Throughput: 0: 163.8. Samples: 3276. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-02-25 09:55:19,342][00973] Avg episode reward: [(0, '2.788')] -[2023-02-25 09:55:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 1146.9, 300 sec: 1146.9). Total num frames: 28672. Throughput: 0: 206.4. Samples: 5160. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) -[2023-02-25 09:55:24,343][00973] Avg episode reward: [(0, '3.525')] -[2023-02-25 09:55:26,894][11779] Updated weights for policy 0, policy_version 10 (0.0015) -[2023-02-25 09:55:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 49152. Throughput: 0: 369.8. Samples: 11094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:55:29,348][00973] Avg episode reward: [(0, '4.295')] -[2023-02-25 09:55:34,340][00973] Fps is (10 sec: 3686.2, 60 sec: 1872.4, 300 sec: 1872.4). Total num frames: 65536. Throughput: 0: 487.9. Samples: 17078. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:55:34,345][00973] Avg episode reward: [(0, '4.595')] -[2023-02-25 09:55:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 1945.6, 300 sec: 1945.6). Total num frames: 77824. Throughput: 0: 467.7. Samples: 18708. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:55:39,343][00973] Avg episode reward: [(0, '4.600')] -[2023-02-25 09:55:40,648][11779] Updated weights for policy 0, policy_version 20 (0.0020) -[2023-02-25 09:55:44,340][00973] Fps is (10 sec: 2048.1, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 86016. Throughput: 0: 478.1. Samples: 21516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:55:44,342][00973] Avg episode reward: [(0, '4.534')] -[2023-02-25 09:55:49,340][00973] Fps is (10 sec: 2047.9, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 98304. Throughput: 0: 553.5. Samples: 24906. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:55:49,345][00973] Avg episode reward: [(0, '4.416')] -[2023-02-25 09:55:54,340][00973] Fps is (10 sec: 3276.8, 60 sec: 2159.7, 300 sec: 2159.7). Total num frames: 118784. Throughput: 0: 616.2. Samples: 27728. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:55:54,346][00973] Avg episode reward: [(0, '4.252')] -[2023-02-25 09:55:54,349][11765] Saving new best policy, reward=4.252! -[2023-02-25 09:55:54,766][11779] Updated weights for policy 0, policy_version 30 (0.0029) -[2023-02-25 09:55:59,346][00973] Fps is (10 sec: 4093.5, 60 sec: 2320.8, 300 sec: 2320.8). Total num frames: 139264. Throughput: 0: 721.5. Samples: 34302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:55:59,349][00973] Avg episode reward: [(0, '4.381')] -[2023-02-25 09:55:59,362][11765] Saving new best policy, reward=4.381! -[2023-02-25 09:56:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 2525.9, 300 sec: 2331.6). Total num frames: 151552. Throughput: 0: 782.2. Samples: 38476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:56:04,344][00973] Avg episode reward: [(0, '4.454')] -[2023-02-25 09:56:04,351][11765] Saving new best policy, reward=4.454! -[2023-02-25 09:56:07,654][11779] Updated weights for policy 0, policy_version 40 (0.0014) -[2023-02-25 09:56:09,340][00973] Fps is (10 sec: 2868.9, 60 sec: 2798.9, 300 sec: 2399.1). Total num frames: 167936. Throughput: 0: 782.9. Samples: 40392. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:56:09,350][00973] Avg episode reward: [(0, '4.481')] -[2023-02-25 09:56:09,363][11765] Saving new best policy, reward=4.481! -[2023-02-25 09:56:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2512.2). Total num frames: 188416. Throughput: 0: 772.2. Samples: 45842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 09:56:14,346][00973] Avg episode reward: [(0, '4.544')] -[2023-02-25 09:56:14,348][11765] Saving new best policy, reward=4.544! -[2023-02-25 09:56:18,071][11779] Updated weights for policy 0, policy_version 50 (0.0023) -[2023-02-25 09:56:19,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3276.8, 300 sec: 2611.2). Total num frames: 208896. Throughput: 0: 780.7. Samples: 52208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:56:19,342][00973] Avg episode reward: [(0, '4.530')] -[2023-02-25 09:56:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2602.2). Total num frames: 221184. Throughput: 0: 799.7. Samples: 54694. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:56:24,343][00973] Avg episode reward: [(0, '4.405')] -[2023-02-25 09:56:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.3, 300 sec: 2639.6). Total num frames: 237568. Throughput: 0: 830.4. Samples: 58882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:56:29,346][00973] Avg episode reward: [(0, '4.463')] -[2023-02-25 09:56:29,356][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000058_237568.pth... -[2023-02-25 09:56:31,364][11779] Updated weights for policy 0, policy_version 60 (0.0027) -[2023-02-25 09:56:34,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3208.5, 300 sec: 2716.3). Total num frames: 258048. Throughput: 0: 875.0. Samples: 64280. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:56:34,348][00973] Avg episode reward: [(0, '4.554')] -[2023-02-25 09:56:34,351][11765] Saving new best policy, reward=4.554! -[2023-02-25 09:56:39,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3345.1, 300 sec: 2785.3). Total num frames: 278528. Throughput: 0: 883.4. Samples: 67480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-25 09:56:39,345][00973] Avg episode reward: [(0, '4.615')] -[2023-02-25 09:56:39,358][11765] Saving new best policy, reward=4.615! -[2023-02-25 09:56:41,504][11779] Updated weights for policy 0, policy_version 70 (0.0017) -[2023-02-25 09:56:44,340][00973] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 2808.7). Total num frames: 294912. Throughput: 0: 860.0. Samples: 72996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:56:44,343][00973] Avg episode reward: [(0, '4.596')] -[2023-02-25 09:56:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 2792.7). Total num frames: 307200. Throughput: 0: 859.2. Samples: 77142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:56:49,345][00973] Avg episode reward: [(0, '4.593')] -[2023-02-25 09:56:54,139][11779] Updated weights for policy 0, policy_version 80 (0.0018) -[2023-02-25 09:56:54,341][00973] Fps is (10 sec: 3276.4, 60 sec: 3481.5, 300 sec: 2849.4). Total num frames: 327680. Throughput: 0: 870.5. Samples: 79564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 09:56:54,345][00973] Avg episode reward: [(0, '4.663')] -[2023-02-25 09:56:54,348][11765] Saving new best policy, reward=4.663! -[2023-02-25 09:56:59,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3482.0, 300 sec: 2901.3). Total num frames: 348160. Throughput: 0: 892.8. Samples: 86018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:56:59,347][00973] Avg episode reward: [(0, '4.376')] -[2023-02-25 09:57:04,340][00973] Fps is (10 sec: 3686.9, 60 sec: 3549.9, 300 sec: 2916.4). Total num frames: 364544. Throughput: 0: 870.4. Samples: 91376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:57:04,342][00973] Avg episode reward: [(0, '4.325')] -[2023-02-25 09:57:05,467][11779] Updated weights for policy 0, policy_version 90 (0.0016) -[2023-02-25 09:57:09,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 2898.7). Total num frames: 376832. Throughput: 0: 860.7. Samples: 93424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:57:09,346][00973] Avg episode reward: [(0, '4.623')] -[2023-02-25 09:57:14,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 2912.7). Total num frames: 393216. Throughput: 0: 865.8. Samples: 97842. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:57:14,342][00973] Avg episode reward: [(0, '4.870')] -[2023-02-25 09:57:14,381][11765] Saving new best policy, reward=4.870! -[2023-02-25 09:57:17,272][11779] Updated weights for policy 0, policy_version 100 (0.0031) -[2023-02-25 09:57:19,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 2984.2). Total num frames: 417792. Throughput: 0: 885.5. Samples: 104126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:57:19,342][00973] Avg episode reward: [(0, '4.650')] -[2023-02-25 09:57:24,345][00973] Fps is (10 sec: 4093.8, 60 sec: 3549.6, 300 sec: 2994.2). Total num frames: 434176. Throughput: 0: 887.2. Samples: 107410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:57:24,348][00973] Avg episode reward: [(0, '4.451')] -[2023-02-25 09:57:29,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 2976.4). Total num frames: 446464. Throughput: 0: 855.0. Samples: 111472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:57:29,346][00973] Avg episode reward: [(0, '4.453')] -[2023-02-25 09:57:30,040][11779] Updated weights for policy 0, policy_version 110 (0.0019) -[2023-02-25 09:57:34,340][00973] Fps is (10 sec: 2868.7, 60 sec: 3413.4, 300 sec: 2986.1). Total num frames: 462848. Throughput: 0: 866.1. Samples: 116116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:57:34,347][00973] Avg episode reward: [(0, '4.505')] -[2023-02-25 09:57:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3020.8). Total num frames: 483328. Throughput: 0: 883.0. Samples: 119296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 09:57:39,343][00973] Avg episode reward: [(0, '4.459')] -[2023-02-25 09:57:40,431][11779] Updated weights for policy 0, policy_version 120 (0.0022) -[2023-02-25 09:57:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3053.4). Total num frames: 503808. Throughput: 0: 878.0. Samples: 125526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 09:57:44,346][00973] Avg episode reward: [(0, '4.495')] -[2023-02-25 09:57:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3035.9). Total num frames: 516096. Throughput: 0: 848.8. Samples: 129570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:57:49,358][00973] Avg episode reward: [(0, '4.590')] -[2023-02-25 09:57:53,883][11779] Updated weights for policy 0, policy_version 130 (0.0050) -[2023-02-25 09:57:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3042.7). Total num frames: 532480. Throughput: 0: 849.2. Samples: 131640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 09:57:54,342][00973] Avg episode reward: [(0, '4.671')] -[2023-02-25 09:57:59,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3072.0). Total num frames: 552960. Throughput: 0: 880.4. Samples: 137462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:57:59,343][00973] Avg episode reward: [(0, '4.739')] -[2023-02-25 09:58:04,047][11779] Updated weights for policy 0, policy_version 140 (0.0030) -[2023-02-25 09:58:04,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3099.7). Total num frames: 573440. Throughput: 0: 877.9. Samples: 143630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:58:04,345][00973] Avg episode reward: [(0, '4.714')] -[2023-02-25 09:58:09,347][00973] Fps is (10 sec: 3274.4, 60 sec: 3481.2, 300 sec: 3082.7). Total num frames: 585728. Throughput: 0: 850.1. Samples: 145664. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:58:09,354][00973] Avg episode reward: [(0, '4.554')] -[2023-02-25 09:58:14,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3087.8). Total num frames: 602112. Throughput: 0: 848.3. Samples: 149646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-25 09:58:14,343][00973] Avg episode reward: [(0, '4.442')] -[2023-02-25 09:58:17,043][11779] Updated weights for policy 0, policy_version 150 (0.0028) -[2023-02-25 09:58:19,340][00973] Fps is (10 sec: 3689.1, 60 sec: 3413.3, 300 sec: 3113.0). Total num frames: 622592. Throughput: 0: 882.1. Samples: 155812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:58:19,342][00973] Avg episode reward: [(0, '4.321')] -[2023-02-25 09:58:24,343][00973] Fps is (10 sec: 3685.0, 60 sec: 3413.4, 300 sec: 3116.9). Total num frames: 638976. Throughput: 0: 883.2. Samples: 159042. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-25 09:58:24,347][00973] Avg episode reward: [(0, '4.282')] -[2023-02-25 09:58:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3101.3). Total num frames: 651264. Throughput: 0: 825.8. Samples: 162688. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-25 09:58:29,345][00973] Avg episode reward: [(0, '4.572')] -[2023-02-25 09:58:29,359][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000159_651264.pth... -[2023-02-25 09:58:30,403][11779] Updated weights for policy 0, policy_version 160 (0.0025) -[2023-02-25 09:58:34,340][00973] Fps is (10 sec: 2458.5, 60 sec: 3345.1, 300 sec: 3086.3). Total num frames: 663552. Throughput: 0: 805.5. Samples: 165818. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) -[2023-02-25 09:58:34,345][00973] Avg episode reward: [(0, '4.461')] -[2023-02-25 09:58:39,340][00973] Fps is (10 sec: 2457.7, 60 sec: 3208.5, 300 sec: 3072.0). Total num frames: 675840. Throughput: 0: 796.8. Samples: 167496. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 09:58:39,350][00973] Avg episode reward: [(0, '4.692')] -[2023-02-25 09:58:44,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.2, 300 sec: 3076.5). Total num frames: 692224. Throughput: 0: 785.6. Samples: 172816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:58:44,348][00973] Avg episode reward: [(0, '4.526')] -[2023-02-25 09:58:44,357][11779] Updated weights for policy 0, policy_version 170 (0.0019) -[2023-02-25 09:58:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3116.5). Total num frames: 716800. Throughput: 0: 791.4. Samples: 179242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:58:49,345][00973] Avg episode reward: [(0, '4.481')] -[2023-02-25 09:58:54,340][00973] Fps is (10 sec: 3686.5, 60 sec: 3276.8, 300 sec: 3102.5). Total num frames: 729088. Throughput: 0: 798.3. Samples: 181582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:58:54,346][00973] Avg episode reward: [(0, '4.511')] -[2023-02-25 09:58:56,297][11779] Updated weights for policy 0, policy_version 180 (0.0022) -[2023-02-25 09:58:59,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3089.1). Total num frames: 741376. Throughput: 0: 800.3. Samples: 185658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 09:58:59,347][00973] Avg episode reward: [(0, '4.547')] -[2023-02-25 09:59:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3126.3). Total num frames: 765952. Throughput: 0: 787.8. Samples: 191264. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:59:04,348][00973] Avg episode reward: [(0, '4.664')] -[2023-02-25 09:59:07,165][11779] Updated weights for policy 0, policy_version 190 (0.0018) -[2023-02-25 09:59:09,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3345.5, 300 sec: 3145.7). Total num frames: 786432. Throughput: 0: 786.1. Samples: 194414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:59:09,342][00973] Avg episode reward: [(0, '4.829')] -[2023-02-25 09:59:14,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3132.2). Total num frames: 798720. Throughput: 0: 824.4. Samples: 199786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:59:14,344][00973] Avg episode reward: [(0, '4.781')] -[2023-02-25 09:59:19,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3135.0). Total num frames: 815104. Throughput: 0: 844.7. Samples: 203828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:59:19,347][00973] Avg episode reward: [(0, '4.849')] -[2023-02-25 09:59:20,506][11779] Updated weights for policy 0, policy_version 200 (0.0019) -[2023-02-25 09:59:24,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3208.7, 300 sec: 3137.7). Total num frames: 831488. Throughput: 0: 864.4. Samples: 206392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:59:24,348][00973] Avg episode reward: [(0, '5.087')] -[2023-02-25 09:59:24,370][11765] Saving new best policy, reward=5.087! -[2023-02-25 09:59:29,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3170.6). Total num frames: 856064. Throughput: 0: 886.2. Samples: 212694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:59:29,343][00973] Avg episode reward: [(0, '4.953')] -[2023-02-25 09:59:30,370][11779] Updated weights for policy 0, policy_version 210 (0.0022) -[2023-02-25 09:59:34,345][00973] Fps is (10 sec: 3684.6, 60 sec: 3413.1, 300 sec: 3157.6). Total num frames: 868352. Throughput: 0: 859.8. Samples: 217936. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) -[2023-02-25 09:59:34,357][00973] Avg episode reward: [(0, '4.877')] -[2023-02-25 09:59:39,340][00973] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3159.8). Total num frames: 884736. Throughput: 0: 852.0. Samples: 219922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 09:59:39,348][00973] Avg episode reward: [(0, '4.988')] -[2023-02-25 09:59:43,655][11779] Updated weights for policy 0, policy_version 220 (0.0013) -[2023-02-25 09:59:44,340][00973] Fps is (10 sec: 3278.4, 60 sec: 3481.6, 300 sec: 3161.8). Total num frames: 901120. Throughput: 0: 868.9. Samples: 224760. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 09:59:44,348][00973] Avg episode reward: [(0, '5.206')] -[2023-02-25 09:59:44,352][11765] Saving new best policy, reward=5.206! -[2023-02-25 09:59:49,340][00973] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3177.9). Total num frames: 921600. Throughput: 0: 883.6. Samples: 231028. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 09:59:49,345][00973] Avg episode reward: [(0, '5.193')] -[2023-02-25 09:59:54,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3179.6). Total num frames: 937984. Throughput: 0: 881.6. Samples: 234084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 09:59:54,346][00973] Avg episode reward: [(0, '5.000')] -[2023-02-25 09:59:54,394][11779] Updated weights for policy 0, policy_version 230 (0.0023) -[2023-02-25 09:59:59,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3235.1). 
Total num frames: 954368. Throughput: 0: 852.5. Samples: 238148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 09:59:59,347][00973] Avg episode reward: [(0, '4.887')] -[2023-02-25 10:00:04,342][00973] Fps is (10 sec: 3276.0, 60 sec: 3413.2, 300 sec: 3290.7). Total num frames: 970752. Throughput: 0: 869.6. Samples: 242962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:00:04,345][00973] Avg episode reward: [(0, '4.917')] -[2023-02-25 10:00:06,590][11779] Updated weights for policy 0, policy_version 240 (0.0021) -[2023-02-25 10:00:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 995328. Throughput: 0: 884.8. Samples: 246208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:00:09,347][00973] Avg episode reward: [(0, '5.000')] -[2023-02-25 10:00:14,340][00973] Fps is (10 sec: 4097.0, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 1011712. Throughput: 0: 880.3. Samples: 252308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 10:00:14,349][00973] Avg episode reward: [(0, '4.909')] -[2023-02-25 10:00:18,740][11779] Updated weights for policy 0, policy_version 250 (0.0032) -[2023-02-25 10:00:19,341][00973] Fps is (10 sec: 2866.8, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 1024000. Throughput: 0: 853.5. Samples: 256342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:00:19,347][00973] Avg episode reward: [(0, '4.980')] -[2023-02-25 10:00:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3360.1). Total num frames: 1040384. Throughput: 0: 855.0. Samples: 258398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 10:00:24,344][00973] Avg episode reward: [(0, '5.126')] -[2023-02-25 10:00:29,340][00973] Fps is (10 sec: 3686.9, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1060864. Throughput: 0: 882.0. Samples: 264448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:00:29,342][00973] Avg episode reward: [(0, '5.087')] -[2023-02-25 10:00:29,360][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000259_1060864.pth... -[2023-02-25 10:00:29,478][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000058_237568.pth -[2023-02-25 10:00:29,806][11779] Updated weights for policy 0, policy_version 260 (0.0029) -[2023-02-25 10:00:34,343][00973] Fps is (10 sec: 4094.6, 60 sec: 3550.0, 300 sec: 3401.7). Total num frames: 1081344. Throughput: 0: 873.7. Samples: 270346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:00:34,348][00973] Avg episode reward: [(0, '4.896')] -[2023-02-25 10:00:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1093632. Throughput: 0: 851.2. Samples: 272390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:00:39,342][00973] Avg episode reward: [(0, '4.871')] -[2023-02-25 10:00:43,350][11779] Updated weights for policy 0, policy_version 270 (0.0024) -[2023-02-25 10:00:44,340][00973] Fps is (10 sec: 2868.1, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1110016. Throughput: 0: 851.6. Samples: 276472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:00:44,342][00973] Avg episode reward: [(0, '5.040')] -[2023-02-25 10:00:49,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1130496. Throughput: 0: 886.2. Samples: 282838. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 10:00:49,342][00973] Avg episode reward: [(0, '5.012')] -[2023-02-25 10:00:52,760][11779] Updated weights for policy 0, policy_version 280 (0.0013) -[2023-02-25 10:00:54,341][00973] Fps is (10 sec: 4095.5, 60 sec: 3549.8, 300 sec: 3429.6). Total num frames: 1150976. Throughput: 0: 883.7. Samples: 285976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:00:54,345][00973] Avg episode reward: [(0, '5.111')] -[2023-02-25 10:00:59,344][00973] Fps is (10 sec: 3275.5, 60 sec: 3481.4, 300 sec: 3429.5). Total num frames: 1163264. Throughput: 0: 852.8. Samples: 290688. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:00:59,346][00973] Avg episode reward: [(0, '5.119')] -[2023-02-25 10:01:04,340][00973] Fps is (10 sec: 2867.5, 60 sec: 3481.7, 300 sec: 3429.5). Total num frames: 1179648. Throughput: 0: 855.1. Samples: 294822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:01:04,343][00973] Avg episode reward: [(0, '5.469')] -[2023-02-25 10:01:04,351][11765] Saving new best policy, reward=5.469! -[2023-02-25 10:01:06,132][11779] Updated weights for policy 0, policy_version 290 (0.0013) -[2023-02-25 10:01:09,340][00973] Fps is (10 sec: 3687.9, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1200128. Throughput: 0: 878.7. Samples: 297938. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:01:09,342][00973] Avg episode reward: [(0, '5.625')] -[2023-02-25 10:01:09,356][11765] Saving new best policy, reward=5.625! -[2023-02-25 10:01:14,343][00973] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1216512. Throughput: 0: 869.3. Samples: 303566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:01:14,347][00973] Avg episode reward: [(0, '6.109')] -[2023-02-25 10:01:14,353][11765] Saving new best policy, reward=6.109! -[2023-02-25 10:01:19,340][00973] Fps is (10 sec: 2457.5, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1224704. Throughput: 0: 811.5. Samples: 306860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 10:01:19,343][00973] Avg episode reward: [(0, '5.947')] -[2023-02-25 10:01:19,835][11779] Updated weights for policy 0, policy_version 300 (0.0016) -[2023-02-25 10:01:24,340][00973] Fps is (10 sec: 2047.9, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1236992. Throughput: 0: 800.5. Samples: 308414. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 10:01:24,346][00973] Avg episode reward: [(0, '6.019')] -[2023-02-25 10:01:29,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3374.0). Total num frames: 1253376. Throughput: 0: 787.7. Samples: 311920. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 10:01:29,342][00973] Avg episode reward: [(0, '6.018')] -[2023-02-25 10:01:33,210][11779] Updated weights for policy 0, policy_version 310 (0.0022) -[2023-02-25 10:01:34,340][00973] Fps is (10 sec: 3686.6, 60 sec: 3208.7, 300 sec: 3374.0). Total num frames: 1273856. Throughput: 0: 789.0. Samples: 318342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:01:34,347][00973] Avg episode reward: [(0, '5.850')] -[2023-02-25 10:01:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1290240. Throughput: 0: 790.3. Samples: 321538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:01:39,347][00973] Avg episode reward: [(0, '5.517')] -[2023-02-25 10:01:44,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1306624. 
Throughput: 0: 785.4. Samples: 326026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:01:44,348][00973] Avg episode reward: [(0, '5.538')] -[2023-02-25 10:01:45,539][11779] Updated weights for policy 0, policy_version 320 (0.0033) -[2023-02-25 10:01:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.2, 300 sec: 3360.1). Total num frames: 1318912. Throughput: 0: 785.7. Samples: 330178. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-25 10:01:49,347][00973] Avg episode reward: [(0, '5.656')] -[2023-02-25 10:01:54,344][00973] Fps is (10 sec: 3684.9, 60 sec: 3208.4, 300 sec: 3373.9). Total num frames: 1343488. Throughput: 0: 788.7. Samples: 333434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:01:54,347][00973] Avg episode reward: [(0, '5.996')] -[2023-02-25 10:01:56,213][11779] Updated weights for policy 0, policy_version 330 (0.0017) -[2023-02-25 10:01:59,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3277.0, 300 sec: 3374.0). Total num frames: 1359872. Throughput: 0: 806.2. Samples: 339844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:01:59,345][00973] Avg episode reward: [(0, '5.944')] -[2023-02-25 10:02:04,340][00973] Fps is (10 sec: 3278.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1376256. Throughput: 0: 829.7. Samples: 344194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 10:02:04,346][00973] Avg episode reward: [(0, '5.896')] -[2023-02-25 10:02:09,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3374.0). Total num frames: 1388544. Throughput: 0: 839.6. Samples: 346196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:02:09,344][00973] Avg episode reward: [(0, '6.013')] -[2023-02-25 10:02:09,734][11779] Updated weights for policy 0, policy_version 340 (0.0021) -[2023-02-25 10:02:14,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3360.1). Total num frames: 1409024. Throughput: 0: 885.2. Samples: 351754. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:02:14,347][00973] Avg episode reward: [(0, '6.206')] -[2023-02-25 10:02:14,351][11765] Saving new best policy, reward=6.206! -[2023-02-25 10:02:19,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3374.1). Total num frames: 1429504. Throughput: 0: 883.3. Samples: 358090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:02:19,351][00973] Avg episode reward: [(0, '6.269')] -[2023-02-25 10:02:19,371][11765] Saving new best policy, reward=6.269! -[2023-02-25 10:02:19,855][11779] Updated weights for policy 0, policy_version 350 (0.0027) -[2023-02-25 10:02:24,341][00973] Fps is (10 sec: 3686.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 1445888. Throughput: 0: 858.9. Samples: 360188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:02:24,345][00973] Avg episode reward: [(0, '6.210')] -[2023-02-25 10:02:29,342][00973] Fps is (10 sec: 2866.6, 60 sec: 3413.2, 300 sec: 3374.0). Total num frames: 1458176. Throughput: 0: 845.3. Samples: 364064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:02:29,350][00973] Avg episode reward: [(0, '6.184')] -[2023-02-25 10:02:29,366][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000356_1458176.pth... 
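The Saving/Removing pairs around these entries (checkpoint_000000356_1458176.pth saved here, checkpoint_000000159_651264.pth removed just below, and likewise at the earlier and later saves) show a rolling checkpoint scheme: each new checkpoint_p0/checkpoint_<version>_<frames>.pth is written, then all but the two most recent are deleted, while the best-reward policy is tracked separately via "Saving new best policy". A rough sketch of that rotation, with the file pattern and keep-two limit inferred from this log rather than taken from the framework's source:

from pathlib import Path
import torch

def save_rotating_checkpoint(state: dict, ckpt_dir: Path,
                             version: int, frames: int,
                             keep_last: int = 2) -> None:
    # Write checkpoint_<version>_<frames>.pth; the zero-padded version
    # keeps lexicographic and chronological order identical.
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    torch.save(state, ckpt_dir / f"checkpoint_{version:09d}_{frames}.pth")

    # Prune everything but the newest keep_last checkpoints, mirroring
    # the Saving .../Removing ... pairs in the log above.
    for old in sorted(ckpt_dir.glob("checkpoint_*.pth"))[:-keep_last]:
        old.unlink()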
-[2023-02-25 10:02:29,561][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000159_651264.pth -[2023-02-25 10:02:33,301][11779] Updated weights for policy 0, policy_version 360 (0.0036) -[2023-02-25 10:02:34,340][00973] Fps is (10 sec: 3277.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1478656. Throughput: 0: 875.3. Samples: 369568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:02:34,342][00973] Avg episode reward: [(0, '6.513')] -[2023-02-25 10:02:34,349][11765] Saving new best policy, reward=6.513! -[2023-02-25 10:02:39,340][00973] Fps is (10 sec: 4096.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1499136. Throughput: 0: 872.3. Samples: 372686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:02:39,348][00973] Avg episode reward: [(0, '7.097')] -[2023-02-25 10:02:39,360][11765] Saving new best policy, reward=7.097! -[2023-02-25 10:02:44,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1511424. Throughput: 0: 846.4. Samples: 377932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:02:44,343][00973] Avg episode reward: [(0, '7.363')] -[2023-02-25 10:02:44,350][11765] Saving new best policy, reward=7.363! -[2023-02-25 10:02:44,656][11779] Updated weights for policy 0, policy_version 370 (0.0020) -[2023-02-25 10:02:49,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1527808. Throughput: 0: 837.4. Samples: 381878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:02:49,343][00973] Avg episode reward: [(0, '7.667')] -[2023-02-25 10:02:49,358][11765] Saving new best policy, reward=7.667! -[2023-02-25 10:02:54,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3345.3, 300 sec: 3360.1). Total num frames: 1544192. Throughput: 0: 849.9. Samples: 384440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:02:54,345][00973] Avg episode reward: [(0, '7.314')] -[2023-02-25 10:02:56,298][11779] Updated weights for policy 0, policy_version 380 (0.0033) -[2023-02-25 10:02:59,342][00973] Fps is (10 sec: 4095.1, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 1568768. Throughput: 0: 869.4. Samples: 390880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:02:59,349][00973] Avg episode reward: [(0, '7.921')] -[2023-02-25 10:02:59,360][11765] Saving new best policy, reward=7.921! -[2023-02-25 10:03:04,342][00973] Fps is (10 sec: 3685.5, 60 sec: 3413.2, 300 sec: 3374.1). Total num frames: 1581056. Throughput: 0: 843.9. Samples: 396066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 10:03:04,347][00973] Avg episode reward: [(0, '7.578')] -[2023-02-25 10:03:09,119][11779] Updated weights for policy 0, policy_version 390 (0.0018) -[2023-02-25 10:03:09,340][00973] Fps is (10 sec: 2867.9, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1597440. Throughput: 0: 843.3. Samples: 398136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:03:09,346][00973] Avg episode reward: [(0, '7.861')] -[2023-02-25 10:03:14,340][00973] Fps is (10 sec: 3277.6, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 1613824. Throughput: 0: 862.5. Samples: 402876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:03:14,344][00973] Avg episode reward: [(0, '7.311')] -[2023-02-25 10:03:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1634304. Throughput: 0: 880.8. Samples: 409202. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:03:19,342][00973] Avg episode reward: [(0, '7.926')] -[2023-02-25 10:03:19,355][11765] Saving new best policy, reward=7.926! -[2023-02-25 10:03:19,628][11779] Updated weights for policy 0, policy_version 400 (0.0025) -[2023-02-25 10:03:24,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3387.9). Total num frames: 1650688. Throughput: 0: 878.0. Samples: 412196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:03:24,343][00973] Avg episode reward: [(0, '7.924')] -[2023-02-25 10:03:29,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3401.8). Total num frames: 1667072. Throughput: 0: 856.2. Samples: 416462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-25 10:03:29,347][00973] Avg episode reward: [(0, '8.465')] -[2023-02-25 10:03:29,374][11765] Saving new best policy, reward=8.465! -[2023-02-25 10:03:32,795][11779] Updated weights for policy 0, policy_version 410 (0.0030) -[2023-02-25 10:03:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1683456. Throughput: 0: 873.6. Samples: 421192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:03:34,342][00973] Avg episode reward: [(0, '8.780')] -[2023-02-25 10:03:34,346][11765] Saving new best policy, reward=8.780! -[2023-02-25 10:03:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1703936. Throughput: 0: 887.6. Samples: 424380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:03:39,346][00973] Avg episode reward: [(0, '8.629')] -[2023-02-25 10:03:42,437][11779] Updated weights for policy 0, policy_version 420 (0.0014) -[2023-02-25 10:03:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 1724416. Throughput: 0: 884.4. Samples: 430674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:03:44,343][00973] Avg episode reward: [(0, '8.427')] -[2023-02-25 10:03:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1736704. Throughput: 0: 858.4. Samples: 434692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:03:49,349][00973] Avg episode reward: [(0, '7.943')] -[2023-02-25 10:03:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1753088. Throughput: 0: 857.8. Samples: 436736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:03:54,343][00973] Avg episode reward: [(0, '7.938')] -[2023-02-25 10:03:55,675][11779] Updated weights for policy 0, policy_version 430 (0.0023) -[2023-02-25 10:03:59,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3401.8). Total num frames: 1769472. Throughput: 0: 881.9. Samples: 442560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:03:59,342][00973] Avg episode reward: [(0, '7.741')] -[2023-02-25 10:04:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3387.9). Total num frames: 1785856. Throughput: 0: 831.1. Samples: 446602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:04:04,347][00973] Avg episode reward: [(0, '8.023')] -[2023-02-25 10:04:09,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1794048. Throughput: 0: 800.3. Samples: 448208. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:04:09,347][00973] Avg episode reward: [(0, '8.386')] -[2023-02-25 10:04:11,356][11779] Updated weights for policy 0, policy_version 440 (0.0012) -[2023-02-25 10:04:14,340][00973] Fps is (10 sec: 2457.5, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1810432. Throughput: 0: 789.1. Samples: 451970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-25 10:04:14,349][00973] Avg episode reward: [(0, '8.694')] -[2023-02-25 10:04:19,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1830912. Throughput: 0: 804.6. Samples: 457398. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:04:19,343][00973] Avg episode reward: [(0, '9.099')] -[2023-02-25 10:04:19,357][11765] Saving new best policy, reward=9.099! -[2023-02-25 10:04:22,225][11779] Updated weights for policy 0, policy_version 450 (0.0018) -[2023-02-25 10:04:24,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1851392. Throughput: 0: 805.3. Samples: 460620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:04:24,343][00973] Avg episode reward: [(0, '9.553')] -[2023-02-25 10:04:24,350][11765] Saving new best policy, reward=9.553! -[2023-02-25 10:04:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3345.0, 300 sec: 3387.9). Total num frames: 1867776. Throughput: 0: 793.9. Samples: 466402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-25 10:04:29,343][00973] Avg episode reward: [(0, '9.601')] -[2023-02-25 10:04:29,351][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000456_1867776.pth... -[2023-02-25 10:04:29,476][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000259_1060864.pth -[2023-02-25 10:04:29,492][11765] Saving new best policy, reward=9.601! -[2023-02-25 10:04:34,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1880064. Throughput: 0: 794.9. Samples: 470464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-25 10:04:34,346][00973] Avg episode reward: [(0, '9.125')] -[2023-02-25 10:04:35,251][11779] Updated weights for policy 0, policy_version 460 (0.0030) -[2023-02-25 10:04:39,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3374.0). Total num frames: 1896448. Throughput: 0: 796.4. Samples: 472572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-25 10:04:39,348][00973] Avg episode reward: [(0, '9.435')] -[2023-02-25 10:04:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1921024. Throughput: 0: 814.8. Samples: 479224. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 10:04:44,342][00973] Avg episode reward: [(0, '9.517')] -[2023-02-25 10:04:45,078][11779] Updated weights for policy 0, policy_version 470 (0.0024) -[2023-02-25 10:04:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1937408. Throughput: 0: 847.1. Samples: 484720. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-25 10:04:49,342][00973] Avg episode reward: [(0, '10.231')] -[2023-02-25 10:04:49,361][11765] Saving new best policy, reward=10.231! -[2023-02-25 10:04:54,342][00973] Fps is (10 sec: 2866.6, 60 sec: 3276.7, 300 sec: 3374.0). Total num frames: 1949696. Throughput: 0: 854.1. Samples: 486644. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:04:54,348][00973] Avg episode reward: [(0, '10.014')]
-[2023-02-25 10:04:58,419][11779] Updated weights for policy 0, policy_version 480 (0.0037)
-[2023-02-25 10:04:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1966080. Throughput: 0: 871.5. Samples: 491186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:04:59,343][00973] Avg episode reward: [(0, '9.786')]
-[2023-02-25 10:05:04,340][00973] Fps is (10 sec: 4096.9, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1990656. Throughput: 0: 896.8. Samples: 497752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:05:04,343][00973] Avg episode reward: [(0, '9.063')]
-[2023-02-25 10:05:08,283][11779] Updated weights for policy 0, policy_version 490 (0.0017)
-[2023-02-25 10:05:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3374.0). Total num frames: 2007040. Throughput: 0: 899.8. Samples: 501112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:05:09,345][00973] Avg episode reward: [(0, '9.233')]
-[2023-02-25 10:05:14,341][00973] Fps is (10 sec: 2866.8, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 2019328. Throughput: 0: 865.1. Samples: 505332. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:05:14,344][00973] Avg episode reward: [(0, '9.084')]
-[2023-02-25 10:05:19,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 2039808. Throughput: 0: 877.6. Samples: 509954. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
-[2023-02-25 10:05:19,342][00973] Avg episode reward: [(0, '9.008')]
-[2023-02-25 10:05:21,007][11779] Updated weights for policy 0, policy_version 500 (0.0019)
-[2023-02-25 10:05:24,340][00973] Fps is (10 sec: 4096.5, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 2060288. Throughput: 0: 904.4. Samples: 513270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:05:24,343][00973] Avg episode reward: [(0, '9.847')]
-[2023-02-25 10:05:29,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2080768. Throughput: 0: 897.9. Samples: 519628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:05:29,346][00973] Avg episode reward: [(0, '10.980')]
-[2023-02-25 10:05:29,357][11765] Saving new best policy, reward=10.980!
-[2023-02-25 10:05:32,239][11779] Updated weights for policy 0, policy_version 510 (0.0013)
-[2023-02-25 10:05:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2093056. Throughput: 0: 869.3. Samples: 523838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:05:34,344][00973] Avg episode reward: [(0, '11.295')]
-[2023-02-25 10:05:34,353][11765] Saving new best policy, reward=11.295!
-[2023-02-25 10:05:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2109440. Throughput: 0: 871.7. Samples: 525868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:05:39,342][00973] Avg episode reward: [(0, '11.503')]
-[2023-02-25 10:05:39,354][11765] Saving new best policy, reward=11.503!
-[2023-02-25 10:05:43,448][11779] Updated weights for policy 0, policy_version 520 (0.0034)
-[2023-02-25 10:05:44,342][00973] Fps is (10 sec: 3685.5, 60 sec: 3481.5, 300 sec: 3387.9). Total num frames: 2129920. Throughput: 0: 907.6. Samples: 532030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:05:44,345][00973] Avg episode reward: [(0, '12.100')]
-[2023-02-25 10:05:44,406][11765] Saving new best policy, reward=12.100!
-[2023-02-25 10:05:49,343][00973] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2150400. Throughput: 0: 898.4. Samples: 538178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:05:49,346][00973] Avg episode reward: [(0, '11.840')]
-[2023-02-25 10:05:54,345][00973] Fps is (10 sec: 3275.8, 60 sec: 3549.7, 300 sec: 3387.9). Total num frames: 2162688. Throughput: 0: 867.1. Samples: 540134. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:05:54,350][00973] Avg episode reward: [(0, '11.884')]
-[2023-02-25 10:05:55,931][11779] Updated weights for policy 0, policy_version 530 (0.0015)
-[2023-02-25 10:05:59,342][00973] Fps is (10 sec: 2866.5, 60 sec: 3549.7, 300 sec: 3387.9). Total num frames: 2179072. Throughput: 0: 867.1. Samples: 544352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:05:59,348][00973] Avg episode reward: [(0, '11.715')]
-[2023-02-25 10:06:04,340][00973] Fps is (10 sec: 4098.1, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 2203648. Throughput: 0: 905.4. Samples: 550696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:06:04,345][00973] Avg episode reward: [(0, '11.562')]
-[2023-02-25 10:06:06,200][11779] Updated weights for policy 0, policy_version 540 (0.0012)
-[2023-02-25 10:06:09,345][00973] Fps is (10 sec: 4094.8, 60 sec: 3549.6, 300 sec: 3401.7). Total num frames: 2220032. Throughput: 0: 904.0. Samples: 553956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:06:09,357][00973] Avg episode reward: [(0, '10.078')]
-[2023-02-25 10:06:14,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3618.2, 300 sec: 3429.5). Total num frames: 2236416. Throughput: 0: 870.3. Samples: 558792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:06:14,347][00973] Avg episode reward: [(0, '10.272')]
-[2023-02-25 10:06:19,340][00973] Fps is (10 sec: 2868.7, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2248704. Throughput: 0: 868.2. Samples: 562906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:06:19,343][00973] Avg episode reward: [(0, '9.997')]
-[2023-02-25 10:06:19,442][11779] Updated weights for policy 0, policy_version 550 (0.0037)
-[2023-02-25 10:06:24,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2273280. Throughput: 0: 893.6. Samples: 566082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:06:24,343][00973] Avg episode reward: [(0, '9.779')]
-[2023-02-25 10:06:28,825][11779] Updated weights for policy 0, policy_version 560 (0.0012)
-[2023-02-25 10:06:29,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2293760. Throughput: 0: 906.0. Samples: 572800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:06:29,344][00973] Avg episode reward: [(0, '9.551')]
-[2023-02-25 10:06:29,366][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000560_2293760.pth...
-[2023-02-25 10:06:29,506][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000356_1458176.pth
-[2023-02-25 10:06:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2306048. Throughput: 0: 868.0. Samples: 577240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:06:34,342][00973] Avg episode reward: [(0, '9.562')]
-[2023-02-25 10:06:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2322432. Throughput: 0: 870.1. Samples: 579282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:06:39,343][00973] Avg episode reward: [(0, '8.640')]
-[2023-02-25 10:06:42,392][11779] Updated weights for policy 0, policy_version 570 (0.0031)
-[2023-02-25 10:06:44,346][00973] Fps is (10 sec: 3274.7, 60 sec: 3481.4, 300 sec: 3457.2). Total num frames: 2338816. Throughput: 0: 886.9. Samples: 584264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
-[2023-02-25 10:06:44,353][00973] Avg episode reward: [(0, '8.455')]
-[2023-02-25 10:06:49,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.7). Total num frames: 2351104. Throughput: 0: 836.6. Samples: 588344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:06:49,344][00973] Avg episode reward: [(0, '8.946')]
-[2023-02-25 10:06:54,340][00973] Fps is (10 sec: 2459.2, 60 sec: 3345.4, 300 sec: 3401.8). Total num frames: 2363392. Throughput: 0: 802.6. Samples: 590068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:06:54,345][00973] Avg episode reward: [(0, '9.212')]
-[2023-02-25 10:06:58,114][11779] Updated weights for policy 0, policy_version 580 (0.0032)
-[2023-02-25 10:06:59,341][00973] Fps is (10 sec: 2457.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2375680. Throughput: 0: 784.6. Samples: 594102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:06:59,344][00973] Avg episode reward: [(0, '9.381')]
-[2023-02-25 10:07:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 2396160. Throughput: 0: 809.0. Samples: 599312. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
-[2023-02-25 10:07:04,348][00973] Avg episode reward: [(0, '10.310')]
-[2023-02-25 10:07:08,658][11779] Updated weights for policy 0, policy_version 590 (0.0025)
-[2023-02-25 10:07:09,340][00973] Fps is (10 sec: 4096.5, 60 sec: 3277.1, 300 sec: 3415.6). Total num frames: 2416640. Throughput: 0: 810.2. Samples: 602542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:07:09,343][00973] Avg episode reward: [(0, '10.403')]
-[2023-02-25 10:07:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2433024. Throughput: 0: 791.2. Samples: 608406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:07:14,342][00973] Avg episode reward: [(0, '9.986')]
-[2023-02-25 10:07:19,341][00973] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 2449408. Throughput: 0: 784.2. Samples: 612528. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:07:19,350][00973] Avg episode reward: [(0, '9.112')]
-[2023-02-25 10:07:22,080][11779] Updated weights for policy 0, policy_version 600 (0.0022)
-[2023-02-25 10:07:24,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3208.5, 300 sec: 3415.7). Total num frames: 2465792. Throughput: 0: 784.8. Samples: 614600. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
-[2023-02-25 10:07:24,343][00973] Avg episode reward: [(0, '8.909')]
-[2023-02-25 10:07:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 2486272. Throughput: 0: 816.9. Samples: 621018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:07:29,346][00973] Avg episode reward: [(0, '9.283')]
-[2023-02-25 10:07:31,657][11779] Updated weights for policy 0, policy_version 610 (0.0026)
-[2023-02-25 10:07:34,342][00973] Fps is (10 sec: 3685.6, 60 sec: 3276.7, 300 sec: 3401.7). Total num frames: 2502656. Throughput: 0: 849.6. Samples: 626576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:07:34,346][00973] Avg episode reward: [(0, '9.720')]
-[2023-02-25 10:07:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3415.7). Total num frames: 2519040. Throughput: 0: 856.4. Samples: 628604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:07:39,342][00973] Avg episode reward: [(0, '9.803')]
-[2023-02-25 10:07:44,340][00973] Fps is (10 sec: 3277.6, 60 sec: 3277.2, 300 sec: 3415.6). Total num frames: 2535424. Throughput: 0: 868.1. Samples: 633166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:07:44,345][00973] Avg episode reward: [(0, '10.261')]
-[2023-02-25 10:07:44,784][11779] Updated weights for policy 0, policy_version 620 (0.0012)
-[2023-02-25 10:07:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2560000. Throughput: 0: 897.4. Samples: 639696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:07:49,343][00973] Avg episode reward: [(0, '10.078')]
-[2023-02-25 10:07:54,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3415.7). Total num frames: 2576384. Throughput: 0: 898.4. Samples: 642970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:07:54,346][00973] Avg episode reward: [(0, '10.387')]
-[2023-02-25 10:07:55,282][11779] Updated weights for policy 0, policy_version 630 (0.0022)
-[2023-02-25 10:07:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3550.0, 300 sec: 3415.7). Total num frames: 2588672. Throughput: 0: 862.0. Samples: 647198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:07:59,349][00973] Avg episode reward: [(0, '10.726')]
-[2023-02-25 10:08:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2609152. Throughput: 0: 875.1. Samples: 651906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:08:04,343][00973] Avg episode reward: [(0, '12.357')]
-[2023-02-25 10:08:04,352][11765] Saving new best policy, reward=12.357!
-[2023-02-25 10:08:07,123][11779] Updated weights for policy 0, policy_version 640 (0.0018)
-[2023-02-25 10:08:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2629632. Throughput: 0: 899.2. Samples: 655066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
-[2023-02-25 10:08:09,343][00973] Avg episode reward: [(0, '12.903')]
-[2023-02-25 10:08:09,355][11765] Saving new best policy, reward=12.903!
-[2023-02-25 10:08:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2646016. Throughput: 0: 903.2. Samples: 661664. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:08:14,343][00973] Avg episode reward: [(0, '13.720')]
-[2023-02-25 10:08:14,348][11765] Saving new best policy, reward=13.720!
-[2023-02-25 10:08:18,932][11779] Updated weights for policy 0, policy_version 650 (0.0012)
-[2023-02-25 10:08:19,347][00973] Fps is (10 sec: 3274.4, 60 sec: 3549.4, 300 sec: 3429.4). Total num frames: 2662400. Throughput: 0: 871.6. Samples: 665800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:08:19,350][00973] Avg episode reward: [(0, '13.213')]
-[2023-02-25 10:08:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2678784. Throughput: 0: 871.4. Samples: 667818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:08:24,347][00973] Avg episode reward: [(0, '13.421')]
-[2023-02-25 10:08:29,340][00973] Fps is (10 sec: 3689.1, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2699264. Throughput: 0: 902.1. Samples: 673762. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:08:29,348][00973] Avg episode reward: [(0, '14.502')]
-[2023-02-25 10:08:29,364][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth...
-[2023-02-25 10:08:29,492][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000456_1867776.pth
-[2023-02-25 10:08:29,503][11765] Saving new best policy, reward=14.502!
-[2023-02-25 10:08:30,147][11779] Updated weights for policy 0, policy_version 660 (0.0024)
-[2023-02-25 10:08:34,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3550.0, 300 sec: 3429.5). Total num frames: 2715648. Throughput: 0: 891.2. Samples: 679802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:08:34,347][00973] Avg episode reward: [(0, '14.911')]
-[2023-02-25 10:08:34,350][11765] Saving new best policy, reward=14.911!
-[2023-02-25 10:08:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 2732032. Throughput: 0: 864.6. Samples: 681878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:08:39,348][00973] Avg episode reward: [(0, '14.547')]
-[2023-02-25 10:08:43,403][11779] Updated weights for policy 0, policy_version 670 (0.0015)
-[2023-02-25 10:08:44,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2744320. Throughput: 0: 861.2. Samples: 685954. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
-[2023-02-25 10:08:44,343][00973] Avg episode reward: [(0, '15.188')]
-[2023-02-25 10:08:44,345][11765] Saving new best policy, reward=15.188!
-[2023-02-25 10:08:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2764800. Throughput: 0: 886.5. Samples: 691798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:08:49,343][00973] Avg episode reward: [(0, '14.590')]
-[2023-02-25 10:08:53,259][11779] Updated weights for policy 0, policy_version 680 (0.0013)
-[2023-02-25 10:08:54,347][00973] Fps is (10 sec: 4093.0, 60 sec: 3481.2, 300 sec: 3443.3). Total num frames: 2785280. Throughput: 0: 889.0. Samples: 695076. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:08:54,354][00973] Avg episode reward: [(0, '14.000')]
-[2023-02-25 10:08:59,344][00973] Fps is (10 sec: 3684.8, 60 sec: 3549.6, 300 sec: 3443.4). Total num frames: 2801664. Throughput: 0: 856.4. Samples: 700208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:08:59,351][00973] Avg episode reward: [(0, '13.908')]
-[2023-02-25 10:09:04,340][00973] Fps is (10 sec: 3279.0, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2818048. Throughput: 0: 858.8. Samples: 704438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:09:04,349][00973] Avg episode reward: [(0, '14.016')]
-[2023-02-25 10:09:06,400][11779] Updated weights for policy 0, policy_version 690 (0.0029)
-[2023-02-25 10:09:09,340][00973] Fps is (10 sec: 3688.0, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2838528. Throughput: 0: 878.8. Samples: 707362. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:09:09,343][00973] Avg episode reward: [(0, '14.066')]
-[2023-02-25 10:09:14,340][00973] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2859008. Throughput: 0: 895.4. Samples: 714054. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:09:14,342][00973] Avg episode reward: [(0, '14.408')]
-[2023-02-25 10:09:16,039][11779] Updated weights for policy 0, policy_version 700 (0.0014)
-[2023-02-25 10:09:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3550.3, 300 sec: 3471.2). Total num frames: 2875392. Throughput: 0: 871.8. Samples: 719032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
-[2023-02-25 10:09:19,342][00973] Avg episode reward: [(0, '15.386')]
-[2023-02-25 10:09:19,355][11765] Saving new best policy, reward=15.386!
-[2023-02-25 10:09:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2887680. Throughput: 0: 870.6. Samples: 721056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:09:24,345][00973] Avg episode reward: [(0, '14.425')]
-[2023-02-25 10:09:29,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 2904064. Throughput: 0: 883.1. Samples: 725694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:09:29,343][00973] Avg episode reward: [(0, '13.731')]
-[2023-02-25 10:09:30,416][11779] Updated weights for policy 0, policy_version 710 (0.0038)
-[2023-02-25 10:09:34,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 2916352. Throughput: 0: 844.8. Samples: 729816. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:09:34,346][00973] Avg episode reward: [(0, '13.368')]
-[2023-02-25 10:09:39,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 2928640. Throughput: 0: 813.0. Samples: 731656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:09:39,342][00973] Avg episode reward: [(0, '12.963')]
-[2023-02-25 10:09:44,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2940928. Throughput: 0: 786.3. Samples: 735588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:09:44,346][00973] Avg episode reward: [(0, '11.955')]
-[2023-02-25 10:09:45,976][11779] Updated weights for policy 0, policy_version 720 (0.0026)
-[2023-02-25 10:09:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3429.6). Total num frames: 2961408. Throughput: 0: 799.3. Samples: 740404. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:09:49,342][00973] Avg episode reward: [(0, '12.376')]
-[2023-02-25 10:09:54,340][00973] Fps is (10 sec: 4095.9, 60 sec: 3277.2, 300 sec: 3443.4). Total num frames: 2981888. Throughput: 0: 808.0. Samples: 743720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:09:54,343][00973] Avg episode reward: [(0, '13.819')]
-[2023-02-25 10:09:55,744][11779] Updated weights for policy 0, policy_version 730 (0.0016)
-[2023-02-25 10:09:59,347][00973] Fps is (10 sec: 3683.7, 60 sec: 3276.6, 300 sec: 3415.6). Total num frames: 2998272. Throughput: 0: 792.6. Samples: 749728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:09:59,350][00973] Avg episode reward: [(0, '15.726')]
-[2023-02-25 10:09:59,371][11765] Saving new best policy, reward=15.726!
-[2023-02-25 10:10:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 3014656. Throughput: 0: 771.5. Samples: 753748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:10:04,347][00973] Avg episode reward: [(0, '15.942')]
-[2023-02-25 10:10:04,350][11765] Saving new best policy, reward=15.942!
-[2023-02-25 10:10:09,324][11779] Updated weights for policy 0, policy_version 740 (0.0016)
-[2023-02-25 10:10:09,340][00973] Fps is (10 sec: 3279.2, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 3031040. Throughput: 0: 769.7. Samples: 755692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:10:09,344][00973] Avg episode reward: [(0, '15.750')]
-[2023-02-25 10:10:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 3051520. Throughput: 0: 800.8. Samples: 761728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:10:14,345][00973] Avg episode reward: [(0, '14.744')]
-[2023-02-25 10:10:19,340][00973] Fps is (10 sec: 3686.2, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 3067904. Throughput: 0: 842.1. Samples: 767710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:10:19,344][00973] Avg episode reward: [(0, '14.873')]
-[2023-02-25 10:10:19,822][11779] Updated weights for policy 0, policy_version 750 (0.0025)
-[2023-02-25 10:10:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3084288. Throughput: 0: 848.3. Samples: 769830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:10:24,347][00973] Avg episode reward: [(0, '14.879')]
-[2023-02-25 10:10:29,340][00973] Fps is (10 sec: 3277.0, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 3100672. Throughput: 0: 853.2. Samples: 773984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:10:29,342][00973] Avg episode reward: [(0, '15.624')]
-[2023-02-25 10:10:29,356][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000757_3100672.pth...
-[2023-02-25 10:10:29,478][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000560_2293760.pth
-[2023-02-25 10:10:32,117][11779] Updated weights for policy 0, policy_version 760 (0.0019)
-[2023-02-25 10:10:34,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 3121152. Throughput: 0: 887.3. Samples: 780332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:10:34,343][00973] Avg episode reward: [(0, '16.201')]
-[2023-02-25 10:10:34,351][11765] Saving new best policy, reward=16.201!
-[2023-02-25 10:10:39,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3429.6). Total num frames: 3141632. Throughput: 0: 883.6. Samples: 783482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:10:39,346][00973] Avg episode reward: [(0, '17.320')]
-[2023-02-25 10:10:39,357][11765] Saving new best policy, reward=17.320!
-[2023-02-25 10:10:43,766][11779] Updated weights for policy 0, policy_version 770 (0.0029)
-[2023-02-25 10:10:44,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3153920. Throughput: 0: 851.6. Samples: 788044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:10:44,344][00973] Avg episode reward: [(0, '16.805')]
-[2023-02-25 10:10:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 3170304. Throughput: 0: 855.1. Samples: 792226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:10:49,342][00973] Avg episode reward: [(0, '16.849')]
-[2023-02-25 10:10:54,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 3190784. Throughput: 0: 881.6. Samples: 795366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:10:54,343][00973] Avg episode reward: [(0, '16.498')]
-[2023-02-25 10:10:55,280][11779] Updated weights for policy 0, policy_version 780 (0.0024)
-[2023-02-25 10:10:59,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3550.3, 300 sec: 3415.7). Total num frames: 3211264. Throughput: 0: 892.1. Samples: 801874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:10:59,346][00973] Avg episode reward: [(0, '17.555')]
-[2023-02-25 10:10:59,358][11765] Saving new best policy, reward=17.555!
-[2023-02-25 10:11:04,342][00973] Fps is (10 sec: 3276.1, 60 sec: 3481.5, 300 sec: 3401.8). Total num frames: 3223552. Throughput: 0: 861.4. Samples: 806476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:11:04,344][00973] Avg episode reward: [(0, '16.913')]
-[2023-02-25 10:11:08,008][11779] Updated weights for policy 0, policy_version 790 (0.0041)
-[2023-02-25 10:11:09,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 3235840. Throughput: 0: 860.0. Samples: 808528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:11:09,346][00973] Avg episode reward: [(0, '17.433')]
-[2023-02-25 10:11:14,346][00973] Fps is (10 sec: 3684.9, 60 sec: 3481.2, 300 sec: 3429.5). Total num frames: 3260416. Throughput: 0: 893.1. Samples: 814180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:11:14,348][00973] Avg episode reward: [(0, '17.639')]
-[2023-02-25 10:11:14,351][11765] Saving new best policy, reward=17.639!
-[2023-02-25 10:11:17,787][11779] Updated weights for policy 0, policy_version 800 (0.0019)
-[2023-02-25 10:11:19,340][00973] Fps is (10 sec: 4505.5, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 3280896. Throughput: 0: 897.1. Samples: 820704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:11:19,343][00973] Avg episode reward: [(0, '17.091')]
-[2023-02-25 10:11:24,340][00973] Fps is (10 sec: 3688.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3297280. Throughput: 0: 876.9. Samples: 822944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:11:24,348][00973] Avg episode reward: [(0, '18.097')]
-[2023-02-25 10:11:24,350][11765] Saving new best policy, reward=18.097!
-[2023-02-25 10:11:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 3309568. Throughput: 0: 866.0. Samples: 827014. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
-[2023-02-25 10:11:29,347][00973] Avg episode reward: [(0, '17.990')]
-[2023-02-25 10:11:31,160][11779] Updated weights for policy 0, policy_version 810 (0.0018)
-[2023-02-25 10:11:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 3330048. Throughput: 0: 898.0. Samples: 832638. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:11:34,342][00973] Avg episode reward: [(0, '18.579')]
-[2023-02-25 10:11:34,345][11765] Saving new best policy, reward=18.579!
-[2023-02-25 10:11:39,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 3350528. Throughput: 0: 901.6. Samples: 835936. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:11:39,343][00973] Avg episode reward: [(0, '19.553')]
-[2023-02-25 10:11:39,351][11765] Saving new best policy, reward=19.553!
-[2023-02-25 10:11:40,926][11779] Updated weights for policy 0, policy_version 820 (0.0018)
-[2023-02-25 10:11:44,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3366912. Throughput: 0: 878.9. Samples: 841424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:11:44,344][00973] Avg episode reward: [(0, '19.930')]
-[2023-02-25 10:11:44,349][11765] Saving new best policy, reward=19.930!
-[2023-02-25 10:11:49,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3379200. Throughput: 0: 870.5. Samples: 845646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
-[2023-02-25 10:11:49,348][00973] Avg episode reward: [(0, '19.825')]
-[2023-02-25 10:11:53,647][11779] Updated weights for policy 0, policy_version 830 (0.0019)
-[2023-02-25 10:11:54,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3399680. Throughput: 0: 882.7. Samples: 848250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:11:54,346][00973] Avg episode reward: [(0, '20.190')]
-[2023-02-25 10:11:54,351][11765] Saving new best policy, reward=20.190!
-[2023-02-25 10:11:59,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3424256. Throughput: 0: 900.4. Samples: 854690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:11:59,343][00973] Avg episode reward: [(0, '21.194')]
-[2023-02-25 10:11:59,358][11765] Saving new best policy, reward=21.194!
-[2023-02-25 10:12:04,341][00973] Fps is (10 sec: 3685.8, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3436544. Throughput: 0: 873.7. Samples: 860022. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:12:04,348][00973] Avg episode reward: [(0, '20.666')]
-[2023-02-25 10:12:04,701][11779] Updated weights for policy 0, policy_version 840 (0.0016)
-[2023-02-25 10:12:09,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 3452928. Throughput: 0: 866.5. Samples: 861938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:12:09,344][00973] Avg episode reward: [(0, '20.974')]
-[2023-02-25 10:12:14,346][00973] Fps is (10 sec: 3275.2, 60 sec: 3481.6, 300 sec: 3457.2). Total num frames: 3469312. Throughput: 0: 880.5. Samples: 866640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:12:14,349][00973] Avg episode reward: [(0, '21.331')]
-[2023-02-25 10:12:14,351][11765] Saving new best policy, reward=21.331!
-[2023-02-25 10:12:18,764][11779] Updated weights for policy 0, policy_version 850 (0.0045)
-[2023-02-25 10:12:19,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 3481600. Throughput: 0: 847.3. Samples: 870768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:12:19,344][00973] Avg episode reward: [(0, '21.171')]
-[2023-02-25 10:12:24,341][00973] Fps is (10 sec: 2458.7, 60 sec: 3276.7, 300 sec: 3415.6). Total num frames: 3493888. Throughput: 0: 820.3. Samples: 872850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:12:24,344][00973] Avg episode reward: [(0, '22.179')]
-[2023-02-25 10:12:24,350][11765] Saving new best policy, reward=22.179!
-[2023-02-25 10:12:29,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3506176. Throughput: 0: 777.4. Samples: 876408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:12:29,345][00973] Avg episode reward: [(0, '22.592')]
-[2023-02-25 10:12:29,362][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth...
-[2023-02-25 10:12:29,535][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth
-[2023-02-25 10:12:29,549][11765] Saving new best policy, reward=22.592!
-[2023-02-25 10:12:34,105][11779] Updated weights for policy 0, policy_version 860 (0.0026)
-[2023-02-25 10:12:34,340][00973] Fps is (10 sec: 2867.6, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 3522560. Throughput: 0: 779.2. Samples: 880712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:12:34,345][00973] Avg episode reward: [(0, '23.969')]
-[2023-02-25 10:12:34,352][11765] Saving new best policy, reward=23.969!
-[2023-02-25 10:12:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 3543040. Throughput: 0: 790.0. Samples: 883800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:12:39,350][00973] Avg episode reward: [(0, '24.993')]
-[2023-02-25 10:12:39,361][11765] Saving new best policy, reward=24.993!
-[2023-02-25 10:12:44,341][00973] Fps is (10 sec: 3686.1, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 3559424. Throughput: 0: 792.2. Samples: 890338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
-[2023-02-25 10:12:44,344][00973] Avg episode reward: [(0, '24.266')]
-[2023-02-25 10:12:44,434][11779] Updated weights for policy 0, policy_version 870 (0.0012)
-[2023-02-25 10:12:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 3575808. Throughput: 0: 764.7. Samples: 894432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:12:49,344][00973] Avg episode reward: [(0, '23.781')]
-[2023-02-25 10:12:54,340][00973] Fps is (10 sec: 3277.1, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 3592192. Throughput: 0: 766.7. Samples: 896438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:12:54,345][00973] Avg episode reward: [(0, '23.545')]
-[2023-02-25 10:12:57,340][11779] Updated weights for policy 0, policy_version 880 (0.0032)
-[2023-02-25 10:12:59,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3401.8). Total num frames: 3612672. Throughput: 0: 787.3. Samples: 902064. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:12:59,343][00973] Avg episode reward: [(0, '21.619')]
-[2023-02-25 10:13:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.6, 300 sec: 3387.9). Total num frames: 3629056. Throughput: 0: 836.9. Samples: 908428. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:13:04,343][00973] Avg episode reward: [(0, '21.183')]
-[2023-02-25 10:13:08,808][11779] Updated weights for policy 0, policy_version 890 (0.0025)
-[2023-02-25 10:13:09,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 3645440. Throughput: 0: 836.3. Samples: 910482. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
-[2023-02-25 10:13:09,344][00973] Avg episode reward: [(0, '20.958')]
-[2023-02-25 10:13:14,343][00973] Fps is (10 sec: 2866.2, 60 sec: 3140.4, 300 sec: 3374.0). Total num frames: 3657728. Throughput: 0: 849.6. Samples: 914642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:13:14,350][00973] Avg episode reward: [(0, '21.649')]
-[2023-02-25 10:13:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 3682304. Throughput: 0: 885.2. Samples: 920544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:13:19,343][00973] Avg episode reward: [(0, '22.709')]
-[2023-02-25 10:13:20,096][11779] Updated weights for policy 0, policy_version 900 (0.0025)
-[2023-02-25 10:13:24,341][00973] Fps is (10 sec: 4506.5, 60 sec: 3481.6, 300 sec: 3401.7). Total num frames: 3702784. Throughput: 0: 885.8. Samples: 923662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:13:24,348][00973] Avg episode reward: [(0, '22.951')]
-[2023-02-25 10:13:29,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3715072. Throughput: 0: 852.8. Samples: 928712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:13:29,343][00973] Avg episode reward: [(0, '23.482')]
-[2023-02-25 10:13:33,434][11779] Updated weights for policy 0, policy_version 910 (0.0021)
-[2023-02-25 10:13:34,345][00973] Fps is (10 sec: 2456.6, 60 sec: 3413.0, 300 sec: 3373.9). Total num frames: 3727360. Throughput: 0: 845.8. Samples: 932496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:13:34,348][00973] Avg episode reward: [(0, '22.566')]
-[2023-02-25 10:13:39,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 3747840. Throughput: 0: 853.8. Samples: 934860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:13:39,345][00973] Avg episode reward: [(0, '23.062')]
-[2023-02-25 10:13:43,773][11779] Updated weights for policy 0, policy_version 920 (0.0026)
-[2023-02-25 10:13:44,340][00973] Fps is (10 sec: 4098.3, 60 sec: 3481.7, 300 sec: 3401.8). Total num frames: 3768320. Throughput: 0: 870.2. Samples: 941224. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:13:44,346][00973] Avg episode reward: [(0, '23.130')]
-[2023-02-25 10:13:49,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3388.0). Total num frames: 3784704. Throughput: 0: 840.5. Samples: 946250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:13:49,345][00973] Avg episode reward: [(0, '22.801')]
-[2023-02-25 10:13:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3796992. Throughput: 0: 837.6. Samples: 948172. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:13:54,342][00973] Avg episode reward: [(0, '23.083')]
-[2023-02-25 10:13:58,032][11779] Updated weights for policy 0, policy_version 930 (0.0015)
-[2023-02-25 10:13:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 3813376. Throughput: 0: 839.8. Samples: 952430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
-[2023-02-25 10:13:59,343][00973] Avg episode reward: [(0, '22.552')]
-[2023-02-25 10:14:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3833856. Throughput: 0: 844.0. Samples: 958522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:14:04,346][00973] Avg episode reward: [(0, '22.678')]
-[2023-02-25 10:14:09,305][11779] Updated weights for policy 0, policy_version 940 (0.0015)
-[2023-02-25 10:14:09,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3850240. Throughput: 0: 839.5. Samples: 961440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:14:09,346][00973] Avg episode reward: [(0, '22.876')]
-[2023-02-25 10:14:14,340][00973] Fps is (10 sec: 2867.0, 60 sec: 3413.5, 300 sec: 3346.2). Total num frames: 3862528. Throughput: 0: 807.3. Samples: 965042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
-[2023-02-25 10:14:14,344][00973] Avg episode reward: [(0, '22.983')]
-[2023-02-25 10:14:19,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3346.2). Total num frames: 3874816. Throughput: 0: 810.6. Samples: 968970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:14:19,349][00973] Avg episode reward: [(0, '23.801')]
-[2023-02-25 10:14:22,617][11779] Updated weights for policy 0, policy_version 950 (0.0047)
-[2023-02-25 10:14:24,340][00973] Fps is (10 sec: 3277.0, 60 sec: 3208.6, 300 sec: 3360.1). Total num frames: 3895296. Throughput: 0: 830.4. Samples: 972226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:14:24,342][00973] Avg episode reward: [(0, '23.321')]
-[2023-02-25 10:14:29,342][00973] Fps is (10 sec: 4095.0, 60 sec: 3345.0, 300 sec: 3387.9). Total num frames: 3915776. Throughput: 0: 828.3. Samples: 978498. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:14:29,350][00973] Avg episode reward: [(0, '23.643')]
-[2023-02-25 10:14:29,362][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000956_3915776.pth...
-[2023-02-25 10:14:29,570][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000757_3100672.pth
-[2023-02-25 10:14:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3345.4, 300 sec: 3387.9). Total num frames: 3928064. Throughput: 0: 814.8. Samples: 982916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:14:34,342][00973] Avg episode reward: [(0, '23.400')]
-[2023-02-25 10:14:34,411][11779] Updated weights for policy 0, policy_version 960 (0.0019)
-[2023-02-25 10:14:39,340][00973] Fps is (10 sec: 2867.8, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3944448. Throughput: 0: 817.0. Samples: 984938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:14:39,346][00973] Avg episode reward: [(0, '22.333')]
-[2023-02-25 10:14:44,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3964928. Throughput: 0: 841.7. Samples: 990306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
-[2023-02-25 10:14:44,348][00973] Avg episode reward: [(0, '20.671')]
-[2023-02-25 10:14:45,788][11779] Updated weights for policy 0, policy_version 970 (0.0026)
-[2023-02-25 10:14:49,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 3985408. Throughput: 0: 850.8. Samples: 996810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:14:49,347][00973] Avg episode reward: [(0, '20.031')]
-[2023-02-25 10:14:54,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 4001792. Throughput: 0: 837.7. Samples: 999136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
-[2023-02-25 10:14:54,346][00973] Avg episode reward: [(0, '20.402')]
-[2023-02-25 10:14:55,613][11765] Stopping Batcher_0...
-[2023-02-25 10:14:55,614][11765] Loop batcher_evt_loop terminating...
-[2023-02-25 10:14:55,614][00973] Component Batcher_0 stopped!
-[2023-02-25 10:14:55,620][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
-[2023-02-25 10:14:55,654][11779] Weights refcount: 2 0
-[2023-02-25 10:14:55,670][11779] Stopping InferenceWorker_p0-w0...
-[2023-02-25 10:14:55,671][00973] Component InferenceWorker_p0-w0 stopped!
-[2023-02-25 10:14:55,672][11779] Loop inference_proc0-0_evt_loop terminating...
-[2023-02-25 10:14:55,703][11783] Stopping RolloutWorker_w4...
-[2023-02-25 10:14:55,701][11787] Stopping RolloutWorker_w6...
-[2023-02-25 10:14:55,704][00973] Component RolloutWorker_w6 stopped!
-[2023-02-25 10:14:55,706][00973] Component RolloutWorker_w4 stopped!
-[2023-02-25 10:14:55,705][11783] Loop rollout_proc4_evt_loop terminating...
-[2023-02-25 10:14:55,706][11787] Loop rollout_proc6_evt_loop terminating...
-[2023-02-25 10:14:55,729][00973] Component RolloutWorker_w0 stopped!
-[2023-02-25 10:14:55,728][11780] Stopping RolloutWorker_w0...
-[2023-02-25 10:14:55,746][11780] Loop rollout_proc0_evt_loop terminating...
-[2023-02-25 10:14:55,763][11782] Stopping RolloutWorker_w2...
-[2023-02-25 10:14:55,763][00973] Component RolloutWorker_w2 stopped!
-[2023-02-25 10:14:55,766][11782] Loop rollout_proc2_evt_loop terminating...
-[2023-02-25 10:14:55,783][00973] Component RolloutWorker_w3 stopped!
-[2023-02-25 10:14:55,786][11784] Stopping RolloutWorker_w3...
-[2023-02-25 10:14:55,788][11784] Loop rollout_proc3_evt_loop terminating...
-[2023-02-25 10:14:55,812][00973] Component RolloutWorker_w5 stopped!
-[2023-02-25 10:14:55,815][11785] Stopping RolloutWorker_w5...
-[2023-02-25 10:14:55,816][11785] Loop rollout_proc5_evt_loop terminating...
-[2023-02-25 10:14:55,846][00973] Component RolloutWorker_w7 stopped!
-[2023-02-25 10:14:55,848][11786] Stopping RolloutWorker_w7...
-[2023-02-25 10:14:55,852][11786] Loop rollout_proc7_evt_loop terminating...
-[2023-02-25 10:14:55,869][00973] Component RolloutWorker_w1 stopped!
-[2023-02-25 10:14:55,871][11781] Stopping RolloutWorker_w1...
-[2023-02-25 10:14:55,873][11781] Loop rollout_proc1_evt_loop terminating...
-[2023-02-25 10:14:55,893][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth
-[2023-02-25 10:14:55,905][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
-[2023-02-25 10:14:56,195][00973] Component LearnerWorker_p0 stopped!
-[2023-02-25 10:14:56,198][00973] Waiting for process learner_proc0 to stop...
-[2023-02-25 10:14:56,202][11765] Stopping LearnerWorker_p0...
-[2023-02-25 10:14:56,202][11765] Loop learner_proc0_evt_loop terminating...
-[2023-02-25 10:14:58,355][00973] Waiting for process inference_proc0-0 to join...
-[2023-02-25 10:14:59,128][00973] Waiting for process rollout_proc0 to join...
-[2023-02-25 10:14:59,190][00973] Waiting for process rollout_proc1 to join...
-[2023-02-25 10:14:59,786][00973] Waiting for process rollout_proc2 to join...
-[2023-02-25 10:14:59,788][00973] Waiting for process rollout_proc3 to join...
-[2023-02-25 10:14:59,790][00973] Waiting for process rollout_proc4 to join...
-[2023-02-25 10:14:59,791][00973] Waiting for process rollout_proc5 to join...
-[2023-02-25 10:14:59,792][00973] Waiting for process rollout_proc6 to join...
-[2023-02-25 10:14:59,794][00973] Waiting for process rollout_proc7 to join...
-[2023-02-25 10:14:59,795][00973] Batcher 0 profile tree view:
-batching: 26.3925, releasing_batches: 0.0252
-[2023-02-25 10:14:59,797][00973] InferenceWorker_p0-w0 profile tree view:
-wait_policy: 0.0000
- wait_policy_total: 581.9217
-update_model: 8.1537
- weight_update: 0.0023
-one_step: 0.0024
- handle_policy_step: 558.6319
- deserialize: 15.7481, stack: 3.1351, obs_to_device_normalize: 119.7986, forward: 274.0754, send_messages: 27.8994
- prepare_outputs: 90.0917
- to_cpu: 56.2430
-[2023-02-25 10:14:59,799][00973] Learner 0 profile tree view:
-misc: 0.0071, prepare_batch: 17.3469
-train: 77.7597
- epoch_init: 0.0120, minibatch_init: 0.0061, losses_postprocess: 0.6025, kl_divergence: 0.5279, after_optimizer: 33.2662
- calculate_losses: 27.6484
- losses_init: 0.0036, forward_head: 1.8058, bptt_initial: 18.1834, tail: 1.1459, advantages_returns: 0.3229, losses: 3.4555
- bptt: 2.3611
- bptt_forward_core: 2.2454
- update: 15.0283
- clip: 1.4433
-[2023-02-25 10:14:59,801][00973] RolloutWorker_w0 profile tree view:
-wait_for_trajectories: 0.4007, enqueue_policy_requests: 163.2068, env_step: 891.1734, overhead: 24.9476, complete_rollouts: 7.8290
-save_policy_outputs: 22.3440
- split_output_tensors: 10.8092
-[2023-02-25 10:14:59,802][00973] RolloutWorker_w7 profile tree view:
-wait_for_trajectories: 0.3332, enqueue_policy_requests: 169.1143, env_step: 886.7080, overhead: 23.7999, complete_rollouts: 7.5443
-save_policy_outputs: 21.8265
- split_output_tensors: 10.5381
-[2023-02-25 10:14:59,806][00973] Loop Runner_EvtLoop terminating...
-[2023-02-25 10:14:59,808][00973] Runner profile tree view:
-main_loop: 1224.8012
-[2023-02-25 10:14:59,810][00973] Collected {0: 4005888}, FPS: 3270.6
-[2023-02-25 10:15:00,039][00973] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
-[2023-02-25 10:15:00,040][00973] Overriding arg 'num_workers' with value 1 passed from command line
-[2023-02-25 10:15:00,042][00973] Adding new argument 'no_render'=True that is not in the saved config file!
-[2023-02-25 10:15:00,043][00973] Adding new argument 'save_video'=True that is not in the saved config file!
-[2023-02-25 10:15:00,045][00973] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
-[2023-02-25 10:15:00,047][00973] Adding new argument 'video_name'=None that is not in the saved config file!
-[2023-02-25 10:15:00,048][00973] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
-[2023-02-25 10:15:00,049][00973] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
-[2023-02-25 10:15:00,051][00973] Adding new argument 'push_to_hub'=False that is not in the saved config file!
-[2023-02-25 10:15:00,052][00973] Adding new argument 'hf_repository'=None that is not in the saved config file!
-[2023-02-25 10:15:00,054][00973] Adding new argument 'policy_index'=0 that is not in the saved config file!
-[2023-02-25 10:15:00,055][00973] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
-[2023-02-25 10:15:00,057][00973] Adding new argument 'train_script'=None that is not in the saved config file!
-[2023-02-25 10:15:00,058][00973] Adding new argument 'enjoy_script'=None that is not in the saved config file!
-[2023-02-25 10:15:00,060][00973] Using frameskip 1 and render_action_repeat=4 for evaluation
-[2023-02-25 10:15:00,100][00973] Doom resolution: 160x120, resize resolution: (128, 72)
-[2023-02-25 10:15:00,104][00973] RunningMeanStd input shape: (3, 72, 128)
-[2023-02-25 10:15:00,107][00973] RunningMeanStd input shape: (1,)
-[2023-02-25 10:15:00,127][00973] ConvEncoder: input_channels=3
-[2023-02-25 10:15:00,777][00973] Conv encoder output size: 512
-[2023-02-25 10:15:00,779][00973] Policy head output size: 512
-[2023-02-25 10:15:03,825][00973] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
-[2023-02-25 10:15:05,246][00973] Num frames 100...
-[2023-02-25 10:15:05,360][00973] Num frames 200...
-[2023-02-25 10:15:05,475][00973] Num frames 300...
-[2023-02-25 10:15:05,592][00973] Num frames 400...
-[2023-02-25 10:15:05,710][00973] Num frames 500...
-[2023-02-25 10:15:05,819][00973] Num frames 600...
-[2023-02-25 10:15:05,935][00973] Num frames 700...
-[2023-02-25 10:15:06,071][00973] Avg episode rewards: #0: 14.700, true rewards: #0: 7.700
-[2023-02-25 10:15:06,072][00973] Avg episode reward: 14.700, avg true_objective: 7.700
-[2023-02-25 10:15:06,110][00973] Num frames 800...
-[2023-02-25 10:15:06,227][00973] Num frames 900...
-[2023-02-25 10:15:06,345][00973] Num frames 1000...
-[2023-02-25 10:15:06,453][00973] Num frames 1100...
-[2023-02-25 10:15:06,568][00973] Num frames 1200...
-[2023-02-25 10:15:06,687][00973] Num frames 1300...
-[2023-02-25 10:15:06,801][00973] Num frames 1400...
-[2023-02-25 10:15:06,916][00973] Num frames 1500...
-[2023-02-25 10:15:07,026][00973] Num frames 1600...
-[2023-02-25 10:15:07,142][00973] Num frames 1700...
-[2023-02-25 10:15:07,263][00973] Num frames 1800...
-[2023-02-25 10:15:07,381][00973] Num frames 1900...
-[2023-02-25 10:15:07,462][00973] Avg episode rewards: #0: 20.110, true rewards: #0: 9.610
-[2023-02-25 10:15:07,463][00973] Avg episode reward: 20.110, avg true_objective: 9.610
-[2023-02-25 10:15:07,562][00973] Num frames 2000...
-[2023-02-25 10:15:07,695][00973] Num frames 2100...
-[2023-02-25 10:15:07,822][00973] Num frames 2200...
-[2023-02-25 10:15:07,949][00973] Num frames 2300...
-[2023-02-25 10:15:08,095][00973] Avg episode rewards: #0: 15.577, true rewards: #0: 7.910
-[2023-02-25 10:15:08,096][00973] Avg episode reward: 15.577, avg true_objective: 7.910
-[2023-02-25 10:15:08,132][00973] Num frames 2400...
-[2023-02-25 10:15:08,253][00973] Num frames 2500...
-[2023-02-25 10:15:08,368][00973] Num frames 2600...
-[2023-02-25 10:15:08,477][00973] Num frames 2700...
-[2023-02-25 10:15:08,588][00973] Num frames 2800...
-[2023-02-25 10:15:08,705][00973] Num frames 2900...
-[2023-02-25 10:15:08,820][00973] Num frames 3000...
-[2023-02-25 10:15:08,933][00973] Num frames 3100...
-[2023-02-25 10:15:09,042][00973] Num frames 3200...
-[2023-02-25 10:15:09,152][00973] Num frames 3300...
-[2023-02-25 10:15:09,271][00973] Num frames 3400...
-[2023-02-25 10:15:09,357][00973] Avg episode rewards: #0: 18.315, true rewards: #0: 8.565
-[2023-02-25 10:15:09,358][00973] Avg episode reward: 18.315, avg true_objective: 8.565
-[2023-02-25 10:15:09,451][00973] Num frames 3500...
-[2023-02-25 10:15:09,581][00973] Num frames 3600...
-[2023-02-25 10:15:09,752][00973] Num frames 3700...
-[2023-02-25 10:15:09,907][00973] Num frames 3800...
-[2023-02-25 10:15:10,070][00973] Num frames 3900...
-[2023-02-25 10:15:10,229][00973] Num frames 4000...
-[2023-02-25 10:15:10,383][00973] Num frames 4100...
-[2023-02-25 10:15:10,536][00973] Num frames 4200...
-[2023-02-25 10:15:10,695][00973] Num frames 4300...
-[2023-02-25 10:15:10,854][00973] Num frames 4400...
-[2023-02-25 10:15:11,008][00973] Num frames 4500...
-[2023-02-25 10:15:11,181][00973] Num frames 4600...
-[2023-02-25 10:15:11,338][00973] Num frames 4700...
-[2023-02-25 10:15:11,407][00973] Avg episode rewards: #0: 20.216, true rewards: #0: 9.416
-[2023-02-25 10:15:11,409][00973] Avg episode reward: 20.216, avg true_objective: 9.416
-[2023-02-25 10:15:11,557][00973] Num frames 4800...
-[2023-02-25 10:15:11,714][00973] Num frames 4900...
-[2023-02-25 10:15:11,878][00973] Num frames 5000...
-[2023-02-25 10:15:12,039][00973] Num frames 5100...
-[2023-02-25 10:15:12,197][00973] Num frames 5200...
-[2023-02-25 10:15:12,361][00973] Num frames 5300...
-[2023-02-25 10:15:12,525][00973] Avg episode rewards: #0: 18.947, true rewards: #0: 8.947
-[2023-02-25 10:15:12,528][00973] Avg episode reward: 18.947, avg true_objective: 8.947
-[2023-02-25 10:15:12,589][00973] Num frames 5400...
-[2023-02-25 10:15:12,745][00973] Num frames 5500...
-[2023-02-25 10:15:12,910][00973] Num frames 5600...
-[2023-02-25 10:15:13,071][00973] Num frames 5700...
-[2023-02-25 10:15:13,193][00973] Avg episode rewards: #0: 17.217, true rewards: #0: 8.217
-[2023-02-25 10:15:13,196][00973] Avg episode reward: 17.217, avg true_objective: 8.217
-[2023-02-25 10:15:13,255][00973] Num frames 5800...
-[2023-02-25 10:15:13,379][00973] Num frames 5900...
-[2023-02-25 10:15:13,499][00973] Num frames 6000...
-[2023-02-25 10:15:13,613][00973] Num frames 6100...
-[2023-02-25 10:15:13,728][00973] Num frames 6200...
-[2023-02-25 10:15:13,844][00973] Num frames 6300...
-[2023-02-25 10:15:14,021][00973] Avg episode rewards: #0: 17.115, true rewards: #0: 7.990
-[2023-02-25 10:15:14,023][00973] Avg episode reward: 17.115, avg true_objective: 7.990
-[2023-02-25 10:15:14,038][00973] Num frames 6400...
-[2023-02-25 10:15:14,151][00973] Num frames 6500...
-[2023-02-25 10:15:14,264][00973] Num frames 6600...
-[2023-02-25 10:15:14,378][00973] Num frames 6700...
-[2023-02-25 10:15:14,491][00973] Num frames 6800...
-[2023-02-25 10:15:14,606][00973] Num frames 6900...
-[2023-02-25 10:15:14,725][00973] Num frames 7000...
-[2023-02-25 10:15:14,835][00973] Num frames 7100...
-[2023-02-25 10:15:14,955][00973] Num frames 7200...
-[2023-02-25 10:15:15,065][00973] Num frames 7300...
-[2023-02-25 10:15:15,174][00973] Num frames 7400...
-[2023-02-25 10:15:15,295][00973] Num frames 7500...
-[2023-02-25 10:15:15,411][00973] Num frames 7600...
-[2023-02-25 10:15:15,526][00973] Num frames 7700...
-[2023-02-25 10:15:15,643][00973] Num frames 7800...
-[2023-02-25 10:15:15,779][00973] Avg episode rewards: #0: 19.302, true rewards: #0: 8.747
-[2023-02-25 10:15:15,780][00973] Avg episode reward: 19.302, avg true_objective: 8.747
-[2023-02-25 10:15:15,819][00973] Num frames 7900...
-[2023-02-25 10:15:15,943][00973] Num frames 8000...
-[2023-02-25 10:15:16,053][00973] Num frames 8100...
-[2023-02-25 10:15:16,177][00973] Num frames 8200...
-[2023-02-25 10:15:16,291][00973] Num frames 8300...
-[2023-02-25 10:15:16,404][00973] Num frames 8400...
-[2023-02-25 10:15:16,516][00973] Num frames 8500...
-[2023-02-25 10:15:16,634][00973] Num frames 8600...
-[2023-02-25 10:15:16,749][00973] Num frames 8700...
-[2023-02-25 10:15:16,864][00973] Num frames 8800...
-[2023-02-25 10:15:16,985][00973] Num frames 8900...
-[2023-02-25 10:15:17,102][00973] Num frames 9000...
-[2023-02-25 10:15:17,218][00973] Num frames 9100...
-[2023-02-25 10:15:17,330][00973] Num frames 9200...
-[2023-02-25 10:15:17,447][00973] Num frames 9300...
-[2023-02-25 10:15:17,568][00973] Num frames 9400...
-[2023-02-25 10:15:17,683][00973] Num frames 9500...
-[2023-02-25 10:15:17,793][00973] Num frames 9600...
-[2023-02-25 10:15:17,908][00973] Num frames 9700...
-[2023-02-25 10:15:18,029][00973] Num frames 9800...
-[2023-02-25 10:15:18,139][00973] Num frames 9900...
-[2023-02-25 10:15:18,276][00973] Avg episode rewards: #0: 23.572, true rewards: #0: 9.972
-[2023-02-25 10:15:18,278][00973] Avg episode reward: 23.572, avg true_objective: 9.972
-[2023-02-25 10:16:22,375][00973] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
-[2023-02-25 10:26:19,083][22177] Saving configuration to /content/train_dir/default_experiment/config.json...
-[2023-02-25 10:26:19,085][22177] Rollout worker 0 uses device cpu
-[2023-02-25 10:26:19,087][22177] Rollout worker 1 uses device cpu
-[2023-02-25 10:26:19,088][22177] Rollout worker 2 uses device cpu
-[2023-02-25 10:26:19,090][22177] Rollout worker 3 uses device cpu
-[2023-02-25 10:26:19,091][22177] Rollout worker 4 uses device cpu
-[2023-02-25 10:26:19,092][22177] Rollout worker 5 uses device cpu
-[2023-02-25 10:26:19,093][22177] Rollout worker 6 uses device cpu
-[2023-02-25 10:26:19,095][22177] Rollout worker 7 uses device cpu
-[2023-02-25 10:26:19,425][22177] Using GPUs [0] for process 0 (actually maps to GPUs [0])
-[2023-02-25 10:26:19,429][22177] InferenceWorker_p0-w0: min num requests: 2
-[2023-02-25 10:26:19,507][22177] Starting all processes...
-[2023-02-25 10:26:19,509][22177] Starting process learner_proc0
-[2023-02-25 10:26:19,630][22177] Starting all processes...
-[2023-02-25 10:26:19,660][22177] Starting process inference_proc0-0
-[2023-02-25 10:26:19,665][22177] Starting process rollout_proc2
-[2023-02-25 10:26:19,665][22177] Starting process rollout_proc1
-[2023-02-25 10:26:19,661][22177] Starting process rollout_proc0
-[2023-02-25 10:26:19,666][22177] Starting process rollout_proc3
-[2023-02-25 10:26:19,666][22177] Starting process rollout_proc4
-[2023-02-25 10:26:19,666][22177] Starting process rollout_proc5
-[2023-02-25 10:26:19,666][22177] Starting process rollout_proc6
-[2023-02-25 10:26:19,666][22177] Starting process rollout_proc7
-[2023-02-25 10:26:29,701][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0])
-[2023-02-25 10:26:29,701][22567] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
-[2023-02-25 10:26:30,480][22581] Using GPUs [0] for process 0 (actually maps to GPUs [0])
-[2023-02-25 10:26:30,480][22581] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
-[2023-02-25 10:26:30,617][22589] Worker 7 uses CPU cores [1]
-[2023-02-25 10:26:31,124][22582] Worker 2 uses CPU cores [0]
-[2023-02-25 10:26:31,141][22586] Worker 5 uses CPU cores [1]
-[2023-02-25 10:26:31,239][22583] Worker 1 uses CPU cores [1]
-[2023-02-25 10:26:31,245][22584] Worker 0 uses CPU cores [0]
-[2023-02-25 10:26:31,279][22588] Worker 6 uses CPU cores [0]
-[2023-02-25 10:26:31,373][22587] Worker 4 uses CPU cores [0]
-[2023-02-25 10:26:31,421][22585] Worker 3 uses CPU cores [1]
-[2023-02-25 10:26:31,698][22581] Num visible devices: 1
-[2023-02-25 10:26:31,705][22567] Num visible devices: 1
-[2023-02-25 10:26:31,758][22567] Starting seed is not provided
-[2023-02-25 10:26:31,758][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0])
-[2023-02-25 10:26:31,759][22567] Initializing actor-critic model on device cuda:0
-[2023-02-25 10:26:31,759][22567] RunningMeanStd input shape: (3, 72, 128)
-[2023-02-25 10:26:31,775][22567] RunningMeanStd input shape: (1,)
-[2023-02-25 10:26:31,826][22567] ConvEncoder: input_channels=3
-[2023-02-25 10:26:32,100][22567] Conv encoder output size: 512
-[2023-02-25 10:26:32,102][22567] Policy head output size: 512
-[2023-02-25 10:26:32,186][22567] Created Actor Critic model with architecture:
-[2023-02-25 10:26:32,192][22567] ActorCriticSharedWeights(
+[2023-02-25 13:37:11,013][00699] Heartbeat connected on Batcher_0
+[2023-02-25 13:37:11,021][00699] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-25 13:37:11,031][00699] Heartbeat connected on RolloutWorker_w0
+[2023-02-25 13:37:11,035][00699] Heartbeat connected on RolloutWorker_w1
+[2023-02-25 13:37:11,038][00699] Heartbeat connected on RolloutWorker_w2
+[2023-02-25 13:37:11,041][00699] Heartbeat connected on RolloutWorker_w3
+[2023-02-25 13:37:11,045][00699] Heartbeat connected on RolloutWorker_w4
+[2023-02-25 13:37:11,050][00699] Heartbeat connected on RolloutWorker_w6
+[2023-02-25 13:37:11,051][00699] Heartbeat connected on RolloutWorker_w5
+[2023-02-25 13:37:11,054][00699] Heartbeat connected on RolloutWorker_w7
+[2023-02-25 13:37:13,611][10893] Using optimizer
+[2023-02-25 13:37:13,612][10893] No checkpoints found
+[2023-02-25 13:37:13,612][10893] Did not load from checkpoint, starting from scratch!
+[2023-02-25 13:37:13,613][10893] Initialized policy 0 weights for model version 0
+[2023-02-25 13:37:13,617][10893] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 13:37:13,624][10893] LearnerWorker_p0 finished initialization!
+[2023-02-25 13:37:13,625][00699] Heartbeat connected on LearnerWorker_p0
+[2023-02-25 13:37:13,719][10907] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 13:37:13,721][10907] RunningMeanStd input shape: (1,)
+[2023-02-25 13:37:13,739][10907] ConvEncoder: input_channels=3
+[2023-02-25 13:37:13,836][10907] Conv encoder output size: 512
+[2023-02-25 13:37:13,836][10907] Policy head output size: 512
+[2023-02-25 13:37:15,425][00699] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 13:37:16,389][00699] Inference worker 0-0 is ready!
+[2023-02-25 13:37:16,391][00699] All inference workers are ready! Signal rollout workers to start!
+[2023-02-25 13:37:16,499][10913] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,511][10915] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,515][10912] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,556][00699] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 699], exiting...
+[2023-02-25 13:37:16,562][10893] Stopping Batcher_0...
+[2023-02-25 13:37:16,563][10893] Loop batcher_evt_loop terminating...
+[2023-02-25 13:37:16,561][00699] Runner profile tree view:
+main_loop: 25.5057
+[2023-02-25 13:37:16,568][10893] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
+[2023-02-25 13:37:16,566][00699] Collected {0: 0}, FPS: 0.0
+[2023-02-25 13:37:16,555][10908] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,627][10907] Weights refcount: 2 0
+[2023-02-25 13:37:16,639][10907] Stopping InferenceWorker_p0-w0...
+[2023-02-25 13:37:16,643][10907] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-25 13:37:16,653][00699] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-25 13:37:16,666][10893] Stopping LearnerWorker_p0...
+[2023-02-25 13:37:16,666][10893] Loop learner_proc0_evt_loop terminating...
+[2023-02-25 13:37:16,661][00699] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-25 13:37:16,667][00699] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-25 13:37:16,672][00699] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-25 13:37:16,678][00699] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 13:37:16,683][00699] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-25 13:37:16,689][00699] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 13:37:16,690][00699] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-25 13:37:16,693][00699] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-25 13:37:16,695][00699] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-25 13:37:16,700][00699] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-25 13:37:16,706][00699] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-25 13:37:16,711][00699] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-25 13:37:16,713][00699] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-25 13:37:16,719][00699] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-25 13:37:16,786][00699] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,795][00699] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 13:37:16,803][00699] RunningMeanStd input shape: (1,)
+[2023-02-25 13:37:16,866][10911] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,862][00699] ConvEncoder: input_channels=3
+[2023-02-25 13:37:16,906][10914] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:16,918][10910] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:17,007][10909] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:37:17,199][00699] Conv encoder output size: 512
+[2023-02-25 13:37:17,214][00699] Policy head output size: 512
+[2023-02-25 13:37:20,333][10915] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:20,335][10913] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:20,337][10908] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:20,515][10911] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:22,092][10914] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:22,349][00699] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth...
+[2023-02-25 13:37:22,486][10912] Decorrelating experience for 0 frames...
+[2023-02-25 13:37:22,488][10913] Decorrelating experience for 32 frames...
+[2023-02-25 13:37:22,490][10915] Decorrelating experience for 32 frames...
+[2023-02-25 13:37:22,495][10908] Decorrelating experience for 32 frames...
+[2023-02-25 13:37:22,798][10914] Decorrelating experience for 32 frames...
+[2023-02-25 13:37:22,874][00699] VizDoom game.init() threw an exception SignalException('Signal SIGINT received. ViZDoom instance has been closed.'). Terminate process... +[2023-02-25 13:37:22,881][10915] VizDoom game.init() threw an exception SignalException('Signal SIGINT received. ViZDoom instance has been closed.'). Terminate process... +[2023-02-25 13:37:22,889][10908] VizDoom game.init() threw an exception SignalException('Signal SIGINT received. ViZDoom instance has been closed.'). Terminate process... +[2023-02-25 13:37:22,892][10912] VizDoom game.init() threw an exception SignalException('Signal SIGINT received. ViZDoom instance has been closed.'). Terminate process... +[2023-02-25 13:37:22,897][10909] VizDoom game.init() threw an exception SignalException('Signal SIGINT received. ViZDoom instance has been closed.'). Terminate process... +[2023-02-25 13:37:22,889][10914] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init + env_runner.init(self.timing) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init + self._reset() + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 439, in _reset + observations, rew, terminated, truncated, info = e.step(actions) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 319, in step + return self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step + obs, rew, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 384, in step + observation, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 88, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 319, in step + return self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step + obs, reward, terminated, truncated, info = self.env.step(action) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step + reward = self.game.make_action(actions_flattened, self.skip_frames) +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. +[2023-02-25 13:37:22,914][10914] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. 
in evt loop rollout_proc7_evt_loop +[2023-02-25 13:37:22,902][10912] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init + self.game.init() +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. + +During handling of the above exception, another exception occurred: + +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init + env_runner.init(self.timing) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init + self._reset() + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset + observations, info = e.reset(seed=seed) # new way of doing seeding since Gym 0.26.0 + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 379, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 84, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset + self._ensure_initialized() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized + self.initialize() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize + self._game_init() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init + raise EnvCriticalError() +sample_factory.envs.env_utils.EnvCriticalError +[2023-02-25 13:37:22,915][10912] Unhandled exception in evt loop rollout_proc4_evt_loop +[2023-02-25 13:37:22,902][10909] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init + self.game.init() +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
+ +During handling of the above exception, another exception occurred: + +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init + env_runner.init(self.timing) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init + self._reset() + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset + observations, info = e.reset(seed=seed) # new way of doing seeding since Gym 0.26.0 + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 379, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 84, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset + self._ensure_initialized() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized + self.initialize() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize + self._game_init() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init + raise EnvCriticalError() +sample_factory.envs.env_utils.EnvCriticalError +[2023-02-25 13:37:22,930][10909] Unhandled exception in evt loop rollout_proc1_evt_loop +[2023-02-25 13:37:22,882][10915] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init + self.game.init() +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
+ +During handling of the above exception, another exception occurred: + +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init + env_runner.init(self.timing) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init + self._reset() + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset + observations, info = e.reset(seed=seed) # new way of doing seeding since Gym 0.26.0 + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 379, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 84, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset + self._ensure_initialized() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized + self.initialize() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize + self._game_init() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init + raise EnvCriticalError() +sample_factory.envs.env_utils.EnvCriticalError +[2023-02-25 13:37:22,931][10915] Unhandled exception in evt loop rollout_proc2_evt_loop +[2023-02-25 13:37:22,893][10908] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='init' connected to emitter=Emitter(object_id='Sampler', signal_name='_inference_workers_initialized'), args=() +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 228, in _game_init + self.game.init() +vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed. 
+ +During handling of the above exception, another exception occurred: + +Traceback (most recent call last): + File "/usr/local/lib/python3.8/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal + slot_callable(*args) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 150, in init + env_runner.init(self.timing) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 418, in init + self._reset() + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 430, in _reset + observations, info = e.reset(seed=seed) # new way of doing seeding since Gym 0.26.0 + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 125, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/algo/utils/make_env.py", line 110, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 379, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sample_factory/envs/env_wrappers.py", line 84, in reset + obs, info = self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/gym/core.py", line 323, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset + return self.env.reset(**kwargs) + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 323, in reset + self._ensure_initialized() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 274, in _ensure_initialized + self.initialize() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 269, in initialize + self._game_init() + File "/usr/local/lib/python3.8/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 244, in _game_init + raise EnvCriticalError() +sample_factory.envs.env_utils.EnvCriticalError +[2023-02-25 13:37:22,935][10908] Unhandled exception in evt loop rollout_proc0_evt_loop +[2023-02-25 13:37:24,225][10910] Decorrelating experience for 0 frames... +[2023-02-25 13:37:24,579][10913] Decorrelating experience for 64 frames... +[2023-02-25 13:37:24,610][10910] Decorrelating experience for 32 frames... +[2023-02-25 13:37:25,165][10911] Decorrelating experience for 32 frames... +[2023-02-25 13:37:25,263][10910] Decorrelating experience for 64 frames... +[2023-02-25 13:37:25,515][10913] Decorrelating experience for 96 frames... +[2023-02-25 13:37:25,597][10913] Stopping RolloutWorker_w6... +[2023-02-25 13:37:25,598][10913] Loop rollout_proc6_evt_loop terminating... +[2023-02-25 13:37:25,929][10911] Decorrelating experience for 64 frames... +[2023-02-25 13:37:25,966][10910] Decorrelating experience for 96 frames... +[2023-02-25 13:37:26,051][10910] Stopping RolloutWorker_w3... +[2023-02-25 13:37:26,052][10910] Loop rollout_proc3_evt_loop terminating... +[2023-02-25 13:37:26,458][10911] Decorrelating experience for 96 frames... +[2023-02-25 13:37:26,524][10911] Stopping RolloutWorker_w5... 
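The tracebacks above all fail inside ordinary Gym 0.26-style calls: a seeded reset() returning (obs, info) and a step() returning a five-tuple, exactly as the in-source comment "new way of doing seeding since Gym 0.26.0" notes. A minimal sketch of that API on a stand-in environment (CartPole-v1 is an assumption for illustration; the workers here run the Doom env):

    import gym

    env = gym.make("CartPole-v1")      # stand-in env, not the Doom env from this log
    obs, info = env.reset(seed=42)     # seeded reset, as in non_batched_sampling._reset
    terminated = truncated = False
    while not (terminated or truncated):
        # five-tuple step, matching the wrapper signatures in the tracebacks above
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    env.close()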
+[2023-02-25 13:37:26,524][10911] Loop rollout_proc5_evt_loop terminating... +[2023-02-25 13:37:49,127][00699] Environment doom_basic already registered, overwriting... +[2023-02-25 13:37:49,129][00699] Environment doom_two_colors_easy already registered, overwriting... +[2023-02-25 13:37:49,131][00699] Environment doom_two_colors_hard already registered, overwriting... +[2023-02-25 13:37:49,135][00699] Environment doom_dm already registered, overwriting... +[2023-02-25 13:37:49,138][00699] Environment doom_dwango5 already registered, overwriting... +[2023-02-25 13:37:49,139][00699] Environment doom_my_way_home_flat_actions already registered, overwriting... +[2023-02-25 13:37:49,140][00699] Environment doom_defend_the_center_flat_actions already registered, overwriting... +[2023-02-25 13:37:49,141][00699] Environment doom_my_way_home already registered, overwriting... +[2023-02-25 13:37:49,142][00699] Environment doom_deadly_corridor already registered, overwriting... +[2023-02-25 13:37:49,143][00699] Environment doom_defend_the_center already registered, overwriting... +[2023-02-25 13:37:49,145][00699] Environment doom_defend_the_line already registered, overwriting... +[2023-02-25 13:37:49,146][00699] Environment doom_health_gathering already registered, overwriting... +[2023-02-25 13:37:49,147][00699] Environment doom_health_gathering_supreme already registered, overwriting... +[2023-02-25 13:37:49,148][00699] Environment doom_battle already registered, overwriting... +[2023-02-25 13:37:49,149][00699] Environment doom_battle2 already registered, overwriting... +[2023-02-25 13:37:49,150][00699] Environment doom_duel_bots already registered, overwriting... +[2023-02-25 13:37:49,151][00699] Environment doom_deathmatch_bots already registered, overwriting... +[2023-02-25 13:37:49,153][00699] Environment doom_duel already registered, overwriting... +[2023-02-25 13:37:49,154][00699] Environment doom_deathmatch_full already registered, overwriting... +[2023-02-25 13:37:49,155][00699] Environment doom_benchmark already registered, overwriting... +[2023-02-25 13:37:49,156][00699] register_encoder_factory: +[2023-02-25 13:37:49,184][00699] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json +[2023-02-25 13:37:49,190][00699] Experiment dir /content/train_dir/default_experiment already exists! +[2023-02-25 13:37:49,192][00699] Resuming existing experiment from /content/train_dir/default_experiment... 
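The "already registered, overwriting" warnings above simply mean environment registration ran a second time in the same process after the restart. A sketch of what a registration looks like in Sample Factory v2, assuming the register_env helper lives in sample_factory.envs.env_utils (the module the EnvCriticalError above comes from) and that factories take the keyword arguments shown; the factory function itself is hypothetical:

    from sample_factory.envs.env_utils import register_env

    def make_my_env(full_env_name, cfg=None, env_config=None, render_mode=None):
        # hypothetical factory; the real doom_* envs are built from specs in sf_examples
        raise NotImplementedError

    register_env("my_custom_env", make_my_env)

Re-running a registration with a name that already exists is what produces the overwrite warnings.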
+[2023-02-25 13:37:49,193][00699] Weights and Biases integration disabled +[2023-02-25 13:37:49,196][00699] Environment var CUDA_VISIBLE_DEVICES is 0 + +[2023-02-25 13:37:50,634][00699] Starting experiment with the following configuration: +help=False +algo=APPO +env=doom_health_gathering_supreme +experiment=default_experiment +train_dir=/content/train_dir +restart_behavior=resume +device=gpu +seed=None +num_policies=1 +async_rl=True +serial_mode=False +batched_sampling=False +num_batches_to_accumulate=2 +worker_num_splits=2 +policy_workers_per_policy=1 +max_policy_lag=1000 +num_workers=8 +num_envs_per_worker=4 +batch_size=1024 +num_batches_per_epoch=1 +num_epochs=1 +rollout=32 +recurrence=32 +shuffle_minibatches=False +gamma=0.99 +reward_scale=1.0 +reward_clip=1000.0 +value_bootstrap=False +normalize_returns=True +exploration_loss_coeff=0.001 +value_loss_coeff=0.5 +kl_loss_coeff=0.0 +exploration_loss=symmetric_kl +gae_lambda=0.95 +ppo_clip_ratio=0.1 +ppo_clip_value=0.2 +with_vtrace=False +vtrace_rho=1.0 +vtrace_c=1.0 +optimizer=adam +adam_eps=1e-06 +adam_beta1=0.9 +adam_beta2=0.999 +max_grad_norm=4.0 +learning_rate=0.0001 +lr_schedule=constant +lr_schedule_kl_threshold=0.008 +lr_adaptive_min=1e-06 +lr_adaptive_max=0.01 +obs_subtract_mean=0.0 +obs_scale=255.0 +normalize_input=True +normalize_input_keys=None +decorrelate_experience_max_seconds=0 +decorrelate_envs_on_one_worker=True +actor_worker_gpus=[] +set_workers_cpu_affinity=True +force_envs_single_thread=False +default_niceness=0 +log_to_file=True +experiment_summaries_interval=10 +flush_summaries_interval=30 +stats_avg=100 +summaries_use_frameskip=True +heartbeat_interval=20 +heartbeat_reporting_interval=600 +train_for_env_steps=4000000 +train_for_seconds=10000000000 +save_every_sec=120 +keep_checkpoints=2 +load_checkpoint_kind=latest +save_milestones_sec=-1 +save_best_every_sec=5 +save_best_metric=reward +save_best_after=100000 +benchmark=False +encoder_mlp_layers=[512, 512] +encoder_conv_architecture=convnet_simple +encoder_conv_mlp_layers=[512] +use_rnn=True +rnn_size=512 +rnn_type=gru +rnn_num_layers=1 +decoder_mlp_layers=[] +nonlinearity=elu +policy_initialization=orthogonal +policy_init_gain=1.0 +actor_critic_share_weights=True +adaptive_stddev=True +continuous_tanh_scale=0.0 +initial_stddev=1.0 +use_env_info_cache=False +env_gpu_actions=False +env_gpu_observations=True +env_frameskip=4 +env_framestack=1 +pixel_format=CHW +use_record_episode_statistics=False +with_wandb=False +wandb_user=None +wandb_project=sample_factory +wandb_group=None +wandb_job_type=SF +wandb_tags=[] +with_pbt=False +pbt_mix_policies_in_one_env=True +pbt_period_env_steps=5000000 +pbt_start_mutation=20000000 +pbt_replace_fraction=0.3 +pbt_mutation_rate=0.15 +pbt_replace_reward_gap=0.1 +pbt_replace_reward_gap_absolute=1e-06 +pbt_optimize_gamma=False +pbt_target_objective=true_objective +pbt_perturb_min=1.1 +pbt_perturb_max=1.5 +num_agents=-1 +num_humans=0 +num_bots=-1 +start_bot_difficulty=None +timelimit=None +res_w=128 +res_h=72 +wide_aspect_ratio=False +eval_env_frameskip=1 +fps=35 +command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 +cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} +git_hash=unknown +git_repo_name=not a git repository +[2023-02-25 13:37:50,636][00699] Saving configuration to /content/train_dir/default_experiment/config.json... 
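The configuration dump above records its own launch flags (command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000). A minimal sketch of starting the same run programmatically; parse_sf_args, parse_full_cfg and run_rl are Sample Factory v2 entry points, while the register_vizdoom_components import location is an assumption about the sf_examples package visible in the tracebacks:

    from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
    from sample_factory.train import run_rl
    from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed location

    def main():
        register_vizdoom_components()  # registers the doom_* envs listed above
        argv = [
            "--env=doom_health_gathering_supreme",
            "--num_workers=8",
            "--num_envs_per_worker=4",
            "--train_for_env_steps=4000000",
        ]
        parser, _ = parse_sf_args(argv)
        cfg = parse_full_cfg(parser, argv)
        return run_rl(cfg)

    if __name__ == "__main__":
        main()

With restart_behavior=resume and load_checkpoint_kind=latest, pointing this at the existing train_dir resumes from the newest checkpoint rather than starting over, which is what the "Resuming existing experiment" line above reflects.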
+[2023-02-25 13:37:50,643][00699] Rollout worker 0 uses device cpu +[2023-02-25 13:37:50,644][00699] Rollout worker 1 uses device cpu +[2023-02-25 13:37:50,647][00699] Rollout worker 2 uses device cpu +[2023-02-25 13:37:50,649][00699] Rollout worker 3 uses device cpu +[2023-02-25 13:37:50,653][00699] Rollout worker 4 uses device cpu +[2023-02-25 13:37:50,658][00699] Rollout worker 5 uses device cpu +[2023-02-25 13:37:50,659][00699] Rollout worker 6 uses device cpu +[2023-02-25 13:37:50,663][00699] Rollout worker 7 uses device cpu +[2023-02-25 13:37:50,791][00699] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:37:50,792][00699] InferenceWorker_p0-w0: min num requests: 2 +[2023-02-25 13:37:50,826][00699] Starting all processes... +[2023-02-25 13:37:50,827][00699] Starting process learner_proc0 +[2023-02-25 13:37:50,924][00699] Starting all processes... +[2023-02-25 13:37:50,934][00699] Starting process inference_proc0-0 +[2023-02-25 13:37:50,934][00699] Starting process rollout_proc0 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc1 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc2 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc3 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc4 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc5 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc6 +[2023-02-25 13:37:50,939][00699] Starting process rollout_proc7 +[2023-02-25 13:38:02,761][12789] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:38:02,761][12789] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-02-25 13:38:02,816][12789] Num visible devices: 1 +[2023-02-25 13:38:02,863][12789] Starting seed is not provided +[2023-02-25 13:38:02,864][12789] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:38:02,864][12789] Initializing actor-critic model on device cuda:0 +[2023-02-25 13:38:02,865][12789] RunningMeanStd input shape: (3, 72, 128) +[2023-02-25 13:38:02,866][12789] RunningMeanStd input shape: (1,) +[2023-02-25 13:38:02,945][12789] ConvEncoder: input_channels=3 +[2023-02-25 13:38:03,118][12808] Worker 2 uses CPU cores [0] +[2023-02-25 13:38:03,401][12803] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:38:03,402][12803] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-02-25 13:38:03,459][12803] Num visible devices: 1 +[2023-02-25 13:38:03,661][12805] Worker 0 uses CPU cores [0] +[2023-02-25 13:38:03,678][12789] Conv encoder output size: 512 +[2023-02-25 13:38:03,678][12789] Policy head output size: 512 +[2023-02-25 13:38:03,680][12804] Worker 1 uses CPU cores [1] +[2023-02-25 13:38:03,782][12789] Created Actor Critic model with architecture: +[2023-02-25 13:38:03,788][12789] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( @@ -1111,492 +624,1030 @@ main_loop: 1224.8012 (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) -[2023-02-25 10:26:39,404][22177] Heartbeat connected on Batcher_0 -[2023-02-25 10:26:39,425][22177] Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-25 10:26:39,440][22177] Heartbeat connected on RolloutWorker_w0 -[2023-02-25 10:26:39,453][22177] Heartbeat connected on RolloutWorker_w1 -[2023-02-25 10:26:39,464][22177] Heartbeat connected on RolloutWorker_w2 -[2023-02-25 
10:26:39,467][22177] Heartbeat connected on RolloutWorker_w3 -[2023-02-25 10:26:39,478][22177] Heartbeat connected on RolloutWorker_w4 -[2023-02-25 10:26:39,487][22177] Heartbeat connected on RolloutWorker_w5 -[2023-02-25 10:26:39,499][22177] Heartbeat connected on RolloutWorker_w6 -[2023-02-25 10:26:39,505][22177] Heartbeat connected on RolloutWorker_w7 -[2023-02-25 10:26:39,702][22567] Using optimizer -[2023-02-25 10:26:39,703][22567] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-25 10:26:39,759][22567] Loading model from checkpoint -[2023-02-25 10:26:39,766][22567] Loaded experiment state at self.train_step=978, self.env_steps=4005888 -[2023-02-25 10:26:39,767][22567] Initialized policy 0 weights for model version 978 -[2023-02-25 10:26:39,772][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-25 10:26:39,775][22177] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 10:26:39,781][22567] LearnerWorker_p0 finished initialization! -[2023-02-25 10:26:39,782][22177] Heartbeat connected on LearnerWorker_p0 -[2023-02-25 10:26:39,971][22581] RunningMeanStd input shape: (3, 72, 128) -[2023-02-25 10:26:39,972][22581] RunningMeanStd input shape: (1,) -[2023-02-25 10:26:40,083][22581] ConvEncoder: input_channels=3 -[2023-02-25 10:26:40,395][22581] Conv encoder output size: 512 -[2023-02-25 10:26:40,396][22581] Policy head output size: 512 -[2023-02-25 10:26:42,715][22177] Inference worker 0-0 is ready! -[2023-02-25 10:26:42,718][22177] All inference workers are ready! Signal rollout workers to start! -[2023-02-25 10:26:42,814][22589] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,816][22586] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,817][22585] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,818][22583] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,823][22584] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,824][22588] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,820][22587] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:42,822][22582] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:26:43,623][22585] Decorrelating experience for 0 frames... -[2023-02-25 10:26:43,628][22586] Decorrelating experience for 0 frames... -[2023-02-25 10:26:43,973][22585] Decorrelating experience for 32 frames... -[2023-02-25 10:26:44,043][22587] Decorrelating experience for 0 frames... -[2023-02-25 10:26:44,051][22588] Decorrelating experience for 0 frames... -[2023-02-25 10:26:44,046][22584] Decorrelating experience for 0 frames... -[2023-02-25 10:26:44,749][22582] Decorrelating experience for 0 frames... -[2023-02-25 10:26:44,775][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 10:26:44,993][22588] Decorrelating experience for 32 frames... -[2023-02-25 10:26:44,995][22584] Decorrelating experience for 32 frames... -[2023-02-25 10:26:45,034][22585] Decorrelating experience for 64 frames... -[2023-02-25 10:26:45,177][22586] Decorrelating experience for 32 frames... -[2023-02-25 10:26:45,203][22583] Decorrelating experience for 0 frames... 
-[2023-02-25 10:26:45,782][22585] Decorrelating experience for 96 frames... -[2023-02-25 10:26:46,802][22582] Decorrelating experience for 32 frames... -[2023-02-25 10:26:47,304][22584] Decorrelating experience for 64 frames... -[2023-02-25 10:26:47,324][22588] Decorrelating experience for 64 frames... -[2023-02-25 10:26:47,567][22587] Decorrelating experience for 32 frames... -[2023-02-25 10:26:47,877][22586] Decorrelating experience for 64 frames... -[2023-02-25 10:26:49,775][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 9.4. Samples: 94. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 10:26:49,783][22177] Avg episode reward: [(0, '0.800')] -[2023-02-25 10:26:49,800][22583] Decorrelating experience for 32 frames... -[2023-02-25 10:26:50,101][22589] Decorrelating experience for 0 frames... -[2023-02-25 10:26:50,161][22584] Decorrelating experience for 96 frames... -[2023-02-25 10:26:50,196][22588] Decorrelating experience for 96 frames... -[2023-02-25 10:26:50,413][22582] Decorrelating experience for 64 frames... -[2023-02-25 10:26:50,517][22586] Decorrelating experience for 96 frames... -[2023-02-25 10:26:54,391][22587] Decorrelating experience for 64 frames... -[2023-02-25 10:26:54,512][22589] Decorrelating experience for 32 frames... -[2023-02-25 10:26:54,777][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 128.7. Samples: 1930. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-25 10:26:54,782][22177] Avg episode reward: [(0, '5.506')] -[2023-02-25 10:26:54,830][22582] Decorrelating experience for 96 frames... -[2023-02-25 10:26:55,502][22567] Signal inference workers to stop experience collection... -[2023-02-25 10:26:55,520][22581] InferenceWorker_p0-w0: stopping experience collection -[2023-02-25 10:26:56,006][22587] Decorrelating experience for 96 frames... -[2023-02-25 10:26:56,300][22583] Decorrelating experience for 64 frames... -[2023-02-25 10:26:56,354][22567] Signal inference workers to resume experience collection... -[2023-02-25 10:26:56,356][22581] InferenceWorker_p0-w0: resuming experience collection -[2023-02-25 10:26:56,362][22567] Stopping Batcher_0... -[2023-02-25 10:26:56,363][22567] Loop batcher_evt_loop terminating... -[2023-02-25 10:26:56,364][22177] Component Batcher_0 stopped! -[2023-02-25 10:26:56,378][22177] Component RolloutWorker_w5 stopped! -[2023-02-25 10:26:56,378][22586] Stopping RolloutWorker_w5... -[2023-02-25 10:26:56,392][22586] Loop rollout_proc5_evt_loop terminating... -[2023-02-25 10:26:56,400][22585] Stopping RolloutWorker_w3... -[2023-02-25 10:26:56,401][22177] Component RolloutWorker_w3 stopped! -[2023-02-25 10:26:56,401][22585] Loop rollout_proc3_evt_loop terminating... -[2023-02-25 10:26:56,406][22177] Component RolloutWorker_w4 stopped! -[2023-02-25 10:26:56,408][22587] Stopping RolloutWorker_w4... -[2023-02-25 10:26:56,431][22177] Component RolloutWorker_w2 stopped! -[2023-02-25 10:26:56,433][22582] Stopping RolloutWorker_w2... -[2023-02-25 10:26:56,438][22582] Loop rollout_proc2_evt_loop terminating... -[2023-02-25 10:26:56,436][22587] Loop rollout_proc4_evt_loop terminating... -[2023-02-25 10:26:56,444][22177] Component RolloutWorker_w6 stopped! -[2023-02-25 10:26:56,446][22588] Stopping RolloutWorker_w6... -[2023-02-25 10:26:56,449][22588] Loop rollout_proc6_evt_loop terminating... -[2023-02-25 10:26:56,452][22177] Component RolloutWorker_w0 stopped! -[2023-02-25 10:26:56,455][22584] Stopping RolloutWorker_w0... 
-[2023-02-25 10:26:56,458][22584] Loop rollout_proc0_evt_loop terminating... -[2023-02-25 10:26:56,469][22589] Decorrelating experience for 64 frames... -[2023-02-25 10:26:56,486][22581] Weights refcount: 2 0 -[2023-02-25 10:26:56,497][22177] Component InferenceWorker_p0-w0 stopped! -[2023-02-25 10:26:56,497][22581] Stopping InferenceWorker_p0-w0... -[2023-02-25 10:26:56,502][22581] Loop inference_proc0-0_evt_loop terminating... -[2023-02-25 10:26:58,478][22567] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-25 10:26:58,574][22567] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000956_3915776.pth -[2023-02-25 10:26:58,587][22567] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-25 10:26:58,714][22177] Component LearnerWorker_p0 stopped! -[2023-02-25 10:26:58,716][22567] Stopping LearnerWorker_p0... -[2023-02-25 10:26:58,724][22567] Loop learner_proc0_evt_loop terminating... -[2023-02-25 10:26:59,202][22583] Decorrelating experience for 96 frames... -[2023-02-25 10:26:59,531][22177] Component RolloutWorker_w1 stopped! -[2023-02-25 10:26:59,539][22583] Stopping RolloutWorker_w1... -[2023-02-25 10:26:59,540][22583] Loop rollout_proc1_evt_loop terminating... -[2023-02-25 10:26:59,549][22589] Decorrelating experience for 96 frames... -[2023-02-25 10:26:59,741][22177] Component RolloutWorker_w7 stopped! -[2023-02-25 10:26:59,748][22177] Waiting for process learner_proc0 to stop... -[2023-02-25 10:26:59,753][22589] Stopping RolloutWorker_w7... -[2023-02-25 10:26:59,755][22589] Loop rollout_proc7_evt_loop terminating... -[2023-02-25 10:27:00,072][22177] Waiting for process inference_proc0-0 to join... -[2023-02-25 10:27:00,074][22177] Waiting for process rollout_proc0 to join... -[2023-02-25 10:27:00,078][22177] Waiting for process rollout_proc1 to join... -[2023-02-25 10:27:00,264][22177] Waiting for process rollout_proc2 to join... -[2023-02-25 10:27:00,265][22177] Waiting for process rollout_proc3 to join... -[2023-02-25 10:27:00,266][22177] Waiting for process rollout_proc4 to join... -[2023-02-25 10:27:00,268][22177] Waiting for process rollout_proc5 to join... -[2023-02-25 10:27:00,269][22177] Waiting for process rollout_proc6 to join... -[2023-02-25 10:27:00,270][22177] Waiting for process rollout_proc7 to join... 
-[2023-02-25 10:27:00,313][22177] Batcher 0 profile tree view: -batching: 0.0368, releasing_batches: 0.0014 -[2023-02-25 10:27:00,314][22177] InferenceWorker_p0-w0 profile tree view: -wait_policy: 0.0052 - wait_policy_total: 8.4476 -update_model: 0.0237 - weight_update: 0.0014 -one_step: 0.0791 - handle_policy_step: 4.2077 - deserialize: 0.0563, stack: 0.0162, obs_to_device_normalize: 0.3851, forward: 3.3138, send_messages: 0.0713 - prepare_outputs: 0.2529 - to_cpu: 0.1335 -[2023-02-25 10:27:00,317][22177] Learner 0 profile tree view: -misc: 0.0000, prepare_batch: 5.5813 -train: 0.6991 - epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0006, kl_divergence: 0.0006, after_optimizer: 0.0044 - calculate_losses: 0.1493 - losses_init: 0.0000, forward_head: 0.1165, bptt_initial: 0.0176, tail: 0.0018, advantages_returns: 0.0031, losses: 0.0031 - bptt: 0.0066 - bptt_forward_core: 0.0065 - update: 0.5433 - clip: 0.0039 -[2023-02-25 10:27:00,320][22177] RolloutWorker_w0 profile tree view: -wait_for_trajectories: 0.0010, enqueue_policy_requests: 1.0403, env_step: 3.2737, overhead: 0.1866, complete_rollouts: 0.0033 -save_policy_outputs: 0.0888 - split_output_tensors: 0.0503 -[2023-02-25 10:27:00,322][22177] RolloutWorker_w7 profile tree view: -wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0005 -[2023-02-25 10:27:00,325][22177] Loop Runner_EvtLoop terminating... -[2023-02-25 10:27:00,327][22177] Runner profile tree view: -main_loop: 40.8230 -[2023-02-25 10:27:00,330][22177] Collected {0: 4014080}, FPS: 200.7 -[2023-02-25 10:27:00,364][22177] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-25 10:27:00,365][22177] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-25 10:27:00,367][22177] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-25 10:27:00,370][22177] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-25 10:27:00,372][22177] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-25 10:27:00,373][22177] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-25 10:27:00,374][22177] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! -[2023-02-25 10:27:00,376][22177] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-25 10:27:00,377][22177] Adding new argument 'push_to_hub'=False that is not in the saved config file! -[2023-02-25 10:27:00,378][22177] Adding new argument 'hf_repository'=None that is not in the saved config file! -[2023-02-25 10:27:00,379][22177] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-25 10:27:00,386][22177] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-25 10:27:00,388][22177] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-25 10:27:00,390][22177] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
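The "Adding new argument" lines above show the evaluation script layering video options ('no_render'=True, 'save_video'=True, max_num_episodes=10, num_workers forced to 1) on top of the saved training config. A minimal sketch of such an evaluation run; sample_factory.enjoy.enjoy is the v2 evaluation entry point, and the registration import is the same assumption as in the training sketch above:

    from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
    from sample_factory.enjoy import enjoy
    from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed location

    register_vizdoom_components()
    argv = [
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",        # the command-line override logged above
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
    ]
    parser, _ = parse_sf_args(argv, evaluation=True)  # evaluation=True adds the enjoy-only args
    cfg = parse_full_cfg(parser, argv)
    status = enjoy(cfg)  # rolls out episodes and writes replay.mp4 into the experiment dir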
-[2023-02-25 10:27:00,392][22177] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-25 10:27:00,420][22177] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-25 10:27:00,423][22177] RunningMeanStd input shape: (3, 72, 128) -[2023-02-25 10:27:00,426][22177] RunningMeanStd input shape: (1,) -[2023-02-25 10:27:00,441][22177] ConvEncoder: input_channels=3 -[2023-02-25 10:27:01,082][22177] Conv encoder output size: 512 -[2023-02-25 10:27:01,085][22177] Policy head output size: 512 -[2023-02-25 10:27:03,413][22177] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-25 10:27:04,921][22177] Num frames 100... -[2023-02-25 10:27:05,081][22177] Num frames 200... -[2023-02-25 10:27:05,242][22177] Num frames 300... -[2023-02-25 10:27:05,411][22177] Num frames 400... -[2023-02-25 10:27:05,571][22177] Num frames 500... -[2023-02-25 10:27:05,727][22177] Num frames 600... -[2023-02-25 10:27:05,887][22177] Num frames 700... -[2023-02-25 10:27:06,043][22177] Num frames 800... -[2023-02-25 10:27:06,217][22177] Num frames 900... -[2023-02-25 10:27:06,377][22177] Num frames 1000... -[2023-02-25 10:27:06,537][22177] Num frames 1100... -[2023-02-25 10:27:06,718][22177] Num frames 1200... -[2023-02-25 10:27:06,883][22177] Num frames 1300... -[2023-02-25 10:27:07,105][22177] Num frames 1400... -[2023-02-25 10:27:07,390][22177] Num frames 1500... -[2023-02-25 10:27:07,589][22177] Num frames 1600... -[2023-02-25 10:27:07,758][22177] Num frames 1700... -[2023-02-25 10:27:07,922][22177] Num frames 1800... -[2023-02-25 10:27:08,071][22177] Num frames 1900... -[2023-02-25 10:27:08,190][22177] Num frames 2000... -[2023-02-25 10:27:08,305][22177] Num frames 2100... -[2023-02-25 10:27:08,358][22177] Avg episode rewards: #0: 59.999, true rewards: #0: 21.000 -[2023-02-25 10:27:08,359][22177] Avg episode reward: 59.999, avg true_objective: 21.000 -[2023-02-25 10:27:08,494][22177] Num frames 2200... -[2023-02-25 10:27:08,602][22177] Num frames 2300... -[2023-02-25 10:27:08,717][22177] Num frames 2400... -[2023-02-25 10:27:08,828][22177] Num frames 2500... -[2023-02-25 10:27:08,946][22177] Num frames 2600... -[2023-02-25 10:27:09,063][22177] Num frames 2700... -[2023-02-25 10:27:09,177][22177] Num frames 2800... -[2023-02-25 10:27:09,259][22177] Avg episode rewards: #0: 38.109, true rewards: #0: 14.110 -[2023-02-25 10:27:09,261][22177] Avg episode reward: 38.109, avg true_objective: 14.110 -[2023-02-25 10:27:09,355][22177] Num frames 2900... -[2023-02-25 10:27:09,483][22177] Num frames 3000... -[2023-02-25 10:27:09,604][22177] Num frames 3100... -[2023-02-25 10:27:09,724][22177] Num frames 3200... -[2023-02-25 10:27:09,841][22177] Num frames 3300... -[2023-02-25 10:27:09,954][22177] Num frames 3400... -[2023-02-25 10:27:10,072][22177] Num frames 3500... -[2023-02-25 10:27:10,195][22177] Num frames 3600... -[2023-02-25 10:27:10,306][22177] Num frames 3700... -[2023-02-25 10:27:10,423][22177] Num frames 3800... -[2023-02-25 10:27:10,570][22177] Num frames 3900... -[2023-02-25 10:27:10,696][22177] Num frames 4000... -[2023-02-25 10:27:10,825][22177] Num frames 4100... -[2023-02-25 10:27:10,942][22177] Num frames 4200... -[2023-02-25 10:27:11,059][22177] Num frames 4300... -[2023-02-25 10:27:11,180][22177] Avg episode rewards: #0: 37.523, true rewards: #0: 14.523 -[2023-02-25 10:27:11,182][22177] Avg episode reward: 37.523, avg true_objective: 14.523 -[2023-02-25 10:27:11,239][22177] Num frames 4400... 
-[2023-02-25 10:27:11,362][22177] Num frames 4500... -[2023-02-25 10:27:11,474][22177] Num frames 4600... -[2023-02-25 10:27:11,597][22177] Num frames 4700... -[2023-02-25 10:27:11,713][22177] Num frames 4800... -[2023-02-25 10:27:11,830][22177] Num frames 4900... -[2023-02-25 10:27:11,968][22177] Num frames 5000... -[2023-02-25 10:27:12,174][22177] Num frames 5100... -[2023-02-25 10:27:12,364][22177] Num frames 5200... -[2023-02-25 10:27:12,579][22177] Num frames 5300... -[2023-02-25 10:27:12,764][22177] Num frames 5400... -[2023-02-25 10:27:12,957][22177] Num frames 5500... -[2023-02-25 10:27:13,132][22177] Num frames 5600... -[2023-02-25 10:27:13,327][22177] Num frames 5700... -[2023-02-25 10:27:13,533][22177] Num frames 5800... -[2023-02-25 10:27:13,709][22177] Num frames 5900... -[2023-02-25 10:27:13,806][22177] Avg episode rewards: #0: 38.052, true rewards: #0: 14.802 -[2023-02-25 10:27:13,810][22177] Avg episode reward: 38.052, avg true_objective: 14.802 -[2023-02-25 10:27:13,943][22177] Num frames 6000... -[2023-02-25 10:27:14,155][22177] Num frames 6100... -[2023-02-25 10:27:14,370][22177] Num frames 6200... -[2023-02-25 10:27:14,595][22177] Num frames 6300... -[2023-02-25 10:27:14,766][22177] Num frames 6400... -[2023-02-25 10:27:14,936][22177] Num frames 6500... -[2023-02-25 10:27:15,148][22177] Num frames 6600... -[2023-02-25 10:27:15,377][22177] Num frames 6700... -[2023-02-25 10:27:15,572][22177] Num frames 6800... -[2023-02-25 10:27:15,836][22177] Num frames 6900... -[2023-02-25 10:27:16,032][22177] Num frames 7000... -[2023-02-25 10:27:16,327][22177] Num frames 7100... -[2023-02-25 10:27:16,518][22177] Num frames 7200... -[2023-02-25 10:27:16,873][22177] Num frames 7300... -[2023-02-25 10:27:17,109][22177] Num frames 7400... -[2023-02-25 10:27:17,417][22177] Num frames 7500... -[2023-02-25 10:27:17,615][22177] Num frames 7600... -[2023-02-25 10:27:17,808][22177] Num frames 7700... -[2023-02-25 10:27:18,005][22177] Num frames 7800... -[2023-02-25 10:27:18,128][22177] Avg episode rewards: #0: 39.833, true rewards: #0: 15.634 -[2023-02-25 10:27:18,130][22177] Avg episode reward: 39.833, avg true_objective: 15.634 -[2023-02-25 10:27:18,459][22177] Num frames 7900... -[2023-02-25 10:27:18,862][22177] Num frames 8000... -[2023-02-25 10:27:19,074][22177] Num frames 8100... -[2023-02-25 10:27:19,234][22177] Num frames 8200... -[2023-02-25 10:27:19,397][22177] Num frames 8300... -[2023-02-25 10:27:19,559][22177] Num frames 8400... -[2023-02-25 10:27:19,716][22177] Num frames 8500... -[2023-02-25 10:27:19,889][22177] Num frames 8600... -[2023-02-25 10:27:20,049][22177] Num frames 8700... -[2023-02-25 10:27:20,209][22177] Num frames 8800... -[2023-02-25 10:27:20,373][22177] Num frames 8900... -[2023-02-25 10:27:20,539][22177] Num frames 9000... -[2023-02-25 10:27:20,700][22177] Num frames 9100... -[2023-02-25 10:27:20,864][22177] Num frames 9200... -[2023-02-25 10:27:21,030][22177] Num frames 9300... -[2023-02-25 10:27:21,226][22177] Avg episode rewards: #0: 39.133, true rewards: #0: 15.633 -[2023-02-25 10:27:21,229][22177] Avg episode reward: 39.133, avg true_objective: 15.633 -[2023-02-25 10:27:21,267][22177] Num frames 9400... -[2023-02-25 10:27:21,424][22177] Num frames 9500... -[2023-02-25 10:27:21,588][22177] Num frames 9600... -[2023-02-25 10:27:21,748][22177] Num frames 9700... -[2023-02-25 10:27:21,867][22177] Num frames 9800... -[2023-02-25 10:27:21,990][22177] Num frames 9900... -[2023-02-25 10:27:22,107][22177] Num frames 10000... 
-[2023-02-25 10:27:22,222][22177] Num frames 10100... -[2023-02-25 10:27:22,334][22177] Num frames 10200... -[2023-02-25 10:27:22,446][22177] Num frames 10300... -[2023-02-25 10:27:22,559][22177] Num frames 10400... -[2023-02-25 10:27:22,676][22177] Num frames 10500... -[2023-02-25 10:27:22,792][22177] Num frames 10600... -[2023-02-25 10:27:22,908][22177] Num frames 10700... -[2023-02-25 10:27:23,029][22177] Num frames 10800... -[2023-02-25 10:27:23,141][22177] Num frames 10900... -[2023-02-25 10:27:23,254][22177] Num frames 11000... -[2023-02-25 10:27:23,367][22177] Num frames 11100... -[2023-02-25 10:27:23,483][22177] Num frames 11200... -[2023-02-25 10:27:23,597][22177] Num frames 11300... -[2023-02-25 10:27:23,714][22177] Num frames 11400... -[2023-02-25 10:27:23,858][22177] Avg episode rewards: #0: 40.828, true rewards: #0: 16.400 -[2023-02-25 10:27:23,860][22177] Avg episode reward: 40.828, avg true_objective: 16.400 -[2023-02-25 10:27:23,889][22177] Num frames 11500... -[2023-02-25 10:27:24,009][22177] Num frames 11600... -[2023-02-25 10:27:24,119][22177] Num frames 11700... -[2023-02-25 10:27:24,232][22177] Num frames 11800... -[2023-02-25 10:27:24,391][22177] Avg episode rewards: #0: 36.495, true rewards: #0: 14.870 -[2023-02-25 10:27:24,396][22177] Avg episode reward: 36.495, avg true_objective: 14.870 -[2023-02-25 10:27:24,405][22177] Num frames 11900... -[2023-02-25 10:27:24,529][22177] Num frames 12000... -[2023-02-25 10:27:24,652][22177] Num frames 12100... -[2023-02-25 10:27:24,765][22177] Num frames 12200... -[2023-02-25 10:27:24,887][22177] Num frames 12300... -[2023-02-25 10:27:25,009][22177] Num frames 12400... -[2023-02-25 10:27:25,128][22177] Num frames 12500... -[2023-02-25 10:27:25,240][22177] Num frames 12600... -[2023-02-25 10:27:25,312][22177] Avg episode rewards: #0: 33.792, true rewards: #0: 14.014 -[2023-02-25 10:27:25,313][22177] Avg episode reward: 33.792, avg true_objective: 14.014 -[2023-02-25 10:27:25,420][22177] Num frames 12700... -[2023-02-25 10:27:25,550][22177] Num frames 12800... -[2023-02-25 10:27:25,672][22177] Num frames 12900... -[2023-02-25 10:27:25,799][22177] Num frames 13000... -[2023-02-25 10:27:25,920][22177] Num frames 13100... -[2023-02-25 10:27:26,040][22177] Num frames 13200... -[2023-02-25 10:27:26,158][22177] Num frames 13300... -[2023-02-25 10:27:26,278][22177] Num frames 13400... -[2023-02-25 10:27:26,388][22177] Num frames 13500... -[2023-02-25 10:27:26,503][22177] Num frames 13600... -[2023-02-25 10:27:26,615][22177] Num frames 13700... -[2023-02-25 10:27:26,726][22177] Num frames 13800... -[2023-02-25 10:27:26,850][22177] Num frames 13900... -[2023-02-25 10:27:26,968][22177] Num frames 14000... -[2023-02-25 10:27:27,080][22177] Num frames 14100... -[2023-02-25 10:27:27,196][22177] Num frames 14200... -[2023-02-25 10:27:27,267][22177] Avg episode rewards: #0: 34.513, true rewards: #0: 14.213 -[2023-02-25 10:27:27,275][22177] Avg episode reward: 34.513, avg true_objective: 14.213 -[2023-02-25 10:29:01,782][22177] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-25 10:29:02,433][22177] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-25 10:29:02,436][22177] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-25 10:29:02,439][22177] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-25 10:29:02,442][22177] Adding new argument 'save_video'=True that is not in the saved config file! 
-[2023-02-25 10:29:02,444][22177] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-25 10:29:02,446][22177] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-25 10:29:02,449][22177] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! -[2023-02-25 10:29:02,450][22177] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-25 10:29:02,452][22177] Adding new argument 'push_to_hub'=True that is not in the saved config file! -[2023-02-25 10:29:02,453][22177] Adding new argument 'hf_repository'='RegisGraptin/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! -[2023-02-25 10:29:02,454][22177] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-25 10:29:02,456][22177] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-25 10:29:02,457][22177] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-25 10:29:02,458][22177] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-25 10:29:02,460][22177] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-25 10:29:02,485][22177] RunningMeanStd input shape: (3, 72, 128) -[2023-02-25 10:29:02,490][22177] RunningMeanStd input shape: (1,) -[2023-02-25 10:29:02,513][22177] ConvEncoder: input_channels=3 -[2023-02-25 10:29:02,600][22177] Conv encoder output size: 512 -[2023-02-25 10:29:02,606][22177] Policy head output size: 512 -[2023-02-25 10:29:02,653][22177] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-25 10:29:03,375][22177] Num frames 100... -[2023-02-25 10:29:03,530][22177] Num frames 200... -[2023-02-25 10:29:03,639][22177] Num frames 300... -[2023-02-25 10:29:03,746][22177] Num frames 400... -[2023-02-25 10:29:03,859][22177] Num frames 500... -[2023-02-25 10:29:03,971][22177] Num frames 600... -[2023-02-25 10:29:04,036][22177] Avg episode rewards: #0: 9.080, true rewards: #0: 6.080 -[2023-02-25 10:29:04,040][22177] Avg episode reward: 9.080, avg true_objective: 6.080 -[2023-02-25 10:29:04,156][22177] Num frames 700... -[2023-02-25 10:29:04,283][22177] Num frames 800... -[2023-02-25 10:29:04,403][22177] Num frames 900... -[2023-02-25 10:29:04,607][22177] Num frames 1000... -[2023-02-25 10:29:04,720][22177] Num frames 1100... -[2023-02-25 10:29:04,839][22177] Num frames 1200... -[2023-02-25 10:29:04,956][22177] Num frames 1300... -[2023-02-25 10:29:05,068][22177] Num frames 1400... -[2023-02-25 10:29:05,187][22177] Num frames 1500... -[2023-02-25 10:29:05,297][22177] Num frames 1600... -[2023-02-25 10:29:05,430][22177] Num frames 1700... -[2023-02-25 10:29:05,545][22177] Num frames 1800... -[2023-02-25 10:29:05,676][22177] Num frames 1900... -[2023-02-25 10:29:05,788][22177] Num frames 2000... -[2023-02-25 10:29:05,921][22177] Num frames 2100... -[2023-02-25 10:29:06,039][22177] Num frames 2200... -[2023-02-25 10:29:06,155][22177] Num frames 2300... -[2023-02-25 10:29:06,268][22177] Num frames 2400... -[2023-02-25 10:29:06,378][22177] Num frames 2500... -[2023-02-25 10:29:06,488][22177] Num frames 2600... -[2023-02-25 10:29:06,603][22177] Num frames 2700... 
-[2023-02-25 10:29:06,668][22177] Avg episode rewards: #0: 33.539, true rewards: #0: 13.540 -[2023-02-25 10:29:06,670][22177] Avg episode reward: 33.539, avg true_objective: 13.540 -[2023-02-25 10:29:06,774][22177] Num frames 2800... -[2023-02-25 10:29:06,898][22177] Num frames 2900... -[2023-02-25 10:29:07,014][22177] Num frames 3000... -[2023-02-25 10:29:07,121][22177] Num frames 3100... -[2023-02-25 10:29:07,229][22177] Num frames 3200... -[2023-02-25 10:29:07,306][22177] Avg episode rewards: #0: 24.400, true rewards: #0: 10.733 -[2023-02-25 10:29:07,309][22177] Avg episode reward: 24.400, avg true_objective: 10.733 -[2023-02-25 10:29:07,407][22177] Num frames 3300... -[2023-02-25 10:29:07,530][22177] Num frames 3400... -[2023-02-25 10:29:07,648][22177] Num frames 3500... -[2023-02-25 10:29:07,760][22177] Num frames 3600... -[2023-02-25 10:29:07,876][22177] Num frames 3700... -[2023-02-25 10:29:08,006][22177] Num frames 3800... -[2023-02-25 10:29:08,120][22177] Num frames 3900... -[2023-02-25 10:29:08,202][22177] Avg episode rewards: #0: 22.310, true rewards: #0: 9.810 -[2023-02-25 10:29:08,206][22177] Avg episode reward: 22.310, avg true_objective: 9.810 -[2023-02-25 10:29:08,296][22177] Num frames 4000... -[2023-02-25 10:29:08,408][22177] Num frames 4100... -[2023-02-25 10:29:08,535][22177] Num frames 4200... -[2023-02-25 10:29:08,605][22177] Avg episode rewards: #0: 18.824, true rewards: #0: 8.424 -[2023-02-25 10:29:08,607][22177] Avg episode reward: 18.824, avg true_objective: 8.424 -[2023-02-25 10:29:08,717][22177] Num frames 4300... -[2023-02-25 10:29:08,841][22177] Num frames 4400... -[2023-02-25 10:29:08,976][22177] Num frames 4500... -[2023-02-25 10:29:09,096][22177] Num frames 4600... -[2023-02-25 10:29:09,210][22177] Num frames 4700... -[2023-02-25 10:29:09,320][22177] Num frames 4800... -[2023-02-25 10:29:09,436][22177] Num frames 4900... -[2023-02-25 10:29:09,553][22177] Num frames 5000... -[2023-02-25 10:29:09,667][22177] Num frames 5100... -[2023-02-25 10:29:09,776][22177] Num frames 5200... -[2023-02-25 10:29:09,890][22177] Num frames 5300... -[2023-02-25 10:29:10,011][22177] Num frames 5400... -[2023-02-25 10:29:10,127][22177] Num frames 5500... -[2023-02-25 10:29:10,239][22177] Num frames 5600... -[2023-02-25 10:29:10,362][22177] Num frames 5700... -[2023-02-25 10:29:10,521][22177] Avg episode rewards: #0: 22.815, true rewards: #0: 9.648 -[2023-02-25 10:29:10,524][22177] Avg episode reward: 22.815, avg true_objective: 9.648 -[2023-02-25 10:29:10,539][22177] Num frames 5800... -[2023-02-25 10:29:10,658][22177] Num frames 5900... -[2023-02-25 10:29:10,782][22177] Num frames 6000... -[2023-02-25 10:29:10,904][22177] Num frames 6100... -[2023-02-25 10:29:11,030][22177] Num frames 6200... -[2023-02-25 10:29:11,148][22177] Num frames 6300... -[2023-02-25 10:29:11,259][22177] Num frames 6400... -[2023-02-25 10:29:11,369][22177] Num frames 6500... -[2023-02-25 10:29:11,479][22177] Num frames 6600... -[2023-02-25 10:29:11,588][22177] Num frames 6700... -[2023-02-25 10:29:11,700][22177] Avg episode rewards: #0: 22.780, true rewards: #0: 9.637 -[2023-02-25 10:29:11,702][22177] Avg episode reward: 22.780, avg true_objective: 9.637 -[2023-02-25 10:29:11,767][22177] Num frames 6800... -[2023-02-25 10:29:11,885][22177] Num frames 6900... -[2023-02-25 10:29:12,007][22177] Num frames 7000... -[2023-02-25 10:29:12,129][22177] Num frames 7100... -[2023-02-25 10:29:12,239][22177] Num frames 7200... -[2023-02-25 10:29:12,353][22177] Num frames 7300... 
-[2023-02-25 10:29:12,462][22177] Num frames 7400... -[2023-02-25 10:29:12,580][22177] Num frames 7500... -[2023-02-25 10:29:12,690][22177] Num frames 7600... -[2023-02-25 10:29:12,853][22177] Num frames 7700... -[2023-02-25 10:29:13,023][22177] Num frames 7800... -[2023-02-25 10:29:13,184][22177] Num frames 7900... -[2023-02-25 10:29:13,337][22177] Num frames 8000... -[2023-02-25 10:29:13,495][22177] Num frames 8100... -[2023-02-25 10:29:13,646][22177] Num frames 8200... -[2023-02-25 10:29:13,800][22177] Num frames 8300... -[2023-02-25 10:29:13,881][22177] Avg episode rewards: #0: 24.892, true rewards: #0: 10.392 -[2023-02-25 10:29:13,884][22177] Avg episode reward: 24.892, avg true_objective: 10.392 -[2023-02-25 10:29:14,029][22177] Num frames 8400... -[2023-02-25 10:29:14,196][22177] Num frames 8500... -[2023-02-25 10:29:14,359][22177] Num frames 8600... -[2023-02-25 10:29:14,524][22177] Num frames 8700... -[2023-02-25 10:29:14,681][22177] Num frames 8800... -[2023-02-25 10:29:14,845][22177] Num frames 8900... -[2023-02-25 10:29:15,023][22177] Num frames 9000... -[2023-02-25 10:29:15,226][22177] Avg episode rewards: #0: 23.647, true rewards: #0: 10.091 -[2023-02-25 10:29:15,228][22177] Avg episode reward: 23.647, avg true_objective: 10.091 -[2023-02-25 10:29:15,265][22177] Num frames 9100... -[2023-02-25 10:29:15,435][22177] Num frames 9200... -[2023-02-25 10:29:15,606][22177] Num frames 9300... -[2023-02-25 10:29:15,733][22177] Num frames 9400... -[2023-02-25 10:29:15,848][22177] Num frames 9500... -[2023-02-25 10:29:15,967][22177] Num frames 9600... -[2023-02-25 10:29:16,088][22177] Num frames 9700... -[2023-02-25 10:29:16,213][22177] Num frames 9800... -[2023-02-25 10:29:16,329][22177] Num frames 9900... -[2023-02-25 10:29:16,447][22177] Num frames 10000... -[2023-02-25 10:29:16,564][22177] Num frames 10100... -[2023-02-25 10:29:16,683][22177] Num frames 10200... -[2023-02-25 10:29:16,795][22177] Num frames 10300... -[2023-02-25 10:29:16,910][22177] Num frames 10400... -[2023-02-25 10:29:17,027][22177] Num frames 10500... -[2023-02-25 10:29:17,158][22177] Num frames 10600... -[2023-02-25 10:29:17,282][22177] Num frames 10700... -[2023-02-25 10:29:17,399][22177] Num frames 10800... -[2023-02-25 10:29:17,476][22177] Avg episode rewards: #0: 25.919, true rewards: #0: 10.819 -[2023-02-25 10:29:17,477][22177] Avg episode reward: 25.919, avg true_objective: 10.819 -[2023-02-25 10:30:25,058][22177] Replay video saved to /content/train_dir/default_experiment/replay.mp4! +[2023-02-25 13:38:04,128][12809] Worker 4 uses CPU cores [0] +[2023-02-25 13:38:04,180][12813] Worker 3 uses CPU cores [1] +[2023-02-25 13:38:04,182][12822] Worker 7 uses CPU cores [1] +[2023-02-25 13:38:04,211][12819] Worker 5 uses CPU cores [1] +[2023-02-25 13:38:04,243][12814] Worker 6 uses CPU cores [0] +[2023-02-25 13:38:06,384][12789] Using optimizer +[2023-02-25 13:38:06,385][12789] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth... +[2023-02-25 13:38:06,399][12789] Loading model from checkpoint +[2023-02-25 13:38:06,400][12789] Loaded experiment state at self.train_step=0, self.env_steps=0 +[2023-02-25 13:38:06,401][12789] Initialized policy 0 weights for model version 0 +[2023-02-25 13:38:06,403][12789] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-25 13:38:06,414][12789] LearnerWorker_p0 finished initialization! 
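At this point the restarted learner restores the zero-step checkpoint written during the interrupted run (train_step=0, env_steps=0), so training effectively begins from scratch. Such a checkpoint is an ordinary PyTorch file and can be inspected directly; this log does not enumerate its keys, though the restored train_step/env_steps counters evidently live there alongside the model weights:

    import torch

    ckpt = torch.load(
        "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth",
        map_location="cpu",  # inspect on CPU even though training used cuda:0
    )
    print(sorted(ckpt.keys()))  # whatever Sample Factory stored: weights, optimizer state, counters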
+[2023-02-25 13:38:06,528][12803] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 13:38:06,530][12803] RunningMeanStd input shape: (1,)
+[2023-02-25 13:38:06,549][12803] ConvEncoder: input_channels=3
+[2023-02-25 13:38:06,657][12803] Conv encoder output size: 512
+[2023-02-25 13:38:06,657][12803] Policy head output size: 512
+[2023-02-25 13:38:09,041][00699] Inference worker 0-0 is ready!
+[2023-02-25 13:38:09,043][00699] All inference workers are ready! Signal rollout workers to start!
+[2023-02-25 13:38:09,155][12819] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,169][12822] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,168][12804] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,166][12813] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,196][00699] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 13:38:09,211][12808] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,220][12814] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,223][12809] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,231][12805] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 13:38:09,665][12808] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,341][12808] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:10,368][12814] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,631][12819] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,641][12813] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,639][12804] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,646][12822] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:10,783][00699] Heartbeat connected on Batcher_0
+[2023-02-25 13:38:10,789][00699] Heartbeat connected on LearnerWorker_p0
+[2023-02-25 13:38:10,827][00699] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-25 13:38:10,966][12808] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:11,731][12819] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:11,747][12813] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:11,745][12822] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:11,774][12805] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:11,778][12809] Decorrelating experience for 0 frames...
+[2023-02-25 13:38:12,181][12808] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:12,375][00699] Heartbeat connected on RolloutWorker_w2
+[2023-02-25 13:38:12,915][12814] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:12,938][12805] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:13,102][12804] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:13,276][12809] Decorrelating experience for 32 frames...
+[2023-02-25 13:38:13,696][12805] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:13,771][12813] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:14,196][00699] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 13:38:14,266][12809] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:14,973][12814] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:15,029][12809] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:15,172][00699] Heartbeat connected on RolloutWorker_w4
+[2023-02-25 13:38:15,531][12822] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:16,136][12804] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:16,653][12814] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:16,733][00699] Heartbeat connected on RolloutWorker_w6
+[2023-02-25 13:38:17,407][12819] Decorrelating experience for 64 frames...
+[2023-02-25 13:38:17,940][12805] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:18,032][00699] Heartbeat connected on RolloutWorker_w0
+[2023-02-25 13:38:18,531][12822] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:18,964][12813] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:19,024][00699] Heartbeat connected on RolloutWorker_w7
+[2023-02-25 13:38:19,142][12804] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:19,196][00699] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 2.6. Samples: 26. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 13:38:19,318][00699] Heartbeat connected on RolloutWorker_w3
+[2023-02-25 13:38:19,633][00699] Heartbeat connected on RolloutWorker_w1
+[2023-02-25 13:38:20,921][12819] Decorrelating experience for 96 frames...
+[2023-02-25 13:38:21,595][00699] Heartbeat connected on RolloutWorker_w5
+[2023-02-25 13:38:23,189][12789] Signal inference workers to stop experience collection...
+[2023-02-25 13:38:23,196][12803] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-25 13:38:24,196][00699] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 177.5. Samples: 2662. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 13:38:24,202][00699] Avg episode reward: [(0, '2.199')]
+[2023-02-25 13:38:25,924][12789] Signal inference workers to resume experience collection...
+[2023-02-25 13:38:25,925][12803] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-25 13:38:29,199][00699] Fps is (10 sec: 1638.0, 60 sec: 819.1, 300 sec: 819.1). Total num frames: 16384. Throughput: 0: 179.2. Samples: 3584. Policy #0 lag: (min: 0.0, avg: 1.1, max: 2.0)
+[2023-02-25 13:38:29,203][00699] Avg episode reward: [(0, '3.435')]
+[2023-02-25 13:38:34,197][00699] Fps is (10 sec: 3276.7, 60 sec: 1310.7, 300 sec: 1310.7). Total num frames: 32768. Throughput: 0: 362.8. Samples: 9070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:38:34,204][00699] Avg episode reward: [(0, '3.939')]
+[2023-02-25 13:38:35,652][12803] Updated weights for policy 0, policy_version 10 (0.0018)
+[2023-02-25 13:38:39,200][00699] Fps is (10 sec: 3276.2, 60 sec: 1638.2, 300 sec: 1638.2). Total num frames: 49152. Throughput: 0: 445.7. Samples: 13374. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:38:39,211][00699] Avg episode reward: [(0, '4.401')]
+[2023-02-25 13:38:44,196][00699] Fps is (10 sec: 3276.9, 60 sec: 1872.5, 300 sec: 1872.5). Total num frames: 65536. Throughput: 0: 443.0. Samples: 15504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:38:44,200][00699] Avg episode reward: [(0, '4.297')]
+[2023-02-25 13:38:47,260][12803] Updated weights for policy 0, policy_version 20 (0.0013)
+[2023-02-25 13:38:49,196][00699] Fps is (10 sec: 4097.6, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 90112. Throughput: 0: 551.7. Samples: 22066. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:38:49,204][00699] Avg episode reward: [(0, '4.240')]
+[2023-02-25 13:38:54,196][00699] Fps is (10 sec: 4505.6, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 110592. Throughput: 0: 624.9. Samples: 28122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:38:54,201][00699] Avg episode reward: [(0, '4.268')]
+[2023-02-25 13:38:54,212][12789] Saving new best policy, reward=4.268!
+[2023-02-25 13:38:59,197][00699] Fps is (10 sec: 2867.2, 60 sec: 2375.7, 300 sec: 2375.7). Total num frames: 118784. Throughput: 0: 668.4. Samples: 30078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:38:59,206][00699] Avg episode reward: [(0, '4.382')]
+[2023-02-25 13:38:59,265][12789] Saving new best policy, reward=4.382!
+[2023-02-25 13:38:59,287][12803] Updated weights for policy 0, policy_version 30 (0.0026)
+[2023-02-25 13:39:04,196][00699] Fps is (10 sec: 2048.0, 60 sec: 2383.1, 300 sec: 2383.1). Total num frames: 131072. Throughput: 0: 739.6. Samples: 33310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:39:04,204][00699] Avg episode reward: [(0, '4.457')]
+[2023-02-25 13:39:04,206][12789] Saving new best policy, reward=4.457!
+[2023-02-25 13:39:09,197][00699] Fps is (10 sec: 2457.6, 60 sec: 2389.3, 300 sec: 2389.3). Total num frames: 143360. Throughput: 0: 761.1. Samples: 36912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:39:09,203][00699] Avg episode reward: [(0, '4.486')]
+[2023-02-25 13:39:09,216][12789] Saving new best policy, reward=4.486!
+[2023-02-25 13:39:13,707][12803] Updated weights for policy 0, policy_version 40 (0.0038)
+[2023-02-25 13:39:14,196][00699] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2520.6). Total num frames: 163840. Throughput: 0: 801.7. Samples: 39660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:39:14,202][00699] Avg episode reward: [(0, '4.470')]
+[2023-02-25 13:39:19,197][00699] Fps is (10 sec: 4096.1, 60 sec: 3072.0, 300 sec: 2633.1). Total num frames: 184320. Throughput: 0: 812.4. Samples: 45626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:39:19,202][00699] Avg episode reward: [(0, '4.309')]
+[2023-02-25 13:39:24,199][00699] Fps is (10 sec: 3276.1, 60 sec: 3276.7, 300 sec: 2621.4). Total num frames: 196608. Throughput: 0: 812.0. Samples: 49914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:39:24,202][00699] Avg episode reward: [(0, '4.356')]
+[2023-02-25 13:39:26,654][12803] Updated weights for policy 0, policy_version 50 (0.0022)
+[2023-02-25 13:39:29,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3276.9, 300 sec: 2662.4). Total num frames: 212992. Throughput: 0: 810.8. Samples: 51992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:39:29,199][00699] Avg episode reward: [(0, '4.447')]
+[2023-02-25 13:39:34,196][00699] Fps is (10 sec: 4096.9, 60 sec: 3413.3, 300 sec: 2794.9). Total num frames: 237568. Throughput: 0: 813.2. Samples: 58658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:39:34,204][00699] Avg episode reward: [(0, '4.415')]
+[2023-02-25 13:39:36,075][12803] Updated weights for policy 0, policy_version 60 (0.0018)
+[2023-02-25 13:39:39,198][00699] Fps is (10 sec: 4095.5, 60 sec: 3413.5, 300 sec: 2821.6). Total num frames: 253952. Throughput: 0: 807.1. Samples: 64442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:39:39,206][00699] Avg episode reward: [(0, '4.347')]
+[2023-02-25 13:39:44,199][00699] Fps is (10 sec: 2866.5, 60 sec: 3344.9, 300 sec: 2802.5). Total num frames: 266240. Throughput: 0: 808.7. Samples: 66470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:39:44,206][00699] Avg episode reward: [(0, '4.475')]
+[2023-02-25 13:39:49,198][00699] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 2826.2). Total num frames: 282624. Throughput: 0: 825.4. Samples: 70452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:39:49,205][00699] Avg episode reward: [(0, '4.382')]
+[2023-02-25 13:39:49,218][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth...
+[2023-02-25 13:39:49,889][12803] Updated weights for policy 0, policy_version 70 (0.0017)
+[2023-02-25 13:39:54,196][00699] Fps is (10 sec: 3687.2, 60 sec: 3208.5, 300 sec: 2886.7). Total num frames: 303104. Throughput: 0: 882.8. Samples: 76640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:39:54,203][00699] Avg episode reward: [(0, '4.416')]
+[2023-02-25 13:39:59,196][00699] Fps is (10 sec: 4096.5, 60 sec: 3413.3, 300 sec: 2941.7). Total num frames: 323584. Throughput: 0: 890.4. Samples: 79726. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:39:59,199][00699] Avg episode reward: [(0, '4.471')]
+[2023-02-25 13:40:00,583][12803] Updated weights for policy 0, policy_version 80 (0.0013)
+[2023-02-25 13:40:04,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2920.6). Total num frames: 335872. Throughput: 0: 854.7. Samples: 84088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:40:04,208][00699] Avg episode reward: [(0, '4.442')]
+[2023-02-25 13:40:09,197][00699] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 2901.3). Total num frames: 348160. Throughput: 0: 852.1. Samples: 88258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:40:09,203][00699] Avg episode reward: [(0, '4.489')]
+[2023-02-25 13:40:09,218][12789] Saving new best policy, reward=4.489!
+[2023-02-25 13:40:13,388][12803] Updated weights for policy 0, policy_version 90 (0.0026)
+[2023-02-25 13:40:14,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2949.1). Total num frames: 368640. Throughput: 0: 872.2. Samples: 91242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:40:14,199][00699] Avg episode reward: [(0, '4.461')]
+[2023-02-25 13:40:19,199][00699] Fps is (10 sec: 4095.0, 60 sec: 3413.2, 300 sec: 2993.2). Total num frames: 389120. Throughput: 0: 863.8. Samples: 97530. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:40:19,207][00699] Avg episode reward: [(0, '4.482')]
+[2023-02-25 13:40:24,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 2973.4). Total num frames: 401408. Throughput: 0: 831.8. Samples: 101874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:40:24,203][00699] Avg episode reward: [(0, '4.534')]
+[2023-02-25 13:40:24,267][12789] Saving new best policy, reward=4.534!
+[2023-02-25 13:40:25,830][12803] Updated weights for policy 0, policy_version 100 (0.0014)
+[2023-02-25 13:40:29,196][00699] Fps is (10 sec: 2868.0, 60 sec: 3413.3, 300 sec: 2984.2). Total num frames: 417792. Throughput: 0: 828.8. Samples: 103766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:40:29,203][00699] Avg episode reward: [(0, '4.524')]
+[2023-02-25 13:40:34,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3022.6). Total num frames: 438272. Throughput: 0: 865.7. Samples: 109408. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:40:34,203][00699] Avg episode reward: [(0, '4.426')]
+[2023-02-25 13:40:36,606][12803] Updated weights for policy 0, policy_version 110 (0.0028)
+[2023-02-25 13:40:39,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3058.3). Total num frames: 458752. Throughput: 0: 873.7. Samples: 115956. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:40:39,199][00699] Avg episode reward: [(0, '4.391')]
+[2023-02-25 13:40:44,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3065.4). Total num frames: 475136. Throughput: 0: 852.6. Samples: 118092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:40:44,199][00699] Avg episode reward: [(0, '4.386')]
+[2023-02-25 13:40:49,197][00699] Fps is (10 sec: 2867.1, 60 sec: 3413.4, 300 sec: 3046.4). Total num frames: 487424. Throughput: 0: 847.5. Samples: 122224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:40:49,201][00699] Avg episode reward: [(0, '4.395')]
+[2023-02-25 13:40:49,737][12803] Updated weights for policy 0, policy_version 120 (0.0034)
+[2023-02-25 13:40:54,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3078.2). Total num frames: 507904. Throughput: 0: 887.2. Samples: 128182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:40:54,204][00699] Avg episode reward: [(0, '4.368')]
+[2023-02-25 13:40:59,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3108.1). Total num frames: 528384. Throughput: 0: 893.0. Samples: 131428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:40:59,198][00699] Avg episode reward: [(0, '4.583')]
+[2023-02-25 13:40:59,214][12789] Saving new best policy, reward=4.583!
+[2023-02-25 13:40:59,218][12803] Updated weights for policy 0, policy_version 130 (0.0026)
+[2023-02-25 13:41:04,198][00699] Fps is (10 sec: 3685.8, 60 sec: 3481.5, 300 sec: 3112.9). Total num frames: 544768. Throughput: 0: 864.6. Samples: 136438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:41:04,206][00699] Avg episode reward: [(0, '4.595')]
+[2023-02-25 13:41:04,208][12789] Saving new best policy, reward=4.595!
+[2023-02-25 13:41:09,197][00699] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3094.7). Total num frames: 557056. Throughput: 0: 857.6. Samples: 140466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:41:09,205][00699] Avg episode reward: [(0, '4.544')]
+[2023-02-25 13:41:12,959][12803] Updated weights for policy 0, policy_version 140 (0.0013)
+[2023-02-25 13:41:14,196][00699] Fps is (10 sec: 3277.3, 60 sec: 3481.6, 300 sec: 3121.8). Total num frames: 577536. Throughput: 0: 871.3. Samples: 142976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:41:14,202][00699] Avg episode reward: [(0, '4.354')]
+[2023-02-25 13:41:19,197][00699] Fps is (10 sec: 4096.2, 60 sec: 3481.7, 300 sec: 3147.4). Total num frames: 598016. Throughput: 0: 892.4. Samples: 149564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:41:19,202][00699] Avg episode reward: [(0, '4.352')]
+[2023-02-25 13:41:23,498][12803] Updated weights for policy 0, policy_version 150 (0.0012)
+[2023-02-25 13:41:24,214][00699] Fps is (10 sec: 3679.9, 60 sec: 3548.8, 300 sec: 3150.5). Total num frames: 614400. Throughput: 0: 858.0. Samples: 154580. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 13:41:24,217][00699] Avg episode reward: [(0, '4.365')]
+[2023-02-25 13:41:29,197][00699] Fps is (10 sec: 2867.0, 60 sec: 3481.5, 300 sec: 3133.4). Total num frames: 626688. Throughput: 0: 855.9. Samples: 156610. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 13:41:29,204][00699] Avg episode reward: [(0, '4.388')]
+[2023-02-25 13:41:34,196][00699] Fps is (10 sec: 3282.5, 60 sec: 3481.6, 300 sec: 3156.9). Total num frames: 647168. Throughput: 0: 872.9. Samples: 161504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:41:34,199][00699] Avg episode reward: [(0, '4.522')]
+[2023-02-25 13:41:35,799][12803] Updated weights for policy 0, policy_version 160 (0.0020)
+[2023-02-25 13:41:39,196][00699] Fps is (10 sec: 4096.4, 60 sec: 3481.6, 300 sec: 3179.3). Total num frames: 667648. Throughput: 0: 886.0. Samples: 168054. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:41:39,199][00699] Avg episode reward: [(0, '4.532')]
+[2023-02-25 13:41:44,197][00699] Fps is (10 sec: 3686.1, 60 sec: 3481.6, 300 sec: 3181.5). Total num frames: 684032. Throughput: 0: 873.9. Samples: 170756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:41:44,200][00699] Avg episode reward: [(0, '4.544')]
+[2023-02-25 13:41:47,745][12803] Updated weights for policy 0, policy_version 170 (0.0024)
+[2023-02-25 13:41:49,197][00699] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3165.1). Total num frames: 696320. Throughput: 0: 852.1. Samples: 174782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:41:49,203][00699] Avg episode reward: [(0, '4.440')]
+[2023-02-25 13:41:49,219][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000170_696320.pth...
+[2023-02-25 13:41:49,435][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000000_0.pth
+[2023-02-25 13:41:54,196][00699] Fps is (10 sec: 3277.1, 60 sec: 3481.6, 300 sec: 3185.8). Total num frames: 716800. Throughput: 0: 872.1. Samples: 179708. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:41:54,200][00699] Avg episode reward: [(0, '4.546')]
+[2023-02-25 13:41:59,104][12803] Updated weights for policy 0, policy_version 180 (0.0024)
+[2023-02-25 13:41:59,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3205.6). Total num frames: 737280. Throughput: 0: 883.7. Samples: 182742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:41:59,199][00699] Avg episode reward: [(0, '4.559')]
+[2023-02-25 13:42:04,197][00699] Fps is (10 sec: 3686.3, 60 sec: 3481.7, 300 sec: 3207.1). Total num frames: 753664. Throughput: 0: 862.5. Samples: 188378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:42:04,201][00699] Avg episode reward: [(0, '4.434')]
+[2023-02-25 13:42:09,201][00699] Fps is (10 sec: 2866.0, 60 sec: 3481.4, 300 sec: 3191.4). Total num frames: 765952. Throughput: 0: 834.1. Samples: 192102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:42:09,210][00699] Avg episode reward: [(0, '4.536')]
+[2023-02-25 13:42:14,202][00699] Fps is (10 sec: 2047.0, 60 sec: 3276.5, 300 sec: 3159.7). Total num frames: 774144. Throughput: 0: 822.5. Samples: 193626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:42:14,207][00699] Avg episode reward: [(0, '4.519')]
+[2023-02-25 13:42:14,734][12803] Updated weights for policy 0, policy_version 190 (0.0020)
+[2023-02-25 13:42:19,196][00699] Fps is (10 sec: 2048.8, 60 sec: 3140.3, 300 sec: 3145.7). Total num frames: 786432. Throughput: 0: 793.6. Samples: 197218. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:42:19,205][00699] Avg episode reward: [(0, '4.542')]
+[2023-02-25 13:42:24,196][00699] Fps is (10 sec: 3278.5, 60 sec: 3209.5, 300 sec: 3164.4). Total num frames: 806912. Throughput: 0: 771.0. Samples: 202748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:42:24,203][00699] Avg episode reward: [(0, '4.444')]
+[2023-02-25 13:42:27,963][12803] Updated weights for policy 0, policy_version 200 (0.0015)
+[2023-02-25 13:42:29,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3208.6, 300 sec: 3150.8). Total num frames: 819200. Throughput: 0: 757.3. Samples: 204834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:42:29,201][00699] Avg episode reward: [(0, '4.356')]
+[2023-02-25 13:42:34,197][00699] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3153.1). Total num frames: 835584. Throughput: 0: 757.6. Samples: 208872. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:42:34,206][00699] Avg episode reward: [(0, '4.367')]
+[2023-02-25 13:42:39,197][00699] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3170.6). Total num frames: 856064. Throughput: 0: 779.8. Samples: 214798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:42:39,199][00699] Avg episode reward: [(0, '4.418')]
+[2023-02-25 13:42:39,787][12803] Updated weights for policy 0, policy_version 210 (0.0012)
+[2023-02-25 13:42:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3208.6, 300 sec: 3187.4). Total num frames: 876544. Throughput: 0: 784.2. Samples: 218032. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:42:44,199][00699] Avg episode reward: [(0, '4.571')]
+[2023-02-25 13:42:49,198][00699] Fps is (10 sec: 3685.7, 60 sec: 3276.7, 300 sec: 3189.0). Total num frames: 892928. Throughput: 0: 770.9. Samples: 223068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:42:49,203][00699] Avg episode reward: [(0, '4.552')]
+[2023-02-25 13:42:52,323][12803] Updated weights for policy 0, policy_version 220 (0.0023)
+[2023-02-25 13:42:54,197][00699] Fps is (10 sec: 2867.1, 60 sec: 3140.3, 300 sec: 3176.2). Total num frames: 905216. Throughput: 0: 776.3. Samples: 227032. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:42:54,205][00699] Avg episode reward: [(0, '4.517')]
+[2023-02-25 13:42:59,197][00699] Fps is (10 sec: 3277.4, 60 sec: 3140.3, 300 sec: 3192.1). Total num frames: 925696. Throughput: 0: 806.2. Samples: 229900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:42:59,205][00699] Avg episode reward: [(0, '4.448')]
+[2023-02-25 13:43:02,756][12803] Updated weights for policy 0, policy_version 230 (0.0021)
+[2023-02-25 13:43:04,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3208.6, 300 sec: 3207.4). Total num frames: 946176. Throughput: 0: 871.6. Samples: 236440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:43:04,207][00699] Avg episode reward: [(0, '4.605')]
+[2023-02-25 13:43:04,209][12789] Saving new best policy, reward=4.605!
+[2023-02-25 13:43:09,203][00699] Fps is (10 sec: 3684.1, 60 sec: 3276.7, 300 sec: 3262.8). Total num frames: 962560. Throughput: 0: 856.7. Samples: 241306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:43:09,206][00699] Avg episode reward: [(0, '4.533')]
+[2023-02-25 13:43:14,199][00699] Fps is (10 sec: 2866.5, 60 sec: 3345.2, 300 sec: 3304.5). Total num frames: 974848. Throughput: 0: 854.8. Samples: 243302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:43:14,202][00699] Avg episode reward: [(0, '4.522')]
+[2023-02-25 13:43:16,165][12803] Updated weights for policy 0, policy_version 240 (0.0012)
+[2023-02-25 13:43:19,196][00699] Fps is (10 sec: 3278.9, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 995328. Throughput: 0: 879.1. Samples: 248430. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:43:19,204][00699] Avg episode reward: [(0, '4.599')]
+[2023-02-25 13:43:24,196][00699] Fps is (10 sec: 4096.9, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 1015808. Throughput: 0: 900.3. Samples: 255310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:43:24,204][00699] Avg episode reward: [(0, '4.764')]
+[2023-02-25 13:43:24,290][12789] Saving new best policy, reward=4.764!
+[2023-02-25 13:43:25,257][12803] Updated weights for policy 0, policy_version 250 (0.0029)
+[2023-02-25 13:43:29,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 1032192. Throughput: 0: 888.2. Samples: 258002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:43:29,202][00699] Avg episode reward: [(0, '4.763')]
+[2023-02-25 13:43:34,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 1048576. Throughput: 0: 871.7. Samples: 262294. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 13:43:34,207][00699] Avg episode reward: [(0, '4.736')]
+[2023-02-25 13:43:37,992][12803] Updated weights for policy 0, policy_version 260 (0.0028)
+[2023-02-25 13:43:39,197][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 1069056. Throughput: 0: 906.2. Samples: 267812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:43:39,204][00699] Avg episode reward: [(0, '4.703')]
+[2023-02-25 13:43:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 1089536. Throughput: 0: 915.5. Samples: 271096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:43:44,205][00699] Avg episode reward: [(0, '4.586')]
+[2023-02-25 13:43:47,983][12803] Updated weights for policy 0, policy_version 270 (0.0018)
+[2023-02-25 13:43:49,197][00699] Fps is (10 sec: 3686.3, 60 sec: 3550.0, 300 sec: 3374.0). Total num frames: 1105920. Throughput: 0: 901.1. Samples: 276990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:43:49,205][00699] Avg episode reward: [(0, '4.837')]
+[2023-02-25 13:43:49,220][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000270_1105920.pth...
+[2023-02-25 13:43:49,401][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth
+[2023-02-25 13:43:49,434][12789] Saving new best policy, reward=4.837!
+[2023-02-25 13:43:54,197][00699] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3401.8). Total num frames: 1122304. Throughput: 0: 883.9. Samples: 281074. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 13:43:54,199][00699] Avg episode reward: [(0, '4.964')]
+[2023-02-25 13:43:54,210][12789] Saving new best policy, reward=4.964!
+[2023-02-25 13:43:59,197][00699] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 1138688. Throughput: 0: 890.2. Samples: 283360. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 13:43:59,199][00699] Avg episode reward: [(0, '4.907')]
+[2023-02-25 13:44:00,560][12803] Updated weights for policy 0, policy_version 280 (0.0019)
+[2023-02-25 13:44:04,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1159168. Throughput: 0: 922.9. Samples: 289962. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 13:44:04,202][00699] Avg episode reward: [(0, '4.968')]
+[2023-02-25 13:44:04,207][12789] Saving new best policy, reward=4.968!
+[2023-02-25 13:44:09,200][00699] Fps is (10 sec: 4094.4, 60 sec: 3618.3, 300 sec: 3443.4). Total num frames: 1179648. Throughput: 0: 891.6. Samples: 295436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:44:09,212][00699] Avg episode reward: [(0, '5.087')]
+[2023-02-25 13:44:09,228][12789] Saving new best policy, reward=5.087!
+[2023-02-25 13:44:11,973][12803] Updated weights for policy 0, policy_version 290 (0.0014)
+[2023-02-25 13:44:14,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3415.6). Total num frames: 1191936. Throughput: 0: 876.0. Samples: 297422. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:44:14,201][00699] Avg episode reward: [(0, '5.121')]
+[2023-02-25 13:44:14,204][12789] Saving new best policy, reward=5.121!
+[2023-02-25 13:44:19,196][00699] Fps is (10 sec: 2868.4, 60 sec: 3549.9, 300 sec: 3429.6). Total num frames: 1208320. Throughput: 0: 878.6. Samples: 301832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:44:19,199][00699] Avg episode reward: [(0, '5.006')]
+[2023-02-25 13:44:23,475][12803] Updated weights for policy 0, policy_version 300 (0.0022)
+[2023-02-25 13:44:24,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1228800. Throughput: 0: 902.5. Samples: 308424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:44:24,204][00699] Avg episode reward: [(0, '5.221')]
+[2023-02-25 13:44:24,207][12789] Saving new best policy, reward=5.221!
+[2023-02-25 13:44:29,197][00699] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3429.5). Total num frames: 1249280. Throughput: 0: 902.2. Samples: 311696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:44:29,202][00699] Avg episode reward: [(0, '5.346')]
+[2023-02-25 13:44:29,214][12789] Saving new best policy, reward=5.346!
+[2023-02-25 13:44:34,197][00699] Fps is (10 sec: 3276.6, 60 sec: 3549.8, 300 sec: 3415.7). Total num frames: 1261568. Throughput: 0: 861.8. Samples: 315770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:44:34,204][00699] Avg episode reward: [(0, '5.321')]
+[2023-02-25 13:44:36,311][12803] Updated weights for policy 0, policy_version 310 (0.0012)
+[2023-02-25 13:44:39,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 1277952. Throughput: 0: 875.7. Samples: 320480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:44:39,205][00699] Avg episode reward: [(0, '5.228')]
+[2023-02-25 13:44:44,201][00699] Fps is (10 sec: 3685.1, 60 sec: 3481.4, 300 sec: 3443.4). Total num frames: 1298432. Throughput: 0: 898.0. Samples: 323774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:44:44,204][00699] Avg episode reward: [(0, '4.965')]
+[2023-02-25 13:44:46,138][12803] Updated weights for policy 0, policy_version 320 (0.0014)
+[2023-02-25 13:44:49,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1318912. Throughput: 0: 894.5. Samples: 330214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:44:49,204][00699] Avg episode reward: [(0, '4.892')]
+[2023-02-25 13:44:54,196][00699] Fps is (10 sec: 3688.0, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 1335296. Throughput: 0: 865.2. Samples: 334366. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:44:54,199][00699] Avg episode reward: [(0, '5.219')]
+[2023-02-25 13:44:59,197][00699] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1347584. Throughput: 0: 866.7. Samples: 336422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:44:59,200][00699] Avg episode reward: [(0, '5.157')]
+[2023-02-25 13:44:59,565][12803] Updated weights for policy 0, policy_version 330 (0.0027)
+[2023-02-25 13:45:04,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 1368064. Throughput: 0: 900.4. Samples: 342350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:45:04,204][00699] Avg episode reward: [(0, '5.232')]
+[2023-02-25 13:45:09,148][12803] Updated weights for policy 0, policy_version 340 (0.0020)
+[2023-02-25 13:45:09,196][00699] Fps is (10 sec: 4505.7, 60 sec: 3550.1, 300 sec: 3471.2). Total num frames: 1392640. Throughput: 0: 894.3. Samples: 348668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:45:09,201][00699] Avg episode reward: [(0, '5.322')]
+[2023-02-25 13:45:14,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1404928. Throughput: 0: 867.0. Samples: 350712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:45:14,206][00699] Avg episode reward: [(0, '5.523')]
+[2023-02-25 13:45:14,211][12789] Saving new best policy, reward=5.523!
+[2023-02-25 13:45:19,196][00699] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1417216. Throughput: 0: 865.8. Samples: 354732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:45:19,201][00699] Avg episode reward: [(0, '5.400')]
+[2023-02-25 13:45:22,991][12803] Updated weights for policy 0, policy_version 350 (0.0028)
+[2023-02-25 13:45:24,198][00699] Fps is (10 sec: 2866.7, 60 sec: 3413.2, 300 sec: 3443.4). Total num frames: 1433600. Throughput: 0: 872.1. Samples: 359726. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:45:24,201][00699] Avg episode reward: [(0, '5.494')]
+[2023-02-25 13:45:29,197][00699] Fps is (10 sec: 3276.7, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 1449984. Throughput: 0: 845.0. Samples: 361798. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 13:45:29,201][00699] Avg episode reward: [(0, '5.438')]
+[2023-02-25 13:45:34,197][00699] Fps is (10 sec: 2867.7, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1462272. Throughput: 0: 786.4. Samples: 365600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:45:34,201][00699] Avg episode reward: [(0, '5.156')]
+[2023-02-25 13:45:38,162][12803] Updated weights for policy 0, policy_version 360 (0.0027)
+[2023-02-25 13:45:39,197][00699] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1474560. Throughput: 0: 789.2. Samples: 369882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:45:39,201][00699] Avg episode reward: [(0, '5.260')]
+[2023-02-25 13:45:44,196][00699] Fps is (10 sec: 3276.9, 60 sec: 3277.0, 300 sec: 3415.7). Total num frames: 1495040. Throughput: 0: 805.2. Samples: 372654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:45:44,198][00699] Avg episode reward: [(0, '5.495')]
+[2023-02-25 13:45:48,350][12803] Updated weights for policy 0, policy_version 370 (0.0024)
+[2023-02-25 13:45:49,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 1515520. Throughput: 0: 821.2. Samples: 379306. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 13:45:49,204][00699] Avg episode reward: [(0, '5.517')]
+[2023-02-25 13:45:49,217][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000370_1515520.pth...
+[2023-02-25 13:45:49,385][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000170_696320.pth
+[2023-02-25 13:45:54,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 1531904. Throughput: 0: 790.0. Samples: 384220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:45:54,202][00699] Avg episode reward: [(0, '5.389')]
+[2023-02-25 13:45:59,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1544192. Throughput: 0: 790.9. Samples: 386304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:45:59,204][00699] Avg episode reward: [(0, '5.361')]
+[2023-02-25 13:46:01,652][12803] Updated weights for policy 0, policy_version 380 (0.0019)
+[2023-02-25 13:46:04,197][00699] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3415.7). Total num frames: 1564672. Throughput: 0: 811.3. Samples: 391240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:46:04,206][00699] Avg episode reward: [(0, '5.372')]
+[2023-02-25 13:46:09,199][00699] Fps is (10 sec: 4095.1, 60 sec: 3208.4, 300 sec: 3415.6). Total num frames: 1585152. Throughput: 0: 845.1. Samples: 397758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:46:09,205][00699] Avg episode reward: [(0, '5.069')]
+[2023-02-25 13:46:11,502][12803] Updated weights for policy 0, policy_version 390 (0.0012)
+[2023-02-25 13:46:14,196][00699] Fps is (10 sec: 3686.5, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 1601536. Throughput: 0: 860.2. Samples: 400506. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:46:14,199][00699] Avg episode reward: [(0, '5.066')]
+[2023-02-25 13:46:19,199][00699] Fps is (10 sec: 3276.7, 60 sec: 3344.9, 300 sec: 3401.9). Total num frames: 1617920. Throughput: 0: 867.3. Samples: 404632. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:46:19,202][00699] Avg episode reward: [(0, '5.093')]
+[2023-02-25 13:46:24,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3415.7). Total num frames: 1634304. Throughput: 0: 885.7. Samples: 409738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:46:24,199][00699] Avg episode reward: [(0, '5.133')]
+[2023-02-25 13:46:24,714][12803] Updated weights for policy 0, policy_version 400 (0.0026)
+[2023-02-25 13:46:29,196][00699] Fps is (10 sec: 3687.4, 60 sec: 3413.4, 300 sec: 3415.6). Total num frames: 1654784. Throughput: 0: 894.7. Samples: 412916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:46:29,199][00699] Avg episode reward: [(0, '5.210')]
+[2023-02-25 13:46:34,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 1675264. Throughput: 0: 879.4. Samples: 418880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:46:34,199][00699] Avg episode reward: [(0, '5.206')]
+[2023-02-25 13:46:35,208][12803] Updated weights for policy 0, policy_version 410 (0.0018)
+[2023-02-25 13:46:39,203][00699] Fps is (10 sec: 3274.8, 60 sec: 3549.5, 300 sec: 3401.7). Total num frames: 1687552. Throughput: 0: 863.1. Samples: 423066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:46:39,220][00699] Avg episode reward: [(0, '5.233')]
+[2023-02-25 13:46:44,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1703936. Throughput: 0: 863.1. Samples: 425142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:46:44,204][00699] Avg episode reward: [(0, '4.942')]
+[2023-02-25 13:46:47,222][12803] Updated weights for policy 0, policy_version 420 (0.0018)
+[2023-02-25 13:46:49,196][00699] Fps is (10 sec: 3688.7, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1724416. Throughput: 0: 893.7. Samples: 431456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:46:49,204][00699] Avg episode reward: [(0, '5.188')]
+[2023-02-25 13:46:54,197][00699] Fps is (10 sec: 4095.9, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 1744896. Throughput: 0: 872.4. Samples: 437014. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:46:54,200][00699] Avg episode reward: [(0, '5.318')]
+[2023-02-25 13:46:59,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 1757184. Throughput: 0: 857.0. Samples: 439070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:46:59,200][00699] Avg episode reward: [(0, '5.614')]
+[2023-02-25 13:46:59,215][12789] Saving new best policy, reward=5.614!
+[2023-02-25 13:46:59,725][12803] Updated weights for policy 0, policy_version 430 (0.0038)
+[2023-02-25 13:47:04,196][00699] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 1773568. Throughput: 0: 858.9. Samples: 443282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:47:04,201][00699] Avg episode reward: [(0, '5.488')]
+[2023-02-25 13:47:09,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3457.4). Total num frames: 1794048. Throughput: 0: 891.8. Samples: 449868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:47:09,199][00699] Avg episode reward: [(0, '5.686')]
+[2023-02-25 13:47:09,208][12789] Saving new best policy, reward=5.686!
+[2023-02-25 13:47:10,549][12803] Updated weights for policy 0, policy_version 440 (0.0022)
+[2023-02-25 13:47:14,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1814528. Throughput: 0: 886.3. Samples: 452800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:47:14,200][00699] Avg episode reward: [(0, '5.653')]
+[2023-02-25 13:47:19,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3457.3). Total num frames: 1826816. Throughput: 0: 849.6. Samples: 457114. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:47:19,202][00699] Avg episode reward: [(0, '5.435')]
+[2023-02-25 13:47:23,981][12803] Updated weights for policy 0, policy_version 450 (0.0036)
+[2023-02-25 13:47:24,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 1843200. Throughput: 0: 852.7. Samples: 461430. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:47:24,205][00699] Avg episode reward: [(0, '5.554')]
+[2023-02-25 13:47:29,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1863680. Throughput: 0: 877.7. Samples: 464638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:47:29,203][00699] Avg episode reward: [(0, '5.407')]
+[2023-02-25 13:47:33,456][12803] Updated weights for policy 0, policy_version 460 (0.0023)
+[2023-02-25 13:47:34,198][00699] Fps is (10 sec: 4095.5, 60 sec: 3481.5, 300 sec: 3485.1). Total num frames: 1884160. Throughput: 0: 882.4. Samples: 471164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:47:34,206][00699] Avg episode reward: [(0, '5.426')]
+[2023-02-25 13:47:39,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3482.0, 300 sec: 3457.3). Total num frames: 1896448. Throughput: 0: 852.9. Samples: 475396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:47:39,201][00699] Avg episode reward: [(0, '5.364')]
+[2023-02-25 13:47:44,196][00699] Fps is (10 sec: 2867.6, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 1912832. Throughput: 0: 852.0. Samples: 477412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:47:44,204][00699] Avg episode reward: [(0, '5.431')]
+[2023-02-25 13:47:46,683][12803] Updated weights for policy 0, policy_version 470 (0.0019)
+[2023-02-25 13:47:49,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1933312. Throughput: 0: 890.3. Samples: 483344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:47:49,205][00699] Avg episode reward: [(0, '5.595')]
+[2023-02-25 13:47:49,219][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000472_1933312.pth...
+[2023-02-25 13:47:49,371][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000270_1105920.pth
+[2023-02-25 13:47:54,201][00699] Fps is (10 sec: 4094.2, 60 sec: 3481.4, 300 sec: 3485.0). Total num frames: 1953792. Throughput: 0: 884.1. Samples: 489654. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:47:54,207][00699] Avg episode reward: [(0, '5.797')]
+[2023-02-25 13:47:54,212][12789] Saving new best policy, reward=5.797!
+[2023-02-25 13:47:57,676][12803] Updated weights for policy 0, policy_version 480 (0.0015)
+[2023-02-25 13:47:59,202][00699] Fps is (10 sec: 3684.5, 60 sec: 3549.6, 300 sec: 3471.1). Total num frames: 1970176. Throughput: 0: 864.4. Samples: 491702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:47:59,205][00699] Avg episode reward: [(0, '5.744')]
+[2023-02-25 13:48:04,201][00699] Fps is (10 sec: 2867.0, 60 sec: 3481.3, 300 sec: 3457.3). Total num frames: 1982464. Throughput: 0: 863.2. Samples: 495960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:04,204][00699] Avg episode reward: [(0, '5.671')]
+[2023-02-25 13:48:09,196][00699] Fps is (10 sec: 3278.5, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2002944. Throughput: 0: 905.2. Samples: 502166. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:48:09,205][00699] Avg episode reward: [(0, '5.494')]
+[2023-02-25 13:48:09,258][12803] Updated weights for policy 0, policy_version 490 (0.0016)
+[2023-02-25 13:48:14,199][00699] Fps is (10 sec: 4506.7, 60 sec: 3549.7, 300 sec: 3498.9). Total num frames: 2027520. Throughput: 0: 906.2. Samples: 505418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:14,207][00699] Avg episode reward: [(0, '5.479')]
+[2023-02-25 13:48:19,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2039808. Throughput: 0: 873.8. Samples: 510486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:48:19,201][00699] Avg episode reward: [(0, '5.613')]
+[2023-02-25 13:48:21,142][12803] Updated weights for policy 0, policy_version 500 (0.0012)
+[2023-02-25 13:48:24,198][00699] Fps is (10 sec: 2867.5, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 2056192. Throughput: 0: 872.7. Samples: 514668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:24,203][00699] Avg episode reward: [(0, '5.812')]
+[2023-02-25 13:48:24,207][12789] Saving new best policy, reward=5.812!
+[2023-02-25 13:48:29,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2076672. Throughput: 0: 894.7. Samples: 517672. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:29,205][00699] Avg episode reward: [(0, '5.924')]
+[2023-02-25 13:48:29,217][12789] Saving new best policy, reward=5.924!
+[2023-02-25 13:48:31,972][12803] Updated weights for policy 0, policy_version 510 (0.0021)
+[2023-02-25 13:48:34,196][00699] Fps is (10 sec: 4096.5, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2097152. Throughput: 0: 903.6. Samples: 524006. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:48:34,199][00699] Avg episode reward: [(0, '6.142')]
+[2023-02-25 13:48:34,202][12789] Saving new best policy, reward=6.142!
+[2023-02-25 13:48:39,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2109440. Throughput: 0: 850.1. Samples: 527906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:39,199][00699] Avg episode reward: [(0, '5.880')]
+[2023-02-25 13:48:44,199][00699] Fps is (10 sec: 2047.5, 60 sec: 3413.2, 300 sec: 3429.5). Total num frames: 2117632. Throughput: 0: 841.5. Samples: 529566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:48:44,201][00699] Avg episode reward: [(0, '6.047')]
+[2023-02-25 13:48:48,142][12803] Updated weights for policy 0, policy_version 520 (0.0018)
+[2023-02-25 13:48:49,197][00699] Fps is (10 sec: 2047.9, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 2129920. Throughput: 0: 822.7. Samples: 532980. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:48:49,200][00699] Avg episode reward: [(0, '6.281')]
+[2023-02-25 13:48:49,217][12789] Saving new best policy, reward=6.281!
+[2023-02-25 13:48:54,201][00699] Fps is (10 sec: 3276.2, 60 sec: 3276.8, 300 sec: 3429.5). Total num frames: 2150400. Throughput: 0: 804.8. Samples: 538386. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:48:54,209][00699] Avg episode reward: [(0, '6.131')]
+[2023-02-25 13:48:58,539][12803] Updated weights for policy 0, policy_version 530 (0.0026)
+[2023-02-25 13:48:59,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3345.4, 300 sec: 3429.5). Total num frames: 2170880. Throughput: 0: 805.4. Samples: 541658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:48:59,206][00699] Avg episode reward: [(0, '6.068')]
+[2023-02-25 13:49:04,202][00699] Fps is (10 sec: 3686.0, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2187264. Throughput: 0: 814.7. Samples: 547152. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 13:49:04,205][00699] Avg episode reward: [(0, '6.140')]
+[2023-02-25 13:49:09,197][00699] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 2199552. Throughput: 0: 813.0. Samples: 551254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:49:09,199][00699] Avg episode reward: [(0, '6.317')]
+[2023-02-25 13:49:09,253][12789] Saving new best policy, reward=6.317!
+[2023-02-25 13:49:11,967][12803] Updated weights for policy 0, policy_version 540 (0.0031)
+[2023-02-25 13:49:14,199][00699] Fps is (10 sec: 3277.8, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2220032. Throughput: 0: 798.5. Samples: 553608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:49:14,207][00699] Avg episode reward: [(0, '6.686')]
+[2023-02-25 13:49:14,210][12789] Saving new best policy, reward=6.686!
+[2023-02-25 13:49:19,203][00699] Fps is (10 sec: 4093.3, 60 sec: 3344.7, 300 sec: 3429.5). Total num frames: 2240512. Throughput: 0: 800.2. Samples: 560022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:49:19,206][00699] Avg episode reward: [(0, '6.619')]
+[2023-02-25 13:49:21,678][12803] Updated weights for policy 0, policy_version 550 (0.0019)
+[2023-02-25 13:49:24,196][00699] Fps is (10 sec: 3687.2, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 2256896. Throughput: 0: 832.6. Samples: 565372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:49:24,199][00699] Avg episode reward: [(0, '7.067')]
+[2023-02-25 13:49:24,207][12789] Saving new best policy, reward=7.067!
+[2023-02-25 13:49:29,202][00699] Fps is (10 sec: 3277.2, 60 sec: 3276.5, 300 sec: 3429.5). Total num frames: 2273280. Throughput: 0: 841.9. Samples: 567454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:49:29,209][00699] Avg episode reward: [(0, '6.617')]
+[2023-02-25 13:49:34,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2289664. Throughput: 0: 871.7. Samples: 572208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:49:34,204][00699] Avg episode reward: [(0, '6.969')]
+[2023-02-25 13:49:34,714][12803] Updated weights for policy 0, policy_version 560 (0.0026)
+[2023-02-25 13:49:39,196][00699] Fps is (10 sec: 3688.3, 60 sec: 3345.1, 300 sec: 3429.6). Total num frames: 2310144. Throughput: 0: 900.7. Samples: 578912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:49:39,208][00699] Avg episode reward: [(0, '6.914')]
+[2023-02-25 13:49:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3429.5). Total num frames: 2330624. Throughput: 0: 895.4. Samples: 581950. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:49:44,200][00699] Avg episode reward: [(0, '7.068')]
+[2023-02-25 13:49:45,320][12803] Updated weights for policy 0, policy_version 570 (0.0024)
+[2023-02-25 13:49:49,197][00699] Fps is (10 sec: 3276.6, 60 sec: 3549.8, 300 sec: 3415.6). Total num frames: 2342912. Throughput: 0: 866.2. Samples: 586128. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:49:49,199][00699] Avg episode reward: [(0, '6.805')]
+[2023-02-25 13:49:49,218][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000572_2342912.pth...
+[2023-02-25 13:49:49,443][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000370_1515520.pth
+[2023-02-25 13:49:54,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.8, 300 sec: 3429.5). Total num frames: 2359296. Throughput: 0: 884.8. Samples: 591070. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 13:49:54,202][00699] Avg episode reward: [(0, '6.399')]
+[2023-02-25 13:49:57,255][12803] Updated weights for policy 0, policy_version 580 (0.0024)
+[2023-02-25 13:49:59,196][00699] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2383872. Throughput: 0: 905.0. Samples: 594332. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:49:59,203][00699] Avg episode reward: [(0, '6.943')]
+[2023-02-25 13:50:04,197][00699] Fps is (10 sec: 4095.9, 60 sec: 3550.2, 300 sec: 3415.6). Total num frames: 2400256. Throughput: 0: 894.7. Samples: 600278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:50:04,204][00699] Avg episode reward: [(0, '7.075')]
+[2023-02-25 13:50:04,208][12789] Saving new best policy, reward=7.075!
+[2023-02-25 13:50:09,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 2412544. Throughput: 0: 865.2. Samples: 604306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:50:09,204][00699] Avg episode reward: [(0, '7.393')]
+[2023-02-25 13:50:09,216][12789] Saving new best policy, reward=7.393!
+[2023-02-25 13:50:09,742][12803] Updated weights for policy 0, policy_version 590 (0.0022)
+[2023-02-25 13:50:14,197][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.7, 300 sec: 3429.5). Total num frames: 2428928. Throughput: 0: 866.6. Samples: 606448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:50:14,199][00699] Avg episode reward: [(0, '7.025')]
+[2023-02-25 13:50:19,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3550.3, 300 sec: 3457.3). Total num frames: 2453504. Throughput: 0: 907.4. Samples: 613042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:50:19,205][00699] Avg episode reward: [(0, '7.622')]
+[2023-02-25 13:50:19,215][12789] Saving new best policy, reward=7.622!
+[2023-02-25 13:50:19,867][12803] Updated weights for policy 0, policy_version 600 (0.0030)
+[2023-02-25 13:50:24,196][00699] Fps is (10 sec: 4505.7, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2473984. Throughput: 0: 890.0. Samples: 618964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:50:24,199][00699] Avg episode reward: [(0, '8.359')]
+[2023-02-25 13:50:24,202][12789] Saving new best policy, reward=8.359!
+[2023-02-25 13:50:29,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3550.2, 300 sec: 3471.2). Total num frames: 2486272. Throughput: 0: 868.7. Samples: 621042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:50:29,203][00699] Avg episode reward: [(0, '8.629')]
+[2023-02-25 13:50:29,215][12789] Saving new best policy, reward=8.629!
+[2023-02-25 13:50:33,115][12803] Updated weights for policy 0, policy_version 610 (0.0020)
+[2023-02-25 13:50:34,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2502656. Throughput: 0: 868.2. Samples: 625196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:50:34,204][00699] Avg episode reward: [(0, '8.166')]
+[2023-02-25 13:50:39,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2523136. Throughput: 0: 907.3. Samples: 631900. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:50:39,206][00699] Avg episode reward: [(0, '7.996')]
+[2023-02-25 13:50:42,278][12803] Updated weights for policy 0, policy_version 620 (0.0014)
+[2023-02-25 13:50:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2543616. Throughput: 0: 909.4. Samples: 635256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:50:44,203][00699] Avg episode reward: [(0, '7.996')]
+[2023-02-25 13:50:49,197][00699] Fps is (10 sec: 3686.3, 60 sec: 3618.2, 300 sec: 3485.1). Total num frames: 2560000. Throughput: 0: 880.7. Samples: 639908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:50:49,204][00699] Avg episode reward: [(0, '8.128')]
+[2023-02-25 13:50:54,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2572288. Throughput: 0: 889.6. Samples: 644340. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:50:54,199][00699] Avg episode reward: [(0, '8.162')]
+[2023-02-25 13:50:55,295][12803] Updated weights for policy 0, policy_version 630 (0.0021)
+[2023-02-25 13:50:59,196][00699] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2596864. Throughput: 0: 915.8. Samples: 647660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:50:59,199][00699] Avg episode reward: [(0, '7.966')]
+[2023-02-25 13:51:04,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2617344. Throughput: 0: 917.9. Samples: 654348. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:51:04,201][00699] Avg episode reward: [(0, '7.839')]
+[2023-02-25 13:51:04,934][12803] Updated weights for policy 0, policy_version 640 (0.0026)
+[2023-02-25 13:51:09,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2629632. Throughput: 0: 885.6. Samples: 658818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:51:09,202][00699] Avg episode reward: [(0, '7.454')]
+[2023-02-25 13:51:14,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2646016. Throughput: 0: 885.6. Samples: 660896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:51:14,205][00699] Avg episode reward: [(0, '7.534')]
+[2023-02-25 13:51:17,509][12803] Updated weights for policy 0, policy_version 650 (0.0018)
+[2023-02-25 13:51:19,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2666496. Throughput: 0: 926.8. Samples: 666902. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:51:19,201][00699] Avg episode reward: [(0, '7.714')]
+[2023-02-25 13:51:24,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2691072. Throughput: 0: 924.7. Samples: 673510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:51:24,201][00699] Avg episode reward: [(0, '8.175')]
+[2023-02-25 13:51:27,976][12803] Updated weights for policy 0, policy_version 660 (0.0020)
+[2023-02-25 13:51:29,199][00699] Fps is (10 sec: 3685.6, 60 sec: 3618.0, 300 sec: 3485.0). Total num frames: 2703360. Throughput: 0: 899.2. Samples: 675724. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:51:29,201][00699] Avg episode reward: [(0, '7.850')]
+[2023-02-25 13:51:34,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2719744. Throughput: 0: 894.9. Samples: 680180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:51:34,199][00699] Avg episode reward: [(0, '7.599')]
+[2023-02-25 13:51:39,196][00699] Fps is (10 sec: 3687.2, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2740224. Throughput: 0: 934.8. Samples: 686408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:51:39,204][00699] Avg episode reward: [(0, '7.200')]
+[2023-02-25 13:51:39,323][12803] Updated weights for policy 0, policy_version 670 (0.0028)
+[2023-02-25 13:51:44,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 2764800. Throughput: 0: 937.0. Samples: 689826. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:51:44,203][00699] Avg episode reward: [(0, '7.977')]
+[2023-02-25 13:51:49,197][00699] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2777088. Throughput: 0: 901.8. Samples: 694930. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:51:49,201][00699] Avg episode reward: [(0, '8.604')]
+[2023-02-25 13:51:49,213][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000678_2777088.pth...
+[2023-02-25 13:51:49,363][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000472_1933312.pth
+[2023-02-25 13:51:51,049][12803] Updated weights for policy 0, policy_version 680 (0.0012)
+[2023-02-25 13:51:54,196][00699] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2789376. Throughput: 0: 886.7. Samples: 698718. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 13:51:54,203][00699] Avg episode reward: [(0, '8.738')]
+[2023-02-25 13:51:54,209][12789] Saving new best policy, reward=8.738!
+[2023-02-25 13:51:59,196][00699] Fps is (10 sec: 2457.7, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2801664. Throughput: 0: 881.5. Samples: 700562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:51:59,203][00699] Avg episode reward: [(0, '9.336')]
+[2023-02-25 13:51:59,215][12789] Saving new best policy, reward=9.336!
+[2023-02-25 13:52:04,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2818048. Throughput: 0: 842.8. Samples: 704830. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:52:04,204][00699] Avg episode reward: [(0, '9.087')]
+[2023-02-25 13:52:05,183][12803] Updated weights for policy 0, policy_version 690 (0.0017)
+[2023-02-25 13:52:09,197][00699] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2838528. Throughput: 0: 817.7. Samples: 710308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:52:09,199][00699] Avg episode reward: [(0, '8.521')]
+[2023-02-25 13:52:14,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 2850816. Throughput: 0: 815.0. Samples: 712396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:52:14,201][00699] Avg episode reward: [(0, '8.555')]
+[2023-02-25 13:52:18,156][12803] Updated weights for policy 0, policy_version 700 (0.0013)
+[2023-02-25 13:52:19,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2871296. Throughput: 0: 826.8. Samples: 717384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:52:19,202][00699] Avg episode reward: [(0, '8.433')]
+[2023-02-25 13:52:24,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 2891776. Throughput: 0: 839.5. Samples: 724184. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:52:24,201][00699] Avg episode reward: [(0, '7.871')]
+[2023-02-25 13:52:27,496][12803] Updated weights for policy 0, policy_version 710 (0.0022)
+[2023-02-25 13:52:29,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3485.1). Total num frames: 2912256. Throughput: 0: 833.7. Samples: 727342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:52:29,200][00699] Avg episode reward: [(0, '7.437')]
+[2023-02-25 13:52:34,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2924544. Throughput: 0: 816.9. Samples: 731692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:52:34,198][00699] Avg episode reward: [(0, '8.225')]
+[2023-02-25 13:52:39,197][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 2945024. Throughput: 0: 850.5. Samples: 736992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:52:39,199][00699] Avg episode reward: [(0, '8.567')]
+[2023-02-25 13:52:39,830][12803] Updated weights for policy 0, policy_version 720 (0.0036)
+[2023-02-25 13:52:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 2965504. Throughput: 0: 883.9. Samples: 740338. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:52:44,198][00699] Avg episode reward: [(0, '9.372')]
+[2023-02-25 13:52:44,210][12789] Saving new best policy, reward=9.372!
+[2023-02-25 13:52:49,197][00699] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 2985984. Throughput: 0: 926.0. Samples: 746498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:52:49,199][00699] Avg episode reward: [(0, '9.216')]
+[2023-02-25 13:52:50,089][12803] Updated weights for policy 0, policy_version 730 (0.0012)
+[2023-02-25 13:52:54,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2998272. Throughput: 0: 899.1. Samples: 750768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:52:54,200][00699] Avg episode reward: [(0, '9.030')]
+[2023-02-25 13:52:59,197][00699] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3512.9). Total num frames: 3018752. Throughput: 0: 901.1. Samples: 752944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:52:59,200][00699] Avg episode reward: [(0, '8.767')]
+[2023-02-25 13:53:01,859][12803] Updated weights for policy 0, policy_version 740 (0.0013)
+[2023-02-25 13:53:04,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 3039232. Throughput: 0: 940.8. Samples: 759718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:53:04,202][00699] Avg episode reward: [(0, '9.064')]
+[2023-02-25 13:53:09,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 3059712. Throughput: 0: 917.3. Samples: 765462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:53:09,199][00699] Avg episode reward: [(0, '9.870')]
+[2023-02-25 13:53:09,212][12789] Saving new best policy, reward=9.870!
+[2023-02-25 13:53:13,528][12803] Updated weights for policy 0, policy_version 750 (0.0024)
+[2023-02-25 13:53:14,199][00699] Fps is (10 sec: 3276.1, 60 sec: 3686.3, 300 sec: 3498.9). Total num frames: 3072000. Throughput: 0: 892.8. Samples: 767520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:53:14,209][00699] Avg episode reward: [(0, '10.192')]
+[2023-02-25 13:53:14,212][12789] Saving new best policy, reward=10.192!
+[2023-02-25 13:53:19,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 3088384. Throughput: 0: 896.3. Samples: 772026.
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-25 13:53:19,199][00699] Avg episode reward: [(0, '9.140')] +[2023-02-25 13:53:24,024][12803] Updated weights for policy 0, policy_version 760 (0.0020) +[2023-02-25 13:53:24,196][00699] Fps is (10 sec: 4096.9, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 3112960. Throughput: 0: 928.3. Samples: 778766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:53:24,199][00699] Avg episode reward: [(0, '8.645')] +[2023-02-25 13:53:29,198][00699] Fps is (10 sec: 4095.5, 60 sec: 3618.1, 300 sec: 3498.9). Total num frames: 3129344. Throughput: 0: 929.6. Samples: 782170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:53:29,202][00699] Avg episode reward: [(0, '8.455')] +[2023-02-25 13:53:34,199][00699] Fps is (10 sec: 3276.0, 60 sec: 3686.3, 300 sec: 3512.8). Total num frames: 3145728. Throughput: 0: 890.1. Samples: 786556. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:53:34,202][00699] Avg episode reward: [(0, '8.672')] +[2023-02-25 13:53:36,470][12803] Updated weights for policy 0, policy_version 770 (0.0016) +[2023-02-25 13:53:39,196][00699] Fps is (10 sec: 3277.2, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 3162112. Throughput: 0: 911.2. Samples: 791774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:53:39,205][00699] Avg episode reward: [(0, '9.079')] +[2023-02-25 13:53:44,196][00699] Fps is (10 sec: 4097.0, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 3186688. Throughput: 0: 939.8. Samples: 795236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:53:44,200][00699] Avg episode reward: [(0, '9.456')] +[2023-02-25 13:53:45,527][12803] Updated weights for policy 0, policy_version 780 (0.0012) +[2023-02-25 13:53:49,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 3207168. Throughput: 0: 936.8. Samples: 801874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:53:49,201][00699] Avg episode reward: [(0, '10.128')] +[2023-02-25 13:53:49,211][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000783_3207168.pth... +[2023-02-25 13:53:49,383][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000572_2342912.pth +[2023-02-25 13:53:54,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 3219456. Throughput: 0: 903.7. Samples: 806128. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:53:54,199][00699] Avg episode reward: [(0, '9.263')] +[2023-02-25 13:53:58,387][12803] Updated weights for policy 0, policy_version 790 (0.0012) +[2023-02-25 13:53:59,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3554.6). Total num frames: 3235840. Throughput: 0: 905.2. Samples: 808250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-25 13:53:59,200][00699] Avg episode reward: [(0, '9.499')] +[2023-02-25 13:54:04,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 3260416. Throughput: 0: 954.9. Samples: 814996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:54:04,205][00699] Avg episode reward: [(0, '8.998')] +[2023-02-25 13:54:07,258][12803] Updated weights for policy 0, policy_version 800 (0.0019) +[2023-02-25 13:54:09,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3280896. Throughput: 0: 943.8. Samples: 821238. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-25 13:54:09,201][00699] Avg episode reward: [(0, '9.859')] +[2023-02-25 13:54:14,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3582.3). Total num frames: 3297280. Throughput: 0: 916.1. Samples: 823392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-25 13:54:14,204][00699] Avg episode reward: [(0, '10.271')] +[2023-02-25 13:54:14,211][12789] Saving new best policy, reward=10.271! +[2023-02-25 13:54:19,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3582.3). Total num frames: 3313664. Throughput: 0: 918.4. Samples: 827882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:54:19,204][00699] Avg episode reward: [(0, '11.178')] +[2023-02-25 13:54:19,213][12789] Saving new best policy, reward=11.178! +[2023-02-25 13:54:19,831][12803] Updated weights for policy 0, policy_version 810 (0.0015) +[2023-02-25 13:54:24,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3334144. Throughput: 0: 954.4. Samples: 834720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:54:24,203][00699] Avg episode reward: [(0, '12.009')] +[2023-02-25 13:54:24,207][12789] Saving new best policy, reward=12.009! +[2023-02-25 13:54:29,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3610.0). Total num frames: 3354624. Throughput: 0: 949.5. Samples: 837964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:54:29,207][00699] Avg episode reward: [(0, '11.789')] +[2023-02-25 13:54:29,807][12803] Updated weights for policy 0, policy_version 820 (0.0018) +[2023-02-25 13:54:34,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3596.2). Total num frames: 3371008. Throughput: 0: 905.8. Samples: 842636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:54:34,201][00699] Avg episode reward: [(0, '11.614')] +[2023-02-25 13:54:39,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3582.3). Total num frames: 3387392. Throughput: 0: 921.5. Samples: 847596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:54:39,199][00699] Avg episode reward: [(0, '11.846')] +[2023-02-25 13:54:41,674][12803] Updated weights for policy 0, policy_version 830 (0.0037) +[2023-02-25 13:54:44,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 3411968. Throughput: 0: 950.7. Samples: 851030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:54:44,199][00699] Avg episode reward: [(0, '12.023')] +[2023-02-25 13:54:44,204][12789] Saving new best policy, reward=12.023! +[2023-02-25 13:54:49,199][00699] Fps is (10 sec: 4504.6, 60 sec: 3754.5, 300 sec: 3637.8). Total num frames: 3432448. Throughput: 0: 951.0. Samples: 857792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:54:49,201][00699] Avg episode reward: [(0, '13.010')] +[2023-02-25 13:54:49,223][12789] Saving new best policy, reward=13.010! +[2023-02-25 13:54:52,100][12803] Updated weights for policy 0, policy_version 840 (0.0013) +[2023-02-25 13:54:54,197][00699] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3596.1). Total num frames: 3444736. Throughput: 0: 907.1. Samples: 862056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 13:54:54,201][00699] Avg episode reward: [(0, '13.990')] +[2023-02-25 13:54:54,205][12789] Saving new best policy, reward=13.990! +[2023-02-25 13:54:59,196][00699] Fps is (10 sec: 2867.8, 60 sec: 3754.7, 300 sec: 3596.1). Total num frames: 3461120. Throughput: 0: 906.9. Samples: 864204. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 13:54:59,199][00699] Avg episode reward: [(0, '13.417')] +[2023-02-25 13:55:03,378][12803] Updated weights for policy 0, policy_version 850 (0.0028) +[2023-02-25 13:55:04,196][00699] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3481600. Throughput: 0: 952.8. Samples: 870758. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-25 13:55:04,203][00699] Avg episode reward: [(0, '13.524')] +[2023-02-25 13:55:09,197][00699] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 3506176. Throughput: 0: 948.9. Samples: 877420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-25 13:55:09,201][00699] Avg episode reward: [(0, '13.040')] +[2023-02-25 13:55:14,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 3518464. Throughput: 0: 917.6. Samples: 879254. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-25 13:55:14,206][00699] Avg episode reward: [(0, '13.379')] +[2023-02-25 13:55:15,292][12803] Updated weights for policy 0, policy_version 860 (0.0027) +[2023-02-25 13:55:19,196][00699] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3530752. Throughput: 0: 889.9. Samples: 882682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-25 13:55:19,199][00699] Avg episode reward: [(0, '13.830')] +[2023-02-25 13:55:24,197][00699] Fps is (10 sec: 2457.5, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3543040. Throughput: 0: 862.4. Samples: 886402. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-25 13:55:24,201][00699] Avg episode reward: [(0, '15.119')] +[2023-02-25 13:55:24,205][12789] Saving new best policy, reward=15.119! +[2023-02-25 13:55:28,773][12803] Updated weights for policy 0, policy_version 870 (0.0022) +[2023-02-25 13:55:29,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3563520. Throughput: 0: 858.4. Samples: 889656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-25 13:55:29,203][00699] Avg episode reward: [(0, '16.336')] +[2023-02-25 13:55:29,214][12789] Saving new best policy, reward=16.336! +[2023-02-25 13:55:34,196][00699] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3584000. Throughput: 0: 862.5. Samples: 896602. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2023-02-25 13:55:34,202][00699] Avg episode reward: [(0, '15.369')] +[2023-02-25 13:55:39,197][00699] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3600384. Throughput: 0: 866.6. Samples: 901054. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-25 13:55:39,202][00699] Avg episode reward: [(0, '14.383')] +[2023-02-25 13:55:40,209][12803] Updated weights for policy 0, policy_version 880 (0.0022) +[2023-02-25 13:55:44,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3582.3). Total num frames: 3616768. Throughput: 0: 867.6. Samples: 903244. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-25 13:55:44,204][00699] Avg episode reward: [(0, '13.175')] +[2023-02-25 13:55:49,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3623.9). Total num frames: 3641344. Throughput: 0: 868.5. Samples: 909840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-25 13:55:49,203][00699] Avg episode reward: [(0, '13.199')] +[2023-02-25 13:55:49,215][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000889_3641344.pth... 
+[2023-02-25 13:55:49,350][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000678_2777088.pth
+[2023-02-25 13:55:50,082][12803] Updated weights for policy 0, policy_version 890 (0.0023)
+[2023-02-25 13:55:54,196][00699] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3661824. Throughput: 0: 863.9. Samples: 916294. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:55:54,208][00699] Avg episode reward: [(0, '13.931')]
+[2023-02-25 13:55:59,196][00699] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3674112. Throughput: 0: 869.7. Samples: 918390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:55:59,203][00699] Avg episode reward: [(0, '14.026')]
+[2023-02-25 13:56:02,241][12803] Updated weights for policy 0, policy_version 900 (0.0014)
+[2023-02-25 13:56:04,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3690496. Throughput: 0: 891.6. Samples: 922806. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:56:04,203][00699] Avg episode reward: [(0, '14.478')]
+[2023-02-25 13:56:09,196][00699] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 3715072. Throughput: 0: 959.4. Samples: 929576. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:56:09,203][00699] Avg episode reward: [(0, '15.262')]
+[2023-02-25 13:56:11,623][12803] Updated weights for policy 0, policy_version 910 (0.0030)
+[2023-02-25 13:56:14,200][00699] Fps is (10 sec: 4504.2, 60 sec: 3617.9, 300 sec: 3623.9). Total num frames: 3735552. Throughput: 0: 962.2. Samples: 932956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:56:14,211][00699] Avg episode reward: [(0, '17.171')]
+[2023-02-25 13:56:14,213][12789] Saving new best policy, reward=17.171!
+[2023-02-25 13:56:19,202][00699] Fps is (10 sec: 3275.1, 60 sec: 3617.8, 300 sec: 3582.2). Total num frames: 3747840. Throughput: 0: 915.0. Samples: 937784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:56:19,210][00699] Avg episode reward: [(0, '18.018')]
+[2023-02-25 13:56:19,219][12789] Saving new best policy, reward=18.018!
+[2023-02-25 13:56:24,196][00699] Fps is (10 sec: 2868.1, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3764224. Throughput: 0: 913.3. Samples: 942154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:56:24,203][00699] Avg episode reward: [(0, '19.681')]
+[2023-02-25 13:56:24,206][12789] Saving new best policy, reward=19.681!
+[2023-02-25 13:56:24,681][12803] Updated weights for policy 0, policy_version 920 (0.0013)
+[2023-02-25 13:56:29,196][00699] Fps is (10 sec: 4098.2, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 3788800. Throughput: 0: 941.6. Samples: 945614. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:56:29,198][00699] Avg episode reward: [(0, '18.967')]
+[2023-02-25 13:56:33,342][12803] Updated weights for policy 0, policy_version 930 (0.0012)
+[2023-02-25 13:56:34,196][00699] Fps is (10 sec: 4915.2, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 3813376. Throughput: 0: 948.5. Samples: 952522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 13:56:34,203][00699] Avg episode reward: [(0, '18.027')]
+[2023-02-25 13:56:39,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3596.2). Total num frames: 3825664. Throughput: 0: 913.2. Samples: 957390. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 13:56:39,202][00699] Avg episode reward: [(0, '17.612')]
+[2023-02-25 13:56:44,196][00699] Fps is (10 sec: 2457.6, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3837952. Throughput: 0: 913.7. Samples: 959508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:56:44,199][00699] Avg episode reward: [(0, '18.667')]
+[2023-02-25 13:56:46,003][12803] Updated weights for policy 0, policy_version 940 (0.0025)
+[2023-02-25 13:56:49,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3862528. Throughput: 0: 952.2. Samples: 965656. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 13:56:49,204][00699] Avg episode reward: [(0, '18.743')]
+[2023-02-25 13:56:54,196][00699] Fps is (10 sec: 4915.2, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 3887104. Throughput: 0: 956.5. Samples: 972618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:56:54,204][00699] Avg episode reward: [(0, '20.027')]
+[2023-02-25 13:56:54,210][12789] Saving new best policy, reward=20.027!
+[2023-02-25 13:56:55,173][12803] Updated weights for policy 0, policy_version 950 (0.0019)
+[2023-02-25 13:56:59,196][00699] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 3899392. Throughput: 0: 928.8. Samples: 974748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:56:59,204][00699] Avg episode reward: [(0, '20.815')]
+[2023-02-25 13:56:59,215][12789] Saving new best policy, reward=20.815!
+[2023-02-25 13:57:04,196][00699] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 3915776. Throughput: 0: 918.1. Samples: 979092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:57:04,200][00699] Avg episode reward: [(0, '20.811')]
+[2023-02-25 13:57:07,565][12803] Updated weights for policy 0, policy_version 960 (0.0015)
+[2023-02-25 13:57:09,197][00699] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 3936256. Throughput: 0: 960.0. Samples: 985352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:57:09,204][00699] Avg episode reward: [(0, '20.494')]
+[2023-02-25 13:57:14,200][00699] Fps is (10 sec: 4504.2, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 3960832. Throughput: 0: 959.1. Samples: 988776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 13:57:14,203][00699] Avg episode reward: [(0, '18.956')]
+[2023-02-25 13:57:17,451][12803] Updated weights for policy 0, policy_version 970 (0.0011)
+[2023-02-25 13:57:19,197][00699] Fps is (10 sec: 4096.0, 60 sec: 3823.3, 300 sec: 3679.5). Total num frames: 3977216. Throughput: 0: 929.5. Samples: 994348. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 13:57:19,203][00699] Avg episode reward: [(0, '19.121')]
+[2023-02-25 13:57:24,196][00699] Fps is (10 sec: 2868.1, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 3989504. Throughput: 0: 919.7. Samples: 998778. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 13:57:24,201][00699] Avg episode reward: [(0, '18.999')]
+[2023-02-25 13:57:27,520][12789] Stopping Batcher_0...
+[2023-02-25 13:57:27,521][12789] Loop batcher_evt_loop terminating...
+[2023-02-25 13:57:27,521][00699] Component Batcher_0 stopped!
+[2023-02-25 13:57:27,532][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 13:57:27,583][12809] Stopping RolloutWorker_w4...
+[2023-02-25 13:57:27,586][12808] Stopping RolloutWorker_w2...
+[2023-02-25 13:57:27,586][12808] Loop rollout_proc2_evt_loop terminating...
+[2023-02-25 13:57:27,583][00699] Component RolloutWorker_w4 stopped!
+[2023-02-25 13:57:27,589][00699] Component RolloutWorker_w2 stopped!
+[2023-02-25 13:57:27,597][12814] Stopping RolloutWorker_w6...
+[2023-02-25 13:57:27,597][00699] Component RolloutWorker_w6 stopped!
+[2023-02-25 13:57:27,583][12809] Loop rollout_proc4_evt_loop terminating...
+[2023-02-25 13:57:27,603][12803] Weights refcount: 2 0
+[2023-02-25 13:57:27,600][12814] Loop rollout_proc6_evt_loop terminating...
+[2023-02-25 13:57:27,609][00699] Component InferenceWorker_p0-w0 stopped!
+[2023-02-25 13:57:27,609][12803] Stopping InferenceWorker_p0-w0...
+[2023-02-25 13:57:27,612][12803] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-25 13:57:27,627][12805] Stopping RolloutWorker_w0...
+[2023-02-25 13:57:27,631][12805] Loop rollout_proc0_evt_loop terminating...
+[2023-02-25 13:57:27,626][00699] Component RolloutWorker_w1 stopped!
+[2023-02-25 13:57:27,634][00699] Component RolloutWorker_w0 stopped!
+[2023-02-25 13:57:27,626][12804] Stopping RolloutWorker_w1...
+[2023-02-25 13:57:27,640][12804] Loop rollout_proc1_evt_loop terminating...
+[2023-02-25 13:57:27,642][00699] Component RolloutWorker_w3 stopped!
+[2023-02-25 13:57:27,645][12813] Stopping RolloutWorker_w3...
+[2023-02-25 13:57:27,652][00699] Component RolloutWorker_w7 stopped!
+[2023-02-25 13:57:27,657][12822] Stopping RolloutWorker_w7...
+[2023-02-25 13:57:27,649][12813] Loop rollout_proc3_evt_loop terminating...
+[2023-02-25 13:57:27,660][12822] Loop rollout_proc7_evt_loop terminating...
+[2023-02-25 13:57:27,666][00699] Component RolloutWorker_w5 stopped!
+[2023-02-25 13:57:27,671][12819] Stopping RolloutWorker_w5...
+[2023-02-25 13:57:27,672][12819] Loop rollout_proc5_evt_loop terminating...
+[2023-02-25 13:57:27,728][12789] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000783_3207168.pth
+[2023-02-25 13:57:27,742][12789] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 13:57:27,930][00699] Component LearnerWorker_p0 stopped!
+[2023-02-25 13:57:27,930][12789] Stopping LearnerWorker_p0...
+[2023-02-25 13:57:27,937][12789] Loop learner_proc0_evt_loop terminating...
+[2023-02-25 13:57:27,937][00699] Waiting for process learner_proc0 to stop...
+[2023-02-25 13:57:29,680][00699] Waiting for process inference_proc0-0 to join...
+[2023-02-25 13:57:29,835][00699] Waiting for process rollout_proc0 to join...
+[2023-02-25 13:57:30,434][00699] Waiting for process rollout_proc1 to join...
+[2023-02-25 13:57:30,436][00699] Waiting for process rollout_proc2 to join...
+[2023-02-25 13:57:30,440][00699] Waiting for process rollout_proc3 to join...
+[2023-02-25 13:57:30,444][00699] Waiting for process rollout_proc4 to join...
+[2023-02-25 13:57:30,446][00699] Waiting for process rollout_proc5 to join...
+[2023-02-25 13:57:30,448][00699] Waiting for process rollout_proc6 to join...
+[2023-02-25 13:57:30,450][00699] Waiting for process rollout_proc7 to join...
+[2023-02-25 13:57:30,451][00699] Batcher 0 profile tree view:
+batching: 26.9716, releasing_batches: 0.0232
+[2023-02-25 13:57:30,452][00699] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+  wait_policy_total: 551.9726
+update_model: 7.9287
+  weight_update: 0.0039
+one_step: 0.0123
+  handle_policy_step: 553.2146
+    deserialize: 15.3083, stack: 3.0778, obs_to_device_normalize: 118.8941, forward: 270.8024, send_messages: 27.3986
+    prepare_outputs: 89.8224
+      to_cpu: 56.1061
+[2023-02-25 13:57:30,454][00699] Learner 0 profile tree view:
+misc: 0.0065, prepare_batch: 16.5891
+train: 77.8750
+  epoch_init: 0.0121, minibatch_init: 0.0073, losses_postprocess: 0.5452, kl_divergence: 0.6278, after_optimizer: 32.9051
+  calculate_losses: 28.1156
+    losses_init: 0.0036, forward_head: 1.8022, bptt_initial: 18.4407, tail: 1.2613, advantages_returns: 0.3129, losses: 3.5919
+    bptt: 2.2877
+      bptt_forward_core: 2.2119
+  update: 14.9934
+    clip: 1.4682
+[2023-02-25 13:57:30,455][00699] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3872, enqueue_policy_requests: 151.8462, env_step: 868.9891, overhead: 23.0898, complete_rollouts: 6.7247
+save_policy_outputs: 21.5644
+  split_output_tensors: 10.6949
+[2023-02-25 13:57:30,457][00699] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3382, enqueue_policy_requests: 157.7623, env_step: 863.9580, overhead: 23.0078, complete_rollouts: 8.1152
+save_policy_outputs: 21.0963
+  split_output_tensors: 10.0864
+[2023-02-25 13:57:30,458][00699] Loop Runner_EvtLoop terminating...
+[2023-02-25 13:57:30,460][00699] Runner profile tree view:
+main_loop: 1179.6343
+[2023-02-25 13:57:30,461][00699] Collected {0: 4005888}, FPS: 3395.9
+[2023-02-25 13:57:30,516][00699] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-25 13:57:30,518][00699] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-25 13:57:30,520][00699] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-25 13:57:30,521][00699] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-25 13:57:30,523][00699] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 13:57:30,527][00699] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-25 13:57:30,530][00699] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 13:57:30,531][00699] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-25 13:57:30,533][00699] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-25 13:57:30,536][00699] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-25 13:57:30,538][00699] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-25 13:57:30,540][00699] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-25 13:57:30,542][00699] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-25 13:57:30,544][00699] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-25 13:57:30,546][00699] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-25 13:57:30,566][00699] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 13:57:30,567][00699] RunningMeanStd input shape: (1,)
+[2023-02-25 13:57:30,584][00699] ConvEncoder: input_channels=3
+[2023-02-25 13:57:30,623][00699] Conv encoder output size: 512
+[2023-02-25 13:57:30,625][00699] Policy head output size: 512
+[2023-02-25 13:57:30,649][00699] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 13:57:32,046][00699] Num frames 100...
+[2023-02-25 13:57:32,155][00699] Num frames 200...
+[2023-02-25 13:57:32,265][00699] Num frames 300...
+[2023-02-25 13:57:32,354][00699] Avg episode rewards: #0: 6.300, true rewards: #0: 3.300
+[2023-02-25 13:57:32,358][00699] Avg episode reward: 6.300, avg true_objective: 3.300
+[2023-02-25 13:57:32,439][00699] Num frames 400...
+[2023-02-25 13:57:32,555][00699] Num frames 500...
+[2023-02-25 13:57:32,669][00699] Num frames 600...
+[2023-02-25 13:57:32,782][00699] Num frames 700...
+[2023-02-25 13:57:32,898][00699] Num frames 800...
+[2023-02-25 13:57:33,027][00699] Num frames 900...
+[2023-02-25 13:57:33,141][00699] Num frames 1000...
+[2023-02-25 13:57:33,308][00699] Avg episode rewards: #0: 9.490, true rewards: #0: 5.490
+[2023-02-25 13:57:33,310][00699] Avg episode reward: 9.490, avg true_objective: 5.490
+[2023-02-25 13:57:33,316][00699] Num frames 1100...
+[2023-02-25 13:57:33,433][00699] Num frames 1200...
+[2023-02-25 13:57:33,545][00699] Num frames 1300...
+[2023-02-25 13:57:33,662][00699] Num frames 1400...
+[2023-02-25 13:57:33,784][00699] Num frames 1500...
+[2023-02-25 13:57:33,857][00699] Avg episode rewards: #0: 8.380, true rewards: #0: 5.047
+[2023-02-25 13:57:33,858][00699] Avg episode reward: 8.380, avg true_objective: 5.047
+[2023-02-25 13:57:33,957][00699] Num frames 1600...
+[2023-02-25 13:57:34,076][00699] Num frames 1700...
+[2023-02-25 13:57:34,194][00699] Num frames 1800...
+[2023-02-25 13:57:34,308][00699] Num frames 1900...
+[2023-02-25 13:57:34,427][00699] Num frames 2000...
+[2023-02-25 13:57:34,539][00699] Num frames 2100...
+[2023-02-25 13:57:34,652][00699] Num frames 2200...
+[2023-02-25 13:57:34,765][00699] Avg episode rewards: #0: 10.375, true rewards: #0: 5.625
+[2023-02-25 13:57:34,767][00699] Avg episode reward: 10.375, avg true_objective: 5.625
+[2023-02-25 13:57:34,828][00699] Num frames 2300...
+[2023-02-25 13:57:34,948][00699] Num frames 2400...
+[2023-02-25 13:57:35,076][00699] Num frames 2500...
+[2023-02-25 13:57:35,190][00699] Num frames 2600...
+[2023-02-25 13:57:35,347][00699] Num frames 2700...
+[2023-02-25 13:57:35,511][00699] Num frames 2800...
+[2023-02-25 13:57:35,668][00699] Num frames 2900...
+[2023-02-25 13:57:35,820][00699] Num frames 3000...
+[2023-02-25 13:57:35,972][00699] Num frames 3100...
+[2023-02-25 13:57:36,136][00699] Num frames 3200...
+[2023-02-25 13:57:36,295][00699] Num frames 3300...
+[2023-02-25 13:57:36,453][00699] Num frames 3400...
+[2023-02-25 13:57:36,625][00699] Avg episode rewards: #0: 14.732, true rewards: #0: 6.932
+[2023-02-25 13:57:36,631][00699] Avg episode reward: 14.732, avg true_objective: 6.932
+[2023-02-25 13:57:36,698][00699] Num frames 3500...
+[2023-02-25 13:57:36,866][00699] Num frames 3600...
+[2023-02-25 13:57:37,029][00699] Num frames 3700...
+[2023-02-25 13:57:37,204][00699] Num frames 3800...
+[2023-02-25 13:57:37,361][00699] Num frames 3900...
+[2023-02-25 13:57:37,532][00699] Num frames 4000...
+[2023-02-25 13:57:37,691][00699] Num frames 4100...
+[2023-02-25 13:57:37,850][00699] Num frames 4200...
+[2023-02-25 13:57:38,007][00699] Num frames 4300...
+[2023-02-25 13:57:38,175][00699] Num frames 4400...
+[2023-02-25 13:57:38,333][00699] Num frames 4500...
+[2023-02-25 13:57:38,495][00699] Num frames 4600...
+[2023-02-25 13:57:38,654][00699] Num frames 4700...
+[2023-02-25 13:57:38,784][00699] Num frames 4800...
+[2023-02-25 13:57:38,896][00699] Num frames 4900...
+[2023-02-25 13:57:39,014][00699] Num frames 5000...
+[2023-02-25 13:57:39,126][00699] Num frames 5100...
+[2023-02-25 13:57:39,247][00699] Num frames 5200...
+[2023-02-25 13:57:39,368][00699] Num frames 5300...
+[2023-02-25 13:57:39,448][00699] Avg episode rewards: #0: 19.703, true rewards: #0: 8.870
+[2023-02-25 13:57:39,451][00699] Avg episode reward: 19.703, avg true_objective: 8.870
+[2023-02-25 13:57:39,549][00699] Num frames 5400...
+[2023-02-25 13:57:39,661][00699] Num frames 5500...
+[2023-02-25 13:57:39,774][00699] Num frames 5600...
+[2023-02-25 13:57:39,888][00699] Num frames 5700...
+[2023-02-25 13:57:39,999][00699] Num frames 5800...
+[2023-02-25 13:57:40,111][00699] Num frames 5900...
+[2023-02-25 13:57:40,230][00699] Num frames 6000...
+[2023-02-25 13:57:40,343][00699] Num frames 6100...
+[2023-02-25 13:57:40,488][00699] Avg episode rewards: #0: 19.257, true rewards: #0: 8.829
+[2023-02-25 13:57:40,489][00699] Avg episode reward: 19.257, avg true_objective: 8.829
+[2023-02-25 13:57:40,515][00699] Num frames 6200...
+[2023-02-25 13:57:40,630][00699] Num frames 6300...
+[2023-02-25 13:57:40,746][00699] Num frames 6400...
+[2023-02-25 13:57:40,860][00699] Num frames 6500...
+[2023-02-25 13:57:40,983][00699] Num frames 6600...
+[2023-02-25 13:57:41,075][00699] Avg episode rewards: #0: 17.660, true rewards: #0: 8.285
+[2023-02-25 13:57:41,077][00699] Avg episode reward: 17.660, avg true_objective: 8.285
+[2023-02-25 13:57:41,173][00699] Num frames 6700...
+[2023-02-25 13:57:41,290][00699] Num frames 6800...
+[2023-02-25 13:57:41,408][00699] Num frames 6900...
+[2023-02-25 13:57:41,516][00699] Num frames 7000...
+[2023-02-25 13:57:41,627][00699] Num frames 7100...
+[2023-02-25 13:57:41,734][00699] Num frames 7200...
+[2023-02-25 13:57:41,842][00699] Num frames 7300...
+[2023-02-25 13:57:41,953][00699] Num frames 7400...
+[2023-02-25 13:57:42,062][00699] Num frames 7500...
+[2023-02-25 13:57:42,187][00699] Avg episode rewards: #0: 17.951, true rewards: #0: 8.396
+[2023-02-25 13:57:42,189][00699] Avg episode reward: 17.951, avg true_objective: 8.396
+[2023-02-25 13:57:42,251][00699] Num frames 7600...
+[2023-02-25 13:57:42,364][00699] Num frames 7700...
+[2023-02-25 13:57:42,479][00699] Num frames 7800...
+[2023-02-25 13:57:42,590][00699] Num frames 7900...
+[2023-02-25 13:57:42,701][00699] Num frames 8000...
+[2023-02-25 13:57:42,813][00699] Num frames 8100...
+[2023-02-25 13:57:42,905][00699] Avg episode rewards: #0: 17.032, true rewards: #0: 8.132
+[2023-02-25 13:57:42,907][00699] Avg episode reward: 17.032, avg true_objective: 8.132
+[2023-02-25 13:58:33,047][00699] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-25 13:58:33,400][00699] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-25 13:58:33,403][00699] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-25 13:58:33,407][00699] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-25 13:58:33,410][00699] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-25 13:58:33,413][00699] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 13:58:33,415][00699] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-25 13:58:33,418][00699] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-25 13:58:33,420][00699] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-25 13:58:33,421][00699] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-25 13:58:33,426][00699] Adding new argument 'hf_repository'='RegisGraptin/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-25 13:58:33,429][00699] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-25 13:58:33,431][00699] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-25 13:58:33,434][00699] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-25 13:58:33,442][00699] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-25 13:58:33,446][00699] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-25 13:58:33,475][00699] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 13:58:33,478][00699] RunningMeanStd input shape: (1,)
+[2023-02-25 13:58:33,505][00699] ConvEncoder: input_channels=3
+[2023-02-25 13:58:33,570][00699] Conv encoder output size: 512
+[2023-02-25 13:58:33,572][00699] Policy head output size: 512
+[2023-02-25 13:58:33,614][00699] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 13:58:34,388][00699] Num frames 100...
+[2023-02-25 13:58:34,563][00699] Num frames 200...
+[2023-02-25 13:58:34,750][00699] Num frames 300...
+[2023-02-25 13:58:34,934][00699] Num frames 400...
+[2023-02-25 13:58:35,091][00699] Avg episode rewards: #0: 7.480, true rewards: #0: 4.480
+[2023-02-25 13:58:35,093][00699] Avg episode reward: 7.480, avg true_objective: 4.480
+[2023-02-25 13:58:35,202][00699] Num frames 500...
+[2023-02-25 13:58:35,390][00699] Num frames 600...
+[2023-02-25 13:58:35,569][00699] Num frames 700...
+[2023-02-25 13:58:35,752][00699] Num frames 800...
+[2023-02-25 13:58:35,934][00699] Num frames 900...
+[2023-02-25 13:58:36,112][00699] Num frames 1000...
+[2023-02-25 13:58:36,215][00699] Avg episode rewards: #0: 10.120, true rewards: #0: 5.120
+[2023-02-25 13:58:36,218][00699] Avg episode reward: 10.120, avg true_objective: 5.120
+[2023-02-25 13:58:36,367][00699] Num frames 1100...
+[2023-02-25 13:58:36,554][00699] Num frames 1200...
+[2023-02-25 13:58:36,737][00699] Num frames 1300...
+[2023-02-25 13:58:36,921][00699] Num frames 1400...
+[2023-02-25 13:58:37,104][00699] Num frames 1500...
+[2023-02-25 13:58:37,281][00699] Num frames 1600...
+[2023-02-25 13:58:37,467][00699] Num frames 1700...
+[2023-02-25 13:58:37,645][00699] Num frames 1800...
+[2023-02-25 13:58:37,822][00699] Num frames 1900...
+[2023-02-25 13:58:38,001][00699] Num frames 2000...
+[2023-02-25 13:58:38,169][00699] Num frames 2100...
+[2023-02-25 13:58:38,360][00699] Avg episode rewards: #0: 15.920, true rewards: #0: 7.253
+[2023-02-25 13:58:38,362][00699] Avg episode reward: 15.920, avg true_objective: 7.253
+[2023-02-25 13:58:38,415][00699] Num frames 2200...
+[2023-02-25 13:58:38,599][00699] Num frames 2300...
+[2023-02-25 13:58:38,788][00699] Num frames 2400...
+[2023-02-25 13:58:38,977][00699] Num frames 2500...
+[2023-02-25 13:58:39,168][00699] Num frames 2600...
+[2023-02-25 13:58:39,348][00699] Num frames 2700...
+[2023-02-25 13:58:39,532][00699] Num frames 2800...
+[2023-02-25 13:58:39,725][00699] Num frames 2900...
+[2023-02-25 13:58:39,922][00699] Num frames 3000...
+[2023-02-25 13:58:40,124][00699] Num frames 3100...
+[2023-02-25 13:58:40,302][00699] Num frames 3200...
+[2023-02-25 13:58:40,476][00699] Num frames 3300...
+[2023-02-25 13:58:40,664][00699] Num frames 3400...
+[2023-02-25 13:58:40,883][00699] Avg episode rewards: #0: 20.220, true rewards: #0: 8.720
+[2023-02-25 13:58:40,884][00699] Avg episode reward: 20.220, avg true_objective: 8.720
+[2023-02-25 13:58:40,912][00699] Num frames 3500...
+[2023-02-25 13:58:41,104][00699] Num frames 3600...
+[2023-02-25 13:58:41,299][00699] Num frames 3700...
+[2023-02-25 13:58:41,501][00699] Num frames 3800...
+[2023-02-25 13:58:41,675][00699] Num frames 3900...
+[2023-02-25 13:58:41,833][00699] Num frames 4000...
+[2023-02-25 13:58:41,985][00699] Num frames 4100...
+[2023-02-25 13:58:42,125][00699] Num frames 4200...
+[2023-02-25 13:58:42,238][00699] Num frames 4300...
+[2023-02-25 13:58:42,355][00699] Num frames 4400...
+[2023-02-25 13:58:42,414][00699] Avg episode rewards: #0: 20.002, true rewards: #0: 8.802
+[2023-02-25 13:58:42,416][00699] Avg episode reward: 20.002, avg true_objective: 8.802
+[2023-02-25 13:58:42,531][00699] Num frames 4500...
+[2023-02-25 13:58:42,658][00699] Num frames 4600...
+[2023-02-25 13:58:42,772][00699] Num frames 4700...
+[2023-02-25 13:58:42,883][00699] Num frames 4800...
+[2023-02-25 13:58:42,959][00699] Avg episode rewards: #0: 17.528, true rewards: #0: 8.028
+[2023-02-25 13:58:42,960][00699] Avg episode reward: 17.528, avg true_objective: 8.028
+[2023-02-25 13:58:43,059][00699] Num frames 4900...
+[2023-02-25 13:58:43,174][00699] Num frames 5000...
+[2023-02-25 13:58:43,284][00699] Num frames 5100...
+[2023-02-25 13:58:43,396][00699] Num frames 5200...
+[2023-02-25 13:58:43,506][00699] Num frames 5300...
+[2023-02-25 13:58:43,616][00699] Num frames 5400...
+[2023-02-25 13:58:43,726][00699] Num frames 5500...
+[2023-02-25 13:58:43,834][00699] Num frames 5600...
+[2023-02-25 13:58:43,944][00699] Num frames 5700...
+[2023-02-25 13:58:44,015][00699] Avg episode rewards: #0: 17.590, true rewards: #0: 8.161
+[2023-02-25 13:58:44,017][00699] Avg episode reward: 17.590, avg true_objective: 8.161
+[2023-02-25 13:58:44,128][00699] Num frames 5800...
+[2023-02-25 13:58:44,241][00699] Num frames 5900...
+[2023-02-25 13:58:44,351][00699] Num frames 6000...
+[2023-02-25 13:58:44,463][00699] Num frames 6100...
+[2023-02-25 13:58:44,574][00699] Num frames 6200...
+[2023-02-25 13:58:44,659][00699] Avg episode rewards: #0: 16.406, true rewards: #0: 7.781
+[2023-02-25 13:58:44,661][00699] Avg episode reward: 16.406, avg true_objective: 7.781
+[2023-02-25 13:58:44,750][00699] Num frames 6300...
+[2023-02-25 13:58:44,871][00699] Num frames 6400...
+[2023-02-25 13:58:44,986][00699] Num frames 6500...
+[2023-02-25 13:58:45,106][00699] Num frames 6600...
+[2023-02-25 13:58:45,222][00699] Num frames 6700...
+[2023-02-25 13:58:45,332][00699] Num frames 6800...
+[2023-02-25 13:58:45,442][00699] Num frames 6900...
+[2023-02-25 13:58:45,554][00699] Num frames 7000...
+[2023-02-25 13:58:45,667][00699] Num frames 7100...
+[2023-02-25 13:58:45,777][00699] Num frames 7200...
+[2023-02-25 13:58:45,889][00699] Num frames 7300...
+[2023-02-25 13:58:46,000][00699] Num frames 7400...
+[2023-02-25 13:58:46,070][00699] Avg episode rewards: #0: 17.232, true rewards: #0: 8.232
+[2023-02-25 13:58:46,072][00699] Avg episode reward: 17.232, avg true_objective: 8.232
+[2023-02-25 13:58:46,185][00699] Num frames 7500...
+[2023-02-25 13:58:46,300][00699] Num frames 7600...
+[2023-02-25 13:58:46,407][00699] Num frames 7700...
+[2023-02-25 13:58:46,517][00699] Num frames 7800...
+[2023-02-25 13:58:46,628][00699] Num frames 7900...
+[2023-02-25 13:58:46,742][00699] Num frames 8000...
+[2023-02-25 13:58:46,854][00699] Num frames 8100...
+[2023-02-25 13:58:46,956][00699] Avg episode rewards: #0: 17.043, true rewards: #0: 8.143
+[2023-02-25 13:58:46,957][00699] Avg episode reward: 17.043, avg true_objective: 8.143
+[2023-02-25 13:59:37,297][00699] Replay video saved to /content/train_dir/default_experiment/replay.mp4!