diff --git "a/sf_log.txt" "b/sf_log.txt" --- "a/sf_log.txt" +++ "b/sf_log.txt" @@ -1,50 +1,50 @@ -[2023-02-24 13:37:48,414][00980] Saving configuration to /content/train_dir/default_experiment/config.json... -[2023-02-24 13:37:48,419][00980] Rollout worker 0 uses device cpu -[2023-02-24 13:37:48,421][00980] Rollout worker 1 uses device cpu -[2023-02-24 13:37:48,424][00980] Rollout worker 2 uses device cpu -[2023-02-24 13:37:48,425][00980] Rollout worker 3 uses device cpu -[2023-02-24 13:37:48,426][00980] Rollout worker 4 uses device cpu -[2023-02-24 13:37:48,427][00980] Rollout worker 5 uses device cpu -[2023-02-24 13:37:48,429][00980] Rollout worker 6 uses device cpu -[2023-02-24 13:37:48,430][00980] Rollout worker 7 uses device cpu -[2023-02-24 13:37:48,612][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 13:37:48,614][00980] InferenceWorker_p0-w0: min num requests: 2 -[2023-02-24 13:37:48,648][00980] Starting all processes... -[2023-02-24 13:37:48,650][00980] Starting process learner_proc0 -[2023-02-24 13:37:48,704][00980] Starting all processes... -[2023-02-24 13:37:48,713][00980] Starting process inference_proc0-0 -[2023-02-24 13:37:48,713][00980] Starting process rollout_proc0 -[2023-02-24 13:37:48,715][00980] Starting process rollout_proc1 -[2023-02-24 13:37:48,715][00980] Starting process rollout_proc2 -[2023-02-24 13:37:48,715][00980] Starting process rollout_proc3 -[2023-02-24 13:37:48,715][00980] Starting process rollout_proc4 -[2023-02-24 13:37:48,716][00980] Starting process rollout_proc5 -[2023-02-24 13:37:48,716][00980] Starting process rollout_proc6 -[2023-02-24 13:37:48,716][00980] Starting process rollout_proc7 -[2023-02-24 13:38:00,073][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 13:38:00,074][11152] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-02-24 13:38:00,181][11169] Worker 1 uses CPU cores [1] -[2023-02-24 13:38:00,225][11171] Worker 5 uses CPU cores [1] -[2023-02-24 13:38:00,643][11170] Worker 4 uses CPU cores [0] -[2023-02-24 13:38:00,644][11166] Worker 0 uses CPU cores [0] -[2023-02-24 13:38:00,645][11168] Worker 2 uses CPU cores [0] -[2023-02-24 13:38:00,656][11174] Worker 7 uses CPU cores [1] -[2023-02-24 13:38:00,681][11173] Worker 3 uses CPU cores [1] -[2023-02-24 13:38:00,723][11172] Worker 6 uses CPU cores [0] -[2023-02-24 13:38:00,808][11167] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 13:38:00,809][11167] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-02-24 13:38:01,040][11152] Num visible devices: 1 -[2023-02-24 13:38:01,040][11167] Num visible devices: 1 -[2023-02-24 13:38:01,054][11152] Starting seed is not provided -[2023-02-24 13:38:01,054][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 13:38:01,055][11152] Initializing actor-critic model on device cuda:0 -[2023-02-24 13:38:01,055][11152] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 13:38:01,057][11152] RunningMeanStd input shape: (1,) -[2023-02-24 13:38:01,069][11152] ConvEncoder: input_channels=3 -[2023-02-24 13:38:01,348][11152] Conv encoder output size: 512 -[2023-02-24 13:38:01,348][11152] Policy head output size: 512 -[2023-02-24 13:38:01,398][11152] Created Actor Critic model with architecture: -[2023-02-24 13:38:01,398][11152] ActorCriticSharedWeights( +[2023-02-24 15:06:37,628][00176] Saving configuration to /content/train_dir/default_experiment/config.json... 
+[2023-02-24 15:06:37,633][00176] Rollout worker 0 uses device cpu +[2023-02-24 15:06:37,636][00176] Rollout worker 1 uses device cpu +[2023-02-24 15:06:37,639][00176] Rollout worker 2 uses device cpu +[2023-02-24 15:06:37,642][00176] Rollout worker 3 uses device cpu +[2023-02-24 15:06:37,643][00176] Rollout worker 4 uses device cpu +[2023-02-24 15:06:37,645][00176] Rollout worker 5 uses device cpu +[2023-02-24 15:06:37,651][00176] Rollout worker 6 uses device cpu +[2023-02-24 15:06:37,653][00176] Rollout worker 7 uses device cpu +[2023-02-24 15:06:37,969][00176] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-24 15:06:37,975][00176] InferenceWorker_p0-w0: min num requests: 2 +[2023-02-24 15:06:38,022][00176] Starting all processes... +[2023-02-24 15:06:38,040][00176] Starting process learner_proc0 +[2023-02-24 15:06:38,128][00176] Starting all processes... +[2023-02-24 15:06:38,151][00176] Starting process inference_proc0-0 +[2023-02-24 15:06:38,152][00176] Starting process rollout_proc0 +[2023-02-24 15:06:38,152][00176] Starting process rollout_proc1 +[2023-02-24 15:06:38,152][00176] Starting process rollout_proc2 +[2023-02-24 15:06:38,152][00176] Starting process rollout_proc3 +[2023-02-24 15:06:38,152][00176] Starting process rollout_proc4 +[2023-02-24 15:06:38,160][00176] Starting process rollout_proc5 +[2023-02-24 15:06:38,162][00176] Starting process rollout_proc6 +[2023-02-24 15:06:38,162][00176] Starting process rollout_proc7 +[2023-02-24 15:06:51,049][10355] Worker 3 uses CPU cores [1] +[2023-02-24 15:06:51,236][10354] Worker 4 uses CPU cores [0] +[2023-02-24 15:06:51,254][10336] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-24 15:06:51,257][10336] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-02-24 15:06:51,372][10352] Worker 1 uses CPU cores [1] +[2023-02-24 15:06:51,376][10351] Worker 0 uses CPU cores [0] +[2023-02-24 15:06:51,548][10357] Worker 7 uses CPU cores [1] +[2023-02-24 15:06:51,675][10350] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-24 15:06:51,679][10350] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-02-24 15:06:51,779][10353] Worker 2 uses CPU cores [0] +[2023-02-24 15:06:51,857][10356] Worker 5 uses CPU cores [1] +[2023-02-24 15:06:51,913][10358] Worker 6 uses CPU cores [0] +[2023-02-24 15:06:52,399][10350] Num visible devices: 1 +[2023-02-24 15:06:52,398][10336] Num visible devices: 1 +[2023-02-24 15:06:52,415][10336] Starting seed is not provided +[2023-02-24 15:06:52,416][10336] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-24 15:06:52,416][10336] Initializing actor-critic model on device cuda:0 +[2023-02-24 15:06:52,419][10336] RunningMeanStd input shape: (3, 72, 128) +[2023-02-24 15:06:52,421][10336] RunningMeanStd input shape: (1,) +[2023-02-24 15:06:52,433][10336] ConvEncoder: input_channels=3 +[2023-02-24 15:06:52,733][10336] Conv encoder output size: 512 +[2023-02-24 15:06:52,733][10336] Policy head output size: 512 +[2023-02-24 15:06:52,788][10336] Created Actor Critic model with architecture: +[2023-02-24 15:06:52,788][10336] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( @@ -85,3113 +85,2035 @@ (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) -[2023-02-24 13:38:08,318][11152] Using optimizer -[2023-02-24 13:38:08,319][11152] No checkpoints found 
-[2023-02-24 13:38:08,319][11152] Did not load from checkpoint, starting from scratch! -[2023-02-24 13:38:08,319][11152] Initialized policy 0 weights for model version 0 -[2023-02-24 13:38:08,324][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 13:38:08,331][11152] LearnerWorker_p0 finished initialization! -[2023-02-24 13:38:08,532][11167] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 13:38:08,533][11167] RunningMeanStd input shape: (1,) -[2023-02-24 13:38:08,545][11167] ConvEncoder: input_channels=3 -[2023-02-24 13:38:08,605][00980] Heartbeat connected on Batcher_0 -[2023-02-24 13:38:08,612][00980] Heartbeat connected on LearnerWorker_p0 -[2023-02-24 13:38:08,624][00980] Heartbeat connected on RolloutWorker_w0 -[2023-02-24 13:38:08,627][00980] Heartbeat connected on RolloutWorker_w1 -[2023-02-24 13:38:08,632][00980] Heartbeat connected on RolloutWorker_w2 -[2023-02-24 13:38:08,635][00980] Heartbeat connected on RolloutWorker_w3 -[2023-02-24 13:38:08,638][00980] Heartbeat connected on RolloutWorker_w4 -[2023-02-24 13:38:08,641][00980] Heartbeat connected on RolloutWorker_w5 -[2023-02-24 13:38:08,645][00980] Heartbeat connected on RolloutWorker_w6 -[2023-02-24 13:38:08,648][00980] Heartbeat connected on RolloutWorker_w7 -[2023-02-24 13:38:08,675][11167] Conv encoder output size: 512 -[2023-02-24 13:38:08,675][11167] Policy head output size: 512 -[2023-02-24 13:38:11,675][00980] Inference worker 0-0 is ready! -[2023-02-24 13:38:11,680][00980] All inference workers are ready! Signal rollout workers to start! -[2023-02-24 13:38:11,682][00980] Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-24 13:38:11,815][11170] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,823][11166] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,836][11172] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,861][11168] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,939][11171] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,970][11169] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,949][11174] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:11,997][11173] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:38:12,941][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 13:38:13,336][11171] Decorrelating experience for 0 frames... -[2023-02-24 13:38:13,337][11173] Decorrelating experience for 0 frames... -[2023-02-24 13:38:13,337][11172] Decorrelating experience for 0 frames... -[2023-02-24 13:38:13,336][11170] Decorrelating experience for 0 frames... -[2023-02-24 13:38:13,334][11166] Decorrelating experience for 0 frames... -[2023-02-24 13:38:14,535][11173] Decorrelating experience for 32 frames... -[2023-02-24 13:38:14,540][11169] Decorrelating experience for 0 frames... -[2023-02-24 13:38:14,548][11168] Decorrelating experience for 0 frames... -[2023-02-24 13:38:14,554][11171] Decorrelating experience for 32 frames... -[2023-02-24 13:38:14,564][11170] Decorrelating experience for 32 frames... -[2023-02-24 13:38:15,085][11169] Decorrelating experience for 32 frames... -[2023-02-24 13:38:15,712][11166] Decorrelating experience for 32 frames... -[2023-02-24 13:38:15,719][11168] Decorrelating experience for 32 frames... 
-[2023-02-24 13:38:15,721][11172] Decorrelating experience for 32 frames... -[2023-02-24 13:38:15,962][11170] Decorrelating experience for 64 frames... -[2023-02-24 13:38:16,447][11173] Decorrelating experience for 64 frames... -[2023-02-24 13:38:16,505][11174] Decorrelating experience for 0 frames... -[2023-02-24 13:38:17,213][11168] Decorrelating experience for 64 frames... -[2023-02-24 13:38:17,219][11172] Decorrelating experience for 64 frames... -[2023-02-24 13:38:17,357][11170] Decorrelating experience for 96 frames... -[2023-02-24 13:38:17,559][11171] Decorrelating experience for 64 frames... -[2023-02-24 13:38:17,646][11174] Decorrelating experience for 32 frames... -[2023-02-24 13:38:17,740][11173] Decorrelating experience for 96 frames... -[2023-02-24 13:38:17,909][11166] Decorrelating experience for 64 frames... -[2023-02-24 13:38:17,940][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 13:38:18,591][11172] Decorrelating experience for 96 frames... -[2023-02-24 13:38:18,779][11168] Decorrelating experience for 96 frames... -[2023-02-24 13:38:19,018][11166] Decorrelating experience for 96 frames... -[2023-02-24 13:38:19,203][11171] Decorrelating experience for 96 frames... -[2023-02-24 13:38:19,352][11174] Decorrelating experience for 64 frames... -[2023-02-24 13:38:19,715][11169] Decorrelating experience for 64 frames... -[2023-02-24 13:38:20,019][11174] Decorrelating experience for 96 frames... -[2023-02-24 13:38:20,258][11169] Decorrelating experience for 96 frames... -[2023-02-24 13:38:22,940][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 4.4. Samples: 44. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 13:38:22,942][00980] Avg episode reward: [(0, '1.652')] -[2023-02-24 13:38:24,162][11152] Signal inference workers to stop experience collection... -[2023-02-24 13:38:24,169][11167] InferenceWorker_p0-w0: stopping experience collection -[2023-02-24 13:38:26,901][11152] Signal inference workers to resume experience collection... -[2023-02-24 13:38:26,901][11167] InferenceWorker_p0-w0: resuming experience collection -[2023-02-24 13:38:27,940][00980] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 173.4. Samples: 2600. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-02-24 13:38:27,943][00980] Avg episode reward: [(0, '2.516')] -[2023-02-24 13:38:32,941][00980] Fps is (10 sec: 2047.8, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 293.0. Samples: 5860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:38:32,943][00980] Avg episode reward: [(0, '3.416')] -[2023-02-24 13:38:36,828][11167] Updated weights for policy 0, policy_version 10 (0.0013) -[2023-02-24 13:38:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 1802.4, 300 sec: 1802.4). Total num frames: 45056. Throughput: 0: 363.7. Samples: 9092. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) -[2023-02-24 13:38:37,941][00980] Avg episode reward: [(0, '4.245')] -[2023-02-24 13:38:42,940][00980] Fps is (10 sec: 4506.1, 60 sec: 2184.6, 300 sec: 2184.6). Total num frames: 65536. Throughput: 0: 537.4. Samples: 16120. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:38:42,943][00980] Avg episode reward: [(0, '4.308')] -[2023-02-24 13:38:47,434][11167] Updated weights for policy 0, policy_version 20 (0.0013) -[2023-02-24 13:38:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 2340.7, 300 sec: 2340.7). Total num frames: 81920. Throughput: 0: 606.9. Samples: 21242. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:38:47,943][00980] Avg episode reward: [(0, '4.248')] -[2023-02-24 13:38:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 2355.3, 300 sec: 2355.3). Total num frames: 94208. Throughput: 0: 585.2. Samples: 23408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:38:52,948][00980] Avg episode reward: [(0, '4.218')] -[2023-02-24 13:38:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 2639.7, 300 sec: 2639.7). Total num frames: 118784. Throughput: 0: 650.8. Samples: 29286. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:38:57,942][00980] Avg episode reward: [(0, '4.411')] -[2023-02-24 13:38:57,945][11152] Saving new best policy, reward=4.411! -[2023-02-24 13:38:58,493][11167] Updated weights for policy 0, policy_version 30 (0.0020) -[2023-02-24 13:39:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 2867.3, 300 sec: 2867.3). Total num frames: 143360. Throughput: 0: 807.6. Samples: 36342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:39:02,942][00980] Avg episode reward: [(0, '4.305')] -[2023-02-24 13:39:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 2904.5, 300 sec: 2904.5). Total num frames: 159744. Throughput: 0: 864.7. Samples: 38956. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-24 13:39:07,947][00980] Avg episode reward: [(0, '4.309')] -[2023-02-24 13:39:09,169][11167] Updated weights for policy 0, policy_version 40 (0.0027) -[2023-02-24 13:39:12,940][00980] Fps is (10 sec: 2867.0, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 172032. Throughput: 0: 906.8. Samples: 43406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:39:12,947][00980] Avg episode reward: [(0, '4.432')] -[2023-02-24 13:39:12,961][11152] Saving new best policy, reward=4.432! -[2023-02-24 13:39:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3024.8). Total num frames: 196608. Throughput: 0: 973.5. Samples: 49666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:39:17,947][00980] Avg episode reward: [(0, '4.399')] -[2023-02-24 13:39:19,472][11167] Updated weights for policy 0, policy_version 50 (0.0025) -[2023-02-24 13:39:22,940][00980] Fps is (10 sec: 4915.6, 60 sec: 3686.4, 300 sec: 3159.8). Total num frames: 221184. Throughput: 0: 979.9. Samples: 53186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:39:22,942][00980] Avg episode reward: [(0, '4.317')] -[2023-02-24 13:39:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3167.6). Total num frames: 237568. Throughput: 0: 951.6. Samples: 58944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:39:27,944][00980] Avg episode reward: [(0, '4.377')] -[2023-02-24 13:39:30,688][11167] Updated weights for policy 0, policy_version 60 (0.0021) -[2023-02-24 13:39:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3823.0, 300 sec: 3123.3). Total num frames: 249856. Throughput: 0: 938.6. Samples: 63480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:39:32,942][00980] Avg episode reward: [(0, '4.564')] -[2023-02-24 13:39:32,961][11152] Saving new best policy, reward=4.564! 
-[2023-02-24 13:39:37,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3228.7). Total num frames: 274432. Throughput: 0: 960.5. Samples: 66632. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:39:37,947][00980] Avg episode reward: [(0, '4.579')] -[2023-02-24 13:39:37,952][11152] Saving new best policy, reward=4.579! -[2023-02-24 13:39:40,521][11167] Updated weights for policy 0, policy_version 70 (0.0024) -[2023-02-24 13:39:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3276.9). Total num frames: 294912. Throughput: 0: 985.4. Samples: 73630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:39:42,943][00980] Avg episode reward: [(0, '4.581')] -[2023-02-24 13:39:42,956][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth... -[2023-02-24 13:39:43,156][11152] Saving new best policy, reward=4.581! -[2023-02-24 13:39:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3276.9). Total num frames: 311296. Throughput: 0: 941.9. Samples: 78726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:39:47,945][00980] Avg episode reward: [(0, '4.659')] -[2023-02-24 13:39:47,949][11152] Saving new best policy, reward=4.659! -[2023-02-24 13:39:52,851][11167] Updated weights for policy 0, policy_version 80 (0.0015) -[2023-02-24 13:39:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3276.9). Total num frames: 327680. Throughput: 0: 929.0. Samples: 80760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:39:52,942][00980] Avg episode reward: [(0, '4.729')] -[2023-02-24 13:39:52,955][11152] Saving new best policy, reward=4.729! -[2023-02-24 13:39:57,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3315.9). Total num frames: 348160. Throughput: 0: 960.9. Samples: 86646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:39:57,943][00980] Avg episode reward: [(0, '4.488')] -[2023-02-24 13:40:01,727][11167] Updated weights for policy 0, policy_version 90 (0.0021) -[2023-02-24 13:40:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3388.6). Total num frames: 372736. Throughput: 0: 983.8. Samples: 93938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:40:02,943][00980] Avg episode reward: [(0, '4.355')] -[2023-02-24 13:40:07,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3383.7). Total num frames: 389120. Throughput: 0: 964.3. Samples: 96578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:40:07,945][00980] Avg episode reward: [(0, '4.497')] -[2023-02-24 13:40:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3379.2). Total num frames: 405504. Throughput: 0: 936.4. Samples: 101084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:40:12,943][00980] Avg episode reward: [(0, '4.596')] -[2023-02-24 13:40:13,930][11167] Updated weights for policy 0, policy_version 100 (0.0047) -[2023-02-24 13:40:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3407.9). Total num frames: 425984. Throughput: 0: 977.8. Samples: 107482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:40:17,942][00980] Avg episode reward: [(0, '4.536')] -[2023-02-24 13:40:22,536][11167] Updated weights for policy 0, policy_version 110 (0.0022) -[2023-02-24 13:40:22,940][00980] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3465.9). Total num frames: 450560. Throughput: 0: 987.2. Samples: 111054. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:40:22,943][00980] Avg episode reward: [(0, '4.550')] -[2023-02-24 13:40:27,942][00980] Fps is (10 sec: 4094.9, 60 sec: 3822.8, 300 sec: 3458.8). Total num frames: 466944. Throughput: 0: 956.5. Samples: 116674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:40:27,945][00980] Avg episode reward: [(0, '4.564')] -[2023-02-24 13:40:32,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3822.9, 300 sec: 3423.1). Total num frames: 479232. Throughput: 0: 943.0. Samples: 121160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:40:32,948][00980] Avg episode reward: [(0, '4.622')] -[2023-02-24 13:40:34,819][11167] Updated weights for policy 0, policy_version 120 (0.0035) -[2023-02-24 13:40:37,940][00980] Fps is (10 sec: 3687.3, 60 sec: 3822.9, 300 sec: 3474.6). Total num frames: 503808. Throughput: 0: 970.1. Samples: 124414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:40:37,952][00980] Avg episode reward: [(0, '4.742')] -[2023-02-24 13:40:37,960][11152] Saving new best policy, reward=4.742! -[2023-02-24 13:40:42,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3522.6). Total num frames: 528384. Throughput: 0: 993.2. Samples: 131340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:40:42,943][00980] Avg episode reward: [(0, '4.836')] -[2023-02-24 13:40:42,954][11152] Saving new best policy, reward=4.836! -[2023-02-24 13:40:43,915][11167] Updated weights for policy 0, policy_version 130 (0.0016) -[2023-02-24 13:40:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3488.2). Total num frames: 540672. Throughput: 0: 945.4. Samples: 136480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:40:47,942][00980] Avg episode reward: [(0, '4.832')] -[2023-02-24 13:40:52,941][00980] Fps is (10 sec: 2867.0, 60 sec: 3822.9, 300 sec: 3481.6). Total num frames: 557056. Throughput: 0: 934.8. Samples: 138646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:40:52,947][00980] Avg episode reward: [(0, '4.762')] -[2023-02-24 13:40:56,054][11167] Updated weights for policy 0, policy_version 140 (0.0025) -[2023-02-24 13:40:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3525.1). Total num frames: 581632. Throughput: 0: 970.1. Samples: 144738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:40:57,946][00980] Avg episode reward: [(0, '4.819')] -[2023-02-24 13:41:02,942][00980] Fps is (10 sec: 4914.4, 60 sec: 3891.0, 300 sec: 3565.9). Total num frames: 606208. Throughput: 0: 987.2. Samples: 151908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:41:02,947][00980] Avg episode reward: [(0, '4.724')] -[2023-02-24 13:41:05,336][11167] Updated weights for policy 0, policy_version 150 (0.0016) -[2023-02-24 13:41:07,943][00980] Fps is (10 sec: 3685.0, 60 sec: 3822.7, 300 sec: 3534.2). Total num frames: 618496. Throughput: 0: 961.7. Samples: 154332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:41:07,949][00980] Avg episode reward: [(0, '4.639')] -[2023-02-24 13:41:12,943][00980] Fps is (10 sec: 2866.9, 60 sec: 3822.7, 300 sec: 3527.1). Total num frames: 634880. Throughput: 0: 935.8. Samples: 158786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:41:12,946][00980] Avg episode reward: [(0, '4.888')] -[2023-02-24 13:41:12,966][11152] Saving new best policy, reward=4.888! 
-[2023-02-24 13:41:17,053][11167] Updated weights for policy 0, policy_version 160 (0.0031) -[2023-02-24 13:41:17,940][00980] Fps is (10 sec: 4097.5, 60 sec: 3891.2, 300 sec: 3564.7). Total num frames: 659456. Throughput: 0: 979.3. Samples: 165230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:41:17,945][00980] Avg episode reward: [(0, '4.879')] -[2023-02-24 13:41:22,940][00980] Fps is (10 sec: 4507.2, 60 sec: 3822.9, 300 sec: 3578.6). Total num frames: 679936. Throughput: 0: 984.0. Samples: 168692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:41:22,942][00980] Avg episode reward: [(0, '4.645')] -[2023-02-24 13:41:27,308][11167] Updated weights for policy 0, policy_version 170 (0.0025) -[2023-02-24 13:41:27,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3822.8, 300 sec: 3570.8). Total num frames: 696320. Throughput: 0: 952.9. Samples: 174226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:41:27,948][00980] Avg episode reward: [(0, '4.523')] -[2023-02-24 13:41:32,941][00980] Fps is (10 sec: 3276.3, 60 sec: 3891.1, 300 sec: 3563.5). Total num frames: 712704. Throughput: 0: 939.6. Samples: 178764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:41:32,949][00980] Avg episode reward: [(0, '4.561')] -[2023-02-24 13:41:37,940][00980] Fps is (10 sec: 3688.3, 60 sec: 3822.9, 300 sec: 3576.5). Total num frames: 733184. Throughput: 0: 967.9. Samples: 182202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:41:37,942][00980] Avg episode reward: [(0, '4.777')] -[2023-02-24 13:41:37,966][11167] Updated weights for policy 0, policy_version 180 (0.0012) -[2023-02-24 13:41:42,940][00980] Fps is (10 sec: 4506.3, 60 sec: 3822.9, 300 sec: 3608.4). Total num frames: 757760. Throughput: 0: 991.1. Samples: 189336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:41:42,946][00980] Avg episode reward: [(0, '4.800')] -[2023-02-24 13:41:42,957][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth... -[2023-02-24 13:41:47,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3600.7). Total num frames: 774144. Throughput: 0: 944.4. Samples: 194404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:41:47,943][00980] Avg episode reward: [(0, '4.725')] -[2023-02-24 13:41:48,509][11167] Updated weights for policy 0, policy_version 190 (0.0022) -[2023-02-24 13:41:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3593.3). Total num frames: 790528. Throughput: 0: 941.7. Samples: 196704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:41:52,945][00980] Avg episode reward: [(0, '4.819')] -[2023-02-24 13:41:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3622.7). Total num frames: 815104. Throughput: 0: 978.3. Samples: 202806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:41:57,945][00980] Avg episode reward: [(0, '5.099')] -[2023-02-24 13:41:57,949][11152] Saving new best policy, reward=5.099! -[2023-02-24 13:41:58,857][11167] Updated weights for policy 0, policy_version 200 (0.0017) -[2023-02-24 13:42:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3823.1, 300 sec: 3633.0). Total num frames: 835584. Throughput: 0: 993.7. Samples: 209946. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:42:02,946][00980] Avg episode reward: [(0, '5.284')] -[2023-02-24 13:42:02,959][11152] Saving new best policy, reward=5.284! -[2023-02-24 13:42:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3625.4). 
Total num frames: 851968. Throughput: 0: 972.4. Samples: 212452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:42:07,947][00980] Avg episode reward: [(0, '5.709')] -[2023-02-24 13:42:07,949][11152] Saving new best policy, reward=5.709! -[2023-02-24 13:42:10,052][11167] Updated weights for policy 0, policy_version 210 (0.0013) -[2023-02-24 13:42:12,941][00980] Fps is (10 sec: 3276.4, 60 sec: 3891.3, 300 sec: 3618.1). Total num frames: 868352. Throughput: 0: 948.8. Samples: 216918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:42:12,944][00980] Avg episode reward: [(0, '5.637')] -[2023-02-24 13:42:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3627.9). Total num frames: 888832. Throughput: 0: 995.5. Samples: 223560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:42:17,953][00980] Avg episode reward: [(0, '5.676')] -[2023-02-24 13:42:19,687][11167] Updated weights for policy 0, policy_version 220 (0.0018) -[2023-02-24 13:42:22,940][00980] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3653.7). Total num frames: 913408. Throughput: 0: 998.4. Samples: 227128. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:42:22,944][00980] Avg episode reward: [(0, '5.737')] -[2023-02-24 13:42:23,034][11152] Saving new best policy, reward=5.737! -[2023-02-24 13:42:27,942][00980] Fps is (10 sec: 4095.1, 60 sec: 3891.4, 300 sec: 3646.2). Total num frames: 929792. Throughput: 0: 966.9. Samples: 232850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:42:27,952][00980] Avg episode reward: [(0, '5.743')] -[2023-02-24 13:42:27,960][11152] Saving new best policy, reward=5.743! -[2023-02-24 13:42:31,215][11167] Updated weights for policy 0, policy_version 230 (0.0014) -[2023-02-24 13:42:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3639.2). Total num frames: 946176. Throughput: 0: 952.4. Samples: 237264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:42:32,944][00980] Avg episode reward: [(0, '5.787')] -[2023-02-24 13:42:32,954][11152] Saving new best policy, reward=5.787! -[2023-02-24 13:42:37,940][00980] Fps is (10 sec: 4096.9, 60 sec: 3959.5, 300 sec: 3663.2). Total num frames: 970752. Throughput: 0: 978.3. Samples: 240728. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:42:37,942][00980] Avg episode reward: [(0, '6.137')] -[2023-02-24 13:42:37,948][11152] Saving new best policy, reward=6.137! -[2023-02-24 13:42:40,345][11167] Updated weights for policy 0, policy_version 240 (0.0017) -[2023-02-24 13:42:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3671.3). Total num frames: 991232. Throughput: 0: 999.2. Samples: 247770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:42:42,945][00980] Avg episode reward: [(0, '6.197')] -[2023-02-24 13:42:42,962][11152] Saving new best policy, reward=6.197! -[2023-02-24 13:42:47,942][00980] Fps is (10 sec: 3685.6, 60 sec: 3891.1, 300 sec: 3664.1). Total num frames: 1007616. Throughput: 0: 953.6. Samples: 252862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:42:47,946][00980] Avg episode reward: [(0, '6.539')] -[2023-02-24 13:42:47,948][11152] Saving new best policy, reward=6.539! -[2023-02-24 13:42:52,614][11167] Updated weights for policy 0, policy_version 250 (0.0030) -[2023-02-24 13:42:52,942][00980] Fps is (10 sec: 3276.2, 60 sec: 3891.1, 300 sec: 3657.1). Total num frames: 1024000. Throughput: 0: 945.6. Samples: 255006. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:42:52,946][00980] Avg episode reward: [(0, '6.493')] -[2023-02-24 13:42:57,940][00980] Fps is (10 sec: 3687.2, 60 sec: 3822.9, 300 sec: 3664.9). Total num frames: 1044480. Throughput: 0: 984.6. Samples: 261222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:42:57,942][00980] Avg episode reward: [(0, '6.429')] -[2023-02-24 13:43:01,309][11167] Updated weights for policy 0, policy_version 260 (0.0016) -[2023-02-24 13:43:02,940][00980] Fps is (10 sec: 4506.4, 60 sec: 3891.2, 300 sec: 3686.4). Total num frames: 1069056. Throughput: 0: 998.5. Samples: 268492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:43:02,943][00980] Avg episode reward: [(0, '6.526')] -[2023-02-24 13:43:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3679.5). Total num frames: 1085440. Throughput: 0: 974.4. Samples: 270974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:43:07,948][00980] Avg episode reward: [(0, '6.938')] -[2023-02-24 13:43:08,030][11152] Saving new best policy, reward=6.938! -[2023-02-24 13:43:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3735.0). Total num frames: 1101824. Throughput: 0: 947.2. Samples: 275470. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) -[2023-02-24 13:43:12,948][00980] Avg episode reward: [(0, '7.288')] -[2023-02-24 13:43:12,964][11152] Saving new best policy, reward=7.288! -[2023-02-24 13:43:13,547][11167] Updated weights for policy 0, policy_version 270 (0.0019) -[2023-02-24 13:43:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 1126400. Throughput: 0: 991.3. Samples: 281872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:43:17,945][00980] Avg episode reward: [(0, '6.738')] -[2023-02-24 13:43:22,174][11167] Updated weights for policy 0, policy_version 280 (0.0025) -[2023-02-24 13:43:22,940][00980] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1146880. Throughput: 0: 992.3. Samples: 285380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-24 13:43:22,943][00980] Avg episode reward: [(0, '6.764')] -[2023-02-24 13:43:27,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3891.3, 300 sec: 3873.9). Total num frames: 1163264. Throughput: 0: 963.4. Samples: 291122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:43:27,950][00980] Avg episode reward: [(0, '7.105')] -[2023-02-24 13:43:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1179648. Throughput: 0: 950.9. Samples: 295652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:43:32,946][00980] Avg episode reward: [(0, '7.147')] -[2023-02-24 13:43:34,461][11167] Updated weights for policy 0, policy_version 290 (0.0019) -[2023-02-24 13:43:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1204224. Throughput: 0: 978.1. Samples: 299018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:43:37,945][00980] Avg episode reward: [(0, '7.403')] -[2023-02-24 13:43:37,951][11152] Saving new best policy, reward=7.403! -[2023-02-24 13:43:42,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1224704. Throughput: 0: 996.7. Samples: 306074. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:43:42,944][00980] Avg episode reward: [(0, '8.056')] -[2023-02-24 13:43:42,953][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth... -[2023-02-24 13:43:43,101][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth -[2023-02-24 13:43:43,110][11152] Saving new best policy, reward=8.056! -[2023-02-24 13:43:43,489][11167] Updated weights for policy 0, policy_version 300 (0.0023) -[2023-02-24 13:43:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3887.7). Total num frames: 1241088. Throughput: 0: 947.8. Samples: 311142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:43:47,945][00980] Avg episode reward: [(0, '8.824')] -[2023-02-24 13:43:47,951][11152] Saving new best policy, reward=8.824! -[2023-02-24 13:43:52,940][00980] Fps is (10 sec: 3276.6, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 1257472. Throughput: 0: 939.8. Samples: 313264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:43:52,943][00980] Avg episode reward: [(0, '9.508')] -[2023-02-24 13:43:52,971][11152] Saving new best policy, reward=9.508! -[2023-02-24 13:43:55,690][11167] Updated weights for policy 0, policy_version 310 (0.0024) -[2023-02-24 13:43:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1277952. Throughput: 0: 970.8. Samples: 319158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:43:57,945][00980] Avg episode reward: [(0, '9.624')] -[2023-02-24 13:43:57,947][11152] Saving new best policy, reward=9.624! -[2023-02-24 13:44:02,940][00980] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1302528. Throughput: 0: 985.2. Samples: 326206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:44:02,945][00980] Avg episode reward: [(0, '8.869')] -[2023-02-24 13:44:04,905][11167] Updated weights for policy 0, policy_version 320 (0.0023) -[2023-02-24 13:44:07,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1318912. Throughput: 0: 964.5. Samples: 328784. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:44:07,945][00980] Avg episode reward: [(0, '8.736')] -[2023-02-24 13:44:12,941][00980] Fps is (10 sec: 3276.5, 60 sec: 3891.1, 300 sec: 3859.9). Total num frames: 1335296. Throughput: 0: 937.4. Samples: 333304. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:44:12,946][00980] Avg episode reward: [(0, '9.513')] -[2023-02-24 13:44:16,504][11167] Updated weights for policy 0, policy_version 330 (0.0021) -[2023-02-24 13:44:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1355776. Throughput: 0: 982.5. Samples: 339864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:44:17,948][00980] Avg episode reward: [(0, '9.925')] -[2023-02-24 13:44:17,953][11152] Saving new best policy, reward=9.925! -[2023-02-24 13:44:22,940][00980] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1380352. Throughput: 0: 985.2. Samples: 343354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:44:22,943][00980] Avg episode reward: [(0, '10.988')] -[2023-02-24 13:44:22,954][11152] Saving new best policy, reward=10.988! -[2023-02-24 13:44:26,333][11167] Updated weights for policy 0, policy_version 340 (0.0014) -[2023-02-24 13:44:27,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3887.7). 
Total num frames: 1396736. Throughput: 0: 953.3. Samples: 348974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:44:27,945][00980] Avg episode reward: [(0, '10.246')] -[2023-02-24 13:44:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1409024. Throughput: 0: 943.7. Samples: 353608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:44:32,949][00980] Avg episode reward: [(0, '10.671')] -[2023-02-24 13:44:37,313][11167] Updated weights for policy 0, policy_version 350 (0.0030) -[2023-02-24 13:44:37,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1433600. Throughput: 0: 970.6. Samples: 356940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:44:37,942][00980] Avg episode reward: [(0, '10.539')] -[2023-02-24 13:44:42,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1458176. Throughput: 0: 998.4. Samples: 364086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:44:42,943][00980] Avg episode reward: [(0, '10.770')] -[2023-02-24 13:44:47,461][11167] Updated weights for policy 0, policy_version 360 (0.0028) -[2023-02-24 13:44:47,941][00980] Fps is (10 sec: 4095.3, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 1474560. Throughput: 0: 959.2. Samples: 369370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:44:47,951][00980] Avg episode reward: [(0, '11.574')] -[2023-02-24 13:44:47,959][11152] Saving new best policy, reward=11.574! -[2023-02-24 13:44:52,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 1486848. Throughput: 0: 949.8. Samples: 371524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:44:52,945][00980] Avg episode reward: [(0, '12.029')] -[2023-02-24 13:44:52,963][11152] Saving new best policy, reward=12.029! -[2023-02-24 13:44:57,940][00980] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1511424. Throughput: 0: 982.5. Samples: 377514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:44:57,946][00980] Avg episode reward: [(0, '13.549')] -[2023-02-24 13:44:57,950][11152] Saving new best policy, reward=13.549! -[2023-02-24 13:44:58,343][11167] Updated weights for policy 0, policy_version 370 (0.0021) -[2023-02-24 13:45:02,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1536000. Throughput: 0: 996.8. Samples: 384718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:02,942][00980] Avg episode reward: [(0, '13.641')] -[2023-02-24 13:45:02,955][11152] Saving new best policy, reward=13.641! -[2023-02-24 13:45:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1552384. Throughput: 0: 976.7. Samples: 387304. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:45:07,943][00980] Avg episode reward: [(0, '13.450')] -[2023-02-24 13:45:08,858][11167] Updated weights for policy 0, policy_version 380 (0.0019) -[2023-02-24 13:45:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 1568768. Throughput: 0: 953.3. Samples: 391874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:12,945][00980] Avg episode reward: [(0, '12.302')] -[2023-02-24 13:45:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1589248. Throughput: 0: 993.2. Samples: 398302. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:17,942][00980] Avg episode reward: [(0, '11.242')] -[2023-02-24 13:45:18,981][11167] Updated weights for policy 0, policy_version 390 (0.0014) -[2023-02-24 13:45:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 1613824. Throughput: 0: 999.1. Samples: 401898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:22,942][00980] Avg episode reward: [(0, '11.683')] -[2023-02-24 13:45:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1630208. Throughput: 0: 970.1. Samples: 407740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:45:27,948][00980] Avg episode reward: [(0, '11.982')] -[2023-02-24 13:45:29,929][11167] Updated weights for policy 0, policy_version 400 (0.0024) -[2023-02-24 13:45:32,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 1646592. Throughput: 0: 951.9. Samples: 412202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:45:32,948][00980] Avg episode reward: [(0, '12.382')] -[2023-02-24 13:45:37,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1662976. Throughput: 0: 962.6. Samples: 414840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:45:37,948][00980] Avg episode reward: [(0, '13.828')] -[2023-02-24 13:45:37,950][11152] Saving new best policy, reward=13.828! -[2023-02-24 13:45:41,821][11167] Updated weights for policy 0, policy_version 410 (0.0015) -[2023-02-24 13:45:42,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1683456. Throughput: 0: 950.4. Samples: 420280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:42,943][00980] Avg episode reward: [(0, '13.286')] -[2023-02-24 13:45:42,953][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth... -[2023-02-24 13:45:43,087][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth -[2023-02-24 13:45:47,941][00980] Fps is (10 sec: 3686.1, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1699840. Throughput: 0: 908.7. Samples: 425610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:45:47,950][00980] Avg episode reward: [(0, '13.446')] -[2023-02-24 13:45:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 1712128. Throughput: 0: 900.2. Samples: 427814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:45:52,949][00980] Avg episode reward: [(0, '14.849')] -[2023-02-24 13:45:52,959][11152] Saving new best policy, reward=14.849! -[2023-02-24 13:45:54,263][11167] Updated weights for policy 0, policy_version 420 (0.0045) -[2023-02-24 13:45:57,940][00980] Fps is (10 sec: 3686.7, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 1736704. Throughput: 0: 928.5. Samples: 433658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:45:57,948][00980] Avg episode reward: [(0, '15.688')] -[2023-02-24 13:45:57,952][11152] Saving new best policy, reward=15.688! -[2023-02-24 13:46:02,793][11167] Updated weights for policy 0, policy_version 430 (0.0017) -[2023-02-24 13:46:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1761280. Throughput: 0: 943.8. Samples: 440774. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:46:02,947][00980] Avg episode reward: [(0, '15.851')] -[2023-02-24 13:46:02,961][11152] Saving new best policy, reward=15.851! -[2023-02-24 13:46:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1777664. Throughput: 0: 920.5. Samples: 443322. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:46:07,947][00980] Avg episode reward: [(0, '14.958')] -[2023-02-24 13:46:12,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 1789952. Throughput: 0: 892.1. Samples: 447886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:46:12,945][00980] Avg episode reward: [(0, '13.428')] -[2023-02-24 13:46:15,105][11167] Updated weights for policy 0, policy_version 440 (0.0014) -[2023-02-24 13:46:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 1814528. Throughput: 0: 934.8. Samples: 454268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:46:17,946][00980] Avg episode reward: [(0, '13.581')] -[2023-02-24 13:46:22,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1839104. Throughput: 0: 957.1. Samples: 457908. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:46:22,945][00980] Avg episode reward: [(0, '13.883')] -[2023-02-24 13:46:23,563][11167] Updated weights for policy 0, policy_version 450 (0.0012) -[2023-02-24 13:46:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1855488. Throughput: 0: 965.1. Samples: 463710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:46:27,944][00980] Avg episode reward: [(0, '14.481')] -[2023-02-24 13:46:32,940][00980] Fps is (10 sec: 2867.1, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 1867776. Throughput: 0: 944.3. Samples: 468102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:46:32,943][00980] Avg episode reward: [(0, '14.688')] -[2023-02-24 13:46:35,998][11167] Updated weights for policy 0, policy_version 460 (0.0018) -[2023-02-24 13:46:37,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1892352. Throughput: 0: 967.8. Samples: 471366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:46:37,943][00980] Avg episode reward: [(0, '14.700')] -[2023-02-24 13:46:42,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1916928. Throughput: 0: 999.8. Samples: 478648. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:46:42,942][00980] Avg episode reward: [(0, '14.915')] -[2023-02-24 13:46:44,917][11167] Updated weights for policy 0, policy_version 470 (0.0016) -[2023-02-24 13:46:47,941][00980] Fps is (10 sec: 4095.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1933312. Throughput: 0: 962.1. Samples: 484070. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:46:47,944][00980] Avg episode reward: [(0, '15.869')] -[2023-02-24 13:46:47,946][11152] Saving new best policy, reward=15.869! -[2023-02-24 13:46:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1945600. Throughput: 0: 953.3. Samples: 486220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:46:52,945][00980] Avg episode reward: [(0, '16.879')] -[2023-02-24 13:46:52,977][11152] Saving new best policy, reward=16.879! 
-[2023-02-24 13:46:56,655][11167] Updated weights for policy 0, policy_version 480 (0.0021) -[2023-02-24 13:46:57,940][00980] Fps is (10 sec: 3686.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1970176. Throughput: 0: 984.3. Samples: 492180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:46:57,942][00980] Avg episode reward: [(0, '17.186')] -[2023-02-24 13:46:57,949][11152] Saving new best policy, reward=17.186! -[2023-02-24 13:47:02,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1994752. Throughput: 0: 1001.2. Samples: 499320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:47:02,943][00980] Avg episode reward: [(0, '17.813')] -[2023-02-24 13:47:02,953][11152] Saving new best policy, reward=17.813! -[2023-02-24 13:47:06,352][11167] Updated weights for policy 0, policy_version 490 (0.0013) -[2023-02-24 13:47:07,942][00980] Fps is (10 sec: 4095.1, 60 sec: 3891.1, 300 sec: 3873.8). Total num frames: 2011136. Throughput: 0: 976.7. Samples: 501860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:47:07,952][00980] Avg episode reward: [(0, '18.134')] -[2023-02-24 13:47:07,954][11152] Saving new best policy, reward=18.134! -[2023-02-24 13:47:12,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2023424. Throughput: 0: 947.6. Samples: 506350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:47:12,943][00980] Avg episode reward: [(0, '17.744')] -[2023-02-24 13:47:17,484][11167] Updated weights for policy 0, policy_version 500 (0.0025) -[2023-02-24 13:47:17,940][00980] Fps is (10 sec: 3687.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2048000. Throughput: 0: 993.2. Samples: 512794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:47:17,942][00980] Avg episode reward: [(0, '16.609')] -[2023-02-24 13:47:22,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2072576. Throughput: 0: 1001.3. Samples: 516424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:47:22,942][00980] Avg episode reward: [(0, '17.144')] -[2023-02-24 13:47:27,541][11167] Updated weights for policy 0, policy_version 510 (0.0019) -[2023-02-24 13:47:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2088960. Throughput: 0: 966.0. Samples: 522118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:47:27,945][00980] Avg episode reward: [(0, '17.301')] -[2023-02-24 13:47:32,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 2105344. Throughput: 0: 947.3. Samples: 526696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:47:32,946][00980] Avg episode reward: [(0, '17.404')] -[2023-02-24 13:47:37,890][11167] Updated weights for policy 0, policy_version 520 (0.0014) -[2023-02-24 13:47:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2129920. Throughput: 0: 977.2. Samples: 530192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:47:37,950][00980] Avg episode reward: [(0, '17.412')] -[2023-02-24 13:47:42,940][00980] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2150400. Throughput: 0: 1006.9. Samples: 537490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:47:42,942][00980] Avg episode reward: [(0, '17.494')] -[2023-02-24 13:47:42,957][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000525_2150400.pth... 
-[2023-02-24 13:47:43,094][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth -[2023-02-24 13:47:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3873.9). Total num frames: 2166784. Throughput: 0: 961.7. Samples: 542594. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:47:47,943][00980] Avg episode reward: [(0, '19.088')] -[2023-02-24 13:47:47,948][11152] Saving new best policy, reward=19.088! -[2023-02-24 13:47:48,795][11167] Updated weights for policy 0, policy_version 530 (0.0032) -[2023-02-24 13:47:52,940][00980] Fps is (10 sec: 3276.6, 60 sec: 3959.4, 300 sec: 3859.9). Total num frames: 2183168. Throughput: 0: 954.4. Samples: 544806. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:47:52,949][00980] Avg episode reward: [(0, '20.210')] -[2023-02-24 13:47:52,965][11152] Saving new best policy, reward=20.210! -[2023-02-24 13:47:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2207744. Throughput: 0: 991.3. Samples: 550958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:47:57,947][00980] Avg episode reward: [(0, '20.504')] -[2023-02-24 13:47:57,951][11152] Saving new best policy, reward=20.504! -[2023-02-24 13:47:58,747][11167] Updated weights for policy 0, policy_version 540 (0.0030) -[2023-02-24 13:48:02,940][00980] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2228224. Throughput: 0: 1009.2. Samples: 558208. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:48:02,942][00980] Avg episode reward: [(0, '21.654')] -[2023-02-24 13:48:02,953][11152] Saving new best policy, reward=21.654! -[2023-02-24 13:48:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 2244608. Throughput: 0: 981.4. Samples: 560588. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:48:07,942][00980] Avg episode reward: [(0, '21.914')] -[2023-02-24 13:48:07,952][11152] Saving new best policy, reward=21.914! -[2023-02-24 13:48:09,868][11167] Updated weights for policy 0, policy_version 550 (0.0016) -[2023-02-24 13:48:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 2260992. Throughput: 0: 955.5. Samples: 565114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:48:12,942][00980] Avg episode reward: [(0, '22.633')] -[2023-02-24 13:48:12,955][11152] Saving new best policy, reward=22.633! -[2023-02-24 13:48:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2285568. Throughput: 0: 1003.4. Samples: 571850. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:48:17,942][00980] Avg episode reward: [(0, '22.016')] -[2023-02-24 13:48:19,372][11167] Updated weights for policy 0, policy_version 560 (0.0027) -[2023-02-24 13:48:22,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2310144. Throughput: 0: 1006.8. Samples: 575496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:22,942][00980] Avg episode reward: [(0, '21.889')] -[2023-02-24 13:48:27,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3890.9, 300 sec: 3873.8). Total num frames: 2322432. Throughput: 0: 968.2. Samples: 581066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:48:27,948][00980] Avg episode reward: [(0, '23.015')] -[2023-02-24 13:48:27,951][11152] Saving new best policy, reward=23.015! 
-[2023-02-24 13:48:31,140][11167] Updated weights for policy 0, policy_version 570 (0.0022) -[2023-02-24 13:48:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2338816. Throughput: 0: 956.6. Samples: 585642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:32,942][00980] Avg episode reward: [(0, '24.470')] -[2023-02-24 13:48:32,963][11152] Saving new best policy, reward=24.470! -[2023-02-24 13:48:37,940][00980] Fps is (10 sec: 4098.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2363392. Throughput: 0: 985.7. Samples: 589160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:37,942][00980] Avg episode reward: [(0, '23.279')] -[2023-02-24 13:48:40,147][11167] Updated weights for policy 0, policy_version 580 (0.0017) -[2023-02-24 13:48:42,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2387968. Throughput: 0: 1008.7. Samples: 596350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:42,942][00980] Avg episode reward: [(0, '23.115')] -[2023-02-24 13:48:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2400256. Throughput: 0: 956.3. Samples: 601242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:47,948][00980] Avg episode reward: [(0, '22.919')] -[2023-02-24 13:48:52,332][11167] Updated weights for policy 0, policy_version 590 (0.0013) -[2023-02-24 13:48:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2416640. Throughput: 0: 953.6. Samples: 603498. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:48:52,947][00980] Avg episode reward: [(0, '22.899')] -[2023-02-24 13:48:57,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2441216. Throughput: 0: 996.4. Samples: 609954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:48:57,947][00980] Avg episode reward: [(0, '20.811')] -[2023-02-24 13:49:00,802][11167] Updated weights for policy 0, policy_version 600 (0.0014) -[2023-02-24 13:49:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2465792. Throughput: 0: 1005.1. Samples: 617078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:02,946][00980] Avg episode reward: [(0, '19.670')] -[2023-02-24 13:49:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2482176. Throughput: 0: 975.1. Samples: 619374. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:07,947][00980] Avg episode reward: [(0, '19.588')] -[2023-02-24 13:49:12,935][11167] Updated weights for policy 0, policy_version 610 (0.0021) -[2023-02-24 13:49:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2498560. Throughput: 0: 955.4. Samples: 624052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:49:12,944][00980] Avg episode reward: [(0, '19.264')] -[2023-02-24 13:49:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2519040. Throughput: 0: 1006.2. Samples: 630922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:49:17,942][00980] Avg episode reward: [(0, '18.994')] -[2023-02-24 13:49:21,487][11167] Updated weights for policy 0, policy_version 620 (0.0025) -[2023-02-24 13:49:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2543616. Throughput: 0: 1008.9. Samples: 634562. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:22,944][00980] Avg episode reward: [(0, '19.400')] -[2023-02-24 13:49:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.8, 300 sec: 3901.6). Total num frames: 2560000. Throughput: 0: 966.9. Samples: 639860. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:49:27,947][00980] Avg episode reward: [(0, '19.749')] -[2023-02-24 13:49:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2576384. Throughput: 0: 963.9. Samples: 644616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:32,942][00980] Avg episode reward: [(0, '21.772')] -[2023-02-24 13:49:33,630][11167] Updated weights for policy 0, policy_version 630 (0.0034) -[2023-02-24 13:49:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2600960. Throughput: 0: 992.8. Samples: 648176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:49:37,942][00980] Avg episode reward: [(0, '23.388')] -[2023-02-24 13:49:42,134][11167] Updated weights for policy 0, policy_version 640 (0.0037) -[2023-02-24 13:49:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2621440. Throughput: 0: 1010.4. Samples: 655422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:42,945][00980] Avg episode reward: [(0, '22.442')] -[2023-02-24 13:49:42,956][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth... -[2023-02-24 13:49:43,091][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth -[2023-02-24 13:49:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2637824. Throughput: 0: 953.8. Samples: 660000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:47,945][00980] Avg episode reward: [(0, '24.144')] -[2023-02-24 13:49:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2654208. Throughput: 0: 951.3. Samples: 662184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:49:52,942][00980] Avg episode reward: [(0, '23.765')] -[2023-02-24 13:49:54,499][11167] Updated weights for policy 0, policy_version 650 (0.0033) -[2023-02-24 13:49:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2674688. Throughput: 0: 993.3. Samples: 668752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:49:57,942][00980] Avg episode reward: [(0, '22.009')] -[2023-02-24 13:50:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2699264. Throughput: 0: 994.2. Samples: 675660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:50:02,944][00980] Avg episode reward: [(0, '20.942')] -[2023-02-24 13:50:03,645][11167] Updated weights for policy 0, policy_version 660 (0.0015) -[2023-02-24 13:50:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2715648. Throughput: 0: 963.9. Samples: 677938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:50:07,946][00980] Avg episode reward: [(0, '21.439')] -[2023-02-24 13:50:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2732032. Throughput: 0: 949.9. Samples: 682606. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:50:12,947][00980] Avg episode reward: [(0, '21.345')] -[2023-02-24 13:50:15,229][11167] Updated weights for policy 0, policy_version 670 (0.0014) -[2023-02-24 13:50:17,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3959.4, 300 sec: 3873.8). Total num frames: 2756608. Throughput: 0: 1000.8. Samples: 689654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:50:17,942][00980] Avg episode reward: [(0, '20.952')] -[2023-02-24 13:50:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2777088. Throughput: 0: 1000.1. Samples: 693182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:50:22,943][00980] Avg episode reward: [(0, '20.930')] -[2023-02-24 13:50:24,913][11167] Updated weights for policy 0, policy_version 680 (0.0015) -[2023-02-24 13:50:27,940][00980] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2793472. Throughput: 0: 949.3. Samples: 698142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:50:27,946][00980] Avg episode reward: [(0, '22.144')] -[2023-02-24 13:50:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2809856. Throughput: 0: 957.6. Samples: 703092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:50:32,946][00980] Avg episode reward: [(0, '22.441')] -[2023-02-24 13:50:36,209][11167] Updated weights for policy 0, policy_version 690 (0.0049) -[2023-02-24 13:50:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2834432. Throughput: 0: 988.8. Samples: 706678. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:50:37,942][00980] Avg episode reward: [(0, '23.206')] -[2023-02-24 13:50:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2854912. Throughput: 0: 1003.4. Samples: 713906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:50:42,950][00980] Avg episode reward: [(0, '23.539')] -[2023-02-24 13:50:46,085][11167] Updated weights for policy 0, policy_version 700 (0.0017) -[2023-02-24 13:50:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2871296. Throughput: 0: 955.2. Samples: 718644. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:50:47,942][00980] Avg episode reward: [(0, '24.095')] -[2023-02-24 13:50:52,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2887680. Throughput: 0: 955.8. Samples: 720950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:50:52,947][00980] Avg episode reward: [(0, '26.693')] -[2023-02-24 13:50:52,965][11152] Saving new best policy, reward=26.693! -[2023-02-24 13:50:56,683][11167] Updated weights for policy 0, policy_version 710 (0.0016) -[2023-02-24 13:50:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2912256. Throughput: 0: 998.4. Samples: 727534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:50:57,946][00980] Avg episode reward: [(0, '26.035')] -[2023-02-24 13:51:02,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2932736. Throughput: 0: 996.5. Samples: 734498. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:51:02,943][00980] Avg episode reward: [(0, '25.625')] -[2023-02-24 13:51:07,116][11167] Updated weights for policy 0, policy_version 720 (0.0012) -[2023-02-24 13:51:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2949120. Throughput: 0: 968.7. Samples: 736774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:51:07,945][00980] Avg episode reward: [(0, '24.872')] -[2023-02-24 13:51:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2965504. Throughput: 0: 961.1. Samples: 741390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:51:12,947][00980] Avg episode reward: [(0, '25.277')] -[2023-02-24 13:51:17,311][11167] Updated weights for policy 0, policy_version 730 (0.0035) -[2023-02-24 13:51:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2990080. Throughput: 0: 1012.2. Samples: 748642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:51:17,943][00980] Avg episode reward: [(0, '24.269')] -[2023-02-24 13:51:22,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3959.4, 300 sec: 3929.4). Total num frames: 3014656. Throughput: 0: 1012.3. Samples: 752230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:51:22,944][00980] Avg episode reward: [(0, '24.348')] -[2023-02-24 13:51:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3026944. Throughput: 0: 961.3. Samples: 757164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:51:27,948][00980] Avg episode reward: [(0, '25.070')] -[2023-02-24 13:51:28,432][11167] Updated weights for policy 0, policy_version 740 (0.0012) -[2023-02-24 13:51:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3043328. Throughput: 0: 964.5. Samples: 762048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:51:32,948][00980] Avg episode reward: [(0, '25.088')] -[2023-02-24 13:51:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3067904. Throughput: 0: 992.7. Samples: 765622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:51:37,942][00980] Avg episode reward: [(0, '25.119')] -[2023-02-24 13:51:38,118][11167] Updated weights for policy 0, policy_version 750 (0.0014) -[2023-02-24 13:51:42,944][00980] Fps is (10 sec: 4913.2, 60 sec: 3959.2, 300 sec: 3929.3). Total num frames: 3092480. Throughput: 0: 1009.5. Samples: 772966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:51:42,947][00980] Avg episode reward: [(0, '24.609')] -[2023-02-24 13:51:42,966][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth... -[2023-02-24 13:51:43,132][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000525_2150400.pth -[2023-02-24 13:51:47,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3108864. Throughput: 0: 958.9. Samples: 777648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:51:47,942][00980] Avg episode reward: [(0, '24.922')] -[2023-02-24 13:51:48,996][11167] Updated weights for policy 0, policy_version 760 (0.0011) -[2023-02-24 13:51:52,940][00980] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3125248. Throughput: 0: 959.2. Samples: 779936. 
Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-24 13:51:52,949][00980] Avg episode reward: [(0, '24.443')] -[2023-02-24 13:51:57,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3149824. Throughput: 0: 1006.7. Samples: 786692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:51:57,943][00980] Avg episode reward: [(0, '23.121')] -[2023-02-24 13:51:58,600][11167] Updated weights for policy 0, policy_version 770 (0.0016) -[2023-02-24 13:52:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3170304. Throughput: 0: 1000.4. Samples: 793662. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:52:02,942][00980] Avg episode reward: [(0, '23.086')] -[2023-02-24 13:52:07,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3943.3). Total num frames: 3186688. Throughput: 0: 971.6. Samples: 795950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:52:07,950][00980] Avg episode reward: [(0, '22.290')] -[2023-02-24 13:52:10,165][11167] Updated weights for policy 0, policy_version 780 (0.0025) -[2023-02-24 13:52:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3203072. Throughput: 0: 963.8. Samples: 800536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:12,942][00980] Avg episode reward: [(0, '21.696')] -[2023-02-24 13:52:17,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3227648. Throughput: 0: 1018.3. Samples: 807870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:17,946][00980] Avg episode reward: [(0, '21.529')] -[2023-02-24 13:52:19,091][11167] Updated weights for policy 0, policy_version 790 (0.0020) -[2023-02-24 13:52:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3248128. Throughput: 0: 1019.9. Samples: 811518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:22,943][00980] Avg episode reward: [(0, '22.744')] -[2023-02-24 13:52:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3264512. Throughput: 0: 967.6. Samples: 816506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:27,942][00980] Avg episode reward: [(0, '23.093')] -[2023-02-24 13:52:30,941][11167] Updated weights for policy 0, policy_version 800 (0.0033) -[2023-02-24 13:52:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3915.5). Total num frames: 3284992. Throughput: 0: 976.4. Samples: 821586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:52:32,947][00980] Avg episode reward: [(0, '24.660')] -[2023-02-24 13:52:37,940][00980] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 3309568. Throughput: 0: 1006.2. Samples: 825214. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) -[2023-02-24 13:52:37,947][00980] Avg episode reward: [(0, '25.534')] -[2023-02-24 13:52:39,715][11167] Updated weights for policy 0, policy_version 810 (0.0028) -[2023-02-24 13:52:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.7, 300 sec: 3943.3). Total num frames: 3330048. Throughput: 0: 1014.2. Samples: 832332. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:42,943][00980] Avg episode reward: [(0, '25.907')] -[2023-02-24 13:52:47,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3342336. Throughput: 0: 960.6. Samples: 836890. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:52:47,944][00980] Avg episode reward: [(0, '26.039')] -[2023-02-24 13:52:51,855][11167] Updated weights for policy 0, policy_version 820 (0.0023) -[2023-02-24 13:52:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3362816. Throughput: 0: 959.3. Samples: 839118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:52:52,942][00980] Avg episode reward: [(0, '26.542')] -[2023-02-24 13:52:57,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3387392. Throughput: 0: 1013.3. Samples: 846134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:52:57,947][00980] Avg episode reward: [(0, '25.855')] -[2023-02-24 13:53:00,300][11167] Updated weights for policy 0, policy_version 830 (0.0014) -[2023-02-24 13:53:02,942][00980] Fps is (10 sec: 4504.6, 60 sec: 3959.3, 300 sec: 3943.2). Total num frames: 3407872. Throughput: 0: 997.3. Samples: 852752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:53:02,950][00980] Avg episode reward: [(0, '25.344')] -[2023-02-24 13:53:07,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3959.1, 300 sec: 3943.2). Total num frames: 3424256. Throughput: 0: 967.7. Samples: 855068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:07,950][00980] Avg episode reward: [(0, '25.157')] -[2023-02-24 13:53:12,322][11167] Updated weights for policy 0, policy_version 840 (0.0017) -[2023-02-24 13:53:12,940][00980] Fps is (10 sec: 3277.5, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3440640. Throughput: 0: 966.1. Samples: 859980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:53:12,942][00980] Avg episode reward: [(0, '24.986')] -[2023-02-24 13:53:17,940][00980] Fps is (10 sec: 4098.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3465216. Throughput: 0: 1015.6. Samples: 867288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) -[2023-02-24 13:53:17,942][00980] Avg episode reward: [(0, '23.994')] -[2023-02-24 13:53:20,771][11167] Updated weights for policy 0, policy_version 850 (0.0017) -[2023-02-24 13:53:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3485696. Throughput: 0: 1014.5. Samples: 870868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:53:22,946][00980] Avg episode reward: [(0, '23.515')] -[2023-02-24 13:53:27,940][00980] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3502080. Throughput: 0: 961.0. Samples: 875576. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:27,942][00980] Avg episode reward: [(0, '22.749')] -[2023-02-24 13:53:32,844][11167] Updated weights for policy 0, policy_version 860 (0.0013) -[2023-02-24 13:53:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3522560. Throughput: 0: 980.3. Samples: 881004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:32,943][00980] Avg episode reward: [(0, '24.068')] -[2023-02-24 13:53:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 3543040. Throughput: 0: 1011.7. Samples: 884644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:37,942][00980] Avg episode reward: [(0, '24.460')] -[2023-02-24 13:53:41,525][11167] Updated weights for policy 0, policy_version 870 (0.0016) -[2023-02-24 13:53:42,944][00980] Fps is (10 sec: 4503.7, 60 sec: 3959.2, 300 sec: 3957.1). Total num frames: 3567616. Throughput: 0: 1006.7. 
Samples: 891438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:42,946][00980] Avg episode reward: [(0, '24.611')] -[2023-02-24 13:53:42,962][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth... -[2023-02-24 13:53:43,098][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth -[2023-02-24 13:53:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3579904. Throughput: 0: 961.9. Samples: 896036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:47,956][00980] Avg episode reward: [(0, '24.148')] -[2023-02-24 13:53:52,940][00980] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3600384. Throughput: 0: 962.2. Samples: 898364. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:53:52,947][00980] Avg episode reward: [(0, '24.942')] -[2023-02-24 13:53:53,338][11167] Updated weights for policy 0, policy_version 880 (0.0013) -[2023-02-24 13:53:57,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3624960. Throughput: 0: 1015.3. Samples: 905668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) -[2023-02-24 13:53:57,943][00980] Avg episode reward: [(0, '26.245')] -[2023-02-24 13:54:02,912][11167] Updated weights for policy 0, policy_version 890 (0.0014) -[2023-02-24 13:54:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3943.3). Total num frames: 3645440. Throughput: 0: 989.6. Samples: 911818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:54:02,942][00980] Avg episode reward: [(0, '24.977')] -[2023-02-24 13:54:07,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.5, 300 sec: 3929.4). Total num frames: 3657728. Throughput: 0: 959.3. Samples: 914038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:54:07,948][00980] Avg episode reward: [(0, '25.792')] -[2023-02-24 13:54:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3678208. Throughput: 0: 971.8. Samples: 919308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:54:12,949][00980] Avg episode reward: [(0, '25.107')] -[2023-02-24 13:54:14,064][11167] Updated weights for policy 0, policy_version 900 (0.0025) -[2023-02-24 13:54:17,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3702784. Throughput: 0: 1013.9. Samples: 926628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:54:17,943][00980] Avg episode reward: [(0, '26.024')] -[2023-02-24 13:54:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3723264. Throughput: 0: 1010.1. Samples: 930100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:54:22,942][00980] Avg episode reward: [(0, '25.060')] -[2023-02-24 13:54:23,755][11167] Updated weights for policy 0, policy_version 910 (0.0012) -[2023-02-24 13:54:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3739648. Throughput: 0: 961.2. Samples: 934686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:54:27,946][00980] Avg episode reward: [(0, '25.466')] -[2023-02-24 13:54:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3760128. Throughput: 0: 986.6. Samples: 940434. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:54:32,945][00980] Avg episode reward: [(0, '26.161')] -[2023-02-24 13:54:34,537][11167] Updated weights for policy 0, policy_version 920 (0.0017) -[2023-02-24 13:54:37,940][00980] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3784704. Throughput: 0: 1016.0. Samples: 944084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:54:37,944][00980] Avg episode reward: [(0, '26.766')] -[2023-02-24 13:54:37,949][11152] Saving new best policy, reward=26.766! -[2023-02-24 13:54:42,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3943.3). Total num frames: 3801088. Throughput: 0: 998.6. Samples: 950606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:54:42,947][00980] Avg episode reward: [(0, '26.990')] -[2023-02-24 13:54:42,958][11152] Saving new best policy, reward=26.990! -[2023-02-24 13:54:44,675][11167] Updated weights for policy 0, policy_version 930 (0.0017) -[2023-02-24 13:54:47,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3817472. Throughput: 0: 961.6. Samples: 955088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) -[2023-02-24 13:54:47,944][00980] Avg episode reward: [(0, '27.957')] -[2023-02-24 13:54:47,950][11152] Saving new best policy, reward=27.957! -[2023-02-24 13:54:52,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3837952. Throughput: 0: 970.5. Samples: 957710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:54:52,942][00980] Avg episode reward: [(0, '26.032')] -[2023-02-24 13:54:55,178][11167] Updated weights for policy 0, policy_version 940 (0.0032) -[2023-02-24 13:54:57,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3862528. Throughput: 0: 1016.5. Samples: 965052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:54:57,942][00980] Avg episode reward: [(0, '25.783')] -[2023-02-24 13:55:02,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3878912. Throughput: 0: 988.7. Samples: 971118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:55:02,944][00980] Avg episode reward: [(0, '25.187')] -[2023-02-24 13:55:05,669][11167] Updated weights for policy 0, policy_version 950 (0.0013) -[2023-02-24 13:55:07,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3895296. Throughput: 0: 961.2. Samples: 973356. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:55:07,949][00980] Avg episode reward: [(0, '24.429')] -[2023-02-24 13:55:12,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3915776. Throughput: 0: 982.3. Samples: 978890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) -[2023-02-24 13:55:12,942][00980] Avg episode reward: [(0, '23.922')] -[2023-02-24 13:55:15,707][11167] Updated weights for policy 0, policy_version 960 (0.0025) -[2023-02-24 13:55:17,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3940352. Throughput: 0: 1015.5. Samples: 986132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:55:17,943][00980] Avg episode reward: [(0, '25.560')] -[2023-02-24 13:55:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3960832. Throughput: 0: 1002.9. Samples: 989216. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) -[2023-02-24 13:55:22,943][00980] Avg episode reward: [(0, '25.104')] -[2023-02-24 13:55:26,867][11167] Updated weights for policy 0, policy_version 970 (0.0013) -[2023-02-24 13:55:27,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3973120. Throughput: 0: 961.3. Samples: 993864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) -[2023-02-24 13:55:27,948][00980] Avg episode reward: [(0, '26.268')] -[2023-02-24 13:55:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3993600. Throughput: 0: 993.0. Samples: 999774. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) -[2023-02-24 13:55:32,947][00980] Avg episode reward: [(0, '25.975')] -[2023-02-24 13:55:34,585][11152] Stopping Batcher_0... -[2023-02-24 13:55:34,586][00980] Component Batcher_0 stopped! -[2023-02-24 13:55:34,589][11152] Loop batcher_evt_loop terminating... -[2023-02-24 13:55:34,588][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-24 13:55:34,643][11167] Weights refcount: 2 0 -[2023-02-24 13:55:34,658][00980] Component InferenceWorker_p0-w0 stopped! -[2023-02-24 13:55:34,660][11167] Stopping InferenceWorker_p0-w0... -[2023-02-24 13:55:34,661][11167] Loop inference_proc0-0_evt_loop terminating... -[2023-02-24 13:55:34,674][11171] Stopping RolloutWorker_w5... -[2023-02-24 13:55:34,674][00980] Component RolloutWorker_w5 stopped! -[2023-02-24 13:55:34,684][11173] Stopping RolloutWorker_w3... -[2023-02-24 13:55:34,684][00980] Component RolloutWorker_w4 stopped! -[2023-02-24 13:55:34,686][00980] Component RolloutWorker_w3 stopped! -[2023-02-24 13:55:34,683][11170] Stopping RolloutWorker_w4... -[2023-02-24 13:55:34,692][00980] Component RolloutWorker_w0 stopped! -[2023-02-24 13:55:34,694][11169] Stopping RolloutWorker_w1... -[2023-02-24 13:55:34,695][00980] Component RolloutWorker_w1 stopped! -[2023-02-24 13:55:34,689][11170] Loop rollout_proc4_evt_loop terminating... -[2023-02-24 13:55:34,694][11166] Stopping RolloutWorker_w0... -[2023-02-24 13:55:34,701][00980] Component RolloutWorker_w2 stopped! -[2023-02-24 13:55:34,701][11168] Stopping RolloutWorker_w2... -[2023-02-24 13:55:34,710][11174] Stopping RolloutWorker_w7... -[2023-02-24 13:55:34,710][11174] Loop rollout_proc7_evt_loop terminating... -[2023-02-24 13:55:34,709][00980] Component RolloutWorker_w6 stopped! -[2023-02-24 13:55:34,680][11171] Loop rollout_proc5_evt_loop terminating... -[2023-02-24 13:55:34,709][11173] Loop rollout_proc3_evt_loop terminating... -[2023-02-24 13:55:34,709][11172] Stopping RolloutWorker_w6... -[2023-02-24 13:55:34,711][00980] Component RolloutWorker_w7 stopped! -[2023-02-24 13:55:34,695][11169] Loop rollout_proc1_evt_loop terminating... -[2023-02-24 13:55:34,700][11166] Loop rollout_proc0_evt_loop terminating... -[2023-02-24 13:55:34,717][11168] Loop rollout_proc2_evt_loop terminating... -[2023-02-24 13:55:34,719][11172] Loop rollout_proc6_evt_loop terminating... -[2023-02-24 13:55:34,781][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth -[2023-02-24 13:55:34,793][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-24 13:55:34,954][00980] Component LearnerWorker_p0 stopped! -[2023-02-24 13:55:34,963][00980] Waiting for process learner_proc0 to stop... -[2023-02-24 13:55:34,967][11152] Stopping LearnerWorker_p0... 
-[2023-02-24 13:55:34,968][11152] Loop learner_proc0_evt_loop terminating...
-[2023-02-24 13:55:36,742][00980] Waiting for process inference_proc0-0 to join...
-[2023-02-24 13:55:37,070][00980] Waiting for process rollout_proc0 to join...
-[2023-02-24 13:55:37,548][00980] Waiting for process rollout_proc1 to join...
-[2023-02-24 13:55:37,549][00980] Waiting for process rollout_proc2 to join...
-[2023-02-24 13:55:37,551][00980] Waiting for process rollout_proc3 to join...
-[2023-02-24 13:55:37,552][00980] Waiting for process rollout_proc4 to join...
-[2023-02-24 13:55:37,570][00980] Waiting for process rollout_proc5 to join...
-[2023-02-24 13:55:37,571][00980] Waiting for process rollout_proc6 to join...
-[2023-02-24 13:55:37,572][00980] Waiting for process rollout_proc7 to join...
-[2023-02-24 13:55:37,573][00980] Batcher 0 profile tree view:
-batching: 25.3280, releasing_batches: 0.0221
-[2023-02-24 13:55:37,575][00980] InferenceWorker_p0-w0 profile tree view:
-wait_policy: 0.0005
- wait_policy_total: 498.3727
-update_model: 7.4088
- weight_update: 0.0013
-one_step: 0.0023
- handle_policy_step: 495.8979
- deserialize: 14.1770, stack: 2.8848, obs_to_device_normalize: 112.3387, forward: 236.1196, send_messages: 25.3594
- prepare_outputs: 80.6792
- to_cpu: 50.7526
-[2023-02-24 13:55:37,576][00980] Learner 0 profile tree view:
-misc: 0.0053, prepare_batch: 17.0874
-train: 74.1658
- epoch_init: 0.0057, minibatch_init: 0.0202, losses_postprocess: 0.5531, kl_divergence: 0.5942, after_optimizer: 32.2145
- calculate_losses: 25.9284
- losses_init: 0.0048, forward_head: 1.6499, bptt_initial: 17.2608, tail: 0.9288, advantages_returns: 0.3566, losses: 3.2248
- bptt: 2.1849
- bptt_forward_core: 2.1100
- update: 14.2904
- clip: 1.4789
-[2023-02-24 13:55:37,578][00980] RolloutWorker_w0 profile tree view:
-wait_for_trajectories: 0.3031, enqueue_policy_requests: 129.9487, env_step: 791.6916, overhead: 18.9921, complete_rollouts: 6.7884
-save_policy_outputs: 19.2065
- split_output_tensors: 9.2842
-[2023-02-24 13:55:37,579][00980] RolloutWorker_w7 profile tree view:
-wait_for_trajectories: 0.3690, enqueue_policy_requests: 129.6848, env_step: 791.0784, overhead: 18.7556, complete_rollouts: 6.6190
-save_policy_outputs: 18.7805
- split_output_tensors: 9.0138
-[2023-02-24 13:55:37,581][00980] Loop Runner_EvtLoop terminating...
-[2023-02-24 13:55:37,583][00980] Runner profile tree view:
-main_loop: 1068.9353
-[2023-02-24 13:55:37,585][00980] Collected {0: 4005888}, FPS: 3747.5
-[2023-02-24 13:55:37,828][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
-[2023-02-24 13:55:37,830][00980] Overriding arg 'num_workers' with value 1 passed from command line
-[2023-02-24 13:55:37,833][00980] Adding new argument 'no_render'=True that is not in the saved config file!
-[2023-02-24 13:55:37,836][00980] Adding new argument 'save_video'=True that is not in the saved config file!
-[2023-02-24 13:55:37,838][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
-[2023-02-24 13:55:37,840][00980] Adding new argument 'video_name'=None that is not in the saved config file!
-[2023-02-24 13:55:37,842][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
-[2023-02-24 13:55:37,843][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
-[2023-02-24 13:55:37,844][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file!
-[2023-02-24 13:55:37,845][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! -[2023-02-24 13:55:37,847][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 13:55:37,849][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 13:55:37,850][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 13:55:37,851][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-24 13:55:37,853][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 13:55:37,880][00980] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 13:55:37,882][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 13:55:37,884][00980] RunningMeanStd input shape: (1,) -[2023-02-24 13:55:37,900][00980] ConvEncoder: input_channels=3 -[2023-02-24 13:55:38,593][00980] Conv encoder output size: 512 -[2023-02-24 13:55:38,595][00980] Policy head output size: 512 -[2023-02-24 13:55:41,532][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-24 13:55:43,247][00980] Num frames 100... -[2023-02-24 13:55:43,370][00980] Num frames 200... -[2023-02-24 13:55:43,480][00980] Num frames 300... -[2023-02-24 13:55:43,592][00980] Num frames 400... -[2023-02-24 13:55:43,709][00980] Num frames 500... -[2023-02-24 13:55:43,824][00980] Num frames 600... -[2023-02-24 13:55:43,947][00980] Num frames 700... -[2023-02-24 13:55:44,057][00980] Num frames 800... -[2023-02-24 13:55:44,170][00980] Num frames 900... -[2023-02-24 13:55:44,283][00980] Num frames 1000... -[2023-02-24 13:55:44,409][00980] Num frames 1100... -[2023-02-24 13:55:44,526][00980] Num frames 1200... -[2023-02-24 13:55:44,643][00980] Num frames 1300... -[2023-02-24 13:55:44,757][00980] Num frames 1400... -[2023-02-24 13:55:44,870][00980] Num frames 1500... -[2023-02-24 13:55:44,982][00980] Num frames 1600... -[2023-02-24 13:55:45,098][00980] Num frames 1700... -[2023-02-24 13:55:45,212][00980] Num frames 1800... -[2023-02-24 13:55:45,331][00980] Num frames 1900... -[2023-02-24 13:55:45,452][00980] Num frames 2000... -[2023-02-24 13:55:45,569][00980] Num frames 2100... -[2023-02-24 13:55:45,622][00980] Avg episode rewards: #0: 55.999, true rewards: #0: 21.000 -[2023-02-24 13:55:45,624][00980] Avg episode reward: 55.999, avg true_objective: 21.000 -[2023-02-24 13:55:45,739][00980] Num frames 2200... -[2023-02-24 13:55:45,851][00980] Num frames 2300... -[2023-02-24 13:55:45,962][00980] Num frames 2400... -[2023-02-24 13:55:46,071][00980] Num frames 2500... -[2023-02-24 13:55:46,192][00980] Num frames 2600... -[2023-02-24 13:55:46,311][00980] Num frames 2700... -[2023-02-24 13:55:46,430][00980] Num frames 2800... -[2023-02-24 13:55:46,541][00980] Num frames 2900... -[2023-02-24 13:55:46,668][00980] Num frames 3000... -[2023-02-24 13:55:46,778][00980] Num frames 3100... -[2023-02-24 13:55:46,891][00980] Num frames 3200... -[2023-02-24 13:55:47,003][00980] Num frames 3300... -[2023-02-24 13:55:47,121][00980] Num frames 3400... -[2023-02-24 13:55:47,240][00980] Num frames 3500... -[2023-02-24 13:55:47,360][00980] Num frames 3600... -[2023-02-24 13:55:47,480][00980] Num frames 3700... -[2023-02-24 13:55:47,591][00980] Num frames 3800... -[2023-02-24 13:55:47,704][00980] Num frames 3900... 
-[2023-02-24 13:55:47,868][00980] Avg episode rewards: #0: 52.459, true rewards: #0: 19.960 -[2023-02-24 13:55:47,869][00980] Avg episode reward: 52.459, avg true_objective: 19.960 -[2023-02-24 13:55:47,884][00980] Num frames 4000... -[2023-02-24 13:55:47,999][00980] Num frames 4100... -[2023-02-24 13:55:48,109][00980] Num frames 4200... -[2023-02-24 13:55:48,219][00980] Num frames 4300... -[2023-02-24 13:55:48,332][00980] Num frames 4400... -[2023-02-24 13:55:48,468][00980] Num frames 4500... -[2023-02-24 13:55:48,588][00980] Num frames 4600... -[2023-02-24 13:55:48,709][00980] Num frames 4700... -[2023-02-24 13:55:48,822][00980] Num frames 4800... -[2023-02-24 13:55:48,937][00980] Num frames 4900... -[2023-02-24 13:55:49,050][00980] Num frames 5000... -[2023-02-24 13:55:49,206][00980] Avg episode rewards: #0: 43.623, true rewards: #0: 16.957 -[2023-02-24 13:55:49,208][00980] Avg episode reward: 43.623, avg true_objective: 16.957 -[2023-02-24 13:55:49,228][00980] Num frames 5100... -[2023-02-24 13:55:49,341][00980] Num frames 5200... -[2023-02-24 13:55:49,466][00980] Num frames 5300... -[2023-02-24 13:55:49,578][00980] Num frames 5400... -[2023-02-24 13:55:49,690][00980] Num frames 5500... -[2023-02-24 13:55:49,804][00980] Num frames 5600... -[2023-02-24 13:55:49,968][00980] Avg episode rewards: #0: 35.492, true rewards: #0: 14.243 -[2023-02-24 13:55:49,970][00980] Avg episode reward: 35.492, avg true_objective: 14.243 -[2023-02-24 13:55:49,976][00980] Num frames 5700... -[2023-02-24 13:55:50,090][00980] Num frames 5800... -[2023-02-24 13:55:50,202][00980] Num frames 5900... -[2023-02-24 13:55:50,314][00980] Num frames 6000... -[2023-02-24 13:55:50,439][00980] Num frames 6100... -[2023-02-24 13:55:50,519][00980] Avg episode rewards: #0: 30.026, true rewards: #0: 12.226 -[2023-02-24 13:55:50,521][00980] Avg episode reward: 30.026, avg true_objective: 12.226 -[2023-02-24 13:55:50,632][00980] Num frames 6200... -[2023-02-24 13:55:50,747][00980] Num frames 6300... -[2023-02-24 13:55:50,861][00980] Num frames 6400... -[2023-02-24 13:55:50,987][00980] Num frames 6500... -[2023-02-24 13:55:51,101][00980] Num frames 6600... -[2023-02-24 13:55:51,224][00980] Num frames 6700... -[2023-02-24 13:55:51,338][00980] Num frames 6800... -[2023-02-24 13:55:51,469][00980] Num frames 6900... -[2023-02-24 13:55:51,585][00980] Num frames 7000... -[2023-02-24 13:55:51,665][00980] Avg episode rewards: #0: 28.361, true rewards: #0: 11.695 -[2023-02-24 13:55:51,669][00980] Avg episode reward: 28.361, avg true_objective: 11.695 -[2023-02-24 13:55:51,760][00980] Num frames 7100... -[2023-02-24 13:55:51,873][00980] Num frames 7200... -[2023-02-24 13:55:51,994][00980] Num frames 7300... -[2023-02-24 13:55:52,108][00980] Num frames 7400... -[2023-02-24 13:55:52,221][00980] Num frames 7500... -[2023-02-24 13:55:52,336][00980] Num frames 7600... -[2023-02-24 13:55:52,448][00980] Num frames 7700... -[2023-02-24 13:55:52,572][00980] Num frames 7800... -[2023-02-24 13:55:52,725][00980] Avg episode rewards: #0: 27.265, true rewards: #0: 11.266 -[2023-02-24 13:55:52,727][00980] Avg episode reward: 27.265, avg true_objective: 11.266 -[2023-02-24 13:55:52,746][00980] Num frames 7900... -[2023-02-24 13:55:52,858][00980] Num frames 8000... -[2023-02-24 13:55:52,967][00980] Num frames 8100... -[2023-02-24 13:55:53,081][00980] Num frames 8200... -[2023-02-24 13:55:53,228][00980] Num frames 8300... -[2023-02-24 13:55:53,398][00980] Num frames 8400... 
-[2023-02-24 13:55:53,602][00980] Avg episode rewards: #0: 25.492, true rewards: #0: 10.617 -[2023-02-24 13:55:53,605][00980] Avg episode reward: 25.492, avg true_objective: 10.617 -[2023-02-24 13:55:53,619][00980] Num frames 8500... -[2023-02-24 13:55:53,775][00980] Num frames 8600... -[2023-02-24 13:55:53,934][00980] Num frames 8700... -[2023-02-24 13:55:54,021][00980] Avg episode rewards: #0: 23.020, true rewards: #0: 9.687 -[2023-02-24 13:55:54,024][00980] Avg episode reward: 23.020, avg true_objective: 9.687 -[2023-02-24 13:55:54,159][00980] Num frames 8800... -[2023-02-24 13:55:54,318][00980] Num frames 8900... -[2023-02-24 13:55:54,487][00980] Num frames 9000... -[2023-02-24 13:55:54,648][00980] Num frames 9100... -[2023-02-24 13:55:54,815][00980] Num frames 9200... -[2023-02-24 13:55:54,976][00980] Num frames 9300... -[2023-02-24 13:55:55,142][00980] Num frames 9400... -[2023-02-24 13:55:55,310][00980] Num frames 9500... -[2023-02-24 13:55:55,486][00980] Num frames 9600... -[2023-02-24 13:55:55,656][00980] Num frames 9700... -[2023-02-24 13:55:55,819][00980] Num frames 9800... -[2023-02-24 13:55:55,981][00980] Num frames 9900... -[2023-02-24 13:55:56,199][00980] Avg episode rewards: #0: 23.598, true rewards: #0: 9.998 -[2023-02-24 13:55:56,202][00980] Avg episode reward: 23.598, avg true_objective: 9.998 -[2023-02-24 13:55:56,206][00980] Num frames 10000... -[2023-02-24 13:56:56,166][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 13:57:22,832][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 13:57:22,834][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 13:57:22,837][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 13:57:22,839][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 13:57:22,841][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 13:57:22,844][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 13:57:22,848][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! -[2023-02-24 13:57:22,850][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 13:57:22,851][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! -[2023-02-24 13:57:22,852][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! -[2023-02-24 13:57:22,854][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 13:57:22,855][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 13:57:22,857][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 13:57:22,858][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
-[2023-02-24 13:57:22,860][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 13:57:22,883][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 13:57:22,885][00980] RunningMeanStd input shape: (1,) -[2023-02-24 13:57:22,898][00980] ConvEncoder: input_channels=3 -[2023-02-24 13:57:22,934][00980] Conv encoder output size: 512 -[2023-02-24 13:57:22,935][00980] Policy head output size: 512 -[2023-02-24 13:57:22,956][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-24 13:57:23,383][00980] Num frames 100... -[2023-02-24 13:57:23,490][00980] Num frames 200... -[2023-02-24 13:57:23,605][00980] Num frames 300... -[2023-02-24 13:57:23,714][00980] Num frames 400... -[2023-02-24 13:57:23,828][00980] Num frames 500... -[2023-02-24 13:57:23,940][00980] Num frames 600... -[2023-02-24 13:57:24,054][00980] Num frames 700... -[2023-02-24 13:57:24,167][00980] Num frames 800... -[2023-02-24 13:57:24,279][00980] Num frames 900... -[2023-02-24 13:57:24,389][00980] Num frames 1000... -[2023-02-24 13:57:24,472][00980] Avg episode rewards: #0: 22.240, true rewards: #0: 10.240 -[2023-02-24 13:57:24,475][00980] Avg episode reward: 22.240, avg true_objective: 10.240 -[2023-02-24 13:57:24,575][00980] Num frames 1100... -[2023-02-24 13:57:24,689][00980] Num frames 1200... -[2023-02-24 13:57:24,811][00980] Num frames 1300... -[2023-02-24 13:57:24,924][00980] Num frames 1400... -[2023-02-24 13:57:25,036][00980] Num frames 1500... -[2023-02-24 13:57:25,148][00980] Num frames 1600... -[2023-02-24 13:57:25,270][00980] Num frames 1700... -[2023-02-24 13:57:25,389][00980] Num frames 1800... -[2023-02-24 13:57:25,499][00980] Num frames 1900... -[2023-02-24 13:57:25,577][00980] Avg episode rewards: #0: 22.600, true rewards: #0: 9.600 -[2023-02-24 13:57:25,579][00980] Avg episode reward: 22.600, avg true_objective: 9.600 -[2023-02-24 13:57:25,669][00980] Num frames 2000... -[2023-02-24 13:57:25,783][00980] Num frames 2100... -[2023-02-24 13:57:25,905][00980] Num frames 2200... -[2023-02-24 13:57:26,018][00980] Num frames 2300... -[2023-02-24 13:57:26,130][00980] Num frames 2400... -[2023-02-24 13:57:26,242][00980] Num frames 2500... -[2023-02-24 13:57:26,361][00980] Num frames 2600... -[2023-02-24 13:57:26,477][00980] Num frames 2700... -[2023-02-24 13:57:26,607][00980] Num frames 2800... -[2023-02-24 13:57:26,693][00980] Avg episode rewards: #0: 21.094, true rewards: #0: 9.427 -[2023-02-24 13:57:26,696][00980] Avg episode reward: 21.094, avg true_objective: 9.427 -[2023-02-24 13:57:26,779][00980] Num frames 2900... -[2023-02-24 13:57:26,891][00980] Num frames 3000... -[2023-02-24 13:57:27,001][00980] Num frames 3100... -[2023-02-24 13:57:27,115][00980] Num frames 3200... -[2023-02-24 13:57:27,231][00980] Num frames 3300... -[2023-02-24 13:57:27,342][00980] Num frames 3400... -[2023-02-24 13:57:27,453][00980] Num frames 3500... -[2023-02-24 13:57:27,556][00980] Avg episode rewards: #0: 19.358, true rewards: #0: 8.857 -[2023-02-24 13:57:27,558][00980] Avg episode reward: 19.358, avg true_objective: 8.857 -[2023-02-24 13:57:27,635][00980] Num frames 3600... -[2023-02-24 13:57:27,748][00980] Num frames 3700... -[2023-02-24 13:57:27,870][00980] Num frames 3800... -[2023-02-24 13:57:27,981][00980] Num frames 3900... -[2023-02-24 13:57:28,109][00980] Num frames 4000... 
-[2023-02-24 13:57:28,204][00980] Avg episode rewards: #0: 16.846, true rewards: #0: 8.046 -[2023-02-24 13:57:28,206][00980] Avg episode reward: 16.846, avg true_objective: 8.046 -[2023-02-24 13:57:28,337][00980] Num frames 4100... -[2023-02-24 13:57:28,500][00980] Num frames 4200... -[2023-02-24 13:57:28,661][00980] Num frames 4300... -[2023-02-24 13:57:28,821][00980] Num frames 4400... -[2023-02-24 13:57:28,984][00980] Num frames 4500... -[2023-02-24 13:57:29,140][00980] Num frames 4600... -[2023-02-24 13:57:29,302][00980] Num frames 4700... -[2023-02-24 13:57:29,457][00980] Num frames 4800... -[2023-02-24 13:57:29,617][00980] Num frames 4900... -[2023-02-24 13:57:29,793][00980] Num frames 5000... -[2023-02-24 13:57:29,958][00980] Num frames 5100... -[2023-02-24 13:57:30,119][00980] Num frames 5200... -[2023-02-24 13:57:30,283][00980] Num frames 5300... -[2023-02-24 13:57:30,457][00980] Num frames 5400... -[2023-02-24 13:57:30,620][00980] Num frames 5500... -[2023-02-24 13:57:30,788][00980] Num frames 5600... -[2023-02-24 13:57:30,955][00980] Num frames 5700... -[2023-02-24 13:57:31,092][00980] Avg episode rewards: #0: 21.418, true rewards: #0: 9.585 -[2023-02-24 13:57:31,094][00980] Avg episode reward: 21.418, avg true_objective: 9.585 -[2023-02-24 13:57:31,179][00980] Num frames 5800... -[2023-02-24 13:57:31,343][00980] Num frames 5900... -[2023-02-24 13:57:31,515][00980] Num frames 6000... -[2023-02-24 13:57:31,682][00980] Num frames 6100... -[2023-02-24 13:57:31,813][00980] Num frames 6200... -[2023-02-24 13:57:31,932][00980] Num frames 6300... -[2023-02-24 13:57:32,043][00980] Num frames 6400... -[2023-02-24 13:57:32,153][00980] Num frames 6500... -[2023-02-24 13:57:32,264][00980] Num frames 6600... -[2023-02-24 13:57:32,382][00980] Num frames 6700... -[2023-02-24 13:57:32,495][00980] Num frames 6800... -[2023-02-24 13:57:32,613][00980] Num frames 6900... -[2023-02-24 13:57:32,725][00980] Avg episode rewards: #0: 22.359, true rewards: #0: 9.930 -[2023-02-24 13:57:32,726][00980] Avg episode reward: 22.359, avg true_objective: 9.930 -[2023-02-24 13:57:32,788][00980] Num frames 7000... -[2023-02-24 13:57:32,898][00980] Num frames 7100... -[2023-02-24 13:57:33,014][00980] Num frames 7200... -[2023-02-24 13:57:33,124][00980] Num frames 7300... -[2023-02-24 13:57:33,239][00980] Num frames 7400... -[2023-02-24 13:57:33,351][00980] Num frames 7500... -[2023-02-24 13:57:33,507][00980] Avg episode rewards: #0: 20.989, true rewards: #0: 9.489 -[2023-02-24 13:57:33,509][00980] Avg episode reward: 20.989, avg true_objective: 9.489 -[2023-02-24 13:57:33,522][00980] Num frames 7600... -[2023-02-24 13:57:33,636][00980] Num frames 7700... -[2023-02-24 13:57:33,769][00980] Num frames 7800... -[2023-02-24 13:57:33,889][00980] Num frames 7900... -[2023-02-24 13:57:33,999][00980] Num frames 8000... -[2023-02-24 13:57:34,109][00980] Num frames 8100... -[2023-02-24 13:57:34,218][00980] Num frames 8200... -[2023-02-24 13:57:34,330][00980] Num frames 8300... -[2023-02-24 13:57:34,446][00980] Num frames 8400... -[2023-02-24 13:57:34,563][00980] Num frames 8500... -[2023-02-24 13:57:34,678][00980] Num frames 8600... -[2023-02-24 13:57:34,796][00980] Num frames 8700... -[2023-02-24 13:57:34,909][00980] Num frames 8800... -[2023-02-24 13:57:35,045][00980] Avg episode rewards: #0: 22.523, true rewards: #0: 9.857 -[2023-02-24 13:57:35,047][00980] Avg episode reward: 22.523, avg true_objective: 9.857 -[2023-02-24 13:57:35,083][00980] Num frames 8900... -[2023-02-24 13:57:35,194][00980] Num frames 9000... 
-[2023-02-24 13:57:35,310][00980] Num frames 9100... -[2023-02-24 13:57:35,422][00980] Num frames 9200... -[2023-02-24 13:57:35,550][00980] Num frames 9300... -[2023-02-24 13:57:35,664][00980] Num frames 9400... -[2023-02-24 13:57:35,775][00980] Num frames 9500... -[2023-02-24 13:57:35,895][00980] Num frames 9600... -[2023-02-24 13:57:36,007][00980] Num frames 9700... -[2023-02-24 13:57:36,121][00980] Num frames 9800... -[2023-02-24 13:57:36,241][00980] Num frames 9900... -[2023-02-24 13:57:36,392][00980] Avg episode rewards: #0: 22.988, true rewards: #0: 9.988 -[2023-02-24 13:57:36,394][00980] Avg episode reward: 22.988, avg true_objective: 9.988 -[2023-02-24 13:58:35,822][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 13:58:39,900][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme -[2023-02-24 14:05:27,094][00980] Environment doom_basic already registered, overwriting... -[2023-02-24 14:05:27,098][00980] Environment doom_two_colors_easy already registered, overwriting... -[2023-02-24 14:05:27,100][00980] Environment doom_two_colors_hard already registered, overwriting... -[2023-02-24 14:05:27,101][00980] Environment doom_dm already registered, overwriting... -[2023-02-24 14:05:27,105][00980] Environment doom_dwango5 already registered, overwriting... -[2023-02-24 14:05:27,107][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... -[2023-02-24 14:05:27,109][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... -[2023-02-24 14:05:27,110][00980] Environment doom_my_way_home already registered, overwriting... -[2023-02-24 14:05:27,114][00980] Environment doom_deadly_corridor already registered, overwriting... -[2023-02-24 14:05:27,115][00980] Environment doom_defend_the_center already registered, overwriting... -[2023-02-24 14:05:27,119][00980] Environment doom_defend_the_line already registered, overwriting... -[2023-02-24 14:05:27,120][00980] Environment doom_health_gathering already registered, overwriting... -[2023-02-24 14:05:27,121][00980] Environment doom_health_gathering_supreme already registered, overwriting... -[2023-02-24 14:05:27,122][00980] Environment doom_battle already registered, overwriting... -[2023-02-24 14:05:27,124][00980] Environment doom_battle2 already registered, overwriting... -[2023-02-24 14:05:27,128][00980] Environment doom_duel_bots already registered, overwriting... -[2023-02-24 14:05:27,130][00980] Environment doom_deathmatch_bots already registered, overwriting... -[2023-02-24 14:05:27,131][00980] Environment doom_duel already registered, overwriting... -[2023-02-24 14:05:27,132][00980] Environment doom_deathmatch_full already registered, overwriting... -[2023-02-24 14:05:27,133][00980] Environment doom_benchmark already registered, overwriting... -[2023-02-24 14:05:27,135][00980] register_encoder_factory: -[2023-02-24 14:05:27,169][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:05:27,170][00980] Overriding arg 'train_for_env_steps' with value 1000000 passed from command line -[2023-02-24 14:05:27,179][00980] Experiment dir /content/train_dir/default_experiment already exists! -[2023-02-24 14:05:27,185][00980] Resuming existing experiment from /content/train_dir/default_experiment... 
-[2023-02-24 14:05:27,186][00980] Weights and Biases integration disabled
-[2023-02-24 14:05:27,191][00980] Environment var CUDA_VISIBLE_DEVICES is 0
-
-[2023-02-24 14:05:29,280][00980] Starting experiment with the following configuration:
-help=False
-algo=APPO
-env=doom_health_gathering_supreme
-experiment=default_experiment
-train_dir=/content/train_dir
-restart_behavior=resume
-device=gpu
-seed=None
-num_policies=1
-async_rl=True
-serial_mode=False
-batched_sampling=False
-num_batches_to_accumulate=2
-worker_num_splits=2
-policy_workers_per_policy=1
-max_policy_lag=1000
-num_workers=8
-num_envs_per_worker=4
-batch_size=1024
-num_batches_per_epoch=1
-num_epochs=1
-rollout=32
-recurrence=32
-shuffle_minibatches=False
-gamma=0.99
-reward_scale=1.0
-reward_clip=1000.0
-value_bootstrap=False
-normalize_returns=True
-exploration_loss_coeff=0.001
-value_loss_coeff=0.5
-kl_loss_coeff=0.0
-exploration_loss=symmetric_kl
-gae_lambda=0.95
-ppo_clip_ratio=0.1
-ppo_clip_value=0.2
-with_vtrace=False
-vtrace_rho=1.0
-vtrace_c=1.0
-optimizer=adam
-adam_eps=1e-06
-adam_beta1=0.9
-adam_beta2=0.999
-max_grad_norm=4.0
-learning_rate=0.0001
-lr_schedule=constant
-lr_schedule_kl_threshold=0.008
-lr_adaptive_min=1e-06
-lr_adaptive_max=0.01
-obs_subtract_mean=0.0
-obs_scale=255.0
-normalize_input=True
-normalize_input_keys=None
-decorrelate_experience_max_seconds=0
-decorrelate_envs_on_one_worker=True
-actor_worker_gpus=[]
-set_workers_cpu_affinity=True
-force_envs_single_thread=False
-default_niceness=0
-log_to_file=True
-experiment_summaries_interval=10
-flush_summaries_interval=30
-stats_avg=100
-summaries_use_frameskip=True
-heartbeat_interval=20
-heartbeat_reporting_interval=600
-train_for_env_steps=1000000
-train_for_seconds=10000000000
-save_every_sec=120
-keep_checkpoints=2
-load_checkpoint_kind=latest
-save_milestones_sec=-1
-save_best_every_sec=5
-save_best_metric=reward
-save_best_after=100000
-benchmark=False
-encoder_mlp_layers=[512, 512]
-encoder_conv_architecture=convnet_simple
-encoder_conv_mlp_layers=[512]
-use_rnn=True
-rnn_size=512
-rnn_type=gru
-rnn_num_layers=1
-decoder_mlp_layers=[]
-nonlinearity=elu
-policy_initialization=orthogonal
-policy_init_gain=1.0
-actor_critic_share_weights=True
-adaptive_stddev=True
-continuous_tanh_scale=0.0
-initial_stddev=1.0
-use_env_info_cache=False
-env_gpu_actions=False
-env_gpu_observations=True
-env_frameskip=4
-env_framestack=1
-pixel_format=CHW
-use_record_episode_statistics=False
-with_wandb=False
-wandb_user=None
-wandb_project=sample_factory
-wandb_group=None
-wandb_job_type=SF
-wandb_tags=[]
-with_pbt=False
-pbt_mix_policies_in_one_env=True
-pbt_period_env_steps=5000000
-pbt_start_mutation=20000000
-pbt_replace_fraction=0.3
-pbt_mutation_rate=0.15
-pbt_replace_reward_gap=0.1
-pbt_replace_reward_gap_absolute=1e-06
-pbt_optimize_gamma=False
-pbt_target_objective=true_objective
-pbt_perturb_min=1.1
-pbt_perturb_max=1.5
-num_agents=-1
-num_humans=0
-num_bots=-1
-start_bot_difficulty=None
-timelimit=None
-res_w=128
-res_h=72
-wide_aspect_ratio=False
-eval_env_frameskip=1
-fps=35
-command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
-cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
-git_hash=unknown
-git_repo_name=not a git repository
-[2023-02-24 14:05:29,284][00980] Saving configuration to /content/train_dir/default_experiment/config.json...
-[2023-02-24 14:05:29,288][00980] Rollout worker 0 uses device cpu -[2023-02-24 14:05:29,293][00980] Rollout worker 1 uses device cpu -[2023-02-24 14:05:29,295][00980] Rollout worker 2 uses device cpu -[2023-02-24 14:05:29,297][00980] Rollout worker 3 uses device cpu -[2023-02-24 14:05:29,299][00980] Rollout worker 4 uses device cpu -[2023-02-24 14:05:29,301][00980] Rollout worker 5 uses device cpu -[2023-02-24 14:05:29,303][00980] Rollout worker 6 uses device cpu -[2023-02-24 14:05:29,305][00980] Rollout worker 7 uses device cpu -[2023-02-24 14:05:29,424][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:05:29,426][00980] InferenceWorker_p0-w0: min num requests: 2 -[2023-02-24 14:05:29,459][00980] Starting all processes... -[2023-02-24 14:05:29,461][00980] Starting process learner_proc0 -[2023-02-24 14:05:29,592][00980] Starting all processes... -[2023-02-24 14:05:29,603][00980] Starting process inference_proc0-0 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc0 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc1 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc2 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc3 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc4 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc5 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc6 -[2023-02-24 14:05:29,604][00980] Starting process rollout_proc7 -[2023-02-24 14:05:37,475][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:05:37,492][20720] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-02-24 14:05:37,530][20720] Num visible devices: 1 -[2023-02-24 14:05:37,562][20720] Starting seed is not provided -[2023-02-24 14:05:37,563][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:05:37,564][20720] Initializing actor-critic model on device cuda:0 -[2023-02-24 14:05:37,565][20720] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:05:37,567][20720] RunningMeanStd input shape: (1,) -[2023-02-24 14:05:37,652][20720] ConvEncoder: input_channels=3 -[2023-02-24 14:05:38,454][20720] Conv encoder output size: 512 -[2023-02-24 14:05:38,462][20720] Policy head output size: 512 -[2023-02-24 14:05:38,547][20720] Created Actor Critic model with architecture: -[2023-02-24 14:05:38,560][20720] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): VizdoomEncoder( - (basic_encoder): ConvEncoder( - (enc): RecursiveScriptModule( - original_name=ConvEncoderImpl - (conv_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Conv2d) - (1): RecursiveScriptModule(original_name=ELU) - (2): RecursiveScriptModule(original_name=Conv2d) - (3): RecursiveScriptModule(original_name=ELU) - (4): RecursiveScriptModule(original_name=Conv2d) - (5): RecursiveScriptModule(original_name=ELU) - ) - (mlp_layers): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=ELU) - ) - ) - ) - ) - (core): ModelCoreRNN( - (core): GRU(512, 512) - ) - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=512, out_features=1, 
bias=True) - (action_parameterization): ActionParameterizationDefault( - (distribution_linear): Linear(in_features=512, out_features=5, bias=True) - ) -) -[2023-02-24 14:05:38,848][20735] Worker 0 uses CPU cores [0] -[2023-02-24 14:05:38,874][20734] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:05:38,874][20734] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-02-24 14:05:38,948][20734] Num visible devices: 1 -[2023-02-24 14:05:39,017][20737] Worker 2 uses CPU cores [0] -[2023-02-24 14:05:39,818][20747] Worker 4 uses CPU cores [0] -[2023-02-24 14:05:39,866][20739] Worker 3 uses CPU cores [1] -[2023-02-24 14:05:40,106][20741] Worker 1 uses CPU cores [1] -[2023-02-24 14:05:40,463][20748] Worker 7 uses CPU cores [1] -[2023-02-24 14:05:40,605][20750] Worker 5 uses CPU cores [1] -[2023-02-24 14:05:40,683][20753] Worker 6 uses CPU cores [0] -[2023-02-24 14:05:43,236][20720] Using optimizer -[2023-02-24 14:05:43,237][20720] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... -[2023-02-24 14:05:43,279][20720] Loading model from checkpoint -[2023-02-24 14:05:43,287][20720] Loaded experiment state at self.train_step=978, self.env_steps=4005888 -[2023-02-24 14:05:43,287][20720] Initialized policy 0 weights for model version 978 -[2023-02-24 14:05:43,293][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:05:43,302][20720] LearnerWorker_p0 finished initialization! -[2023-02-24 14:05:43,650][20734] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:05:43,652][20734] RunningMeanStd input shape: (1,) -[2023-02-24 14:05:43,671][20734] ConvEncoder: input_channels=3 -[2023-02-24 14:05:43,805][20734] Conv encoder output size: 512 -[2023-02-24 14:05:43,805][20734] Policy head output size: 512 -[2023-02-24 14:05:46,152][00980] Inference worker 0-0 is ready! -[2023-02-24 14:05:46,154][00980] All inference workers are ready! Signal rollout workers to start! -[2023-02-24 14:05:46,278][20747] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,277][20735] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,274][20753] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,298][20737] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,310][20739] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,313][20748] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,317][20750] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:46,320][20741] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:05:47,191][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:05:47,495][20747] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,496][20735] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,497][20753] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,497][20750] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,500][20739] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,503][20741] Decorrelating experience for 0 frames... -[2023-02-24 14:05:47,857][20741] Decorrelating experience for 32 frames... -[2023-02-24 14:05:48,254][20741] Decorrelating experience for 64 frames... 
-[2023-02-24 14:05:48,444][20753] Decorrelating experience for 32 frames... -[2023-02-24 14:05:48,461][20735] Decorrelating experience for 32 frames... -[2023-02-24 14:05:48,471][20737] Decorrelating experience for 0 frames... -[2023-02-24 14:05:48,844][20750] Decorrelating experience for 32 frames... -[2023-02-24 14:05:49,216][20737] Decorrelating experience for 32 frames... -[2023-02-24 14:05:49,254][20748] Decorrelating experience for 0 frames... -[2023-02-24 14:05:49,317][20753] Decorrelating experience for 64 frames... -[2023-02-24 14:05:49,417][00980] Heartbeat connected on Batcher_0 -[2023-02-24 14:05:49,427][00980] Heartbeat connected on LearnerWorker_p0 -[2023-02-24 14:05:49,469][00980] Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-24 14:05:50,064][20735] Decorrelating experience for 64 frames... -[2023-02-24 14:05:50,153][20753] Decorrelating experience for 96 frames... -[2023-02-24 14:05:50,238][00980] Heartbeat connected on RolloutWorker_w6 -[2023-02-24 14:05:50,333][20748] Decorrelating experience for 32 frames... -[2023-02-24 14:05:50,358][20750] Decorrelating experience for 64 frames... -[2023-02-24 14:05:50,369][20741] Decorrelating experience for 96 frames... -[2023-02-24 14:05:50,673][00980] Heartbeat connected on RolloutWorker_w1 -[2023-02-24 14:05:51,453][20737] Decorrelating experience for 64 frames... -[2023-02-24 14:05:51,472][20747] Decorrelating experience for 32 frames... -[2023-02-24 14:05:51,931][20750] Decorrelating experience for 96 frames... -[2023-02-24 14:05:52,032][20748] Decorrelating experience for 64 frames... -[2023-02-24 14:05:52,195][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:05:52,267][00980] Heartbeat connected on RolloutWorker_w5 -[2023-02-24 14:05:53,434][20735] Decorrelating experience for 96 frames... -[2023-02-24 14:05:53,617][20737] Decorrelating experience for 96 frames... -[2023-02-24 14:05:53,926][00980] Heartbeat connected on RolloutWorker_w0 -[2023-02-24 14:05:54,196][00980] Heartbeat connected on RolloutWorker_w2 -[2023-02-24 14:05:54,491][20747] Decorrelating experience for 64 frames... -[2023-02-24 14:05:54,847][20748] Decorrelating experience for 96 frames... -[2023-02-24 14:05:55,791][00980] Heartbeat connected on RolloutWorker_w7 -[2023-02-24 14:05:57,191][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 152.4. Samples: 1524. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:05:57,201][00980] Avg episode reward: [(0, '2.762')] -[2023-02-24 14:05:57,549][20720] Signal inference workers to stop experience collection... -[2023-02-24 14:05:57,572][20739] Decorrelating experience for 32 frames... -[2023-02-24 14:05:57,574][20734] InferenceWorker_p0-w0: stopping experience collection -[2023-02-24 14:05:58,375][20747] Decorrelating experience for 96 frames... -[2023-02-24 14:05:58,585][00980] Heartbeat connected on RolloutWorker_w4 -[2023-02-24 14:05:58,843][20739] Decorrelating experience for 64 frames... -[2023-02-24 14:05:59,511][20739] Decorrelating experience for 96 frames... -[2023-02-24 14:05:59,622][00980] Heartbeat connected on RolloutWorker_w3 -[2023-02-24 14:06:00,370][20720] Signal inference workers to resume experience collection... -[2023-02-24 14:06:00,371][20734] InferenceWorker_p0-w0: resuming experience collection -[2023-02-24 14:06:00,372][20720] Stopping Batcher_0... 
-[2023-02-24 14:06:00,373][20720] Loop batcher_evt_loop terminating... -[2023-02-24 14:06:00,398][00980] Component Batcher_0 stopped! -[2023-02-24 14:06:00,464][20734] Weights refcount: 2 0 -[2023-02-24 14:06:00,467][20734] Stopping InferenceWorker_p0-w0... -[2023-02-24 14:06:00,467][00980] Component InferenceWorker_p0-w0 stopped! -[2023-02-24 14:06:00,467][20734] Loop inference_proc0-0_evt_loop terminating... -[2023-02-24 14:06:00,567][20747] Stopping RolloutWorker_w4... -[2023-02-24 14:06:00,575][20747] Loop rollout_proc4_evt_loop terminating... -[2023-02-24 14:06:00,571][00980] Component RolloutWorker_w4 stopped! -[2023-02-24 14:06:00,580][20753] Stopping RolloutWorker_w6... -[2023-02-24 14:06:00,581][20753] Loop rollout_proc6_evt_loop terminating... -[2023-02-24 14:06:00,581][00980] Component RolloutWorker_w6 stopped! -[2023-02-24 14:06:00,585][20737] Stopping RolloutWorker_w2... -[2023-02-24 14:06:00,585][20737] Loop rollout_proc2_evt_loop terminating... -[2023-02-24 14:06:00,588][00980] Component RolloutWorker_w2 stopped! -[2023-02-24 14:06:00,594][20735] Stopping RolloutWorker_w0... -[2023-02-24 14:06:00,595][00980] Component RolloutWorker_w0 stopped! -[2023-02-24 14:06:00,594][20735] Loop rollout_proc0_evt_loop terminating... -[2023-02-24 14:06:00,623][00980] Component RolloutWorker_w7 stopped! -[2023-02-24 14:06:00,629][20748] Stopping RolloutWorker_w7... -[2023-02-24 14:06:00,630][20748] Loop rollout_proc7_evt_loop terminating... -[2023-02-24 14:06:00,659][00980] Component RolloutWorker_w1 stopped! -[2023-02-24 14:06:00,664][00980] Component RolloutWorker_w5 stopped! -[2023-02-24 14:06:00,668][20750] Stopping RolloutWorker_w5... -[2023-02-24 14:06:00,669][20750] Loop rollout_proc5_evt_loop terminating... -[2023-02-24 14:06:00,670][20741] Stopping RolloutWorker_w1... -[2023-02-24 14:06:00,670][20741] Loop rollout_proc1_evt_loop terminating... -[2023-02-24 14:06:00,699][00980] Component RolloutWorker_w3 stopped! -[2023-02-24 14:06:00,710][20739] Stopping RolloutWorker_w3... -[2023-02-24 14:06:00,710][20739] Loop rollout_proc3_evt_loop terminating... -[2023-02-24 14:06:03,543][20720] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-24 14:06:03,663][20720] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth -[2023-02-24 14:06:03,677][20720] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-24 14:06:03,858][20720] Stopping LearnerWorker_p0... -[2023-02-24 14:06:03,859][20720] Loop learner_proc0_evt_loop terminating... -[2023-02-24 14:06:03,858][00980] Component LearnerWorker_p0 stopped! -[2023-02-24 14:06:03,869][00980] Waiting for process learner_proc0 to stop... -[2023-02-24 14:06:04,891][00980] Waiting for process inference_proc0-0 to join... -[2023-02-24 14:06:04,893][00980] Waiting for process rollout_proc0 to join... -[2023-02-24 14:06:04,896][00980] Waiting for process rollout_proc1 to join... -[2023-02-24 14:06:04,901][00980] Waiting for process rollout_proc2 to join... -[2023-02-24 14:06:04,902][00980] Waiting for process rollout_proc3 to join... -[2023-02-24 14:06:04,904][00980] Waiting for process rollout_proc4 to join... -[2023-02-24 14:06:04,909][00980] Waiting for process rollout_proc5 to join... -[2023-02-24 14:06:04,911][00980] Waiting for process rollout_proc6 to join... -[2023-02-24 14:06:04,913][00980] Waiting for process rollout_proc7 to join... 
-[2023-02-24 14:06:04,914][00980] Batcher 0 profile tree view: -batching: 0.0456, releasing_batches: 0.0011 -[2023-02-24 14:06:04,916][00980] InferenceWorker_p0-w0 profile tree view: -wait_policy: 0.0051 - wait_policy_total: 7.3640 -update_model: 0.0356 - weight_update: 0.0127 -one_step: 0.0326 - handle_policy_step: 3.9057 - deserialize: 0.0505, stack: 0.0081, obs_to_device_normalize: 0.3641, forward: 3.0388, send_messages: 0.0944 - prepare_outputs: 0.2652 - to_cpu: 0.1469 -[2023-02-24 14:06:04,917][00980] Learner 0 profile tree view: -misc: 0.0000, prepare_batch: 8.0217 -train: 1.7323 - epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0005, kl_divergence: 0.0026, after_optimizer: 0.0345 - calculate_losses: 0.2408 - losses_init: 0.0000, forward_head: 0.1146, bptt_initial: 0.0866, tail: 0.0014, advantages_returns: 0.0010, losses: 0.0318 - bptt: 0.0049 - bptt_forward_core: 0.0048 - update: 1.4525 - clip: 0.0120 -[2023-02-24 14:06:04,918][00980] RolloutWorker_w0 profile tree view: -wait_for_trajectories: 0.0005, enqueue_policy_requests: 0.7739, env_step: 2.6918, overhead: 0.0454, complete_rollouts: 0.0249 -save_policy_outputs: 0.0684 - split_output_tensors: 0.0449 -[2023-02-24 14:06:04,920][00980] RolloutWorker_w7 profile tree view: -wait_for_trajectories: 0.0015, enqueue_policy_requests: 0.2734, env_step: 1.2700, overhead: 0.0664, complete_rollouts: 0.0008 -save_policy_outputs: 0.0364 - split_output_tensors: 0.0053 -[2023-02-24 14:06:04,922][00980] Loop Runner_EvtLoop terminating... -[2023-02-24 14:06:04,924][00980] Runner profile tree view: -main_loop: 35.4650 -[2023-02-24 14:06:04,927][00980] Collected {0: 4014080}, FPS: 231.0 -[2023-02-24 14:06:04,960][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:06:04,963][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:06:04,964][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 14:06:04,966][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:06:04,971][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:06:04,972][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:06:04,976][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:06:04,977][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:06:04,980][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file! -[2023-02-24 14:06:04,982][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! -[2023-02-24 14:06:04,984][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:06:04,987][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:06:04,989][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 14:06:04,990][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
-[2023-02-24 14:06:04,991][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:06:05,021][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:06:05,024][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:06:05,042][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:06:05,093][00980] Conv encoder output size: 512 -[2023-02-24 14:06:05,095][00980] Policy head output size: 512 -[2023-02-24 14:06:05,123][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-24 14:06:05,565][00980] Num frames 100... -[2023-02-24 14:06:05,685][00980] Num frames 200... -[2023-02-24 14:06:05,798][00980] Num frames 300... -[2023-02-24 14:06:05,912][00980] Num frames 400... -[2023-02-24 14:06:06,025][00980] Num frames 500... -[2023-02-24 14:06:06,142][00980] Num frames 600... -[2023-02-24 14:06:06,254][00980] Num frames 700... -[2023-02-24 14:06:06,363][00980] Num frames 800... -[2023-02-24 14:06:06,480][00980] Num frames 900... -[2023-02-24 14:06:06,611][00980] Num frames 1000... -[2023-02-24 14:06:06,767][00980] Avg episode rewards: #0: 22.880, true rewards: #0: 10.880 -[2023-02-24 14:06:06,770][00980] Avg episode reward: 22.880, avg true_objective: 10.880 -[2023-02-24 14:06:06,788][00980] Num frames 1100... -[2023-02-24 14:06:06,898][00980] Num frames 1200... -[2023-02-24 14:06:07,011][00980] Num frames 1300... -[2023-02-24 14:06:07,122][00980] Num frames 1400... -[2023-02-24 14:06:07,234][00980] Num frames 1500... -[2023-02-24 14:06:07,364][00980] Num frames 1600... -[2023-02-24 14:06:07,487][00980] Num frames 1700... -[2023-02-24 14:06:07,603][00980] Num frames 1800... -[2023-02-24 14:06:07,718][00980] Num frames 1900... -[2023-02-24 14:06:07,835][00980] Num frames 2000... -[2023-02-24 14:06:07,949][00980] Num frames 2100... -[2023-02-24 14:06:08,032][00980] Avg episode rewards: #0: 23.120, true rewards: #0: 10.620 -[2023-02-24 14:06:08,034][00980] Avg episode reward: 23.120, avg true_objective: 10.620 -[2023-02-24 14:06:08,123][00980] Num frames 2200... -[2023-02-24 14:06:08,238][00980] Num frames 2300... -[2023-02-24 14:06:08,364][00980] Num frames 2400... -[2023-02-24 14:06:08,483][00980] Num frames 2500... -[2023-02-24 14:06:08,607][00980] Num frames 2600... -[2023-02-24 14:06:08,738][00980] Num frames 2700... -[2023-02-24 14:06:08,854][00980] Num frames 2800... -[2023-02-24 14:06:08,970][00980] Num frames 2900... -[2023-02-24 14:06:09,084][00980] Num frames 3000... -[2023-02-24 14:06:09,201][00980] Num frames 3100... -[2023-02-24 14:06:09,319][00980] Num frames 3200... -[2023-02-24 14:06:09,437][00980] Num frames 3300... -[2023-02-24 14:06:09,559][00980] Num frames 3400... -[2023-02-24 14:06:09,674][00980] Num frames 3500... -[2023-02-24 14:06:09,795][00980] Num frames 3600... -[2023-02-24 14:06:09,910][00980] Num frames 3700... -[2023-02-24 14:06:10,035][00980] Num frames 3800... -[2023-02-24 14:06:10,160][00980] Num frames 3900... -[2023-02-24 14:06:10,281][00980] Num frames 4000... -[2023-02-24 14:06:10,443][00980] Avg episode rewards: #0: 31.307, true rewards: #0: 13.640 -[2023-02-24 14:06:10,444][00980] Avg episode reward: 31.307, avg true_objective: 13.640 -[2023-02-24 14:06:10,460][00980] Num frames 4100... -[2023-02-24 14:06:10,583][00980] Num frames 4200... -[2023-02-24 14:06:10,703][00980] Num frames 4300... -[2023-02-24 14:06:10,816][00980] Num frames 4400... -[2023-02-24 14:06:10,934][00980] Num frames 4500... -[2023-02-24 14:06:11,046][00980] Num frames 4600... 
-[2023-02-24 14:06:11,171][00980] Num frames 4700... -[2023-02-24 14:06:11,286][00980] Num frames 4800... -[2023-02-24 14:06:11,389][00980] Avg episode rewards: #0: 26.857, true rewards: #0: 12.107 -[2023-02-24 14:06:11,392][00980] Avg episode reward: 26.857, avg true_objective: 12.107 -[2023-02-24 14:06:11,458][00980] Num frames 4900... -[2023-02-24 14:06:11,579][00980] Num frames 5000... -[2023-02-24 14:06:11,692][00980] Num frames 5100... -[2023-02-24 14:06:11,808][00980] Num frames 5200... -[2023-02-24 14:06:11,922][00980] Num frames 5300... -[2023-02-24 14:06:12,040][00980] Num frames 5400... -[2023-02-24 14:06:12,155][00980] Num frames 5500... -[2023-02-24 14:06:12,306][00980] Avg episode rewards: #0: 24.558, true rewards: #0: 11.158 -[2023-02-24 14:06:12,308][00980] Avg episode reward: 24.558, avg true_objective: 11.158 -[2023-02-24 14:06:12,333][00980] Num frames 5600... -[2023-02-24 14:06:12,453][00980] Num frames 5700... -[2023-02-24 14:06:12,571][00980] Num frames 5800... -[2023-02-24 14:06:12,692][00980] Num frames 5900... -[2023-02-24 14:06:12,805][00980] Num frames 6000... -[2023-02-24 14:06:12,925][00980] Num frames 6100... -[2023-02-24 14:06:13,040][00980] Num frames 6200... -[2023-02-24 14:06:13,152][00980] Num frames 6300... -[2023-02-24 14:06:13,263][00980] Num frames 6400... -[2023-02-24 14:06:13,378][00980] Num frames 6500... -[2023-02-24 14:06:13,492][00980] Num frames 6600... -[2023-02-24 14:06:13,622][00980] Num frames 6700... -[2023-02-24 14:06:13,713][00980] Avg episode rewards: #0: 25.052, true rewards: #0: 11.218 -[2023-02-24 14:06:13,718][00980] Avg episode reward: 25.052, avg true_objective: 11.218 -[2023-02-24 14:06:13,801][00980] Num frames 6800... -[2023-02-24 14:06:13,913][00980] Num frames 6900... -[2023-02-24 14:06:14,031][00980] Num frames 7000... -[2023-02-24 14:06:14,144][00980] Num frames 7100... -[2023-02-24 14:06:14,264][00980] Num frames 7200... -[2023-02-24 14:06:14,407][00980] Num frames 7300... -[2023-02-24 14:06:14,587][00980] Num frames 7400... -[2023-02-24 14:06:14,653][00980] Avg episode rewards: #0: 23.147, true rewards: #0: 10.576 -[2023-02-24 14:06:14,655][00980] Avg episode reward: 23.147, avg true_objective: 10.576 -[2023-02-24 14:06:14,815][00980] Num frames 7500... -[2023-02-24 14:06:14,976][00980] Num frames 7600... -[2023-02-24 14:06:15,133][00980] Num frames 7700... -[2023-02-24 14:06:15,299][00980] Num frames 7800... -[2023-02-24 14:06:15,451][00980] Num frames 7900... -[2023-02-24 14:06:15,614][00980] Num frames 8000... -[2023-02-24 14:06:15,775][00980] Num frames 8100... -[2023-02-24 14:06:15,937][00980] Num frames 8200... -[2023-02-24 14:06:16,099][00980] Num frames 8300... -[2023-02-24 14:06:16,250][00980] Avg episode rewards: #0: 23.198, true rewards: #0: 10.447 -[2023-02-24 14:06:16,255][00980] Avg episode reward: 23.198, avg true_objective: 10.447 -[2023-02-24 14:06:16,325][00980] Num frames 8400... -[2023-02-24 14:06:16,489][00980] Num frames 8500... -[2023-02-24 14:06:16,656][00980] Num frames 8600... -[2023-02-24 14:06:16,818][00980] Num frames 8700... -[2023-02-24 14:06:16,985][00980] Num frames 8800... -[2023-02-24 14:06:17,151][00980] Num frames 8900... -[2023-02-24 14:06:17,319][00980] Num frames 9000... -[2023-02-24 14:06:17,482][00980] Num frames 9100... -[2023-02-24 14:06:17,648][00980] Num frames 9200... -[2023-02-24 14:06:17,812][00980] Num frames 9300... -[2023-02-24 14:06:17,974][00980] Num frames 9400... -[2023-02-24 14:06:18,116][00980] Num frames 9500... 
-[2023-02-24 14:06:18,261][00980] Avg episode rewards: #0: 23.638, true rewards: #0: 10.638 -[2023-02-24 14:06:18,262][00980] Avg episode reward: 23.638, avg true_objective: 10.638 -[2023-02-24 14:06:18,294][00980] Num frames 9600... -[2023-02-24 14:06:18,410][00980] Num frames 9700... -[2023-02-24 14:06:18,527][00980] Num frames 9800... -[2023-02-24 14:06:18,644][00980] Num frames 9900... -[2023-02-24 14:06:18,775][00980] Num frames 10000... -[2023-02-24 14:06:18,887][00980] Num frames 10100... -[2023-02-24 14:06:18,997][00980] Num frames 10200... -[2023-02-24 14:06:19,079][00980] Avg episode rewards: #0: 22.522, true rewards: #0: 10.222 -[2023-02-24 14:06:19,080][00980] Avg episode reward: 22.522, avg true_objective: 10.222 -[2023-02-24 14:07:20,449][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 14:07:20,479][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:07:20,481][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:07:20,482][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 14:07:20,485][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:07:20,486][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:07:20,487][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:07:20,488][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! -[2023-02-24 14:07:20,490][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:07:20,491][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! -[2023-02-24 14:07:20,492][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! -[2023-02-24 14:07:20,493][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:07:20,494][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:07:20,495][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 14:07:20,496][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-24 14:07:20,498][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:07:20,521][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:07:20,526][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:07:20,541][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:07:20,579][00980] Conv encoder output size: 512 -[2023-02-24 14:07:20,580][00980] Policy head output size: 512 -[2023-02-24 14:07:20,603][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-24 14:07:21,046][00980] Num frames 100... -[2023-02-24 14:07:21,160][00980] Num frames 200... -[2023-02-24 14:07:21,286][00980] Num frames 300... -[2023-02-24 14:07:21,417][00980] Num frames 400... -[2023-02-24 14:07:21,535][00980] Num frames 500... -[2023-02-24 14:07:21,654][00980] Num frames 600... -[2023-02-24 14:07:21,774][00980] Num frames 700... -[2023-02-24 14:07:21,897][00980] Num frames 800... -[2023-02-24 14:07:22,009][00980] Num frames 900... 
-[2023-02-24 14:07:22,126][00980] Num frames 1000... -[2023-02-24 14:07:22,242][00980] Num frames 1100... -[2023-02-24 14:07:22,360][00980] Num frames 1200... -[2023-02-24 14:07:22,476][00980] Num frames 1300... -[2023-02-24 14:07:22,587][00980] Num frames 1400... -[2023-02-24 14:07:22,688][00980] Avg episode rewards: #0: 38.400, true rewards: #0: 14.400 -[2023-02-24 14:07:22,690][00980] Avg episode reward: 38.400, avg true_objective: 14.400 -[2023-02-24 14:07:22,761][00980] Num frames 1500... -[2023-02-24 14:07:22,884][00980] Num frames 1600... -[2023-02-24 14:07:22,997][00980] Num frames 1700... -[2023-02-24 14:07:23,113][00980] Num frames 1800... -[2023-02-24 14:07:23,225][00980] Num frames 1900... -[2023-02-24 14:07:23,350][00980] Num frames 2000... -[2023-02-24 14:07:23,465][00980] Num frames 2100... -[2023-02-24 14:07:23,585][00980] Num frames 2200... -[2023-02-24 14:07:23,706][00980] Num frames 2300... -[2023-02-24 14:07:23,820][00980] Num frames 2400... -[2023-02-24 14:07:23,943][00980] Num frames 2500... -[2023-02-24 14:07:24,060][00980] Num frames 2600... -[2023-02-24 14:07:24,180][00980] Num frames 2700... -[2023-02-24 14:07:24,295][00980] Num frames 2800... -[2023-02-24 14:07:24,437][00980] Avg episode rewards: #0: 39.320, true rewards: #0: 14.320 -[2023-02-24 14:07:24,439][00980] Avg episode reward: 39.320, avg true_objective: 14.320 -[2023-02-24 14:07:24,483][00980] Num frames 2900... -[2023-02-24 14:07:24,593][00980] Num frames 3000... -[2023-02-24 14:07:24,707][00980] Num frames 3100... -[2023-02-24 14:07:24,818][00980] Num frames 3200... -[2023-02-24 14:07:24,931][00980] Num frames 3300... -[2023-02-24 14:07:25,050][00980] Num frames 3400... -[2023-02-24 14:07:25,172][00980] Num frames 3500... -[2023-02-24 14:07:25,297][00980] Num frames 3600... -[2023-02-24 14:07:25,451][00980] Num frames 3700... -[2023-02-24 14:07:25,657][00980] Avg episode rewards: #0: 32.650, true rewards: #0: 12.650 -[2023-02-24 14:07:25,659][00980] Avg episode reward: 32.650, avg true_objective: 12.650 -[2023-02-24 14:07:25,673][00980] Num frames 3800... -[2023-02-24 14:07:25,833][00980] Num frames 3900... -[2023-02-24 14:07:25,993][00980] Num frames 4000... -[2023-02-24 14:07:26,148][00980] Num frames 4100... -[2023-02-24 14:07:26,308][00980] Num frames 4200... -[2023-02-24 14:07:26,444][00980] Avg episode rewards: #0: 26.877, true rewards: #0: 10.627 -[2023-02-24 14:07:26,446][00980] Avg episode reward: 26.877, avg true_objective: 10.627 -[2023-02-24 14:07:26,529][00980] Num frames 4300... -[2023-02-24 14:07:26,686][00980] Num frames 4400... -[2023-02-24 14:07:26,847][00980] Num frames 4500... -[2023-02-24 14:07:27,009][00980] Num frames 4600... -[2023-02-24 14:07:27,168][00980] Num frames 4700... -[2023-02-24 14:07:27,331][00980] Num frames 4800... -[2023-02-24 14:07:27,483][00980] Avg episode rewards: #0: 24.118, true rewards: #0: 9.718 -[2023-02-24 14:07:27,485][00980] Avg episode reward: 24.118, avg true_objective: 9.718 -[2023-02-24 14:07:27,552][00980] Num frames 4900... -[2023-02-24 14:07:27,730][00980] Num frames 5000... -[2023-02-24 14:07:27,901][00980] Num frames 5100... -[2023-02-24 14:07:28,066][00980] Num frames 5200... -[2023-02-24 14:07:28,232][00980] Num frames 5300... -[2023-02-24 14:07:28,400][00980] Num frames 5400... -[2023-02-24 14:07:28,564][00980] Num frames 5500... -[2023-02-24 14:07:28,733][00980] Num frames 5600... -[2023-02-24 14:07:28,903][00980] Num frames 5700... -[2023-02-24 14:07:29,022][00980] Num frames 5800... -[2023-02-24 14:07:29,134][00980] Num frames 5900... 
-[2023-02-24 14:07:29,251][00980] Num frames 6000... -[2023-02-24 14:07:29,362][00980] Num frames 6100... -[2023-02-24 14:07:29,475][00980] Num frames 6200... -[2023-02-24 14:07:29,592][00980] Num frames 6300... -[2023-02-24 14:07:29,711][00980] Num frames 6400... -[2023-02-24 14:07:29,826][00980] Num frames 6500... -[2023-02-24 14:07:29,949][00980] Num frames 6600... -[2023-02-24 14:07:30,066][00980] Num frames 6700... -[2023-02-24 14:07:30,189][00980] Num frames 6800... -[2023-02-24 14:07:30,303][00980] Num frames 6900... -[2023-02-24 14:07:30,431][00980] Avg episode rewards: #0: 30.098, true rewards: #0: 11.598 -[2023-02-24 14:07:30,433][00980] Avg episode reward: 30.098, avg true_objective: 11.598 -[2023-02-24 14:07:30,484][00980] Num frames 7000... -[2023-02-24 14:07:30,609][00980] Num frames 7100... -[2023-02-24 14:07:30,725][00980] Num frames 7200... -[2023-02-24 14:07:30,843][00980] Num frames 7300... -[2023-02-24 14:07:30,958][00980] Num frames 7400... -[2023-02-24 14:07:31,078][00980] Num frames 7500... -[2023-02-24 14:07:31,202][00980] Num frames 7600... -[2023-02-24 14:07:31,320][00980] Num frames 7700... -[2023-02-24 14:07:31,437][00980] Num frames 7800... -[2023-02-24 14:07:31,556][00980] Num frames 7900... -[2023-02-24 14:07:31,679][00980] Num frames 8000... -[2023-02-24 14:07:31,799][00980] Num frames 8100... -[2023-02-24 14:07:31,926][00980] Num frames 8200... -[2023-02-24 14:07:32,015][00980] Avg episode rewards: #0: 29.753, true rewards: #0: 11.753 -[2023-02-24 14:07:32,016][00980] Avg episode reward: 29.753, avg true_objective: 11.753 -[2023-02-24 14:07:32,100][00980] Num frames 8300... -[2023-02-24 14:07:32,215][00980] Num frames 8400... -[2023-02-24 14:07:32,326][00980] Num frames 8500... -[2023-02-24 14:07:32,439][00980] Num frames 8600... -[2023-02-24 14:07:32,558][00980] Num frames 8700... -[2023-02-24 14:07:32,675][00980] Num frames 8800... -[2023-02-24 14:07:32,786][00980] Num frames 8900... -[2023-02-24 14:07:32,900][00980] Num frames 9000... -[2023-02-24 14:07:33,022][00980] Num frames 9100... -[2023-02-24 14:07:33,096][00980] Avg episode rewards: #0: 28.519, true rewards: #0: 11.394 -[2023-02-24 14:07:33,097][00980] Avg episode reward: 28.519, avg true_objective: 11.394 -[2023-02-24 14:07:33,197][00980] Num frames 9200... -[2023-02-24 14:07:33,309][00980] Num frames 9300... -[2023-02-24 14:07:33,431][00980] Num frames 9400... -[2023-02-24 14:07:33,547][00980] Num frames 9500... -[2023-02-24 14:07:33,667][00980] Num frames 9600... -[2023-02-24 14:07:33,792][00980] Num frames 9700... -[2023-02-24 14:07:33,909][00980] Num frames 9800... -[2023-02-24 14:07:34,026][00980] Num frames 9900... -[2023-02-24 14:07:34,137][00980] Num frames 10000... -[2023-02-24 14:07:34,271][00980] Avg episode rewards: #0: 27.523, true rewards: #0: 11.190 -[2023-02-24 14:07:34,273][00980] Avg episode reward: 27.523, avg true_objective: 11.190 -[2023-02-24 14:07:34,312][00980] Num frames 10100... -[2023-02-24 14:07:34,431][00980] Num frames 10200... -[2023-02-24 14:07:34,546][00980] Num frames 10300... -[2023-02-24 14:07:34,672][00980] Num frames 10400... -[2023-02-24 14:07:34,785][00980] Num frames 10500... -[2023-02-24 14:07:34,907][00980] Num frames 10600... -[2023-02-24 14:07:35,024][00980] Num frames 10700... -[2023-02-24 14:07:35,142][00980] Num frames 10800... -[2023-02-24 14:07:35,255][00980] Num frames 10900... -[2023-02-24 14:07:35,378][00980] Num frames 11000... -[2023-02-24 14:07:35,489][00980] Num frames 11100... -[2023-02-24 14:07:35,614][00980] Num frames 11200... 
-[2023-02-24 14:07:35,734][00980] Num frames 11300... -[2023-02-24 14:07:35,852][00980] Avg episode rewards: #0: 27.851, true rewards: #0: 11.351 -[2023-02-24 14:07:35,853][00980] Avg episode reward: 27.851, avg true_objective: 11.351 -[2023-02-24 14:08:43,682][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 14:08:46,837][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme -[2023-02-24 14:09:29,770][00980] Environment doom_basic already registered, overwriting... -[2023-02-24 14:09:29,773][00980] Environment doom_two_colors_easy already registered, overwriting... -[2023-02-24 14:09:29,775][00980] Environment doom_two_colors_hard already registered, overwriting... -[2023-02-24 14:09:29,776][00980] Environment doom_dm already registered, overwriting... -[2023-02-24 14:09:29,777][00980] Environment doom_dwango5 already registered, overwriting... -[2023-02-24 14:09:29,783][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... -[2023-02-24 14:09:29,784][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... -[2023-02-24 14:09:29,785][00980] Environment doom_my_way_home already registered, overwriting... -[2023-02-24 14:09:29,786][00980] Environment doom_deadly_corridor already registered, overwriting... -[2023-02-24 14:09:29,787][00980] Environment doom_defend_the_center already registered, overwriting... -[2023-02-24 14:09:29,788][00980] Environment doom_defend_the_line already registered, overwriting... -[2023-02-24 14:09:29,789][00980] Environment doom_health_gathering already registered, overwriting... -[2023-02-24 14:09:29,791][00980] Environment doom_health_gathering_supreme already registered, overwriting... -[2023-02-24 14:09:29,792][00980] Environment doom_battle already registered, overwriting... -[2023-02-24 14:09:29,793][00980] Environment doom_battle2 already registered, overwriting... -[2023-02-24 14:09:29,794][00980] Environment doom_duel_bots already registered, overwriting... -[2023-02-24 14:09:29,796][00980] Environment doom_deathmatch_bots already registered, overwriting... -[2023-02-24 14:09:29,797][00980] Environment doom_duel already registered, overwriting... -[2023-02-24 14:09:29,798][00980] Environment doom_deathmatch_full already registered, overwriting... -[2023-02-24 14:09:29,799][00980] Environment doom_benchmark already registered, overwriting... -[2023-02-24 14:09:29,801][00980] register_encoder_factory: -[2023-02-24 14:09:29,829][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:09:29,831][00980] Overriding arg 'train_for_env_steps' with value 2000000 passed from command line -[2023-02-24 14:09:29,845][00980] Experiment dir /content/train_dir/default_experiment already exists! -[2023-02-24 14:09:29,846][00980] Resuming existing experiment from /content/train_dir/default_experiment... 
-[2023-02-24 14:09:29,848][00980] Weights and Biases integration disabled -[2023-02-24 14:09:29,855][00980] Environment var CUDA_VISIBLE_DEVICES is 0 - -[2023-02-24 14:09:32,673][00980] Starting experiment with the following configuration: -help=False -algo=APPO -env=doom_health_gathering_supreme -experiment=default_experiment -train_dir=/content/train_dir -restart_behavior=resume -device=gpu -seed=None -num_policies=1 -async_rl=True -serial_mode=False -batched_sampling=False -num_batches_to_accumulate=2 -worker_num_splits=2 -policy_workers_per_policy=1 -max_policy_lag=1000 -num_workers=8 -num_envs_per_worker=4 -batch_size=1024 -num_batches_per_epoch=1 -num_epochs=1 -rollout=32 -recurrence=32 -shuffle_minibatches=False -gamma=0.99 -reward_scale=1.0 -reward_clip=1000.0 -value_bootstrap=False -normalize_returns=True -exploration_loss_coeff=0.001 -value_loss_coeff=0.5 -kl_loss_coeff=0.0 -exploration_loss=symmetric_kl -gae_lambda=0.95 -ppo_clip_ratio=0.1 -ppo_clip_value=0.2 -with_vtrace=False -vtrace_rho=1.0 -vtrace_c=1.0 -optimizer=adam -adam_eps=1e-06 -adam_beta1=0.9 -adam_beta2=0.999 -max_grad_norm=4.0 -learning_rate=0.0001 -lr_schedule=constant -lr_schedule_kl_threshold=0.008 -lr_adaptive_min=1e-06 -lr_adaptive_max=0.01 -obs_subtract_mean=0.0 -obs_scale=255.0 -normalize_input=True -normalize_input_keys=None -decorrelate_experience_max_seconds=0 -decorrelate_envs_on_one_worker=True -actor_worker_gpus=[] -set_workers_cpu_affinity=True -force_envs_single_thread=False -default_niceness=0 -log_to_file=True -experiment_summaries_interval=10 -flush_summaries_interval=30 -stats_avg=100 -summaries_use_frameskip=True -heartbeat_interval=20 -heartbeat_reporting_interval=600 -train_for_env_steps=2000000 -train_for_seconds=10000000000 -save_every_sec=120 -keep_checkpoints=2 -load_checkpoint_kind=latest -save_milestones_sec=-1 -save_best_every_sec=5 -save_best_metric=reward -save_best_after=100000 -benchmark=False -encoder_mlp_layers=[512, 512] -encoder_conv_architecture=convnet_simple -encoder_conv_mlp_layers=[512] -use_rnn=True -rnn_size=512 -rnn_type=gru -rnn_num_layers=1 -decoder_mlp_layers=[] -nonlinearity=elu -policy_initialization=orthogonal -policy_init_gain=1.0 -actor_critic_share_weights=True -adaptive_stddev=True -continuous_tanh_scale=0.0 -initial_stddev=1.0 -use_env_info_cache=False -env_gpu_actions=False -env_gpu_observations=True -env_frameskip=4 -env_framestack=1 -pixel_format=CHW -use_record_episode_statistics=False -with_wandb=False -wandb_user=None -wandb_project=sample_factory -wandb_group=None -wandb_job_type=SF -wandb_tags=[] -with_pbt=False -pbt_mix_policies_in_one_env=True -pbt_period_env_steps=5000000 -pbt_start_mutation=20000000 -pbt_replace_fraction=0.3 -pbt_mutation_rate=0.15 -pbt_replace_reward_gap=0.1 -pbt_replace_reward_gap_absolute=1e-06 -pbt_optimize_gamma=False -pbt_target_objective=true_objective -pbt_perturb_min=1.1 -pbt_perturb_max=1.5 -num_agents=-1 -num_humans=0 -num_bots=-1 -start_bot_difficulty=None -timelimit=None -res_w=128 -res_h=72 -wide_aspect_ratio=False -eval_env_frameskip=1 -fps=35 -command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 -cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} -git_hash=unknown -git_repo_name=not a git repository -[2023-02-24 14:09:32,676][00980] Saving configuration to /content/train_dir/default_experiment/config.json... 
-[2023-02-24 14:09:32,680][00980] Rollout worker 0 uses device cpu -[2023-02-24 14:09:32,683][00980] Rollout worker 1 uses device cpu -[2023-02-24 14:09:32,684][00980] Rollout worker 2 uses device cpu -[2023-02-24 14:09:32,685][00980] Rollout worker 3 uses device cpu -[2023-02-24 14:09:32,686][00980] Rollout worker 4 uses device cpu -[2023-02-24 14:09:32,688][00980] Rollout worker 5 uses device cpu -[2023-02-24 14:09:32,689][00980] Rollout worker 6 uses device cpu -[2023-02-24 14:09:32,691][00980] Rollout worker 7 uses device cpu -[2023-02-24 14:09:32,810][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:09:32,813][00980] InferenceWorker_p0-w0: min num requests: 2 -[2023-02-24 14:09:32,846][00980] Starting all processes... -[2023-02-24 14:09:32,847][00980] Starting process learner_proc0 -[2023-02-24 14:09:32,984][00980] Starting all processes... -[2023-02-24 14:09:32,994][00980] Starting process inference_proc0-0 -[2023-02-24 14:09:32,994][00980] Starting process rollout_proc0 -[2023-02-24 14:09:32,996][00980] Starting process rollout_proc1 -[2023-02-24 14:09:32,999][00980] Starting process rollout_proc2 -[2023-02-24 14:09:32,999][00980] Starting process rollout_proc3 -[2023-02-24 14:09:32,999][00980] Starting process rollout_proc4 -[2023-02-24 14:09:33,000][00980] Starting process rollout_proc5 -[2023-02-24 14:09:33,000][00980] Starting process rollout_proc6 -[2023-02-24 14:09:33,000][00980] Starting process rollout_proc7 -[2023-02-24 14:09:41,559][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:09:41,559][24666] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-02-24 14:09:41,666][24666] Num visible devices: 1 -[2023-02-24 14:09:41,726][24666] Starting seed is not provided -[2023-02-24 14:09:41,727][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:09:41,728][24666] Initializing actor-critic model on device cuda:0 -[2023-02-24 14:09:41,729][24666] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:09:41,730][24666] RunningMeanStd input shape: (1,) -[2023-02-24 14:09:41,877][24666] ConvEncoder: input_channels=3 -[2023-02-24 14:09:43,471][24666] Conv encoder output size: 512 -[2023-02-24 14:09:43,472][24666] Policy head output size: 512 -[2023-02-24 14:09:43,665][24666] Created Actor Critic model with architecture: -[2023-02-24 14:09:43,674][24666] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): VizdoomEncoder( - (basic_encoder): ConvEncoder( - (enc): RecursiveScriptModule( - original_name=ConvEncoderImpl - (conv_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Conv2d) - (1): RecursiveScriptModule(original_name=ELU) - (2): RecursiveScriptModule(original_name=Conv2d) - (3): RecursiveScriptModule(original_name=ELU) - (4): RecursiveScriptModule(original_name=Conv2d) - (5): RecursiveScriptModule(original_name=ELU) - ) - (mlp_layers): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=ELU) - ) - ) - ) - ) - (core): ModelCoreRNN( - (core): GRU(512, 512) - ) - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=512, out_features=1, 
bias=True) - (action_parameterization): ActionParameterizationDefault( - (distribution_linear): Linear(in_features=512, out_features=5, bias=True) - ) -) -[2023-02-24 14:09:43,933][24680] Worker 0 uses CPU cores [0] -[2023-02-24 14:09:43,977][24681] Worker 1 uses CPU cores [1] -[2023-02-24 14:09:44,335][24683] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:09:44,340][24683] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-02-24 14:09:44,430][24683] Num visible devices: 1 -[2023-02-24 14:09:45,037][24690] Worker 2 uses CPU cores [0] -[2023-02-24 14:09:45,309][24688] Worker 3 uses CPU cores [1] -[2023-02-24 14:09:45,561][24695] Worker 5 uses CPU cores [1] -[2023-02-24 14:09:45,612][24693] Worker 4 uses CPU cores [0] -[2023-02-24 14:09:45,799][24701] Worker 7 uses CPU cores [1] -[2023-02-24 14:09:45,881][24703] Worker 6 uses CPU cores [0] -[2023-02-24 14:09:49,269][24666] Using optimizer -[2023-02-24 14:09:49,270][24666] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... -[2023-02-24 14:09:49,304][24666] Loading model from checkpoint -[2023-02-24 14:09:49,308][24666] Loaded experiment state at self.train_step=980, self.env_steps=4014080 -[2023-02-24 14:09:49,308][24666] Initialized policy 0 weights for model version 980 -[2023-02-24 14:09:49,312][24666] LearnerWorker_p0 finished initialization! -[2023-02-24 14:09:49,314][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:09:49,529][24683] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:09:49,531][24683] RunningMeanStd input shape: (1,) -[2023-02-24 14:09:49,543][24683] ConvEncoder: input_channels=3 -[2023-02-24 14:09:49,645][24683] Conv encoder output size: 512 -[2023-02-24 14:09:49,645][24683] Policy head output size: 512 -[2023-02-24 14:09:49,856][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4014080. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:09:51,910][00980] Inference worker 0-0 is ready! -[2023-02-24 14:09:51,911][00980] All inference workers are ready! Signal rollout workers to start! -[2023-02-24 14:09:52,016][24680] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,012][24693] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,014][24703] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,017][24690] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,031][24688] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,032][24695] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,028][24701] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,030][24681] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:09:52,803][00980] Heartbeat connected on Batcher_0 -[2023-02-24 14:09:52,810][00980] Heartbeat connected on LearnerWorker_p0 -[2023-02-24 14:09:52,840][24701] Decorrelating experience for 0 frames... -[2023-02-24 14:09:52,841][24695] Decorrelating experience for 0 frames... -[2023-02-24 14:09:52,845][00980] Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-24 14:09:53,214][24693] Decorrelating experience for 0 frames... -[2023-02-24 14:09:53,221][24690] Decorrelating experience for 0 frames... -[2023-02-24 14:09:53,227][24703] Decorrelating experience for 0 frames... 
-[2023-02-24 14:09:53,926][24688] Decorrelating experience for 0 frames... -[2023-02-24 14:09:53,929][24695] Decorrelating experience for 32 frames... -[2023-02-24 14:09:53,931][24701] Decorrelating experience for 32 frames... -[2023-02-24 14:09:54,439][24703] Decorrelating experience for 32 frames... -[2023-02-24 14:09:54,441][24693] Decorrelating experience for 32 frames... -[2023-02-24 14:09:54,501][24680] Decorrelating experience for 0 frames... -[2023-02-24 14:09:54,732][24688] Decorrelating experience for 32 frames... -[2023-02-24 14:09:54,847][24695] Decorrelating experience for 64 frames... -[2023-02-24 14:09:54,855][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:09:55,223][24690] Decorrelating experience for 32 frames... -[2023-02-24 14:09:55,748][24680] Decorrelating experience for 32 frames... -[2023-02-24 14:09:55,944][24688] Decorrelating experience for 64 frames... -[2023-02-24 14:09:55,960][24701] Decorrelating experience for 64 frames... -[2023-02-24 14:09:55,990][24703] Decorrelating experience for 64 frames... -[2023-02-24 14:09:56,934][24681] Decorrelating experience for 0 frames... -[2023-02-24 14:09:57,310][24690] Decorrelating experience for 64 frames... -[2023-02-24 14:09:58,045][24695] Decorrelating experience for 96 frames... -[2023-02-24 14:09:58,046][24693] Decorrelating experience for 64 frames... -[2023-02-24 14:09:58,090][24680] Decorrelating experience for 64 frames... -[2023-02-24 14:09:58,226][24688] Decorrelating experience for 96 frames... -[2023-02-24 14:09:58,248][24701] Decorrelating experience for 96 frames... -[2023-02-24 14:09:58,487][00980] Heartbeat connected on RolloutWorker_w5 -[2023-02-24 14:09:58,896][00980] Heartbeat connected on RolloutWorker_w3 -[2023-02-24 14:09:58,898][00980] Heartbeat connected on RolloutWorker_w7 -[2023-02-24 14:09:59,487][24703] Decorrelating experience for 96 frames... -[2023-02-24 14:09:59,560][24681] Decorrelating experience for 32 frames... -[2023-02-24 14:09:59,855][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:10:00,072][00980] Heartbeat connected on RolloutWorker_w6 -[2023-02-24 14:10:00,336][24690] Decorrelating experience for 96 frames... -[2023-02-24 14:10:00,841][00980] Heartbeat connected on RolloutWorker_w2 -[2023-02-24 14:10:01,251][24693] Decorrelating experience for 96 frames... -[2023-02-24 14:10:01,256][24680] Decorrelating experience for 96 frames... -[2023-02-24 14:10:01,914][00980] Heartbeat connected on RolloutWorker_w0 -[2023-02-24 14:10:01,917][24681] Decorrelating experience for 64 frames... -[2023-02-24 14:10:01,957][00980] Heartbeat connected on RolloutWorker_w4 -[2023-02-24 14:10:04,856][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 116.8. Samples: 1752. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:10:04,862][00980] Avg episode reward: [(0, '1.850')] -[2023-02-24 14:10:04,999][24666] Signal inference workers to stop experience collection... -[2023-02-24 14:10:05,017][24683] InferenceWorker_p0-w0: stopping experience collection -[2023-02-24 14:10:05,324][24681] Decorrelating experience for 96 frames... 
-[2023-02-24 14:10:05,455][00980] Heartbeat connected on RolloutWorker_w1 -[2023-02-24 14:10:07,790][24666] Signal inference workers to resume experience collection... -[2023-02-24 14:10:07,790][24683] InferenceWorker_p0-w0: resuming experience collection -[2023-02-24 14:10:07,796][24666] Stopping Batcher_0... -[2023-02-24 14:10:07,798][24666] Loop batcher_evt_loop terminating... -[2023-02-24 14:10:07,799][00980] Component Batcher_0 stopped! -[2023-02-24 14:10:07,851][24703] Stopping RolloutWorker_w6... -[2023-02-24 14:10:07,851][00980] Component RolloutWorker_w6 stopped! -[2023-02-24 14:10:07,860][24693] Stopping RolloutWorker_w4... -[2023-02-24 14:10:07,861][24693] Loop rollout_proc4_evt_loop terminating... -[2023-02-24 14:10:07,854][24680] Stopping RolloutWorker_w0... -[2023-02-24 14:10:07,863][24680] Loop rollout_proc0_evt_loop terminating... -[2023-02-24 14:10:07,855][00980] Component RolloutWorker_w0 stopped! -[2023-02-24 14:10:07,863][00980] Component RolloutWorker_w4 stopped! -[2023-02-24 14:10:07,866][00980] Component RolloutWorker_w2 stopped! -[2023-02-24 14:10:07,866][24690] Stopping RolloutWorker_w2... -[2023-02-24 14:10:07,851][24703] Loop rollout_proc6_evt_loop terminating... -[2023-02-24 14:10:07,870][24690] Loop rollout_proc2_evt_loop terminating... -[2023-02-24 14:10:07,886][00980] Component RolloutWorker_w1 stopped! -[2023-02-24 14:10:07,886][24681] Stopping RolloutWorker_w1... -[2023-02-24 14:10:07,908][00980] Component RolloutWorker_w7 stopped! -[2023-02-24 14:10:07,916][00980] Component RolloutWorker_w3 stopped! -[2023-02-24 14:10:07,917][24688] Stopping RolloutWorker_w3... -[2023-02-24 14:10:07,923][00980] Component RolloutWorker_w5 stopped! -[2023-02-24 14:10:07,909][24701] Stopping RolloutWorker_w7... -[2023-02-24 14:10:07,904][24681] Loop rollout_proc1_evt_loop terminating... -[2023-02-24 14:10:07,924][24695] Stopping RolloutWorker_w5... -[2023-02-24 14:10:07,922][24688] Loop rollout_proc3_evt_loop terminating... -[2023-02-24 14:10:07,928][24701] Loop rollout_proc7_evt_loop terminating... -[2023-02-24 14:10:07,930][24695] Loop rollout_proc5_evt_loop terminating... -[2023-02-24 14:10:07,947][24683] Weights refcount: 2 0 -[2023-02-24 14:10:07,957][24683] Stopping InferenceWorker_p0-w0... -[2023-02-24 14:10:07,957][00980] Component InferenceWorker_p0-w0 stopped! -[2023-02-24 14:10:07,962][24683] Loop inference_proc0-0_evt_loop terminating... -[2023-02-24 14:10:10,081][24666] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... -[2023-02-24 14:10:10,180][24666] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth -[2023-02-24 14:10:10,186][24666] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... -[2023-02-24 14:10:10,319][24666] Stopping LearnerWorker_p0... -[2023-02-24 14:10:10,320][24666] Loop learner_proc0_evt_loop terminating... -[2023-02-24 14:10:10,321][00980] Component LearnerWorker_p0 stopped! -[2023-02-24 14:10:10,323][00980] Waiting for process learner_proc0 to stop... -[2023-02-24 14:10:11,370][00980] Waiting for process inference_proc0-0 to join... -[2023-02-24 14:10:11,373][00980] Waiting for process rollout_proc0 to join... -[2023-02-24 14:10:11,376][00980] Waiting for process rollout_proc1 to join... -[2023-02-24 14:10:11,379][00980] Waiting for process rollout_proc2 to join... -[2023-02-24 14:10:11,382][00980] Waiting for process rollout_proc3 to join... 
-[2023-02-24 14:10:11,385][00980] Waiting for process rollout_proc4 to join... -[2023-02-24 14:10:11,386][00980] Waiting for process rollout_proc5 to join... -[2023-02-24 14:10:11,388][00980] Waiting for process rollout_proc6 to join... -[2023-02-24 14:10:11,390][00980] Waiting for process rollout_proc7 to join... -[2023-02-24 14:10:11,391][00980] Batcher 0 profile tree view: -batching: 0.1455, releasing_batches: 0.0007 -[2023-02-24 14:10:11,392][00980] InferenceWorker_p0-w0 profile tree view: -wait_policy: 0.0126 - wait_policy_total: 8.5993 -update_model: 0.0278 - weight_update: 0.0017 -one_step: 0.0564 - handle_policy_step: 4.2885 - deserialize: 0.0514, stack: 0.0076, obs_to_device_normalize: 0.4279, forward: 3.3584, send_messages: 0.1076 - prepare_outputs: 0.2513 - to_cpu: 0.1288 -[2023-02-24 14:10:11,393][00980] Learner 0 profile tree view: -misc: 0.0000, prepare_batch: 5.3320 -train: 1.3646 - epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0003, kl_divergence: 0.0015, after_optimizer: 0.0074 - calculate_losses: 0.2007 - losses_init: 0.0000, forward_head: 0.1107, bptt_initial: 0.0672, tail: 0.0012, advantages_returns: 0.0008, losses: 0.0182 - bptt: 0.0023 - bptt_forward_core: 0.0021 - update: 1.1537 - clip: 0.0045 -[2023-02-24 14:10:11,395][00980] RolloutWorker_w0 profile tree view: -wait_for_trajectories: 0.0015, enqueue_policy_requests: 0.6653, env_step: 1.8552, overhead: 0.1000, complete_rollouts: 0.0290 -save_policy_outputs: 0.1110 - split_output_tensors: 0.0652 -[2023-02-24 14:10:11,400][00980] RolloutWorker_w7 profile tree view: -wait_for_trajectories: 0.0007, enqueue_policy_requests: 0.5677, env_step: 2.9069, overhead: 0.1583, complete_rollouts: 0.0367 -save_policy_outputs: 0.1729 - split_output_tensors: 0.1089 -[2023-02-24 14:10:11,401][00980] Loop Runner_EvtLoop terminating... -[2023-02-24 14:10:11,403][00980] Runner profile tree view: -main_loop: 38.5571 -[2023-02-24 14:10:11,405][00980] Collected {0: 4022272}, FPS: 212.5 -[2023-02-24 14:10:11,454][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:10:11,455][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:10:11,457][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 14:10:11,459][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:10:11,463][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:10:11,469][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:10:11,472][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:10:11,473][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:10:11,475][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file! -[2023-02-24 14:10:11,476][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! -[2023-02-24 14:10:11,478][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:10:11,479][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:10:11,480][00980] Adding new argument 'train_script'=None that is not in the saved config file! 
-[2023-02-24 14:10:11,481][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-24 14:10:11,483][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:10:11,505][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:10:11,512][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:10:11,531][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:10:11,573][00980] Conv encoder output size: 512 -[2023-02-24 14:10:11,575][00980] Policy head output size: 512 -[2023-02-24 14:10:11,597][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... -[2023-02-24 14:10:12,226][00980] Num frames 100... -[2023-02-24 14:10:12,342][00980] Num frames 200... -[2023-02-24 14:10:12,466][00980] Num frames 300... -[2023-02-24 14:10:12,599][00980] Num frames 400... -[2023-02-24 14:10:12,711][00980] Num frames 500... -[2023-02-24 14:10:12,828][00980] Num frames 600... -[2023-02-24 14:10:12,947][00980] Num frames 700... -[2023-02-24 14:10:13,071][00980] Num frames 800... -[2023-02-24 14:10:13,155][00980] Avg episode rewards: #0: 22.240, true rewards: #0: 8.240 -[2023-02-24 14:10:13,157][00980] Avg episode reward: 22.240, avg true_objective: 8.240 -[2023-02-24 14:10:13,254][00980] Num frames 900... -[2023-02-24 14:10:13,391][00980] Num frames 1000... -[2023-02-24 14:10:13,503][00980] Num frames 1100... -[2023-02-24 14:10:13,625][00980] Num frames 1200... -[2023-02-24 14:10:13,742][00980] Num frames 1300... -[2023-02-24 14:10:13,869][00980] Num frames 1400... -[2023-02-24 14:10:13,995][00980] Num frames 1500... -[2023-02-24 14:10:14,112][00980] Num frames 1600... -[2023-02-24 14:10:14,228][00980] Num frames 1700... -[2023-02-24 14:10:14,350][00980] Num frames 1800... -[2023-02-24 14:10:14,468][00980] Num frames 1900... -[2023-02-24 14:10:14,612][00980] Avg episode rewards: #0: 23.880, true rewards: #0: 9.880 -[2023-02-24 14:10:14,614][00980] Avg episode reward: 23.880, avg true_objective: 9.880 -[2023-02-24 14:10:14,646][00980] Num frames 2000... -[2023-02-24 14:10:14,768][00980] Num frames 2100... -[2023-02-24 14:10:14,893][00980] Num frames 2200... -[2023-02-24 14:10:14,946][00980] Avg episode rewards: #0: 17.000, true rewards: #0: 7.333 -[2023-02-24 14:10:14,948][00980] Avg episode reward: 17.000, avg true_objective: 7.333 -[2023-02-24 14:10:15,119][00980] Num frames 2300... -[2023-02-24 14:10:15,316][00980] Num frames 2400... -[2023-02-24 14:10:15,494][00980] Num frames 2500... -[2023-02-24 14:10:15,671][00980] Num frames 2600... -[2023-02-24 14:10:15,840][00980] Num frames 2700... -[2023-02-24 14:10:16,019][00980] Num frames 2800... -[2023-02-24 14:10:16,200][00980] Num frames 2900... -[2023-02-24 14:10:16,366][00980] Num frames 3000... -[2023-02-24 14:10:16,531][00980] Num frames 3100... -[2023-02-24 14:10:16,699][00980] Num frames 3200... -[2023-02-24 14:10:16,857][00980] Num frames 3300... -[2023-02-24 14:10:17,023][00980] Num frames 3400... -[2023-02-24 14:10:17,190][00980] Num frames 3500... -[2023-02-24 14:10:17,357][00980] Num frames 3600... -[2023-02-24 14:10:17,518][00980] Num frames 3700... -[2023-02-24 14:10:17,635][00980] Avg episode rewards: #0: 22.590, true rewards: #0: 9.340 -[2023-02-24 14:10:17,636][00980] Avg episode reward: 22.590, avg true_objective: 9.340 -[2023-02-24 14:10:17,746][00980] Num frames 3800... -[2023-02-24 14:10:17,911][00980] Num frames 3900... -[2023-02-24 14:10:18,074][00980] Num frames 4000... 
-[2023-02-24 14:10:18,236][00980] Num frames 4100... -[2023-02-24 14:10:18,371][00980] Avg episode rewards: #0: 19.304, true rewards: #0: 8.304 -[2023-02-24 14:10:18,373][00980] Avg episode reward: 19.304, avg true_objective: 8.304 -[2023-02-24 14:10:18,451][00980] Num frames 4200... -[2023-02-24 14:10:18,613][00980] Num frames 4300... -[2023-02-24 14:10:18,785][00980] Num frames 4400... -[2023-02-24 14:10:18,963][00980] Num frames 4500... -[2023-02-24 14:10:19,145][00980] Num frames 4600... -[2023-02-24 14:10:19,318][00980] Num frames 4700... -[2023-02-24 14:10:19,493][00980] Num frames 4800... -[2023-02-24 14:10:19,629][00980] Num frames 4900... -[2023-02-24 14:10:19,750][00980] Num frames 5000... -[2023-02-24 14:10:19,875][00980] Num frames 5100... -[2023-02-24 14:10:19,993][00980] Num frames 5200... -[2023-02-24 14:10:20,122][00980] Num frames 5300... -[2023-02-24 14:10:20,241][00980] Num frames 5400... -[2023-02-24 14:10:20,361][00980] Num frames 5500... -[2023-02-24 14:10:20,480][00980] Num frames 5600... -[2023-02-24 14:10:20,592][00980] Num frames 5700... -[2023-02-24 14:10:20,709][00980] Num frames 5800... -[2023-02-24 14:10:20,823][00980] Num frames 5900... -[2023-02-24 14:10:20,893][00980] Avg episode rewards: #0: 23.187, true rewards: #0: 9.853 -[2023-02-24 14:10:20,895][00980] Avg episode reward: 23.187, avg true_objective: 9.853 -[2023-02-24 14:10:20,999][00980] Num frames 6000... -[2023-02-24 14:10:21,065][00980] Avg episode rewards: #0: 20.154, true rewards: #0: 8.583 -[2023-02-24 14:10:21,066][00980] Avg episode reward: 20.154, avg true_objective: 8.583 -[2023-02-24 14:10:21,180][00980] Num frames 6100... -[2023-02-24 14:10:21,305][00980] Num frames 6200... -[2023-02-24 14:10:21,421][00980] Num frames 6300... -[2023-02-24 14:10:21,544][00980] Num frames 6400... -[2023-02-24 14:10:21,669][00980] Num frames 6500... -[2023-02-24 14:10:21,781][00980] Num frames 6600... -[2023-02-24 14:10:21,903][00980] Num frames 6700... -[2023-02-24 14:10:22,023][00980] Num frames 6800... -[2023-02-24 14:10:22,151][00980] Num frames 6900... -[2023-02-24 14:10:22,270][00980] Num frames 7000... -[2023-02-24 14:10:22,391][00980] Num frames 7100... -[2023-02-24 14:10:22,520][00980] Avg episode rewards: #0: 20.951, true rewards: #0: 8.951 -[2023-02-24 14:10:22,521][00980] Avg episode reward: 20.951, avg true_objective: 8.951 -[2023-02-24 14:10:22,572][00980] Num frames 7200... -[2023-02-24 14:10:22,691][00980] Num frames 7300... -[2023-02-24 14:10:22,808][00980] Num frames 7400... -[2023-02-24 14:10:22,931][00980] Num frames 7500... -[2023-02-24 14:10:23,056][00980] Num frames 7600... -[2023-02-24 14:10:23,189][00980] Num frames 7700... -[2023-02-24 14:10:23,314][00980] Num frames 7800... -[2023-02-24 14:10:23,440][00980] Num frames 7900... -[2023-02-24 14:10:23,556][00980] Num frames 8000... -[2023-02-24 14:10:23,682][00980] Num frames 8100... -[2023-02-24 14:10:23,839][00980] Avg episode rewards: #0: 21.100, true rewards: #0: 9.100 -[2023-02-24 14:10:23,841][00980] Avg episode reward: 21.100, avg true_objective: 9.100 -[2023-02-24 14:10:23,857][00980] Num frames 8200... -[2023-02-24 14:10:23,974][00980] Num frames 8300... -[2023-02-24 14:10:24,100][00980] Num frames 8400... -[2023-02-24 14:10:24,219][00980] Num frames 8500... -[2023-02-24 14:10:24,333][00980] Num frames 8600... -[2023-02-24 14:10:24,450][00980] Num frames 8700... -[2023-02-24 14:10:24,566][00980] Num frames 8800... -[2023-02-24 14:10:24,690][00980] Num frames 8900... -[2023-02-24 14:10:24,807][00980] Num frames 9000... 
-[2023-02-24 14:10:24,928][00980] Num frames 9100... -[2023-02-24 14:10:25,045][00980] Num frames 9200... -[2023-02-24 14:10:25,176][00980] Num frames 9300... -[2023-02-24 14:10:25,295][00980] Num frames 9400... -[2023-02-24 14:10:25,420][00980] Num frames 9500... -[2023-02-24 14:10:25,546][00980] Num frames 9600... -[2023-02-24 14:10:25,620][00980] Avg episode rewards: #0: 22.314, true rewards: #0: 9.614 -[2023-02-24 14:10:25,622][00980] Avg episode reward: 22.314, avg true_objective: 9.614 -[2023-02-24 14:11:23,996][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 14:11:24,026][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:11:24,027][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:11:24,028][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 14:11:24,031][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:11:24,033][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:11:24,035][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:11:24,045][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! -[2023-02-24 14:11:24,047][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:11:24,050][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! -[2023-02-24 14:11:24,053][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! -[2023-02-24 14:11:24,055][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:11:24,058][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:11:24,059][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 14:11:24,060][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-24 14:11:24,062][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:11:24,081][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:11:24,083][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:11:24,098][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:11:24,135][00980] Conv encoder output size: 512 -[2023-02-24 14:11:24,139][00980] Policy head output size: 512 -[2023-02-24 14:11:24,159][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... -[2023-02-24 14:11:24,602][00980] Num frames 100... -[2023-02-24 14:11:24,723][00980] Num frames 200... -[2023-02-24 14:11:24,864][00980] Avg episode rewards: #0: 5.760, true rewards: #0: 2.760 -[2023-02-24 14:11:24,866][00980] Avg episode reward: 5.760, avg true_objective: 2.760 -[2023-02-24 14:11:24,899][00980] Num frames 300... -[2023-02-24 14:11:25,014][00980] Num frames 400... -[2023-02-24 14:11:25,127][00980] Num frames 500... -[2023-02-24 14:11:25,239][00980] Num frames 600... -[2023-02-24 14:11:25,361][00980] Num frames 700... -[2023-02-24 14:11:25,478][00980] Num frames 800... -[2023-02-24 14:11:25,598][00980] Num frames 900... -[2023-02-24 14:11:25,720][00980] Num frames 1000... 
-[2023-02-24 14:11:25,780][00980] Avg episode rewards: #0: 11.015, true rewards: #0: 5.015 -[2023-02-24 14:11:25,781][00980] Avg episode reward: 11.015, avg true_objective: 5.015 -[2023-02-24 14:11:25,893][00980] Num frames 1100... -[2023-02-24 14:11:26,017][00980] Num frames 1200... -[2023-02-24 14:11:26,132][00980] Num frames 1300... -[2023-02-24 14:11:26,254][00980] Num frames 1400... -[2023-02-24 14:11:26,368][00980] Num frames 1500... -[2023-02-24 14:11:26,484][00980] Num frames 1600... -[2023-02-24 14:11:26,621][00980] Num frames 1700... -[2023-02-24 14:11:26,803][00980] Num frames 1800... -[2023-02-24 14:11:26,965][00980] Num frames 1900... -[2023-02-24 14:11:27,126][00980] Num frames 2000... -[2023-02-24 14:11:27,288][00980] Num frames 2100... -[2023-02-24 14:11:27,438][00980] Avg episode rewards: #0: 15.183, true rewards: #0: 7.183 -[2023-02-24 14:11:27,441][00980] Avg episode reward: 15.183, avg true_objective: 7.183 -[2023-02-24 14:11:27,529][00980] Num frames 2200... -[2023-02-24 14:11:27,688][00980] Num frames 2300... -[2023-02-24 14:11:27,853][00980] Num frames 2400... -[2023-02-24 14:11:28,016][00980] Num frames 2500... -[2023-02-24 14:11:28,173][00980] Num frames 2600... -[2023-02-24 14:11:28,336][00980] Num frames 2700... -[2023-02-24 14:11:28,499][00980] Num frames 2800... -[2023-02-24 14:11:28,668][00980] Num frames 2900... -[2023-02-24 14:11:28,828][00980] Num frames 3000... -[2023-02-24 14:11:29,003][00980] Num frames 3100... -[2023-02-24 14:11:29,184][00980] Num frames 3200... -[2023-02-24 14:11:29,363][00980] Num frames 3300... -[2023-02-24 14:11:29,537][00980] Num frames 3400... -[2023-02-24 14:11:29,710][00980] Num frames 3500... -[2023-02-24 14:11:29,884][00980] Num frames 3600... -[2023-02-24 14:11:30,060][00980] Num frames 3700... -[2023-02-24 14:11:30,160][00980] Avg episode rewards: #0: 21.807, true rewards: #0: 9.307 -[2023-02-24 14:11:30,162][00980] Avg episode reward: 21.807, avg true_objective: 9.307 -[2023-02-24 14:11:30,272][00980] Num frames 3800... -[2023-02-24 14:11:30,387][00980] Num frames 3900... -[2023-02-24 14:11:30,504][00980] Num frames 4000... -[2023-02-24 14:11:30,625][00980] Num frames 4100... -[2023-02-24 14:11:30,746][00980] Num frames 4200... -[2023-02-24 14:11:30,851][00980] Avg episode rewards: #0: 19.284, true rewards: #0: 8.484 -[2023-02-24 14:11:30,853][00980] Avg episode reward: 19.284, avg true_objective: 8.484 -[2023-02-24 14:11:30,922][00980] Num frames 4300... -[2023-02-24 14:11:31,038][00980] Num frames 4400... -[2023-02-24 14:11:31,164][00980] Num frames 4500... -[2023-02-24 14:11:31,287][00980] Num frames 4600... -[2023-02-24 14:11:31,406][00980] Num frames 4700... -[2023-02-24 14:11:31,517][00980] Num frames 4800... -[2023-02-24 14:11:31,633][00980] Num frames 4900... -[2023-02-24 14:11:31,746][00980] Num frames 5000... -[2023-02-24 14:11:31,851][00980] Avg episode rewards: #0: 19.237, true rewards: #0: 8.403 -[2023-02-24 14:11:31,853][00980] Avg episode reward: 19.237, avg true_objective: 8.403 -[2023-02-24 14:11:31,924][00980] Num frames 5100... -[2023-02-24 14:11:32,046][00980] Num frames 5200... -[2023-02-24 14:11:32,170][00980] Num frames 5300... -[2023-02-24 14:11:32,286][00980] Num frames 5400... -[2023-02-24 14:11:32,397][00980] Num frames 5500... -[2023-02-24 14:11:32,512][00980] Avg episode rewards: #0: 17.506, true rewards: #0: 7.934 -[2023-02-24 14:11:32,513][00980] Avg episode reward: 17.506, avg true_objective: 7.934 -[2023-02-24 14:11:32,568][00980] Num frames 5600... 
-[2023-02-24 14:11:32,687][00980] Num frames 5700... -[2023-02-24 14:11:32,800][00980] Num frames 5800... -[2023-02-24 14:11:32,920][00980] Num frames 5900... -[2023-02-24 14:11:33,040][00980] Num frames 6000... -[2023-02-24 14:11:33,157][00980] Num frames 6100... -[2023-02-24 14:11:33,280][00980] Num frames 6200... -[2023-02-24 14:11:33,441][00980] Avg episode rewards: #0: 17.363, true rewards: #0: 7.862 -[2023-02-24 14:11:33,442][00980] Avg episode reward: 17.363, avg true_objective: 7.862 -[2023-02-24 14:11:33,457][00980] Num frames 6300... -[2023-02-24 14:11:33,571][00980] Num frames 6400... -[2023-02-24 14:11:33,685][00980] Num frames 6500... -[2023-02-24 14:11:33,807][00980] Num frames 6600... -[2023-02-24 14:11:33,927][00980] Num frames 6700... -[2023-02-24 14:11:34,099][00980] Avg episode rewards: #0: 16.440, true rewards: #0: 7.551 -[2023-02-24 14:11:34,102][00980] Avg episode reward: 16.440, avg true_objective: 7.551 -[2023-02-24 14:11:34,110][00980] Num frames 6800... -[2023-02-24 14:11:34,224][00980] Num frames 6900... -[2023-02-24 14:11:34,343][00980] Num frames 7000... -[2023-02-24 14:11:34,457][00980] Num frames 7100... -[2023-02-24 14:11:34,576][00980] Num frames 7200... -[2023-02-24 14:11:34,691][00980] Num frames 7300... -[2023-02-24 14:11:34,805][00980] Num frames 7400... -[2023-02-24 14:11:34,926][00980] Num frames 7500... -[2023-02-24 14:11:35,044][00980] Num frames 7600... -[2023-02-24 14:11:35,165][00980] Num frames 7700... -[2023-02-24 14:11:35,281][00980] Num frames 7800... -[2023-02-24 14:11:35,399][00980] Num frames 7900... -[2023-02-24 14:11:35,514][00980] Num frames 8000... -[2023-02-24 14:11:35,643][00980] Num frames 8100... -[2023-02-24 14:11:35,756][00980] Num frames 8200... -[2023-02-24 14:11:35,872][00980] Num frames 8300... -[2023-02-24 14:11:36,001][00980] Num frames 8400... -[2023-02-24 14:11:36,126][00980] Num frames 8500... -[2023-02-24 14:11:36,241][00980] Num frames 8600... -[2023-02-24 14:11:36,360][00980] Num frames 8700... -[2023-02-24 14:11:36,486][00980] Num frames 8800... -[2023-02-24 14:11:36,650][00980] Avg episode rewards: #0: 20.196, true rewards: #0: 8.896 -[2023-02-24 14:11:36,652][00980] Avg episode reward: 20.196, avg true_objective: 8.896 -[2023-02-24 14:12:30,762][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 14:12:33,506][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme -[2023-02-24 14:14:51,332][00980] Environment doom_basic already registered, overwriting... -[2023-02-24 14:14:51,335][00980] Environment doom_two_colors_easy already registered, overwriting... -[2023-02-24 14:14:51,337][00980] Environment doom_two_colors_hard already registered, overwriting... -[2023-02-24 14:14:51,338][00980] Environment doom_dm already registered, overwriting... -[2023-02-24 14:14:51,339][00980] Environment doom_dwango5 already registered, overwriting... -[2023-02-24 14:14:51,341][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... -[2023-02-24 14:14:51,342][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... -[2023-02-24 14:14:51,343][00980] Environment doom_my_way_home already registered, overwriting... -[2023-02-24 14:14:51,344][00980] Environment doom_deadly_corridor already registered, overwriting... -[2023-02-24 14:14:51,345][00980] Environment doom_defend_the_center already registered, overwriting... 
-[2023-02-24 14:14:51,347][00980] Environment doom_defend_the_line already registered, overwriting... -[2023-02-24 14:14:51,348][00980] Environment doom_health_gathering already registered, overwriting... -[2023-02-24 14:14:51,349][00980] Environment doom_health_gathering_supreme already registered, overwriting... -[2023-02-24 14:14:51,350][00980] Environment doom_battle already registered, overwriting... -[2023-02-24 14:14:51,351][00980] Environment doom_battle2 already registered, overwriting... -[2023-02-24 14:14:51,353][00980] Environment doom_duel_bots already registered, overwriting... -[2023-02-24 14:14:51,354][00980] Environment doom_deathmatch_bots already registered, overwriting... -[2023-02-24 14:14:51,356][00980] Environment doom_duel already registered, overwriting... -[2023-02-24 14:14:51,357][00980] Environment doom_deathmatch_full already registered, overwriting... -[2023-02-24 14:14:51,358][00980] Environment doom_benchmark already registered, overwriting... -[2023-02-24 14:14:51,359][00980] register_encoder_factory: -[2023-02-24 14:14:51,386][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:14:51,387][00980] Overriding arg 'train_for_env_steps' with value 1000000 passed from command line -[2023-02-24 14:14:51,393][00980] Experiment dir /content/train_dir/default_experiment already exists! -[2023-02-24 14:14:51,398][00980] Resuming existing experiment from /content/train_dir/default_experiment... -[2023-02-24 14:14:51,399][00980] Weights and Biases integration disabled -[2023-02-24 14:14:51,406][00980] Environment var CUDA_VISIBLE_DEVICES is 0 - -[2023-02-24 14:14:53,401][00980] Starting experiment with the following configuration: -help=False -algo=APPO -env=doom_health_gathering_supreme -experiment=default_experiment -train_dir=/content/train_dir -restart_behavior=resume -device=gpu -seed=None -num_policies=1 -async_rl=True -serial_mode=False -batched_sampling=False -num_batches_to_accumulate=2 -worker_num_splits=2 -policy_workers_per_policy=1 -max_policy_lag=1000 -num_workers=8 -num_envs_per_worker=4 -batch_size=1024 -num_batches_per_epoch=1 -num_epochs=1 -rollout=32 -recurrence=32 -shuffle_minibatches=False -gamma=0.99 -reward_scale=1.0 -reward_clip=1000.0 -value_bootstrap=False -normalize_returns=True -exploration_loss_coeff=0.001 -value_loss_coeff=0.5 -kl_loss_coeff=0.0 -exploration_loss=symmetric_kl -gae_lambda=0.95 -ppo_clip_ratio=0.1 -ppo_clip_value=0.2 -with_vtrace=False -vtrace_rho=1.0 -vtrace_c=1.0 -optimizer=adam -adam_eps=1e-06 -adam_beta1=0.9 -adam_beta2=0.999 -max_grad_norm=4.0 -learning_rate=0.0001 -lr_schedule=constant -lr_schedule_kl_threshold=0.008 -lr_adaptive_min=1e-06 -lr_adaptive_max=0.01 -obs_subtract_mean=0.0 -obs_scale=255.0 -normalize_input=True -normalize_input_keys=None -decorrelate_experience_max_seconds=0 -decorrelate_envs_on_one_worker=True -actor_worker_gpus=[] -set_workers_cpu_affinity=True -force_envs_single_thread=False -default_niceness=0 -log_to_file=True -experiment_summaries_interval=10 -flush_summaries_interval=30 -stats_avg=100 -summaries_use_frameskip=True -heartbeat_interval=20 -heartbeat_reporting_interval=600 -train_for_env_steps=1000000 -train_for_seconds=10000000000 -save_every_sec=120 -keep_checkpoints=2 -load_checkpoint_kind=latest -save_milestones_sec=-1 -save_best_every_sec=5 -save_best_metric=reward -save_best_after=100000 -benchmark=False -encoder_mlp_layers=[512, 512] -encoder_conv_architecture=convnet_simple -encoder_conv_mlp_layers=[512] 
-use_rnn=True -rnn_size=512 -rnn_type=gru -rnn_num_layers=1 -decoder_mlp_layers=[] -nonlinearity=elu -policy_initialization=orthogonal -policy_init_gain=1.0 -actor_critic_share_weights=True -adaptive_stddev=True -continuous_tanh_scale=0.0 -initial_stddev=1.0 -use_env_info_cache=False -env_gpu_actions=False -env_gpu_observations=True -env_frameskip=4 -env_framestack=1 -pixel_format=CHW -use_record_episode_statistics=False -with_wandb=False -wandb_user=None -wandb_project=sample_factory -wandb_group=None -wandb_job_type=SF -wandb_tags=[] -with_pbt=False -pbt_mix_policies_in_one_env=True -pbt_period_env_steps=5000000 -pbt_start_mutation=20000000 -pbt_replace_fraction=0.3 -pbt_mutation_rate=0.15 -pbt_replace_reward_gap=0.1 -pbt_replace_reward_gap_absolute=1e-06 -pbt_optimize_gamma=False -pbt_target_objective=true_objective -pbt_perturb_min=1.1 -pbt_perturb_max=1.5 -num_agents=-1 -num_humans=0 -num_bots=-1 -start_bot_difficulty=None -timelimit=None -res_w=128 -res_h=72 -wide_aspect_ratio=False -eval_env_frameskip=1 -fps=35 -command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 -cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} -git_hash=unknown -git_repo_name=not a git repository -[2023-02-24 14:14:53,404][00980] Saving configuration to /content/train_dir/default_experiment/config.json... -[2023-02-24 14:14:53,407][00980] Rollout worker 0 uses device cpu -[2023-02-24 14:14:53,408][00980] Rollout worker 1 uses device cpu -[2023-02-24 14:14:53,409][00980] Rollout worker 2 uses device cpu -[2023-02-24 14:14:53,414][00980] Rollout worker 3 uses device cpu -[2023-02-24 14:14:53,415][00980] Rollout worker 4 uses device cpu -[2023-02-24 14:14:53,416][00980] Rollout worker 5 uses device cpu -[2023-02-24 14:14:53,417][00980] Rollout worker 6 uses device cpu -[2023-02-24 14:14:53,418][00980] Rollout worker 7 uses device cpu -[2023-02-24 14:14:53,580][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:14:53,582][00980] InferenceWorker_p0-w0: min num requests: 2 -[2023-02-24 14:14:53,630][00980] Starting all processes... -[2023-02-24 14:14:53,634][00980] Starting process learner_proc0 -[2023-02-24 14:14:53,825][00980] Starting all processes... 
-[2023-02-24 14:14:53,836][00980] Starting process inference_proc0-0 -[2023-02-24 14:14:53,837][00980] Starting process rollout_proc0 -[2023-02-24 14:14:53,837][00980] Starting process rollout_proc1 -[2023-02-24 14:14:53,837][00980] Starting process rollout_proc2 -[2023-02-24 14:14:53,837][00980] Starting process rollout_proc3 -[2023-02-24 14:14:53,968][00980] Starting process rollout_proc4 -[2023-02-24 14:14:53,982][00980] Starting process rollout_proc5 -[2023-02-24 14:14:53,987][00980] Starting process rollout_proc6 -[2023-02-24 14:14:53,993][00980] Starting process rollout_proc7 -[2023-02-24 14:15:03,371][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:15:03,375][26253] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-02-24 14:15:03,434][26253] Num visible devices: 1 -[2023-02-24 14:15:03,466][26253] Starting seed is not provided -[2023-02-24 14:15:03,467][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:15:03,468][26253] Initializing actor-critic model on device cuda:0 -[2023-02-24 14:15:03,469][26253] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:15:03,470][26253] RunningMeanStd input shape: (1,) -[2023-02-24 14:15:03,546][26253] ConvEncoder: input_channels=3 -[2023-02-24 14:15:04,380][26267] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:15:04,381][26267] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-02-24 14:15:04,438][26267] Num visible devices: 1 -[2023-02-24 14:15:04,447][26253] Conv encoder output size: 512 -[2023-02-24 14:15:04,451][26253] Policy head output size: 512 -[2023-02-24 14:15:04,535][26253] Created Actor Critic model with architecture: -[2023-02-24 14:15:04,537][26253] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): VizdoomEncoder( - (basic_encoder): ConvEncoder( - (enc): RecursiveScriptModule( - original_name=ConvEncoderImpl - (conv_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Conv2d) - (1): RecursiveScriptModule(original_name=ELU) - (2): RecursiveScriptModule(original_name=Conv2d) - (3): RecursiveScriptModule(original_name=ELU) - (4): RecursiveScriptModule(original_name=Conv2d) - (5): RecursiveScriptModule(original_name=ELU) - ) - (mlp_layers): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=ELU) - ) - ) - ) - ) - (core): ModelCoreRNN( - (core): GRU(512, 512) - ) - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=512, out_features=1, bias=True) - (action_parameterization): ActionParameterizationDefault( - (distribution_linear): Linear(in_features=512, out_features=5, bias=True) - ) -) -[2023-02-24 14:15:04,916][26268] Worker 1 uses CPU cores [1] -[2023-02-24 14:15:05,071][26270] Worker 0 uses CPU cores [0] -[2023-02-24 14:15:05,141][26272] Worker 3 uses CPU cores [1] -[2023-02-24 14:15:05,438][26278] Worker 2 uses CPU cores [0] -[2023-02-24 14:15:05,710][26282] Worker 6 uses CPU cores [0] -[2023-02-24 14:15:05,772][26280] Worker 4 uses CPU cores [0] -[2023-02-24 14:15:05,851][26288] Worker 7 uses CPU cores [1] -[2023-02-24 14:15:05,918][26290] 
Worker 5 uses CPU cores [1] -[2023-02-24 14:15:08,019][26253] Using optimizer -[2023-02-24 14:15:08,021][26253] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... -[2023-02-24 14:15:08,064][26253] Loading model from checkpoint -[2023-02-24 14:15:08,071][26253] Loaded experiment state at self.train_step=982, self.env_steps=4022272 -[2023-02-24 14:15:08,072][26253] Initialized policy 0 weights for model version 982 -[2023-02-24 14:15:08,083][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-02-24 14:15:08,090][26253] LearnerWorker_p0 finished initialization! -[2023-02-24 14:15:08,365][26267] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:15:08,367][26267] RunningMeanStd input shape: (1,) -[2023-02-24 14:15:08,389][26267] ConvEncoder: input_channels=3 -[2023-02-24 14:15:08,548][26267] Conv encoder output size: 512 -[2023-02-24 14:15:08,549][26267] Policy head output size: 512 -[2023-02-24 14:15:11,407][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4022272. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:15:11,633][00980] Inference worker 0-0 is ready! -[2023-02-24 14:15:11,635][00980] All inference workers are ready! Signal rollout workers to start! -[2023-02-24 14:15:11,741][26272] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,744][26288] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,740][26290] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,738][26268] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,837][26270] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,846][26278] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,840][26282] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:11,854][26280] Doom resolution: 160x120, resize resolution: (128, 72) -[2023-02-24 14:15:12,819][26280] Decorrelating experience for 0 frames... -[2023-02-24 14:15:12,826][26270] Decorrelating experience for 0 frames... -[2023-02-24 14:15:13,068][26288] Decorrelating experience for 0 frames... -[2023-02-24 14:15:13,073][26290] Decorrelating experience for 0 frames... -[2023-02-24 14:15:13,078][26272] Decorrelating experience for 0 frames... -[2023-02-24 14:15:13,323][26280] Decorrelating experience for 32 frames... -[2023-02-24 14:15:13,570][00980] Heartbeat connected on Batcher_0 -[2023-02-24 14:15:13,575][00980] Heartbeat connected on LearnerWorker_p0 -[2023-02-24 14:15:13,606][00980] Heartbeat connected on InferenceWorker_p0-w0 -[2023-02-24 14:15:13,949][26270] Decorrelating experience for 32 frames... -[2023-02-24 14:15:13,964][26278] Decorrelating experience for 0 frames... -[2023-02-24 14:15:14,383][26278] Decorrelating experience for 32 frames... -[2023-02-24 14:15:14,500][26290] Decorrelating experience for 32 frames... -[2023-02-24 14:15:14,514][26288] Decorrelating experience for 32 frames... -[2023-02-24 14:15:14,521][26268] Decorrelating experience for 0 frames... -[2023-02-24 14:15:14,519][26272] Decorrelating experience for 32 frames... -[2023-02-24 14:15:15,359][26278] Decorrelating experience for 64 frames... -[2023-02-24 14:15:15,390][26268] Decorrelating experience for 32 frames... -[2023-02-24 14:15:15,574][26290] Decorrelating experience for 64 frames... 
-[2023-02-24 14:15:15,614][26280] Decorrelating experience for 64 frames... -[2023-02-24 14:15:15,688][26270] Decorrelating experience for 64 frames... -[2023-02-24 14:15:16,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:15:16,520][26272] Decorrelating experience for 64 frames... -[2023-02-24 14:15:16,649][26278] Decorrelating experience for 96 frames... -[2023-02-24 14:15:16,700][26268] Decorrelating experience for 64 frames... -[2023-02-24 14:15:16,752][26288] Decorrelating experience for 64 frames... -[2023-02-24 14:15:16,844][00980] Heartbeat connected on RolloutWorker_w2 -[2023-02-24 14:15:16,853][26282] Decorrelating experience for 0 frames... -[2023-02-24 14:15:16,994][26280] Decorrelating experience for 96 frames... -[2023-02-24 14:15:17,218][00980] Heartbeat connected on RolloutWorker_w4 -[2023-02-24 14:15:17,608][26270] Decorrelating experience for 96 frames... -[2023-02-24 14:15:17,961][00980] Heartbeat connected on RolloutWorker_w0 -[2023-02-24 14:15:18,366][26272] Decorrelating experience for 96 frames... -[2023-02-24 14:15:18,585][26282] Decorrelating experience for 32 frames... -[2023-02-24 14:15:18,665][00980] Heartbeat connected on RolloutWorker_w3 -[2023-02-24 14:15:18,671][26290] Decorrelating experience for 96 frames... -[2023-02-24 14:15:18,677][26268] Decorrelating experience for 96 frames... -[2023-02-24 14:15:18,724][26288] Decorrelating experience for 96 frames... -[2023-02-24 14:15:19,012][00980] Heartbeat connected on RolloutWorker_w5 -[2023-02-24 14:15:19,020][00980] Heartbeat connected on RolloutWorker_w1 -[2023-02-24 14:15:19,070][00980] Heartbeat connected on RolloutWorker_w7 -[2023-02-24 14:15:21,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 175.6. Samples: 1756. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-02-24 14:15:21,413][00980] Avg episode reward: [(0, '2.045')] -[2023-02-24 14:15:21,489][26253] Signal inference workers to stop experience collection... -[2023-02-24 14:15:21,510][26267] InferenceWorker_p0-w0: stopping experience collection -[2023-02-24 14:15:21,564][26282] Decorrelating experience for 64 frames... -[2023-02-24 14:15:22,135][26282] Decorrelating experience for 96 frames... -[2023-02-24 14:15:22,232][00980] Heartbeat connected on RolloutWorker_w6 -[2023-02-24 14:15:24,615][26253] Signal inference workers to resume experience collection... -[2023-02-24 14:15:24,647][26253] Stopping Batcher_0... -[2023-02-24 14:15:24,648][26253] Loop batcher_evt_loop terminating... -[2023-02-24 14:15:24,644][26267] Weights refcount: 2 0 -[2023-02-24 14:15:24,648][00980] Component Batcher_0 stopped! -[2023-02-24 14:15:24,658][26267] Stopping InferenceWorker_p0-w0... -[2023-02-24 14:15:24,659][26267] Loop inference_proc0-0_evt_loop terminating... -[2023-02-24 14:15:24,658][00980] Component InferenceWorker_p0-w0 stopped! -[2023-02-24 14:15:24,856][00980] Component RolloutWorker_w7 stopped! -[2023-02-24 14:15:24,860][26272] Stopping RolloutWorker_w3... -[2023-02-24 14:15:24,861][00980] Component RolloutWorker_w3 stopped! -[2023-02-24 14:15:24,861][26288] Stopping RolloutWorker_w7... -[2023-02-24 14:15:24,868][26288] Loop rollout_proc7_evt_loop terminating... -[2023-02-24 14:15:24,871][26272] Loop rollout_proc3_evt_loop terminating... -[2023-02-24 14:15:24,877][00980] Component RolloutWorker_w0 stopped! 
-[2023-02-24 14:15:24,877][26268] Stopping RolloutWorker_w1... -[2023-02-24 14:15:24,878][26290] Stopping RolloutWorker_w5... -[2023-02-24 14:15:24,881][00980] Component RolloutWorker_w1 stopped! -[2023-02-24 14:15:24,886][00980] Component RolloutWorker_w5 stopped! -[2023-02-24 14:15:24,881][26268] Loop rollout_proc1_evt_loop terminating... -[2023-02-24 14:15:24,881][26290] Loop rollout_proc5_evt_loop terminating... -[2023-02-24 14:15:24,899][00980] Component RolloutWorker_w4 stopped! -[2023-02-24 14:15:24,905][26280] Stopping RolloutWorker_w4... -[2023-02-24 14:15:24,906][26280] Loop rollout_proc4_evt_loop terminating... -[2023-02-24 14:15:24,911][26282] Stopping RolloutWorker_w6... -[2023-02-24 14:15:24,911][26282] Loop rollout_proc6_evt_loop terminating... -[2023-02-24 14:15:24,880][26270] Stopping RolloutWorker_w0... -[2023-02-24 14:15:24,914][26270] Loop rollout_proc0_evt_loop terminating... -[2023-02-24 14:15:24,910][00980] Component RolloutWorker_w6 stopped! -[2023-02-24 14:15:24,931][26278] Stopping RolloutWorker_w2... -[2023-02-24 14:15:24,932][26278] Loop rollout_proc2_evt_loop terminating... -[2023-02-24 14:15:24,931][00980] Component RolloutWorker_w2 stopped! -[2023-02-24 14:15:28,114][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... -[2023-02-24 14:15:28,273][26253] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth -[2023-02-24 14:15:28,279][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... -[2023-02-24 14:15:28,479][00980] Component LearnerWorker_p0 stopped! -[2023-02-24 14:15:28,483][00980] Waiting for process learner_proc0 to stop... -[2023-02-24 14:15:28,485][26253] Stopping LearnerWorker_p0... -[2023-02-24 14:15:28,486][26253] Loop learner_proc0_evt_loop terminating... -[2023-02-24 14:15:29,668][00980] Waiting for process inference_proc0-0 to join... -[2023-02-24 14:15:29,670][00980] Waiting for process rollout_proc0 to join... -[2023-02-24 14:15:29,672][00980] Waiting for process rollout_proc1 to join... -[2023-02-24 14:15:29,674][00980] Waiting for process rollout_proc2 to join... -[2023-02-24 14:15:29,678][00980] Waiting for process rollout_proc3 to join... -[2023-02-24 14:15:29,680][00980] Waiting for process rollout_proc4 to join... -[2023-02-24 14:15:29,682][00980] Waiting for process rollout_proc5 to join... -[2023-02-24 14:15:29,685][00980] Waiting for process rollout_proc6 to join... -[2023-02-24 14:15:29,687][00980] Waiting for process rollout_proc7 to join... 
-[2023-02-24 14:15:29,689][00980] Batcher 0 profile tree view: -batching: 0.0539, releasing_batches: 0.0311 -[2023-02-24 14:15:29,691][00980] InferenceWorker_p0-w0 profile tree view: -update_model: 0.0124 -wait_policy: 0.0012 - wait_policy_total: 6.5012 -one_step: 0.0023 - handle_policy_step: 3.1320 - deserialize: 0.0360, stack: 0.0068, obs_to_device_normalize: 0.2785, forward: 2.5249, send_messages: 0.0576 - prepare_outputs: 0.1663 - to_cpu: 0.0961 -[2023-02-24 14:15:29,693][00980] Learner 0 profile tree view: -misc: 0.0000, prepare_batch: 6.1913 -train: 1.6359 - epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0004, kl_divergence: 0.0013, after_optimizer: 0.0296 - calculate_losses: 0.2418 - losses_init: 0.0000, forward_head: 0.1123, bptt_initial: 0.1049, tail: 0.0029, advantages_returns: 0.0009, losses: 0.0165 - bptt: 0.0040 - bptt_forward_core: 0.0039 - update: 1.3507 - clip: 0.0144 -[2023-02-24 14:15:29,696][00980] RolloutWorker_w0 profile tree view: -wait_for_trajectories: 0.0009, enqueue_policy_requests: 0.4267, env_step: 2.1053, overhead: 0.0685, complete_rollouts: 0.0505 -save_policy_outputs: 0.0325 - split_output_tensors: 0.0155 -[2023-02-24 14:15:29,698][00980] RolloutWorker_w7 profile tree view: -wait_for_trajectories: 0.0007, enqueue_policy_requests: 0.4189, env_step: 1.8447, overhead: 0.0353, complete_rollouts: 0.0094 -save_policy_outputs: 0.0314 - split_output_tensors: 0.0157 -[2023-02-24 14:15:29,702][00980] Loop Runner_EvtLoop terminating... -[2023-02-24 14:15:29,706][00980] Runner profile tree view: -main_loop: 36.0757 -[2023-02-24 14:15:29,709][00980] Collected {0: 4030464}, FPS: 227.1 -[2023-02-24 14:15:29,766][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:15:29,767][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:15:29,770][00980] Adding new argument 'no_render'=True that is not in the saved config file! -[2023-02-24 14:15:29,772][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:15:29,773][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:15:29,777][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:15:29,779][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:15:29,785][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:15:29,787][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file! -[2023-02-24 14:15:29,790][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! -[2023-02-24 14:15:29,792][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:15:29,794][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:15:29,796][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 14:15:29,798][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
-[2023-02-24 14:15:29,799][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:15:29,822][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:15:29,824][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:15:29,839][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:15:29,891][00980] Conv encoder output size: 512 -[2023-02-24 14:15:29,893][00980] Policy head output size: 512 -[2023-02-24 14:15:29,920][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... -[2023-02-24 14:15:30,396][00980] Num frames 100... -[2023-02-24 14:15:30,515][00980] Num frames 200... -[2023-02-24 14:15:30,642][00980] Num frames 300... -[2023-02-24 14:15:30,770][00980] Num frames 400... -[2023-02-24 14:15:30,891][00980] Num frames 500... -[2023-02-24 14:15:31,011][00980] Num frames 600... -[2023-02-24 14:15:31,130][00980] Num frames 700... -[2023-02-24 14:15:31,247][00980] Num frames 800... -[2023-02-24 14:15:31,380][00980] Num frames 900... -[2023-02-24 14:15:31,500][00980] Num frames 1000... -[2023-02-24 14:15:31,622][00980] Num frames 1100... -[2023-02-24 14:15:31,753][00980] Num frames 1200... -[2023-02-24 14:15:31,868][00980] Num frames 1300... -[2023-02-24 14:15:31,937][00980] Avg episode rewards: #0: 32.120, true rewards: #0: 13.120 -[2023-02-24 14:15:31,941][00980] Avg episode reward: 32.120, avg true_objective: 13.120 -[2023-02-24 14:15:32,038][00980] Num frames 1400... -[2023-02-24 14:15:32,150][00980] Num frames 1500... -[2023-02-24 14:15:32,267][00980] Num frames 1600... -[2023-02-24 14:15:32,426][00980] Num frames 1700... -[2023-02-24 14:15:32,547][00980] Num frames 1800... -[2023-02-24 14:15:32,662][00980] Num frames 1900... -[2023-02-24 14:15:32,779][00980] Num frames 2000... -[2023-02-24 14:15:32,896][00980] Num frames 2100... -[2023-02-24 14:15:33,028][00980] Num frames 2200... -[2023-02-24 14:15:33,171][00980] Avg episode rewards: #0: 26.360, true rewards: #0: 11.360 -[2023-02-24 14:15:33,172][00980] Avg episode reward: 26.360, avg true_objective: 11.360 -[2023-02-24 14:15:33,210][00980] Num frames 2300... -[2023-02-24 14:15:33,326][00980] Num frames 2400... -[2023-02-24 14:15:33,457][00980] Num frames 2500... -[2023-02-24 14:15:33,571][00980] Num frames 2600... -[2023-02-24 14:15:33,687][00980] Num frames 2700... -[2023-02-24 14:15:33,804][00980] Num frames 2800... -[2023-02-24 14:15:33,921][00980] Num frames 2900... -[2023-02-24 14:15:34,041][00980] Num frames 3000... -[2023-02-24 14:15:34,159][00980] Num frames 3100... -[2023-02-24 14:15:34,273][00980] Num frames 3200... -[2023-02-24 14:15:34,400][00980] Num frames 3300... -[2023-02-24 14:15:34,516][00980] Num frames 3400... -[2023-02-24 14:15:34,640][00980] Num frames 3500... -[2023-02-24 14:15:34,761][00980] Num frames 3600... -[2023-02-24 14:15:34,880][00980] Num frames 3700... -[2023-02-24 14:15:35,005][00980] Num frames 3800... -[2023-02-24 14:15:35,126][00980] Num frames 3900... -[2023-02-24 14:15:35,241][00980] Num frames 4000... -[2023-02-24 14:15:35,361][00980] Num frames 4100... -[2023-02-24 14:15:35,490][00980] Num frames 4200... -[2023-02-24 14:15:35,608][00980] Num frames 4300... -[2023-02-24 14:15:35,744][00980] Avg episode rewards: #0: 36.906, true rewards: #0: 14.573 -[2023-02-24 14:15:35,747][00980] Avg episode reward: 36.906, avg true_objective: 14.573 -[2023-02-24 14:15:35,781][00980] Num frames 4400... -[2023-02-24 14:15:35,898][00980] Num frames 4500... -[2023-02-24 14:15:36,012][00980] Num frames 4600... 
-[2023-02-24 14:15:36,127][00980] Num frames 4700... -[2023-02-24 14:15:36,241][00980] Num frames 4800... -[2023-02-24 14:15:36,362][00980] Num frames 4900... -[2023-02-24 14:15:36,488][00980] Num frames 5000... -[2023-02-24 14:15:36,611][00980] Num frames 5100... -[2023-02-24 14:15:36,733][00980] Num frames 5200... -[2023-02-24 14:15:36,851][00980] Num frames 5300... -[2023-02-24 14:15:36,973][00980] Num frames 5400... -[2023-02-24 14:15:37,089][00980] Num frames 5500... -[2023-02-24 14:15:37,211][00980] Num frames 5600... -[2023-02-24 14:15:37,290][00980] Avg episode rewards: #0: 34.550, true rewards: #0: 14.050 -[2023-02-24 14:15:37,292][00980] Avg episode reward: 34.550, avg true_objective: 14.050 -[2023-02-24 14:15:37,387][00980] Num frames 5700... -[2023-02-24 14:15:37,516][00980] Num frames 5800... -[2023-02-24 14:15:37,633][00980] Num frames 5900... -[2023-02-24 14:15:37,749][00980] Num frames 6000... -[2023-02-24 14:15:37,864][00980] Num frames 6100... -[2023-02-24 14:15:37,987][00980] Num frames 6200... -[2023-02-24 14:15:38,103][00980] Num frames 6300... -[2023-02-24 14:15:38,225][00980] Num frames 6400... -[2023-02-24 14:15:38,353][00980] Num frames 6500... -[2023-02-24 14:15:38,486][00980] Num frames 6600... -[2023-02-24 14:15:38,604][00980] Num frames 6700... -[2023-02-24 14:15:38,729][00980] Num frames 6800... -[2023-02-24 14:15:38,852][00980] Num frames 6900... -[2023-02-24 14:15:38,971][00980] Num frames 7000... -[2023-02-24 14:15:39,085][00980] Num frames 7100... -[2023-02-24 14:15:39,209][00980] Num frames 7200... -[2023-02-24 14:15:39,329][00980] Num frames 7300... -[2023-02-24 14:15:39,455][00980] Num frames 7400... -[2023-02-24 14:15:39,636][00980] Num frames 7500... -[2023-02-24 14:15:39,812][00980] Num frames 7600... -[2023-02-24 14:15:39,987][00980] Num frames 7700... -[2023-02-24 14:15:40,080][00980] Avg episode rewards: #0: 38.440, true rewards: #0: 15.440 -[2023-02-24 14:15:40,085][00980] Avg episode reward: 38.440, avg true_objective: 15.440 -[2023-02-24 14:15:40,231][00980] Num frames 7800... -[2023-02-24 14:15:40,392][00980] Num frames 7900... -[2023-02-24 14:15:40,558][00980] Num frames 8000... -[2023-02-24 14:15:40,738][00980] Num frames 8100... -[2023-02-24 14:15:40,913][00980] Num frames 8200... -[2023-02-24 14:15:41,077][00980] Num frames 8300... -[2023-02-24 14:15:41,241][00980] Num frames 8400... -[2023-02-24 14:15:41,422][00980] Num frames 8500... -[2023-02-24 14:15:41,602][00980] Num frames 8600... -[2023-02-24 14:15:41,767][00980] Num frames 8700... -[2023-02-24 14:15:41,940][00980] Num frames 8800... -[2023-02-24 14:15:42,110][00980] Num frames 8900... -[2023-02-24 14:15:42,289][00980] Num frames 9000... -[2023-02-24 14:15:42,463][00980] Num frames 9100... -[2023-02-24 14:15:42,635][00980] Num frames 9200... -[2023-02-24 14:15:42,803][00980] Num frames 9300... -[2023-02-24 14:15:42,946][00980] Avg episode rewards: #0: 39.086, true rewards: #0: 15.587 -[2023-02-24 14:15:42,949][00980] Avg episode reward: 39.086, avg true_objective: 15.587 -[2023-02-24 14:15:43,029][00980] Num frames 9400... -[2023-02-24 14:15:43,183][00980] Num frames 9500... -[2023-02-24 14:15:43,304][00980] Num frames 9600... -[2023-02-24 14:15:43,420][00980] Num frames 9700... -[2023-02-24 14:15:43,533][00980] Num frames 9800... -[2023-02-24 14:15:43,655][00980] Num frames 9900... -[2023-02-24 14:15:43,779][00980] Num frames 10000... -[2023-02-24 14:15:43,897][00980] Num frames 10100... 
-[2023-02-24 14:15:44,021][00980] Avg episode rewards: #0: 35.645, true rewards: #0: 14.503 -[2023-02-24 14:15:44,023][00980] Avg episode reward: 35.645, avg true_objective: 14.503 -[2023-02-24 14:15:44,080][00980] Num frames 10200... -[2023-02-24 14:15:44,204][00980] Num frames 10300... -[2023-02-24 14:15:44,320][00980] Num frames 10400... -[2023-02-24 14:15:44,439][00980] Num frames 10500... -[2023-02-24 14:15:44,561][00980] Num frames 10600... -[2023-02-24 14:15:44,685][00980] Num frames 10700... -[2023-02-24 14:15:44,805][00980] Num frames 10800... -[2023-02-24 14:15:44,890][00980] Avg episode rewards: #0: 33.030, true rewards: #0: 13.530 -[2023-02-24 14:15:44,892][00980] Avg episode reward: 33.030, avg true_objective: 13.530 -[2023-02-24 14:15:44,983][00980] Num frames 10900... -[2023-02-24 14:15:45,100][00980] Num frames 11000... -[2023-02-24 14:15:45,217][00980] Num frames 11100... -[2023-02-24 14:15:45,335][00980] Num frames 11200... -[2023-02-24 14:15:45,451][00980] Num frames 11300... -[2023-02-24 14:15:45,575][00980] Num frames 11400... -[2023-02-24 14:15:45,700][00980] Num frames 11500... -[2023-02-24 14:15:45,823][00980] Num frames 11600... -[2023-02-24 14:15:45,942][00980] Num frames 11700... -[2023-02-24 14:15:46,060][00980] Num frames 11800... -[2023-02-24 14:15:46,186][00980] Num frames 11900... -[2023-02-24 14:15:46,306][00980] Num frames 12000... -[2023-02-24 14:15:46,428][00980] Num frames 12100... -[2023-02-24 14:15:46,545][00980] Num frames 12200... -[2023-02-24 14:15:46,675][00980] Num frames 12300... -[2023-02-24 14:15:46,792][00980] Num frames 12400... -[2023-02-24 14:15:46,916][00980] Num frames 12500... -[2023-02-24 14:15:47,032][00980] Num frames 12600... -[2023-02-24 14:15:47,152][00980] Num frames 12700... -[2023-02-24 14:15:47,277][00980] Num frames 12800... -[2023-02-24 14:15:47,398][00980] Num frames 12900... -[2023-02-24 14:15:47,490][00980] Avg episode rewards: #0: 36.026, true rewards: #0: 14.360 -[2023-02-24 14:15:47,491][00980] Avg episode reward: 36.026, avg true_objective: 14.360 -[2023-02-24 14:15:47,582][00980] Num frames 13000... -[2023-02-24 14:15:47,716][00980] Num frames 13100... -[2023-02-24 14:15:47,833][00980] Num frames 13200... -[2023-02-24 14:15:47,956][00980] Num frames 13300... -[2023-02-24 14:15:48,076][00980] Num frames 13400... -[2023-02-24 14:15:48,190][00980] Num frames 13500... -[2023-02-24 14:15:48,313][00980] Num frames 13600... -[2023-02-24 14:15:48,435][00980] Num frames 13700... -[2023-02-24 14:15:48,560][00980] Num frames 13800... -[2023-02-24 14:15:48,680][00980] Num frames 13900... -[2023-02-24 14:15:48,804][00980] Num frames 14000... -[2023-02-24 14:15:48,921][00980] Num frames 14100... -[2023-02-24 14:15:49,040][00980] Num frames 14200... -[2023-02-24 14:15:49,157][00980] Num frames 14300... -[2023-02-24 14:15:49,279][00980] Num frames 14400... -[2023-02-24 14:15:49,411][00980] Num frames 14500... -[2023-02-24 14:15:49,532][00980] Avg episode rewards: #0: 36.556, true rewards: #0: 14.556 -[2023-02-24 14:15:49,535][00980] Avg episode reward: 36.556, avg true_objective: 14.556 -[2023-02-24 14:17:18,905][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! -[2023-02-24 14:17:18,985][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json -[2023-02-24 14:17:18,989][00980] Overriding arg 'num_workers' with value 1 passed from command line -[2023-02-24 14:17:18,992][00980] Adding new argument 'no_render'=True that is not in the saved config file! 
-[2023-02-24 14:17:18,996][00980] Adding new argument 'save_video'=True that is not in the saved config file! -[2023-02-24 14:17:18,999][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! -[2023-02-24 14:17:19,001][00980] Adding new argument 'video_name'=None that is not in the saved config file! -[2023-02-24 14:17:19,003][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! -[2023-02-24 14:17:19,006][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! -[2023-02-24 14:17:19,007][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! -[2023-02-24 14:17:19,012][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! -[2023-02-24 14:17:19,013][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! -[2023-02-24 14:17:19,014][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! -[2023-02-24 14:17:19,015][00980] Adding new argument 'train_script'=None that is not in the saved config file! -[2023-02-24 14:17:19,016][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! -[2023-02-24 14:17:19,018][00980] Using frameskip 1 and render_action_repeat=4 for evaluation -[2023-02-24 14:17:19,052][00980] RunningMeanStd input shape: (3, 72, 128) -[2023-02-24 14:17:19,055][00980] RunningMeanStd input shape: (1,) -[2023-02-24 14:17:19,074][00980] ConvEncoder: input_channels=3 -[2023-02-24 14:17:19,143][00980] Conv encoder output size: 512 -[2023-02-24 14:17:19,145][00980] Policy head output size: 512 -[2023-02-24 14:17:19,181][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... -[2023-02-24 14:17:19,838][00980] Num frames 100... -[2023-02-24 14:17:20,012][00980] Num frames 200... -[2023-02-24 14:17:20,171][00980] Num frames 300... -[2023-02-24 14:17:20,352][00980] Num frames 400... -[2023-02-24 14:17:20,438][00980] Avg episode rewards: #0: 6.160, true rewards: #0: 4.160 -[2023-02-24 14:17:20,439][00980] Avg episode reward: 6.160, avg true_objective: 4.160 -[2023-02-24 14:17:20,580][00980] Num frames 500... -[2023-02-24 14:17:20,745][00980] Num frames 600... -[2023-02-24 14:17:20,914][00980] Num frames 700... -[2023-02-24 14:17:21,087][00980] Num frames 800... -[2023-02-24 14:17:21,258][00980] Num frames 900... -[2023-02-24 14:17:21,441][00980] Num frames 1000... -[2023-02-24 14:17:21,622][00980] Num frames 1100... -[2023-02-24 14:17:21,792][00980] Num frames 1200... -[2023-02-24 14:17:21,958][00980] Num frames 1300... -[2023-02-24 14:17:22,089][00980] Num frames 1400... -[2023-02-24 14:17:22,210][00980] Num frames 1500... -[2023-02-24 14:17:22,333][00980] Num frames 1600... -[2023-02-24 14:17:22,450][00980] Num frames 1700... -[2023-02-24 14:17:22,567][00980] Num frames 1800... -[2023-02-24 14:17:22,680][00980] Num frames 1900... -[2023-02-24 14:17:22,797][00980] Num frames 2000... -[2023-02-24 14:17:22,912][00980] Num frames 2100... -[2023-02-24 14:17:23,025][00980] Num frames 2200... -[2023-02-24 14:17:23,148][00980] Num frames 2300... -[2023-02-24 14:17:23,270][00980] Num frames 2400... -[2023-02-24 14:17:23,398][00980] Num frames 2500... 
-[2023-02-24 14:17:23,475][00980] Avg episode rewards: #0: 29.079, true rewards: #0: 12.580 -[2023-02-24 14:17:23,478][00980] Avg episode reward: 29.079, avg true_objective: 12.580 -[2023-02-24 14:17:23,575][00980] Num frames 2600... -[2023-02-24 14:17:23,691][00980] Num frames 2700... -[2023-02-24 14:17:23,831][00980] Avg episode rewards: #0: 20.906, true rewards: #0: 9.240 -[2023-02-24 14:17:23,833][00980] Avg episode reward: 20.906, avg true_objective: 9.240 -[2023-02-24 14:17:23,868][00980] Num frames 2800... -[2023-02-24 14:17:23,988][00980] Num frames 2900... -[2023-02-24 14:17:24,105][00980] Num frames 3000... -[2023-02-24 14:17:24,223][00980] Num frames 3100... -[2023-02-24 14:17:24,347][00980] Num frames 3200... -[2023-02-24 14:17:24,463][00980] Num frames 3300... -[2023-02-24 14:17:24,578][00980] Num frames 3400... -[2023-02-24 14:17:24,692][00980] Num frames 3500... -[2023-02-24 14:17:24,812][00980] Num frames 3600... -[2023-02-24 14:17:24,929][00980] Num frames 3700... -[2023-02-24 14:17:25,002][00980] Avg episode rewards: #0: 21.285, true rewards: #0: 9.285 -[2023-02-24 14:17:25,003][00980] Avg episode reward: 21.285, avg true_objective: 9.285 -[2023-02-24 14:17:25,104][00980] Num frames 3800... -[2023-02-24 14:17:25,229][00980] Num frames 3900... -[2023-02-24 14:17:25,358][00980] Num frames 4000... -[2023-02-24 14:17:25,475][00980] Num frames 4100... -[2023-02-24 14:17:25,593][00980] Num frames 4200... -[2023-02-24 14:17:25,712][00980] Num frames 4300... -[2023-02-24 14:17:25,829][00980] Num frames 4400... -[2023-02-24 14:17:25,953][00980] Num frames 4500... -[2023-02-24 14:17:26,071][00980] Num frames 4600... -[2023-02-24 14:17:26,194][00980] Num frames 4700... -[2023-02-24 14:17:26,308][00980] Num frames 4800... -[2023-02-24 14:17:26,431][00980] Num frames 4900... -[2023-02-24 14:17:26,549][00980] Num frames 5000... -[2023-02-24 14:17:26,623][00980] Avg episode rewards: #0: 24.032, true rewards: #0: 10.032 -[2023-02-24 14:17:26,627][00980] Avg episode reward: 24.032, avg true_objective: 10.032 -[2023-02-24 14:17:26,723][00980] Num frames 5100... -[2023-02-24 14:17:26,837][00980] Num frames 5200... -[2023-02-24 14:17:26,959][00980] Num frames 5300... -[2023-02-24 14:17:27,078][00980] Num frames 5400... -[2023-02-24 14:17:27,197][00980] Num frames 5500... -[2023-02-24 14:17:27,320][00980] Num frames 5600... -[2023-02-24 14:17:27,438][00980] Num frames 5700... -[2023-02-24 14:17:27,560][00980] Num frames 5800... -[2023-02-24 14:17:27,707][00980] Avg episode rewards: #0: 23.467, true rewards: #0: 9.800 -[2023-02-24 14:17:27,709][00980] Avg episode reward: 23.467, avg true_objective: 9.800 -[2023-02-24 14:17:27,735][00980] Num frames 5900... -[2023-02-24 14:17:27,855][00980] Num frames 6000... -[2023-02-24 14:17:27,971][00980] Num frames 6100... -[2023-02-24 14:17:28,088][00980] Num frames 6200... -[2023-02-24 14:17:28,208][00980] Num frames 6300... -[2023-02-24 14:17:28,326][00980] Num frames 6400... -[2023-02-24 14:17:28,447][00980] Num frames 6500... -[2023-02-24 14:17:28,572][00980] Num frames 6600... -[2023-02-24 14:17:28,748][00980] Avg episode rewards: #0: 22.854, true rewards: #0: 9.569 -[2023-02-24 14:17:28,751][00980] Avg episode reward: 22.854, avg true_objective: 9.569 -[2023-02-24 14:17:28,755][00980] Num frames 6700... -[2023-02-24 14:17:28,880][00980] Num frames 6800... -[2023-02-24 14:17:28,994][00980] Num frames 6900... -[2023-02-24 14:17:29,113][00980] Num frames 7000... -[2023-02-24 14:17:29,237][00980] Num frames 7100... 
-[2023-02-24 14:17:29,355][00980] Num frames 7200... -[2023-02-24 14:17:29,470][00980] Avg episode rewards: #0: 21.427, true rewards: #0: 9.052 -[2023-02-24 14:17:29,473][00980] Avg episode reward: 21.427, avg true_objective: 9.052 -[2023-02-24 14:17:29,547][00980] Num frames 7300... -[2023-02-24 14:17:29,672][00980] Num frames 7400... -[2023-02-24 14:17:29,787][00980] Num frames 7500... -[2023-02-24 14:17:29,913][00980] Num frames 7600... -[2023-02-24 14:17:30,036][00980] Num frames 7700... -[2023-02-24 14:17:30,152][00980] Num frames 7800... -[2023-02-24 14:17:30,268][00980] Num frames 7900... -[2023-02-24 14:17:30,391][00980] Num frames 8000... -[2023-02-24 14:17:30,516][00980] Num frames 8100... -[2023-02-24 14:17:30,644][00980] Num frames 8200... -[2023-02-24 14:17:30,764][00980] Num frames 8300... -[2023-02-24 14:17:30,883][00980] Num frames 8400... -[2023-02-24 14:17:31,008][00980] Num frames 8500... -[2023-02-24 14:17:31,128][00980] Num frames 8600... -[2023-02-24 14:17:31,250][00980] Num frames 8700... -[2023-02-24 14:17:31,399][00980] Avg episode rewards: #0: 23.308, true rewards: #0: 9.752 -[2023-02-24 14:17:31,400][00980] Avg episode reward: 23.308, avg true_objective: 9.752 -[2023-02-24 14:17:31,435][00980] Num frames 8800... -[2023-02-24 14:17:31,553][00980] Num frames 8900... -[2023-02-24 14:17:31,671][00980] Num frames 9000... -[2023-02-24 14:17:31,794][00980] Num frames 9100... -[2023-02-24 14:17:31,913][00980] Num frames 9200... -[2023-02-24 14:17:32,045][00980] Num frames 9300... -[2023-02-24 14:17:32,220][00980] Num frames 9400... -[2023-02-24 14:17:32,391][00980] Num frames 9500... -[2023-02-24 14:17:32,585][00980] Num frames 9600... -[2023-02-24 14:17:32,749][00980] Num frames 9700... -[2023-02-24 14:17:32,909][00980] Num frames 9800... -[2023-02-24 14:17:33,124][00980] Avg episode rewards: #0: 23.597, true rewards: #0: 9.897 -[2023-02-24 14:17:33,127][00980] Avg episode reward: 23.597, avg true_objective: 9.897 -[2023-02-24 14:17:33,136][00980] Num frames 9900... -[2023-02-24 14:18:34,473][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! +[2023-02-24 15:06:57,959][00176] Heartbeat connected on Batcher_0 +[2023-02-24 15:06:57,969][00176] Heartbeat connected on InferenceWorker_p0-w0 +[2023-02-24 15:06:57,984][00176] Heartbeat connected on RolloutWorker_w0 +[2023-02-24 15:06:57,989][00176] Heartbeat connected on RolloutWorker_w1 +[2023-02-24 15:06:57,993][00176] Heartbeat connected on RolloutWorker_w2 +[2023-02-24 15:06:57,999][00176] Heartbeat connected on RolloutWorker_w3 +[2023-02-24 15:06:58,004][00176] Heartbeat connected on RolloutWorker_w4 +[2023-02-24 15:06:58,012][00176] Heartbeat connected on RolloutWorker_w5 +[2023-02-24 15:06:58,016][00176] Heartbeat connected on RolloutWorker_w6 +[2023-02-24 15:06:58,022][00176] Heartbeat connected on RolloutWorker_w7 +[2023-02-24 15:07:00,544][10336] Using optimizer +[2023-02-24 15:07:00,545][10336] No checkpoints found +[2023-02-24 15:07:00,546][10336] Did not load from checkpoint, starting from scratch! +[2023-02-24 15:07:00,546][10336] Initialized policy 0 weights for model version 0 +[2023-02-24 15:07:00,550][10336] LearnerWorker_p0 finished initialization! 
+[2023-02-24 15:07:00,552][00176] Heartbeat connected on LearnerWorker_p0 +[2023-02-24 15:07:00,550][10336] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-02-24 15:07:00,767][10350] RunningMeanStd input shape: (3, 72, 128) +[2023-02-24 15:07:00,769][10350] RunningMeanStd input shape: (1,) +[2023-02-24 15:07:00,781][10350] ConvEncoder: input_channels=3 +[2023-02-24 15:07:00,883][10350] Conv encoder output size: 512 +[2023-02-24 15:07:00,883][10350] Policy head output size: 512 +[2023-02-24 15:07:03,476][00176] Inference worker 0-0 is ready! +[2023-02-24 15:07:03,480][00176] All inference workers are ready! Signal rollout workers to start! +[2023-02-24 15:07:03,581][10356] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,588][10355] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,612][10352] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,626][10357] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,769][10353] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,785][10358] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,790][10351] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:03,827][10354] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:07:05,219][10353] Decorrelating experience for 0 frames... +[2023-02-24 15:07:05,299][10356] Decorrelating experience for 0 frames... +[2023-02-24 15:07:05,309][10355] Decorrelating experience for 0 frames... +[2023-02-24 15:07:05,311][10352] Decorrelating experience for 0 frames... +[2023-02-24 15:07:05,333][10357] Decorrelating experience for 0 frames... +[2023-02-24 15:07:05,447][00176] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-24 15:07:06,475][10356] Decorrelating experience for 32 frames... +[2023-02-24 15:07:06,478][10355] Decorrelating experience for 32 frames... +[2023-02-24 15:07:06,501][10357] Decorrelating experience for 32 frames... +[2023-02-24 15:07:06,824][10353] Decorrelating experience for 32 frames... +[2023-02-24 15:07:06,938][10351] Decorrelating experience for 0 frames... +[2023-02-24 15:07:07,986][10354] Decorrelating experience for 0 frames... +[2023-02-24 15:07:08,031][10352] Decorrelating experience for 32 frames... +[2023-02-24 15:07:08,237][10355] Decorrelating experience for 64 frames... +[2023-02-24 15:07:08,240][10356] Decorrelating experience for 64 frames... +[2023-02-24 15:07:08,427][10358] Decorrelating experience for 0 frames... +[2023-02-24 15:07:08,488][10353] Decorrelating experience for 64 frames... +[2023-02-24 15:07:09,475][10351] Decorrelating experience for 32 frames... +[2023-02-24 15:07:09,638][10354] Decorrelating experience for 32 frames... +[2023-02-24 15:07:09,734][10353] Decorrelating experience for 96 frames... +[2023-02-24 15:07:09,788][10352] Decorrelating experience for 64 frames... +[2023-02-24 15:07:09,827][10357] Decorrelating experience for 64 frames... +[2023-02-24 15:07:09,946][10356] Decorrelating experience for 96 frames... +[2023-02-24 15:07:10,073][10355] Decorrelating experience for 96 frames... +[2023-02-24 15:07:10,447][00176] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. 
Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-24 15:07:10,592][10358] Decorrelating experience for 32 frames... +[2023-02-24 15:07:11,016][10351] Decorrelating experience for 64 frames... +[2023-02-24 15:07:11,260][10352] Decorrelating experience for 96 frames... +[2023-02-24 15:07:11,261][10357] Decorrelating experience for 96 frames... +[2023-02-24 15:07:11,586][10354] Decorrelating experience for 64 frames... +[2023-02-24 15:07:11,713][10358] Decorrelating experience for 64 frames... +[2023-02-24 15:07:12,476][10354] Decorrelating experience for 96 frames... +[2023-02-24 15:07:12,480][10351] Decorrelating experience for 96 frames... +[2023-02-24 15:07:12,651][10358] Decorrelating experience for 96 frames... +[2023-02-24 15:07:15,447][00176] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 3.6. Samples: 36. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-24 15:07:15,450][00176] Avg episode reward: [(0, '1.016')] +[2023-02-24 15:07:16,357][10336] Signal inference workers to stop experience collection... +[2023-02-24 15:07:16,369][10350] InferenceWorker_p0-w0: stopping experience collection +[2023-02-24 15:07:19,005][10336] Signal inference workers to resume experience collection... +[2023-02-24 15:07:19,006][10350] InferenceWorker_p0-w0: resuming experience collection +[2023-02-24 15:07:20,448][00176] Fps is (10 sec: 409.6, 60 sec: 273.0, 300 sec: 273.0). Total num frames: 4096. Throughput: 0: 145.5. Samples: 2182. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-02-24 15:07:20,455][00176] Avg episode reward: [(0, '2.163')] +[2023-02-24 15:07:25,447][00176] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 254.7. Samples: 5094. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) +[2023-02-24 15:07:25,450][00176] Avg episode reward: [(0, '3.569')] +[2023-02-24 15:07:30,447][00176] Fps is (10 sec: 3277.2, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 36864. Throughput: 0: 283.8. Samples: 7096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:07:30,455][00176] Avg episode reward: [(0, '3.989')] +[2023-02-24 15:07:31,130][10350] Updated weights for policy 0, policy_version 10 (0.0021) +[2023-02-24 15:07:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 57344. Throughput: 0: 445.8. Samples: 13374. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:07:35,449][00176] Avg episode reward: [(0, '4.599')] +[2023-02-24 15:07:40,453][00176] Fps is (10 sec: 4093.6, 60 sec: 2223.2, 300 sec: 2223.2). Total num frames: 77824. Throughput: 0: 550.4. Samples: 19268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:07:40,456][00176] Avg episode reward: [(0, '4.412')] +[2023-02-24 15:07:42,090][10350] Updated weights for policy 0, policy_version 20 (0.0012) +[2023-02-24 15:07:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 90112. Throughput: 0: 532.1. Samples: 21286. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:07:45,450][00176] Avg episode reward: [(0, '4.251')] +[2023-02-24 15:07:50,447][00176] Fps is (10 sec: 2868.9, 60 sec: 2366.6, 300 sec: 2366.6). Total num frames: 106496. Throughput: 0: 564.7. Samples: 25410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:07:50,450][00176] Avg episode reward: [(0, '4.164')] +[2023-02-24 15:07:50,459][10336] Saving new best policy, reward=4.164! 
+[2023-02-24 15:07:54,034][10350] Updated weights for policy 0, policy_version 30 (0.0012) +[2023-02-24 15:07:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 2539.5, 300 sec: 2539.5). Total num frames: 126976. Throughput: 0: 705.6. Samples: 31752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:07:55,456][00176] Avg episode reward: [(0, '4.379')] +[2023-02-24 15:07:55,461][10336] Saving new best policy, reward=4.379! +[2023-02-24 15:08:00,452][00176] Fps is (10 sec: 3684.6, 60 sec: 2606.3, 300 sec: 2606.3). Total num frames: 143360. Throughput: 0: 770.1. Samples: 34694. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:08:00,454][00176] Avg episode reward: [(0, '4.446')] +[2023-02-24 15:08:00,465][10336] Saving new best policy, reward=4.446! +[2023-02-24 15:08:05,447][00176] Fps is (10 sec: 2867.1, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 155648. Throughput: 0: 819.6. Samples: 39064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:08:05,452][00176] Avg episode reward: [(0, '4.505')] +[2023-02-24 15:08:05,455][10336] Saving new best policy, reward=4.505! +[2023-02-24 15:08:07,110][10350] Updated weights for policy 0, policy_version 40 (0.0023) +[2023-02-24 15:08:10,447][00176] Fps is (10 sec: 2868.6, 60 sec: 2867.2, 300 sec: 2646.6). Total num frames: 172032. Throughput: 0: 855.6. Samples: 43596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:08:10,449][00176] Avg episode reward: [(0, '4.598')] +[2023-02-24 15:08:10,474][10336] Saving new best policy, reward=4.598! +[2023-02-24 15:08:15,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3276.8, 300 sec: 2808.7). Total num frames: 196608. Throughput: 0: 882.0. Samples: 46786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:08:15,449][00176] Avg episode reward: [(0, '4.561')] +[2023-02-24 15:08:16,869][10350] Updated weights for policy 0, policy_version 50 (0.0024) +[2023-02-24 15:08:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 2894.5). Total num frames: 217088. Throughput: 0: 892.6. Samples: 53542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:08:20,454][00176] Avg episode reward: [(0, '4.421')] +[2023-02-24 15:08:25,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 859.4. Samples: 57936. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:08:25,450][00176] Avg episode reward: [(0, '4.346')] +[2023-02-24 15:08:29,169][10350] Updated weights for policy 0, policy_version 60 (0.0023) +[2023-02-24 15:08:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2939.5). Total num frames: 249856. Throughput: 0: 864.0. Samples: 60168. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:08:30,449][00176] Avg episode reward: [(0, '4.398')] +[2023-02-24 15:08:30,459][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth... +[2023-02-24 15:08:35,448][00176] Fps is (10 sec: 4095.7, 60 sec: 3549.8, 300 sec: 3003.7). Total num frames: 270336. Throughput: 0: 904.3. Samples: 66106. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:08:35,455][00176] Avg episode reward: [(0, '4.458')] +[2023-02-24 15:08:39,092][10350] Updated weights for policy 0, policy_version 70 (0.0012) +[2023-02-24 15:08:40,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3482.0, 300 sec: 3018.1). Total num frames: 286720. Throughput: 0: 903.8. Samples: 72424. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:08:40,452][00176] Avg episode reward: [(0, '4.413')] +[2023-02-24 15:08:45,447][00176] Fps is (10 sec: 3277.1, 60 sec: 3549.9, 300 sec: 3031.0). Total num frames: 303104. Throughput: 0: 887.4. Samples: 74624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:08:45,456][00176] Avg episode reward: [(0, '4.439')] +[2023-02-24 15:08:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3042.7). Total num frames: 319488. Throughput: 0: 891.3. Samples: 79170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:08:50,454][00176] Avg episode reward: [(0, '4.506')] +[2023-02-24 15:08:51,442][10350] Updated weights for policy 0, policy_version 80 (0.0032) +[2023-02-24 15:08:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3127.9). Total num frames: 344064. Throughput: 0: 944.6. Samples: 86104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:08:55,455][00176] Avg episode reward: [(0, '4.324')] +[2023-02-24 15:09:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3686.7, 300 sec: 3169.9). Total num frames: 364544. Throughput: 0: 949.6. Samples: 89520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:09:00,449][00176] Avg episode reward: [(0, '4.114')] +[2023-02-24 15:09:01,288][10350] Updated weights for policy 0, policy_version 90 (0.0015) +[2023-02-24 15:09:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3140.3). Total num frames: 376832. Throughput: 0: 904.7. Samples: 94254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:09:05,450][00176] Avg episode reward: [(0, '4.275')] +[2023-02-24 15:09:10,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3178.5). Total num frames: 397312. Throughput: 0: 917.5. Samples: 99224. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:09:10,449][00176] Avg episode reward: [(0, '4.641')] +[2023-02-24 15:09:10,463][10336] Saving new best policy, reward=4.641! +[2023-02-24 15:09:12,770][10350] Updated weights for policy 0, policy_version 100 (0.0016) +[2023-02-24 15:09:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3213.8). Total num frames: 417792. Throughput: 0: 942.6. Samples: 102584. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:09:15,454][00176] Avg episode reward: [(0, '4.671')] +[2023-02-24 15:09:15,538][10336] Saving new best policy, reward=4.671! +[2023-02-24 15:09:20,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3246.5). Total num frames: 438272. Throughput: 0: 957.2. Samples: 109178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:09:20,454][00176] Avg episode reward: [(0, '4.370')] +[2023-02-24 15:09:24,135][10350] Updated weights for policy 0, policy_version 110 (0.0019) +[2023-02-24 15:09:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3218.3). Total num frames: 450560. Throughput: 0: 912.0. Samples: 113464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:09:25,450][00176] Avg episode reward: [(0, '4.443')] +[2023-02-24 15:09:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3248.6). Total num frames: 471040. Throughput: 0: 908.4. Samples: 115504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:09:30,449][00176] Avg episode reward: [(0, '4.420')] +[2023-02-24 15:09:34,562][10350] Updated weights for policy 0, policy_version 120 (0.0023) +[2023-02-24 15:09:35,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3304.1). Total num frames: 495616. 
Throughput: 0: 954.8. Samples: 122138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:09:35,454][00176] Avg episode reward: [(0, '4.436')] +[2023-02-24 15:09:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3303.2). Total num frames: 512000. Throughput: 0: 942.2. Samples: 128504. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:09:40,451][00176] Avg episode reward: [(0, '4.485')] +[2023-02-24 15:09:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3302.4). Total num frames: 528384. Throughput: 0: 915.5. Samples: 130716. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:09:45,454][00176] Avg episode reward: [(0, '4.540')] +[2023-02-24 15:09:46,450][10350] Updated weights for policy 0, policy_version 130 (0.0016) +[2023-02-24 15:09:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3301.6). Total num frames: 544768. Throughput: 0: 912.1. Samples: 135298. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:09:50,452][00176] Avg episode reward: [(0, '4.527')] +[2023-02-24 15:09:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3349.1). Total num frames: 569344. Throughput: 0: 954.3. Samples: 142166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:09:55,453][00176] Avg episode reward: [(0, '4.596')] +[2023-02-24 15:09:56,072][10350] Updated weights for policy 0, policy_version 140 (0.0022) +[2023-02-24 15:10:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3370.4). Total num frames: 589824. Throughput: 0: 956.1. Samples: 145610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:00,455][00176] Avg episode reward: [(0, '4.370')] +[2023-02-24 15:10:05,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3754.6, 300 sec: 3345.1). Total num frames: 602112. Throughput: 0: 910.5. Samples: 150150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:05,452][00176] Avg episode reward: [(0, '4.304')] +[2023-02-24 15:10:08,631][10350] Updated weights for policy 0, policy_version 150 (0.0021) +[2023-02-24 15:10:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3365.4). Total num frames: 622592. Throughput: 0: 926.5. Samples: 155156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:10,454][00176] Avg episode reward: [(0, '4.333')] +[2023-02-24 15:10:15,447][00176] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3384.6). Total num frames: 643072. Throughput: 0: 958.9. Samples: 158654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:10:15,455][00176] Avg episode reward: [(0, '4.476')] +[2023-02-24 15:10:17,427][10350] Updated weights for policy 0, policy_version 160 (0.0017) +[2023-02-24 15:10:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3402.8). Total num frames: 663552. Throughput: 0: 967.6. Samples: 165678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:20,454][00176] Avg episode reward: [(0, '4.554')] +[2023-02-24 15:10:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3399.7). Total num frames: 679936. Throughput: 0: 919.6. Samples: 169886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:25,456][00176] Avg episode reward: [(0, '4.545')] +[2023-02-24 15:10:30,303][10350] Updated weights for policy 0, policy_version 170 (0.0019) +[2023-02-24 15:10:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3396.7). Total num frames: 696320. Throughput: 0: 917.8. Samples: 172018. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:30,450][00176] Avg episode reward: [(0, '4.549')] +[2023-02-24 15:10:30,460][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000170_696320.pth... +[2023-02-24 15:10:35,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3413.3). Total num frames: 716800. Throughput: 0: 952.6. Samples: 178164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:10:35,450][00176] Avg episode reward: [(0, '4.621')] +[2023-02-24 15:10:40,245][10350] Updated weights for policy 0, policy_version 180 (0.0014) +[2023-02-24 15:10:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3429.2). Total num frames: 737280. Throughput: 0: 938.4. Samples: 184396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:40,454][00176] Avg episode reward: [(0, '4.674')] +[2023-02-24 15:10:40,480][10336] Saving new best policy, reward=4.674! +[2023-02-24 15:10:45,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3407.1). Total num frames: 749568. Throughput: 0: 905.5. Samples: 186358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:10:45,458][00176] Avg episode reward: [(0, '4.739')] +[2023-02-24 15:10:45,466][10336] Saving new best policy, reward=4.739! +[2023-02-24 15:10:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3404.2). Total num frames: 765952. Throughput: 0: 896.9. Samples: 190510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:10:50,454][00176] Avg episode reward: [(0, '4.952')] +[2023-02-24 15:10:50,465][10336] Saving new best policy, reward=4.952! +[2023-02-24 15:10:52,883][10350] Updated weights for policy 0, policy_version 190 (0.0012) +[2023-02-24 15:10:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3419.3). Total num frames: 786432. Throughput: 0: 928.2. Samples: 196924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:10:55,451][00176] Avg episode reward: [(0, '5.015')] +[2023-02-24 15:10:55,455][10336] Saving new best policy, reward=5.015! +[2023-02-24 15:11:00,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3433.7). Total num frames: 806912. Throughput: 0: 923.5. Samples: 200210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:11:00,449][00176] Avg episode reward: [(0, '4.718')] +[2023-02-24 15:11:04,376][10350] Updated weights for policy 0, policy_version 200 (0.0012) +[2023-02-24 15:11:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3413.3). Total num frames: 819200. Throughput: 0: 868.2. Samples: 204746. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:11:05,452][00176] Avg episode reward: [(0, '4.754')] +[2023-02-24 15:11:10,447][00176] Fps is (10 sec: 2048.0, 60 sec: 3413.3, 300 sec: 3377.1). Total num frames: 827392. Throughput: 0: 840.3. Samples: 207700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:11:10,449][00176] Avg episode reward: [(0, '4.629')] +[2023-02-24 15:11:15,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3481.6, 300 sec: 3407.9). Total num frames: 851968. Throughput: 0: 851.6. Samples: 210340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:11:15,448][00176] Avg episode reward: [(0, '4.682')] +[2023-02-24 15:11:17,203][10350] Updated weights for policy 0, policy_version 210 (0.0014) +[2023-02-24 15:11:20,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3413.3, 300 sec: 3405.3). Total num frames: 868352. Throughput: 0: 862.3. Samples: 216968. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:11:20,453][00176] Avg episode reward: [(0, '4.584')] +[2023-02-24 15:11:25,448][00176] Fps is (10 sec: 3276.5, 60 sec: 3413.3, 300 sec: 3402.8). Total num frames: 884736. Throughput: 0: 817.0. Samples: 221162. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:11:25,450][00176] Avg episode reward: [(0, '4.474')] +[2023-02-24 15:11:29,869][10350] Updated weights for policy 0, policy_version 220 (0.0021) +[2023-02-24 15:11:30,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3400.5). Total num frames: 901120. Throughput: 0: 821.2. Samples: 223310. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:11:30,449][00176] Avg episode reward: [(0, '4.473')] +[2023-02-24 15:11:35,447][00176] Fps is (10 sec: 3686.7, 60 sec: 3413.4, 300 sec: 3413.3). Total num frames: 921600. Throughput: 0: 871.6. Samples: 229730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:11:35,454][00176] Avg episode reward: [(0, '4.608')] +[2023-02-24 15:11:40,047][10350] Updated weights for policy 0, policy_version 230 (0.0017) +[2023-02-24 15:11:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3425.7). Total num frames: 942080. Throughput: 0: 863.4. Samples: 235778. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:11:40,450][00176] Avg episode reward: [(0, '4.707')] +[2023-02-24 15:11:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3408.5). Total num frames: 954368. Throughput: 0: 834.9. Samples: 237780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:11:45,454][00176] Avg episode reward: [(0, '4.837')] +[2023-02-24 15:11:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3406.1). Total num frames: 970752. Throughput: 0: 828.1. Samples: 242010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:11:50,454][00176] Avg episode reward: [(0, '5.019')] +[2023-02-24 15:11:50,464][10336] Saving new best policy, reward=5.019! +[2023-02-24 15:11:52,534][10350] Updated weights for policy 0, policy_version 240 (0.0030) +[2023-02-24 15:11:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3432.2). Total num frames: 995328. Throughput: 0: 905.3. Samples: 248440. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:11:55,455][00176] Avg episode reward: [(0, '4.939')] +[2023-02-24 15:12:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1011712. Throughput: 0: 920.1. Samples: 251746. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:12:00,449][00176] Avg episode reward: [(0, '4.901')] +[2023-02-24 15:12:04,341][10350] Updated weights for policy 0, policy_version 250 (0.0020) +[2023-02-24 15:12:05,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 1024000. Throughput: 0: 861.7. Samples: 255746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:12:05,449][00176] Avg episode reward: [(0, '4.869')] +[2023-02-24 15:12:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1040384. Throughput: 0: 868.0. Samples: 260220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:12:10,449][00176] Avg episode reward: [(0, '4.874')] +[2023-02-24 15:12:15,452][00176] Fps is (10 sec: 3684.6, 60 sec: 3481.3, 300 sec: 3582.2). Total num frames: 1060864. Throughput: 0: 893.0. Samples: 263498. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:12:15,454][00176] Avg episode reward: [(0, '4.930')] +[2023-02-24 15:12:15,659][10350] Updated weights for policy 0, policy_version 260 (0.0029) +[2023-02-24 15:12:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1081344. Throughput: 0: 894.8. Samples: 269996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:12:20,453][00176] Avg episode reward: [(0, '4.822')] +[2023-02-24 15:12:25,449][00176] Fps is (10 sec: 3277.8, 60 sec: 3481.5, 300 sec: 3582.2). Total num frames: 1093632. Throughput: 0: 853.1. Samples: 274168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:12:25,452][00176] Avg episode reward: [(0, '4.772')] +[2023-02-24 15:12:28,360][10350] Updated weights for policy 0, policy_version 270 (0.0013) +[2023-02-24 15:12:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1114112. Throughput: 0: 856.4. Samples: 276318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:12:30,450][00176] Avg episode reward: [(0, '4.853')] +[2023-02-24 15:12:30,464][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth... +[2023-02-24 15:12:30,584][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth +[2023-02-24 15:12:35,447][00176] Fps is (10 sec: 4096.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 1134592. Throughput: 0: 908.4. Samples: 282886. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:12:35,455][00176] Avg episode reward: [(0, '4.990')] +[2023-02-24 15:12:37,373][10350] Updated weights for policy 0, policy_version 280 (0.0015) +[2023-02-24 15:12:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1155072. Throughput: 0: 909.7. Samples: 289378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:12:40,449][00176] Avg episode reward: [(0, '5.257')] +[2023-02-24 15:12:40,463][10336] Saving new best policy, reward=5.257! +[2023-02-24 15:12:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 1171456. Throughput: 0: 882.5. Samples: 291460. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:12:45,453][00176] Avg episode reward: [(0, '5.545')] +[2023-02-24 15:12:45,456][10336] Saving new best policy, reward=5.545! +[2023-02-24 15:12:50,179][10350] Updated weights for policy 0, policy_version 290 (0.0016) +[2023-02-24 15:12:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 1187840. Throughput: 0: 886.1. Samples: 295620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:12:50,450][00176] Avg episode reward: [(0, '5.773')] +[2023-02-24 15:12:50,459][10336] Saving new best policy, reward=5.773! +[2023-02-24 15:12:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3610.1). Total num frames: 1208320. Throughput: 0: 937.1. Samples: 302388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:12:55,449][00176] Avg episode reward: [(0, '5.953')] +[2023-02-24 15:12:55,525][10336] Saving new best policy, reward=5.953! +[2023-02-24 15:12:59,402][10350] Updated weights for policy 0, policy_version 300 (0.0015) +[2023-02-24 15:13:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 1228800. Throughput: 0: 938.9. Samples: 305742. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:13:00,455][00176] Avg episode reward: [(0, '6.217')] +[2023-02-24 15:13:00,466][10336] Saving new best policy, reward=6.217! +[2023-02-24 15:13:05,454][00176] Fps is (10 sec: 3683.8, 60 sec: 3686.0, 300 sec: 3637.7). Total num frames: 1245184. Throughput: 0: 906.6. Samples: 310800. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:13:05,460][00176] Avg episode reward: [(0, '6.060')] +[2023-02-24 15:13:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1261568. Throughput: 0: 922.7. Samples: 315686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:13:10,452][00176] Avg episode reward: [(0, '6.427')] +[2023-02-24 15:13:10,465][10336] Saving new best policy, reward=6.427! +[2023-02-24 15:13:11,563][10350] Updated weights for policy 0, policy_version 310 (0.0027) +[2023-02-24 15:13:15,447][00176] Fps is (10 sec: 4098.8, 60 sec: 3755.0, 300 sec: 3623.9). Total num frames: 1286144. Throughput: 0: 951.2. Samples: 319124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:13:15,449][00176] Avg episode reward: [(0, '6.353')] +[2023-02-24 15:13:20,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1306624. Throughput: 0: 961.8. Samples: 326166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:13:20,450][00176] Avg episode reward: [(0, '7.394')] +[2023-02-24 15:13:20,466][10336] Saving new best policy, reward=7.394! +[2023-02-24 15:13:21,189][10350] Updated weights for policy 0, policy_version 320 (0.0018) +[2023-02-24 15:13:25,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3754.8, 300 sec: 3623.9). Total num frames: 1318912. Throughput: 0: 916.3. Samples: 330612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:13:25,455][00176] Avg episode reward: [(0, '7.900')] +[2023-02-24 15:13:25,456][10336] Saving new best policy, reward=7.900! +[2023-02-24 15:13:30,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 1339392. Throughput: 0: 917.9. Samples: 332766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:13:30,454][00176] Avg episode reward: [(0, '8.028')] +[2023-02-24 15:13:30,467][10336] Saving new best policy, reward=8.028! +[2023-02-24 15:13:32,864][10350] Updated weights for policy 0, policy_version 330 (0.0029) +[2023-02-24 15:13:35,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3651.7). Total num frames: 1363968. Throughput: 0: 969.6. Samples: 339254. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:13:35,454][00176] Avg episode reward: [(0, '7.876')] +[2023-02-24 15:13:40,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3754.5, 300 sec: 3651.7). Total num frames: 1380352. Throughput: 0: 963.5. Samples: 345748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:13:40,455][00176] Avg episode reward: [(0, '8.004')] +[2023-02-24 15:13:43,510][10350] Updated weights for policy 0, policy_version 340 (0.0012) +[2023-02-24 15:13:45,450][00176] Fps is (10 sec: 3275.9, 60 sec: 3754.5, 300 sec: 3651.7). Total num frames: 1396736. Throughput: 0: 938.2. Samples: 347964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:13:45,452][00176] Avg episode reward: [(0, '8.604')] +[2023-02-24 15:13:45,454][10336] Saving new best policy, reward=8.604! +[2023-02-24 15:13:50,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 1413120. Throughput: 0: 928.0. Samples: 352554. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:13:50,449][00176] Avg episode reward: [(0, '8.362')] +[2023-02-24 15:13:54,063][10350] Updated weights for policy 0, policy_version 350 (0.0034) +[2023-02-24 15:13:55,448][00176] Fps is (10 sec: 4096.6, 60 sec: 3822.8, 300 sec: 3637.8). Total num frames: 1437696. Throughput: 0: 972.9. Samples: 359466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:13:55,451][00176] Avg episode reward: [(0, '8.281')] +[2023-02-24 15:14:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 1458176. Throughput: 0: 976.1. Samples: 363050. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:14:00,449][00176] Avg episode reward: [(0, '8.271')] +[2023-02-24 15:14:05,007][10350] Updated weights for policy 0, policy_version 360 (0.0012) +[2023-02-24 15:14:05,447][00176] Fps is (10 sec: 3687.0, 60 sec: 3823.4, 300 sec: 3651.7). Total num frames: 1474560. Throughput: 0: 931.3. Samples: 368076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:14:05,451][00176] Avg episode reward: [(0, '8.705')] +[2023-02-24 15:14:05,459][10336] Saving new best policy, reward=8.705! +[2023-02-24 15:14:10,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 1490944. Throughput: 0: 942.0. Samples: 373004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:14:10,452][00176] Avg episode reward: [(0, '9.088')] +[2023-02-24 15:14:10,467][10336] Saving new best policy, reward=9.088! +[2023-02-24 15:14:14,908][10350] Updated weights for policy 0, policy_version 370 (0.0015) +[2023-02-24 15:14:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3651.7). Total num frames: 1515520. Throughput: 0: 972.8. Samples: 376542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:14:15,454][00176] Avg episode reward: [(0, '9.149')] +[2023-02-24 15:14:15,459][10336] Saving new best policy, reward=9.149! +[2023-02-24 15:14:20,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3823.0, 300 sec: 3679.5). Total num frames: 1536000. Throughput: 0: 978.2. Samples: 383274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:14:20,453][00176] Avg episode reward: [(0, '8.977')] +[2023-02-24 15:14:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3651.7). Total num frames: 1548288. Throughput: 0: 930.0. Samples: 387594. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:14:25,450][00176] Avg episode reward: [(0, '8.773')] +[2023-02-24 15:14:27,066][10350] Updated weights for policy 0, policy_version 380 (0.0012) +[2023-02-24 15:14:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 1568768. Throughput: 0: 931.4. Samples: 389874. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:14:30,449][00176] Avg episode reward: [(0, '8.706')] +[2023-02-24 15:14:30,462][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000383_1568768.pth... +[2023-02-24 15:14:30,579][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000170_696320.pth +[2023-02-24 15:14:35,450][00176] Fps is (10 sec: 4094.7, 60 sec: 3754.5, 300 sec: 3651.6). Total num frames: 1589248. Throughput: 0: 975.1. Samples: 396438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:14:35,454][00176] Avg episode reward: [(0, '9.716')] +[2023-02-24 15:14:35,460][10336] Saving new best policy, reward=9.716! 
+[2023-02-24 15:14:36,595][10350] Updated weights for policy 0, policy_version 390 (0.0016) +[2023-02-24 15:14:40,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3822.9, 300 sec: 3665.5). Total num frames: 1609728. Throughput: 0: 967.8. Samples: 403018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:14:40,453][00176] Avg episode reward: [(0, '10.159')] +[2023-02-24 15:14:40,465][10336] Saving new best policy, reward=10.159! +[2023-02-24 15:14:45,447][00176] Fps is (10 sec: 3687.6, 60 sec: 3823.1, 300 sec: 3665.6). Total num frames: 1626112. Throughput: 0: 936.4. Samples: 405190. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:14:45,453][00176] Avg episode reward: [(0, '10.211')] +[2023-02-24 15:14:45,455][10336] Saving new best policy, reward=10.211! +[2023-02-24 15:14:48,948][10350] Updated weights for policy 0, policy_version 400 (0.0024) +[2023-02-24 15:14:50,447][00176] Fps is (10 sec: 3277.5, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 1642496. Throughput: 0: 920.5. Samples: 409498. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:14:50,459][00176] Avg episode reward: [(0, '9.854')] +[2023-02-24 15:14:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3651.7). Total num frames: 1667072. Throughput: 0: 966.9. Samples: 416514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:14:55,451][00176] Avg episode reward: [(0, '9.448')] +[2023-02-24 15:14:57,918][10350] Updated weights for policy 0, policy_version 410 (0.0013) +[2023-02-24 15:15:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 1687552. Throughput: 0: 967.4. Samples: 420074. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:15:00,452][00176] Avg episode reward: [(0, '8.943')] +[2023-02-24 15:15:05,448][00176] Fps is (10 sec: 3276.2, 60 sec: 3754.6, 300 sec: 3651.7). Total num frames: 1699840. Throughput: 0: 927.3. Samples: 425006. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:15:05,459][00176] Avg episode reward: [(0, '9.078')] +[2023-02-24 15:15:10,356][10350] Updated weights for policy 0, policy_version 420 (0.0041) +[2023-02-24 15:15:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3651.7). Total num frames: 1720320. Throughput: 0: 938.0. Samples: 429806. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:15:10,450][00176] Avg episode reward: [(0, '9.871')] +[2023-02-24 15:15:15,447][00176] Fps is (10 sec: 4096.7, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1740800. Throughput: 0: 962.9. Samples: 433206. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:15:15,453][00176] Avg episode reward: [(0, '11.325')] +[2023-02-24 15:15:15,458][10336] Saving new best policy, reward=11.325! +[2023-02-24 15:15:19,769][10350] Updated weights for policy 0, policy_version 430 (0.0015) +[2023-02-24 15:15:20,452][00176] Fps is (10 sec: 4094.0, 60 sec: 3754.4, 300 sec: 3665.5). Total num frames: 1761280. Throughput: 0: 963.7. Samples: 439804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:15:20,459][00176] Avg episode reward: [(0, '12.014')] +[2023-02-24 15:15:20,472][10336] Saving new best policy, reward=12.014! +[2023-02-24 15:15:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1773568. Throughput: 0: 911.3. Samples: 444024. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:15:25,455][00176] Avg episode reward: [(0, '11.809')] +[2023-02-24 15:15:30,447][00176] Fps is (10 sec: 2868.5, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 1789952. Throughput: 0: 909.6. Samples: 446122. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:15:30,450][00176] Avg episode reward: [(0, '11.452')] +[2023-02-24 15:15:32,252][10350] Updated weights for policy 0, policy_version 440 (0.0013) +[2023-02-24 15:15:35,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1814528. Throughput: 0: 954.8. Samples: 452468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:15:35,451][00176] Avg episode reward: [(0, '11.161')] +[2023-02-24 15:15:40,447][00176] Fps is (10 sec: 4096.2, 60 sec: 3686.5, 300 sec: 3665.6). Total num frames: 1830912. Throughput: 0: 937.6. Samples: 458708. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:15:40,457][00176] Avg episode reward: [(0, '12.067')] +[2023-02-24 15:15:40,473][10336] Saving new best policy, reward=12.067! +[2023-02-24 15:15:43,724][10350] Updated weights for policy 0, policy_version 450 (0.0018) +[2023-02-24 15:15:45,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 1847296. Throughput: 0: 902.6. Samples: 460690. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:15:45,453][00176] Avg episode reward: [(0, '13.182')] +[2023-02-24 15:15:45,457][10336] Saving new best policy, reward=13.182! +[2023-02-24 15:15:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 1863680. Throughput: 0: 882.6. Samples: 464722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:15:50,449][00176] Avg episode reward: [(0, '13.703')] +[2023-02-24 15:15:50,460][10336] Saving new best policy, reward=13.703! +[2023-02-24 15:15:54,841][10350] Updated weights for policy 0, policy_version 460 (0.0016) +[2023-02-24 15:15:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 1884160. Throughput: 0: 921.7. Samples: 471282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:15:55,454][00176] Avg episode reward: [(0, '12.973')] +[2023-02-24 15:16:00,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1904640. Throughput: 0: 919.2. Samples: 474570. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:16:00,450][00176] Avg episode reward: [(0, '12.507')] +[2023-02-24 15:16:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3693.3). Total num frames: 1916928. Throughput: 0: 878.2. Samples: 479320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:16:05,452][00176] Avg episode reward: [(0, '12.229')] +[2023-02-24 15:16:07,345][10350] Updated weights for policy 0, policy_version 470 (0.0020) +[2023-02-24 15:16:10,451][00176] Fps is (10 sec: 2866.2, 60 sec: 3549.6, 300 sec: 3665.5). Total num frames: 1933312. Throughput: 0: 884.9. Samples: 483846. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:16:10,453][00176] Avg episode reward: [(0, '12.371')] +[2023-02-24 15:16:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1957888. Throughput: 0: 909.4. Samples: 487046. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:16:15,449][00176] Avg episode reward: [(0, '13.967')] +[2023-02-24 15:16:15,452][10336] Saving new best policy, reward=13.967! 
+[2023-02-24 15:16:17,140][10350] Updated weights for policy 0, policy_version 480 (0.0024) +[2023-02-24 15:16:20,447][00176] Fps is (10 sec: 4097.5, 60 sec: 3550.1, 300 sec: 3693.4). Total num frames: 1974272. Throughput: 0: 916.2. Samples: 493696. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:16:20,451][00176] Avg episode reward: [(0, '13.896')] +[2023-02-24 15:16:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1990656. Throughput: 0: 874.5. Samples: 498060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:16:25,449][00176] Avg episode reward: [(0, '14.650')] +[2023-02-24 15:16:25,462][10336] Saving new best policy, reward=14.650! +[2023-02-24 15:16:30,027][10350] Updated weights for policy 0, policy_version 490 (0.0015) +[2023-02-24 15:16:30,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3618.2, 300 sec: 3679.5). Total num frames: 2007040. Throughput: 0: 877.2. Samples: 500162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:16:30,454][00176] Avg episode reward: [(0, '15.207')] +[2023-02-24 15:16:30,467][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth... +[2023-02-24 15:16:30,581][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth +[2023-02-24 15:16:30,590][10336] Saving new best policy, reward=15.207! +[2023-02-24 15:16:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3550.0, 300 sec: 3679.5). Total num frames: 2027520. Throughput: 0: 920.9. Samples: 506164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:16:35,455][00176] Avg episode reward: [(0, '15.017')] +[2023-02-24 15:16:39,631][10350] Updated weights for policy 0, policy_version 500 (0.0024) +[2023-02-24 15:16:40,449][00176] Fps is (10 sec: 4095.1, 60 sec: 3618.0, 300 sec: 3707.2). Total num frames: 2048000. Throughput: 0: 918.7. Samples: 512626. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:16:40,454][00176] Avg episode reward: [(0, '15.067')] +[2023-02-24 15:16:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 2064384. Throughput: 0: 889.3. Samples: 514586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:16:45,452][00176] Avg episode reward: [(0, '14.768')] +[2023-02-24 15:16:50,447][00176] Fps is (10 sec: 3277.5, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2080768. Throughput: 0: 879.2. Samples: 518882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:16:50,450][00176] Avg episode reward: [(0, '13.991')] +[2023-02-24 15:16:52,124][10350] Updated weights for policy 0, policy_version 510 (0.0021) +[2023-02-24 15:16:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2101248. Throughput: 0: 931.1. Samples: 525740. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:16:55,449][00176] Avg episode reward: [(0, '14.461')] +[2023-02-24 15:17:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3735.0). Total num frames: 2125824. Throughput: 0: 938.5. Samples: 529278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:17:00,449][00176] Avg episode reward: [(0, '15.694')] +[2023-02-24 15:17:00,460][10336] Saving new best policy, reward=15.694! +[2023-02-24 15:17:01,803][10350] Updated weights for policy 0, policy_version 520 (0.0022) +[2023-02-24 15:17:05,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2138112. Throughput: 0: 900.2. 
Samples: 534204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:17:05,449][00176] Avg episode reward: [(0, '15.333')] +[2023-02-24 15:17:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.6, 300 sec: 3707.3). Total num frames: 2154496. Throughput: 0: 902.0. Samples: 538652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:17:10,455][00176] Avg episode reward: [(0, '15.980')] +[2023-02-24 15:17:10,470][10336] Saving new best policy, reward=15.980! +[2023-02-24 15:17:13,603][10350] Updated weights for policy 0, policy_version 530 (0.0017) +[2023-02-24 15:17:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2179072. Throughput: 0: 928.7. Samples: 541954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:17:15,455][00176] Avg episode reward: [(0, '18.023')] +[2023-02-24 15:17:15,458][10336] Saving new best policy, reward=18.023! +[2023-02-24 15:17:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2199552. Throughput: 0: 947.1. Samples: 548782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:17:20,449][00176] Avg episode reward: [(0, '18.002')] +[2023-02-24 15:17:24,569][10350] Updated weights for policy 0, policy_version 540 (0.0016) +[2023-02-24 15:17:25,453][00176] Fps is (10 sec: 3274.9, 60 sec: 3686.0, 300 sec: 3721.0). Total num frames: 2211840. Throughput: 0: 905.0. Samples: 553356. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:17:25,455][00176] Avg episode reward: [(0, '17.193')] +[2023-02-24 15:17:30,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2228224. Throughput: 0: 908.0. Samples: 555446. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:17:30,450][00176] Avg episode reward: [(0, '17.293')] +[2023-02-24 15:17:35,447][00176] Fps is (10 sec: 3688.6, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2248704. Throughput: 0: 946.9. Samples: 561492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:17:35,450][00176] Avg episode reward: [(0, '17.517')] +[2023-02-24 15:17:35,568][10350] Updated weights for policy 0, policy_version 550 (0.0014) +[2023-02-24 15:17:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3721.1). Total num frames: 2269184. Throughput: 0: 939.4. Samples: 568014. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:17:40,453][00176] Avg episode reward: [(0, '17.162')] +[2023-02-24 15:17:45,449][00176] Fps is (10 sec: 3685.7, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 2285568. Throughput: 0: 907.9. Samples: 570136. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:17:45,454][00176] Avg episode reward: [(0, '17.778')] +[2023-02-24 15:17:47,917][10350] Updated weights for policy 0, policy_version 560 (0.0012) +[2023-02-24 15:17:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2301952. Throughput: 0: 897.7. Samples: 574602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:17:50,449][00176] Avg episode reward: [(0, '17.625')] +[2023-02-24 15:17:55,447][00176] Fps is (10 sec: 4096.8, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2326528. Throughput: 0: 946.0. Samples: 581222. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:17:55,449][00176] Avg episode reward: [(0, '17.829')] +[2023-02-24 15:17:57,205][10350] Updated weights for policy 0, policy_version 570 (0.0012) +[2023-02-24 15:18:00,460][00176] Fps is (10 sec: 4499.6, 60 sec: 3685.6, 300 sec: 3734.9). Total num frames: 2347008. Throughput: 0: 946.0. Samples: 584538. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:18:00,462][00176] Avg episode reward: [(0, '16.989')] +[2023-02-24 15:18:05,449][00176] Fps is (10 sec: 3276.2, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 2359296. Throughput: 0: 907.7. Samples: 589632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:18:05,454][00176] Avg episode reward: [(0, '15.920')] +[2023-02-24 15:18:09,706][10350] Updated weights for policy 0, policy_version 580 (0.0023) +[2023-02-24 15:18:10,447][00176] Fps is (10 sec: 2870.9, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2375680. Throughput: 0: 908.0. Samples: 594212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:18:10,455][00176] Avg episode reward: [(0, '15.754')] +[2023-02-24 15:18:15,447][00176] Fps is (10 sec: 4096.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2400256. Throughput: 0: 933.7. Samples: 597464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:18:15,449][00176] Avg episode reward: [(0, '16.175')] +[2023-02-24 15:18:18,970][10350] Updated weights for policy 0, policy_version 590 (0.0031) +[2023-02-24 15:18:20,449][00176] Fps is (10 sec: 4504.8, 60 sec: 3686.3, 300 sec: 3735.0). Total num frames: 2420736. Throughput: 0: 950.2. Samples: 604252. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:18:20,456][00176] Avg episode reward: [(0, '16.088')] +[2023-02-24 15:18:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.8, 300 sec: 3707.2). Total num frames: 2433024. Throughput: 0: 905.2. Samples: 608748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:18:25,451][00176] Avg episode reward: [(0, '15.463')] +[2023-02-24 15:18:30,447][00176] Fps is (10 sec: 2867.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2449408. Throughput: 0: 905.4. Samples: 610876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:18:30,454][00176] Avg episode reward: [(0, '16.494')] +[2023-02-24 15:18:30,527][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000599_2453504.pth... +[2023-02-24 15:18:30,651][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000383_1568768.pth +[2023-02-24 15:18:31,430][10350] Updated weights for policy 0, policy_version 600 (0.0015) +[2023-02-24 15:18:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.3). Total num frames: 2473984. Throughput: 0: 949.1. Samples: 617310. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:18:35,449][00176] Avg episode reward: [(0, '15.955')] +[2023-02-24 15:18:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2494464. Throughput: 0: 953.6. Samples: 624132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:18:40,451][00176] Avg episode reward: [(0, '16.795')] +[2023-02-24 15:18:40,943][10350] Updated weights for policy 0, policy_version 610 (0.0012) +[2023-02-24 15:18:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3721.1). Total num frames: 2510848. Throughput: 0: 928.9. Samples: 626326. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:18:45,454][00176] Avg episode reward: [(0, '17.072')] +[2023-02-24 15:18:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3693.4). Total num frames: 2527232. Throughput: 0: 912.1. Samples: 630676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:18:50,448][00176] Avg episode reward: [(0, '16.411')] +[2023-02-24 15:18:52,987][10350] Updated weights for policy 0, policy_version 620 (0.0022) +[2023-02-24 15:18:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2547712. Throughput: 0: 955.1. Samples: 637190. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:18:55,449][00176] Avg episode reward: [(0, '16.674')] +[2023-02-24 15:19:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3687.2, 300 sec: 3707.2). Total num frames: 2568192. Throughput: 0: 954.4. Samples: 640414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:19:00,449][00176] Avg episode reward: [(0, '16.671')] +[2023-02-24 15:19:04,331][10350] Updated weights for policy 0, policy_version 630 (0.0029) +[2023-02-24 15:19:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3693.3). Total num frames: 2580480. Throughput: 0: 907.3. Samples: 645080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:19:05,450][00176] Avg episode reward: [(0, '16.340')] +[2023-02-24 15:19:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2596864. Throughput: 0: 903.1. Samples: 649388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:19:10,449][00176] Avg episode reward: [(0, '16.733')] +[2023-02-24 15:19:15,316][10350] Updated weights for policy 0, policy_version 640 (0.0026) +[2023-02-24 15:19:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2621440. Throughput: 0: 928.7. Samples: 652666. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:19:15,454][00176] Avg episode reward: [(0, '17.228')] +[2023-02-24 15:19:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 2641920. Throughput: 0: 933.4. Samples: 659312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:19:20,450][00176] Avg episode reward: [(0, '17.223')] +[2023-02-24 15:19:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2654208. Throughput: 0: 878.1. Samples: 663646. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:19:25,449][00176] Avg episode reward: [(0, '17.123')] +[2023-02-24 15:19:28,134][10350] Updated weights for policy 0, policy_version 650 (0.0035) +[2023-02-24 15:19:30,447][00176] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2666496. Throughput: 0: 872.6. Samples: 665592. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:19:30,449][00176] Avg episode reward: [(0, '17.561')] +[2023-02-24 15:19:35,448][00176] Fps is (10 sec: 3276.2, 60 sec: 3549.8, 300 sec: 3651.7). Total num frames: 2686976. Throughput: 0: 897.0. Samples: 671044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:19:35,451][00176] Avg episode reward: [(0, '17.928')] +[2023-02-24 15:19:38,736][10350] Updated weights for policy 0, policy_version 660 (0.0015) +[2023-02-24 15:19:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2707456. Throughput: 0: 889.2. Samples: 677202. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:19:40,451][00176] Avg episode reward: [(0, '17.629')] +[2023-02-24 15:19:45,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 2719744. Throughput: 0: 859.5. Samples: 679090. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) +[2023-02-24 15:19:45,449][00176] Avg episode reward: [(0, '18.092')] +[2023-02-24 15:19:45,462][10336] Saving new best policy, reward=18.092! +[2023-02-24 15:19:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 2736128. Throughput: 0: 845.4. Samples: 683124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:19:50,454][00176] Avg episode reward: [(0, '18.011')] +[2023-02-24 15:19:52,159][10350] Updated weights for policy 0, policy_version 670 (0.0026) +[2023-02-24 15:19:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 2756608. Throughput: 0: 884.7. Samples: 689198. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:19:55,456][00176] Avg episode reward: [(0, '17.848')] +[2023-02-24 15:20:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 2777088. Throughput: 0: 884.3. Samples: 692458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:00,450][00176] Avg episode reward: [(0, '17.838')] +[2023-02-24 15:20:02,899][10350] Updated weights for policy 0, policy_version 680 (0.0015) +[2023-02-24 15:20:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 2789376. Throughput: 0: 845.6. Samples: 697366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:20:05,455][00176] Avg episode reward: [(0, '18.049')] +[2023-02-24 15:20:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 2805760. Throughput: 0: 844.9. Samples: 701666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:10,456][00176] Avg episode reward: [(0, '17.611')] +[2023-02-24 15:20:14,527][10350] Updated weights for policy 0, policy_version 690 (0.0015) +[2023-02-24 15:20:15,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3610.1). Total num frames: 2826240. Throughput: 0: 873.6. Samples: 704904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:15,451][00176] Avg episode reward: [(0, '19.120')] +[2023-02-24 15:20:15,521][10336] Saving new best policy, reward=19.120! +[2023-02-24 15:20:20,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3413.3, 300 sec: 3637.8). Total num frames: 2846720. Throughput: 0: 893.9. Samples: 711270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:20,454][00176] Avg episode reward: [(0, '18.417')] +[2023-02-24 15:20:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3637.8). Total num frames: 2863104. Throughput: 0: 855.2. Samples: 715684. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:25,455][00176] Avg episode reward: [(0, '18.602')] +[2023-02-24 15:20:26,963][10350] Updated weights for policy 0, policy_version 700 (0.0017) +[2023-02-24 15:20:30,447][00176] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 2875392. Throughput: 0: 858.3. Samples: 717714. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:30,455][00176] Avg episode reward: [(0, '19.921')] +[2023-02-24 15:20:30,468][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000702_2875392.pth... 
+[2023-02-24 15:20:30,598][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth +[2023-02-24 15:20:30,619][10336] Saving new best policy, reward=19.921! +[2023-02-24 15:20:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3610.0). Total num frames: 2895872. Throughput: 0: 892.8. Samples: 723298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:20:35,455][00176] Avg episode reward: [(0, '20.938')] +[2023-02-24 15:20:35,458][10336] Saving new best policy, reward=20.938! +[2023-02-24 15:20:37,668][10350] Updated weights for policy 0, policy_version 710 (0.0021) +[2023-02-24 15:20:40,449][00176] Fps is (10 sec: 4095.3, 60 sec: 3481.5, 300 sec: 3623.9). Total num frames: 2916352. Throughput: 0: 899.1. Samples: 729658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:20:40,460][00176] Avg episode reward: [(0, '20.946')] +[2023-02-24 15:20:40,475][10336] Saving new best policy, reward=20.946! +[2023-02-24 15:20:45,449][00176] Fps is (10 sec: 3276.2, 60 sec: 3481.5, 300 sec: 3610.0). Total num frames: 2928640. Throughput: 0: 870.1. Samples: 731612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:45,458][00176] Avg episode reward: [(0, '20.403')] +[2023-02-24 15:20:50,447][00176] Fps is (10 sec: 2867.7, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 2945024. Throughput: 0: 852.1. Samples: 735712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:20:50,450][00176] Avg episode reward: [(0, '20.740')] +[2023-02-24 15:20:51,174][10350] Updated weights for policy 0, policy_version 720 (0.0030) +[2023-02-24 15:20:55,447][00176] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 2965504. Throughput: 0: 877.9. Samples: 741170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:20:55,456][00176] Avg episode reward: [(0, '19.647')] +[2023-02-24 15:21:00,452][00176] Fps is (10 sec: 3684.6, 60 sec: 3413.1, 300 sec: 3610.0). Total num frames: 2981888. Throughput: 0: 871.7. Samples: 744136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:21:00,454][00176] Avg episode reward: [(0, '18.490')] +[2023-02-24 15:21:02,154][10350] Updated weights for policy 0, policy_version 730 (0.0013) +[2023-02-24 15:21:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3610.1). Total num frames: 2998272. Throughput: 0: 835.7. Samples: 748878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:21:05,454][00176] Avg episode reward: [(0, '18.799')] +[2023-02-24 15:21:10,447][00176] Fps is (10 sec: 2868.6, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3010560. Throughput: 0: 826.5. Samples: 752878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:21:10,448][00176] Avg episode reward: [(0, '19.950')] +[2023-02-24 15:21:14,767][10350] Updated weights for policy 0, policy_version 740 (0.0016) +[2023-02-24 15:21:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3582.3). Total num frames: 3031040. Throughput: 0: 847.5. Samples: 755850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:15,449][00176] Avg episode reward: [(0, '19.984')] +[2023-02-24 15:21:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3596.1). Total num frames: 3051520. Throughput: 0: 864.3. Samples: 762192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:20,456][00176] Avg episode reward: [(0, '20.511')] +[2023-02-24 15:21:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3596.2). 
Total num frames: 3067904. Throughput: 0: 829.2. Samples: 766972. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:25,449][00176] Avg episode reward: [(0, '21.783')] +[2023-02-24 15:21:25,454][10336] Saving new best policy, reward=21.783! +[2023-02-24 15:21:26,827][10350] Updated weights for policy 0, policy_version 750 (0.0013) +[2023-02-24 15:21:30,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3080192. Throughput: 0: 829.1. Samples: 768918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:30,449][00176] Avg episode reward: [(0, '22.311')] +[2023-02-24 15:21:30,460][10336] Saving new best policy, reward=22.311! +[2023-02-24 15:21:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3100672. Throughput: 0: 853.0. Samples: 774096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:35,450][00176] Avg episode reward: [(0, '21.299')] +[2023-02-24 15:21:38,086][10350] Updated weights for policy 0, policy_version 760 (0.0016) +[2023-02-24 15:21:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3582.3). Total num frames: 3121152. Throughput: 0: 873.1. Samples: 780460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:21:40,449][00176] Avg episode reward: [(0, '21.718')] +[2023-02-24 15:21:45,449][00176] Fps is (10 sec: 3276.2, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 3133440. Throughput: 0: 855.9. Samples: 782650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:21:45,455][00176] Avg episode reward: [(0, '21.804')] +[2023-02-24 15:21:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3554.5). Total num frames: 3149824. Throughput: 0: 839.3. Samples: 786648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:21:50,455][00176] Avg episode reward: [(0, '21.212')] +[2023-02-24 15:21:51,372][10350] Updated weights for policy 0, policy_version 770 (0.0043) +[2023-02-24 15:21:55,447][00176] Fps is (10 sec: 3687.1, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 3170304. Throughput: 0: 879.9. Samples: 792472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:21:55,449][00176] Avg episode reward: [(0, '19.755')] +[2023-02-24 15:22:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3481.9, 300 sec: 3568.4). Total num frames: 3190784. Throughput: 0: 886.1. Samples: 795726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:22:00,451][00176] Avg episode reward: [(0, '20.669')] +[2023-02-24 15:22:01,347][10350] Updated weights for policy 0, policy_version 780 (0.0019) +[2023-02-24 15:22:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3554.5). Total num frames: 3203072. Throughput: 0: 858.8. Samples: 800836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:22:05,449][00176] Avg episode reward: [(0, '19.640')] +[2023-02-24 15:22:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3219456. Throughput: 0: 849.0. Samples: 805176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:22:10,449][00176] Avg episode reward: [(0, '18.488')] +[2023-02-24 15:22:13,528][10350] Updated weights for policy 0, policy_version 790 (0.0024) +[2023-02-24 15:22:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 3244032. Throughput: 0: 877.6. Samples: 808408. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:22:15,454][00176] Avg episode reward: [(0, '16.955')] +[2023-02-24 15:22:20,452][00176] Fps is (10 sec: 4503.4, 60 sec: 3549.6, 300 sec: 3568.4). Total num frames: 3264512. Throughput: 0: 914.6. Samples: 815258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:22:20,455][00176] Avg episode reward: [(0, '17.644')] +[2023-02-24 15:22:24,054][10350] Updated weights for policy 0, policy_version 800 (0.0039) +[2023-02-24 15:22:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 3280896. Throughput: 0: 880.9. Samples: 820102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:22:25,458][00176] Avg episode reward: [(0, '17.961')] +[2023-02-24 15:22:30,447][00176] Fps is (10 sec: 2868.6, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 3293184. Throughput: 0: 880.4. Samples: 822266. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:22:30,449][00176] Avg episode reward: [(0, '18.023')] +[2023-02-24 15:22:30,511][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000805_3297280.pth... +[2023-02-24 15:22:30,628][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000599_2453504.pth +[2023-02-24 15:22:35,137][10350] Updated weights for policy 0, policy_version 810 (0.0014) +[2023-02-24 15:22:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3317760. Throughput: 0: 921.5. Samples: 828116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:22:35,452][00176] Avg episode reward: [(0, '19.403')] +[2023-02-24 15:22:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 3338240. Throughput: 0: 945.1. Samples: 835002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:22:40,449][00176] Avg episode reward: [(0, '19.885')] +[2023-02-24 15:22:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3568.4). Total num frames: 3354624. Throughput: 0: 921.6. Samples: 837196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:22:45,449][00176] Avg episode reward: [(0, '17.740')] +[2023-02-24 15:22:46,773][10350] Updated weights for policy 0, policy_version 820 (0.0011) +[2023-02-24 15:22:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3366912. Throughput: 0: 905.3. Samples: 841574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:22:50,449][00176] Avg episode reward: [(0, '18.553')] +[2023-02-24 15:22:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.8). Total num frames: 3391488. Throughput: 0: 949.6. Samples: 847910. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:22:55,449][00176] Avg episode reward: [(0, '17.864')] +[2023-02-24 15:22:56,776][10350] Updated weights for policy 0, policy_version 830 (0.0020) +[2023-02-24 15:23:00,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 3411968. Throughput: 0: 953.4. Samples: 851312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:23:00,449][00176] Avg episode reward: [(0, '18.292')] +[2023-02-24 15:23:05,450][00176] Fps is (10 sec: 3685.3, 60 sec: 3754.5, 300 sec: 3568.3). Total num frames: 3428352. Throughput: 0: 915.5. Samples: 856454. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:23:05,458][00176] Avg episode reward: [(0, '19.240')] +[2023-02-24 15:23:09,420][10350] Updated weights for policy 0, policy_version 840 (0.0031) +[2023-02-24 15:23:10,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3540.6). Total num frames: 3444736. Throughput: 0: 902.2. Samples: 860702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:23:10,452][00176] Avg episode reward: [(0, '19.510')] +[2023-02-24 15:23:15,447][00176] Fps is (10 sec: 3687.6, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3465216. Throughput: 0: 923.9. Samples: 863840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:23:15,452][00176] Avg episode reward: [(0, '19.932')] +[2023-02-24 15:23:18,827][10350] Updated weights for policy 0, policy_version 850 (0.0012) +[2023-02-24 15:23:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.7, 300 sec: 3568.4). Total num frames: 3485696. Throughput: 0: 940.4. Samples: 870432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:23:20,451][00176] Avg episode reward: [(0, '20.480')] +[2023-02-24 15:23:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3497984. Throughput: 0: 892.8. Samples: 875180. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:23:25,456][00176] Avg episode reward: [(0, '20.163')] +[2023-02-24 15:23:30,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3514368. Throughput: 0: 890.5. Samples: 877270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:23:30,455][00176] Avg episode reward: [(0, '19.707')] +[2023-02-24 15:23:31,794][10350] Updated weights for policy 0, policy_version 860 (0.0026) +[2023-02-24 15:23:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3534848. Throughput: 0: 922.2. Samples: 883074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:23:35,455][00176] Avg episode reward: [(0, '21.036')] +[2023-02-24 15:23:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 3555328. Throughput: 0: 931.3. Samples: 889820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:23:40,453][00176] Avg episode reward: [(0, '21.649')] +[2023-02-24 15:23:41,996][10350] Updated weights for policy 0, policy_version 870 (0.0012) +[2023-02-24 15:23:45,450][00176] Fps is (10 sec: 3685.3, 60 sec: 3618.0, 300 sec: 3540.6). Total num frames: 3571712. Throughput: 0: 902.3. Samples: 891918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:23:45,457][00176] Avg episode reward: [(0, '20.640')] +[2023-02-24 15:23:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3588096. Throughput: 0: 884.3. Samples: 896246. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:23:50,452][00176] Avg episode reward: [(0, '21.998')] +[2023-02-24 15:23:53,568][10350] Updated weights for policy 0, policy_version 880 (0.0011) +[2023-02-24 15:23:55,447][00176] Fps is (10 sec: 4097.0, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3612672. Throughput: 0: 932.1. Samples: 902646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:23:55,452][00176] Avg episode reward: [(0, '22.701')] +[2023-02-24 15:23:55,457][10336] Saving new best policy, reward=22.701! +[2023-02-24 15:24:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 3633152. Throughput: 0: 936.6. Samples: 905986. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:24:00,457][00176] Avg episode reward: [(0, '22.623')] +[2023-02-24 15:24:04,771][10350] Updated weights for policy 0, policy_version 890 (0.0014) +[2023-02-24 15:24:05,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3618.3, 300 sec: 3554.5). Total num frames: 3645440. Throughput: 0: 900.8. Samples: 910968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:24:05,450][00176] Avg episode reward: [(0, '22.644')] +[2023-02-24 15:24:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3661824. Throughput: 0: 887.5. Samples: 915116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:24:10,448][00176] Avg episode reward: [(0, '22.341')] +[2023-02-24 15:24:15,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 3682304. Throughput: 0: 911.4. Samples: 918282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:24:15,453][00176] Avg episode reward: [(0, '22.628')] +[2023-02-24 15:24:16,315][10350] Updated weights for policy 0, policy_version 900 (0.0017) +[2023-02-24 15:24:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3702784. Throughput: 0: 925.4. Samples: 924718. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:24:20,452][00176] Avg episode reward: [(0, '24.079')] +[2023-02-24 15:24:20,463][10336] Saving new best policy, reward=24.079! +[2023-02-24 15:24:25,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 3715072. Throughput: 0: 875.2. Samples: 929206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:25,455][00176] Avg episode reward: [(0, '23.331')] +[2023-02-24 15:24:29,247][10350] Updated weights for policy 0, policy_version 910 (0.0022) +[2023-02-24 15:24:30,452][00176] Fps is (10 sec: 2865.8, 60 sec: 3617.8, 300 sec: 3540.6). Total num frames: 3731456. Throughput: 0: 873.3. Samples: 931218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:30,456][00176] Avg episode reward: [(0, '22.480')] +[2023-02-24 15:24:30,473][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000911_3731456.pth... +[2023-02-24 15:24:30,590][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000702_2875392.pth +[2023-02-24 15:24:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 3751936. Throughput: 0: 897.9. Samples: 936650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:35,456][00176] Avg episode reward: [(0, '21.606')] +[2023-02-24 15:24:39,101][10350] Updated weights for policy 0, policy_version 920 (0.0018) +[2023-02-24 15:24:40,447][00176] Fps is (10 sec: 4098.0, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 3772416. Throughput: 0: 899.8. Samples: 943136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:40,452][00176] Avg episode reward: [(0, '20.670')] +[2023-02-24 15:24:45,449][00176] Fps is (10 sec: 3276.1, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 3784704. Throughput: 0: 871.9. Samples: 945222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:45,457][00176] Avg episode reward: [(0, '20.487')] +[2023-02-24 15:24:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 3801088. Throughput: 0: 849.9. Samples: 949212. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:50,449][00176] Avg episode reward: [(0, '19.980')] +[2023-02-24 15:24:52,355][10350] Updated weights for policy 0, policy_version 930 (0.0030) +[2023-02-24 15:24:55,447][00176] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3821568. Throughput: 0: 888.1. Samples: 955082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:24:55,450][00176] Avg episode reward: [(0, '20.078')] +[2023-02-24 15:25:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 3842048. Throughput: 0: 891.5. Samples: 958398. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:00,454][00176] Avg episode reward: [(0, '21.629')] +[2023-02-24 15:25:03,017][10350] Updated weights for policy 0, policy_version 940 (0.0038) +[2023-02-24 15:25:05,450][00176] Fps is (10 sec: 3275.9, 60 sec: 3481.4, 300 sec: 3554.5). Total num frames: 3854336. Throughput: 0: 861.9. Samples: 963504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:25:05,455][00176] Avg episode reward: [(0, '21.827')] +[2023-02-24 15:25:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3870720. Throughput: 0: 855.7. Samples: 967714. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:25:10,453][00176] Avg episode reward: [(0, '21.789')] +[2023-02-24 15:25:14,596][10350] Updated weights for policy 0, policy_version 950 (0.0017) +[2023-02-24 15:25:15,447][00176] Fps is (10 sec: 3687.5, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 3891200. Throughput: 0: 882.7. Samples: 970934. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:15,452][00176] Avg episode reward: [(0, '21.395')] +[2023-02-24 15:25:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 3911680. Throughput: 0: 912.4. Samples: 977708. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:25:20,449][00176] Avg episode reward: [(0, '22.035')] +[2023-02-24 15:25:25,447][00176] Fps is (10 sec: 3686.2, 60 sec: 3549.8, 300 sec: 3568.4). Total num frames: 3928064. Throughput: 0: 871.9. Samples: 982372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:25:25,455][00176] Avg episode reward: [(0, '21.789')] +[2023-02-24 15:25:26,319][10350] Updated weights for policy 0, policy_version 960 (0.0013) +[2023-02-24 15:25:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3550.2, 300 sec: 3554.5). Total num frames: 3944448. Throughput: 0: 872.8. Samples: 984496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:25:30,449][00176] Avg episode reward: [(0, '21.186')] +[2023-02-24 15:25:35,447][00176] Fps is (10 sec: 3686.6, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 3964928. Throughput: 0: 916.4. Samples: 990452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:35,449][00176] Avg episode reward: [(0, '20.673')] +[2023-02-24 15:25:36,496][10350] Updated weights for policy 0, policy_version 970 (0.0027) +[2023-02-24 15:25:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3989504. Throughput: 0: 939.3. Samples: 997348. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:40,453][00176] Avg episode reward: [(0, '21.109')] +[2023-02-24 15:25:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3582.3). Total num frames: 4001792. Throughput: 0: 914.0. Samples: 999528. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:45,448][00176] Avg episode reward: [(0, '20.664')] +[2023-02-24 15:25:48,709][10350] Updated weights for policy 0, policy_version 980 (0.0019) +[2023-02-24 15:25:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 4018176. Throughput: 0: 899.2. Samples: 1003964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:25:50,450][00176] Avg episode reward: [(0, '20.195')] +[2023-02-24 15:25:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 4042752. Throughput: 0: 951.5. Samples: 1010530. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:25:55,453][00176] Avg episode reward: [(0, '19.917')] +[2023-02-24 15:25:57,690][10350] Updated weights for policy 0, policy_version 990 (0.0015) +[2023-02-24 15:26:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 4063232. Throughput: 0: 959.2. Samples: 1014096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:26:00,452][00176] Avg episode reward: [(0, '20.575')] +[2023-02-24 15:26:05,448][00176] Fps is (10 sec: 3685.9, 60 sec: 3754.8, 300 sec: 3623.9). Total num frames: 4079616. Throughput: 0: 926.0. Samples: 1019380. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:26:05,456][00176] Avg episode reward: [(0, '20.724')] +[2023-02-24 15:26:10,094][10350] Updated weights for policy 0, policy_version 1000 (0.0029) +[2023-02-24 15:26:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3610.0). Total num frames: 4096000. Throughput: 0: 924.1. Samples: 1023958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:10,449][00176] Avg episode reward: [(0, '20.052')] +[2023-02-24 15:26:15,447][00176] Fps is (10 sec: 4096.6, 60 sec: 3822.9, 300 sec: 3623.9). Total num frames: 4120576. Throughput: 0: 956.4. Samples: 1027532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:15,450][00176] Avg episode reward: [(0, '19.931')] +[2023-02-24 15:26:18,545][10350] Updated weights for policy 0, policy_version 1010 (0.0015) +[2023-02-24 15:26:20,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 4141056. Throughput: 0: 983.7. Samples: 1034720. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:26:20,455][00176] Avg episode reward: [(0, '20.749')] +[2023-02-24 15:26:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3651.7). Total num frames: 4157440. Throughput: 0: 937.8. Samples: 1039550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:26:25,449][00176] Avg episode reward: [(0, '20.364')] +[2023-02-24 15:26:30,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3637.8). Total num frames: 4173824. Throughput: 0: 937.4. Samples: 1041712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:26:30,449][00176] Avg episode reward: [(0, '19.988')] +[2023-02-24 15:26:30,461][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001019_4173824.pth... +[2023-02-24 15:26:30,583][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000805_3297280.pth +[2023-02-24 15:26:30,789][10350] Updated weights for policy 0, policy_version 1020 (0.0012) +[2023-02-24 15:26:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3651.7). Total num frames: 4198400. Throughput: 0: 982.1. Samples: 1048158. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:35,452][00176] Avg episode reward: [(0, '20.343')] +[2023-02-24 15:26:39,763][10350] Updated weights for policy 0, policy_version 1030 (0.0015) +[2023-02-24 15:26:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 4218880. Throughput: 0: 990.3. Samples: 1055094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:40,452][00176] Avg episode reward: [(0, '18.807')] +[2023-02-24 15:26:45,456][00176] Fps is (10 sec: 3273.9, 60 sec: 3822.4, 300 sec: 3665.5). Total num frames: 4231168. Throughput: 0: 959.4. Samples: 1057276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:45,462][00176] Avg episode reward: [(0, '18.852')] +[2023-02-24 15:26:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3665.6). Total num frames: 4251648. Throughput: 0: 946.3. Samples: 1061964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:26:50,449][00176] Avg episode reward: [(0, '19.625')] +[2023-02-24 15:26:51,522][10350] Updated weights for policy 0, policy_version 1040 (0.0027) +[2023-02-24 15:26:55,447][00176] Fps is (10 sec: 4509.6, 60 sec: 3891.2, 300 sec: 3679.5). Total num frames: 4276224. Throughput: 0: 1003.0. Samples: 1069092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:26:55,449][00176] Avg episode reward: [(0, '21.409')] +[2023-02-24 15:27:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 4296704. Throughput: 0: 1002.1. Samples: 1072628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:00,451][00176] Avg episode reward: [(0, '21.761')] +[2023-02-24 15:27:01,544][10350] Updated weights for policy 0, policy_version 1050 (0.0011) +[2023-02-24 15:27:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3693.3). Total num frames: 4308992. Throughput: 0: 944.8. Samples: 1077234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:05,454][00176] Avg episode reward: [(0, '23.349')] +[2023-02-24 15:27:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3693.3). Total num frames: 4333568. Throughput: 0: 964.9. Samples: 1082972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:10,449][00176] Avg episode reward: [(0, '25.616')] +[2023-02-24 15:27:10,464][10336] Saving new best policy, reward=25.616! +[2023-02-24 15:27:12,037][10350] Updated weights for policy 0, policy_version 1060 (0.0023) +[2023-02-24 15:27:15,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3693.4). Total num frames: 4354048. Throughput: 0: 993.8. Samples: 1086432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:15,450][00176] Avg episode reward: [(0, '26.516')] +[2023-02-24 15:27:15,452][10336] Saving new best policy, reward=26.516! +[2023-02-24 15:27:20,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3693.3). Total num frames: 4370432. Throughput: 0: 985.9. Samples: 1092524. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:27:20,449][00176] Avg episode reward: [(0, '26.725')] +[2023-02-24 15:27:20,458][10336] Saving new best policy, reward=26.725! +[2023-02-24 15:27:23,994][10350] Updated weights for policy 0, policy_version 1070 (0.0015) +[2023-02-24 15:27:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 4386816. Throughput: 0: 923.0. Samples: 1096630. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:27:25,453][00176] Avg episode reward: [(0, '25.944')] +[2023-02-24 15:27:30,448][00176] Fps is (10 sec: 3686.1, 60 sec: 3891.1, 300 sec: 3693.3). Total num frames: 4407296. Throughput: 0: 938.0. Samples: 1099480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:30,450][00176] Avg episode reward: [(0, '23.575')] +[2023-02-24 15:27:33,534][10350] Updated weights for policy 0, policy_version 1080 (0.0020) +[2023-02-24 15:27:35,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 4431872. Throughput: 0: 988.7. Samples: 1106456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:35,452][00176] Avg episode reward: [(0, '22.781')] +[2023-02-24 15:27:40,447][00176] Fps is (10 sec: 4096.4, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 4448256. Throughput: 0: 950.6. Samples: 1111870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:27:40,451][00176] Avg episode reward: [(0, '22.126')] +[2023-02-24 15:27:45,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3823.5, 300 sec: 3707.2). Total num frames: 4460544. Throughput: 0: 922.0. Samples: 1114116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:27:45,449][00176] Avg episode reward: [(0, '22.191')] +[2023-02-24 15:27:45,600][10350] Updated weights for policy 0, policy_version 1090 (0.0025) +[2023-02-24 15:27:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 4485120. Throughput: 0: 952.0. Samples: 1120074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:27:50,449][00176] Avg episode reward: [(0, '22.069')] +[2023-02-24 15:27:54,446][10350] Updated weights for policy 0, policy_version 1100 (0.0012) +[2023-02-24 15:27:55,448][00176] Fps is (10 sec: 4504.8, 60 sec: 3822.8, 300 sec: 3707.2). Total num frames: 4505600. Throughput: 0: 979.3. Samples: 1127042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:27:55,453][00176] Avg episode reward: [(0, '22.633')] +[2023-02-24 15:28:00,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.3). Total num frames: 4521984. Throughput: 0: 953.7. Samples: 1129350. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:00,449][00176] Avg episode reward: [(0, '22.652')] +[2023-02-24 15:28:05,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 4538368. Throughput: 0: 913.9. Samples: 1133648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:05,453][00176] Avg episode reward: [(0, '22.816')] +[2023-02-24 15:28:06,587][10350] Updated weights for policy 0, policy_version 1110 (0.0024) +[2023-02-24 15:28:10,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 4562944. Throughput: 0: 978.2. Samples: 1140650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:28:10,452][00176] Avg episode reward: [(0, '22.730')] +[2023-02-24 15:28:15,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 4583424. Throughput: 0: 992.5. Samples: 1144140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:15,450][00176] Avg episode reward: [(0, '22.103')] +[2023-02-24 15:28:16,444][10350] Updated weights for policy 0, policy_version 1120 (0.0014) +[2023-02-24 15:28:20,447][00176] Fps is (10 sec: 3276.6, 60 sec: 3754.6, 300 sec: 3721.1). Total num frames: 4595712. Throughput: 0: 941.5. Samples: 1148826. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:28:20,450][00176] Avg episode reward: [(0, '23.531')] +[2023-02-24 15:28:25,448][00176] Fps is (10 sec: 3276.5, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 4616192. Throughput: 0: 940.6. Samples: 1154196. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:28:25,454][00176] Avg episode reward: [(0, '23.645')] +[2023-02-24 15:28:27,485][10350] Updated weights for policy 0, policy_version 1130 (0.0020) +[2023-02-24 15:28:30,450][00176] Fps is (10 sec: 4504.2, 60 sec: 3891.0, 300 sec: 3748.8). Total num frames: 4640768. Throughput: 0: 969.6. Samples: 1157750. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:28:30,457][00176] Avg episode reward: [(0, '22.710')] +[2023-02-24 15:28:30,470][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001133_4640768.pth... +[2023-02-24 15:28:30,608][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000911_3731456.pth +[2023-02-24 15:28:35,447][00176] Fps is (10 sec: 4096.4, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 4657152. Throughput: 0: 977.2. Samples: 1164046. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:35,452][00176] Avg episode reward: [(0, '23.146')] +[2023-02-24 15:28:39,077][10350] Updated weights for policy 0, policy_version 1140 (0.0013) +[2023-02-24 15:28:40,448][00176] Fps is (10 sec: 2868.0, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 4669440. Throughput: 0: 914.4. Samples: 1168190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:40,451][00176] Avg episode reward: [(0, '23.144')] +[2023-02-24 15:28:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 4694016. Throughput: 0: 921.6. Samples: 1170824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:45,449][00176] Avg episode reward: [(0, '21.710')] +[2023-02-24 15:28:49,331][10350] Updated weights for policy 0, policy_version 1150 (0.0015) +[2023-02-24 15:28:50,447][00176] Fps is (10 sec: 4506.0, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 4714496. Throughput: 0: 969.4. Samples: 1177272. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:28:50,449][00176] Avg episode reward: [(0, '22.979')] +[2023-02-24 15:28:55,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 4726784. Throughput: 0: 922.8. Samples: 1182174. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:28:55,453][00176] Avg episode reward: [(0, '23.571')] +[2023-02-24 15:29:00,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 4743168. Throughput: 0: 888.7. Samples: 1184132. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:29:00,449][00176] Avg episode reward: [(0, '23.241')] +[2023-02-24 15:29:02,263][10350] Updated weights for policy 0, policy_version 1160 (0.0050) +[2023-02-24 15:29:05,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 4763648. Throughput: 0: 906.3. Samples: 1189610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:29:05,449][00176] Avg episode reward: [(0, '23.547')] +[2023-02-24 15:29:10,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3735.0). Total num frames: 4784128. Throughput: 0: 930.5. Samples: 1196066. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:29:10,449][00176] Avg episode reward: [(0, '23.202')] +[2023-02-24 15:29:13,015][10350] Updated weights for policy 0, policy_version 1170 (0.0017) +[2023-02-24 15:29:15,452][00176] Fps is (10 sec: 3275.2, 60 sec: 3549.6, 300 sec: 3707.2). Total num frames: 4796416. Throughput: 0: 901.4. Samples: 1198316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:29:15,454][00176] Avg episode reward: [(0, '24.870')] +[2023-02-24 15:29:20,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3618.2, 300 sec: 3721.1). Total num frames: 4812800. Throughput: 0: 856.6. Samples: 1202594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:29:20,457][00176] Avg episode reward: [(0, '24.461')] +[2023-02-24 15:29:24,213][10350] Updated weights for policy 0, policy_version 1180 (0.0030) +[2023-02-24 15:29:25,447][00176] Fps is (10 sec: 4098.0, 60 sec: 3686.5, 300 sec: 3748.9). Total num frames: 4837376. Throughput: 0: 909.4. Samples: 1209110. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:29:25,454][00176] Avg episode reward: [(0, '24.278')] +[2023-02-24 15:29:30,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3550.1, 300 sec: 3735.0). Total num frames: 4853760. Throughput: 0: 924.1. Samples: 1212410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:29:30,449][00176] Avg episode reward: [(0, '23.977')] +[2023-02-24 15:29:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3721.1). Total num frames: 4870144. Throughput: 0: 886.6. Samples: 1217170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:29:35,450][00176] Avg episode reward: [(0, '22.884')] +[2023-02-24 15:29:36,033][10350] Updated weights for policy 0, policy_version 1190 (0.0027) +[2023-02-24 15:29:40,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3748.9). Total num frames: 4890624. Throughput: 0: 896.0. Samples: 1222496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:29:40,449][00176] Avg episode reward: [(0, '23.612')] +[2023-02-24 15:29:45,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3762.8). Total num frames: 4911104. Throughput: 0: 929.0. Samples: 1225936. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:29:45,450][00176] Avg episode reward: [(0, '22.943')] +[2023-02-24 15:29:45,482][10350] Updated weights for policy 0, policy_version 1200 (0.0012) +[2023-02-24 15:29:50,454][00176] Fps is (10 sec: 4093.1, 60 sec: 3617.7, 300 sec: 3762.7). Total num frames: 4931584. Throughput: 0: 955.5. Samples: 1232612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:29:50,457][00176] Avg episode reward: [(0, '23.677')] +[2023-02-24 15:29:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 4947968. Throughput: 0: 909.4. Samples: 1236988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:29:55,456][00176] Avg episode reward: [(0, '22.802')] +[2023-02-24 15:29:57,640][10350] Updated weights for policy 0, policy_version 1210 (0.0024) +[2023-02-24 15:30:00,450][00176] Fps is (10 sec: 3687.9, 60 sec: 3754.5, 300 sec: 3776.7). Total num frames: 4968448. Throughput: 0: 917.5. Samples: 1239602. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:30:00,451][00176] Avg episode reward: [(0, '23.894')] +[2023-02-24 15:30:05,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 4988928. Throughput: 0: 977.6. Samples: 1246588. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:30:05,449][00176] Avg episode reward: [(0, '24.380')] +[2023-02-24 15:30:06,560][10350] Updated weights for policy 0, policy_version 1220 (0.0012) +[2023-02-24 15:30:10,447][00176] Fps is (10 sec: 3687.4, 60 sec: 3686.4, 300 sec: 3776.6). Total num frames: 5005312. Throughput: 0: 954.2. Samples: 1252048. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:30:10,456][00176] Avg episode reward: [(0, '24.220')] +[2023-02-24 15:30:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3755.0, 300 sec: 3762.8). Total num frames: 5021696. Throughput: 0: 928.1. Samples: 1254176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:30:15,455][00176] Avg episode reward: [(0, '23.035')] +[2023-02-24 15:30:18,624][10350] Updated weights for policy 0, policy_version 1230 (0.0026) +[2023-02-24 15:30:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 5046272. Throughput: 0: 950.4. Samples: 1259936. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:30:20,450][00176] Avg episode reward: [(0, '20.969')] +[2023-02-24 15:30:25,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 5066752. Throughput: 0: 990.9. Samples: 1267086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:30:25,455][00176] Avg episode reward: [(0, '20.846')] +[2023-02-24 15:30:28,355][10350] Updated weights for policy 0, policy_version 1240 (0.0012) +[2023-02-24 15:30:30,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 5083136. Throughput: 0: 974.1. Samples: 1269772. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:30:30,449][00176] Avg episode reward: [(0, '20.371')] +[2023-02-24 15:30:30,466][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001241_5083136.pth... +[2023-02-24 15:30:30,642][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001019_4173824.pth +[2023-02-24 15:30:35,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 5099520. Throughput: 0: 925.5. Samples: 1274252. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:30:35,454][00176] Avg episode reward: [(0, '20.946')] +[2023-02-24 15:30:39,352][10350] Updated weights for policy 0, policy_version 1250 (0.0021) +[2023-02-24 15:30:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 5124096. Throughput: 0: 975.6. Samples: 1280892. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:30:40,455][00176] Avg episode reward: [(0, '22.150')] +[2023-02-24 15:30:45,448][00176] Fps is (10 sec: 4504.9, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 5144576. Throughput: 0: 996.0. Samples: 1284422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:30:45,453][00176] Avg episode reward: [(0, '22.537')] +[2023-02-24 15:30:49,904][10350] Updated weights for policy 0, policy_version 1260 (0.0020) +[2023-02-24 15:30:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.4, 300 sec: 3790.5). Total num frames: 5160960. Throughput: 0: 956.3. Samples: 1289622. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:30:50,451][00176] Avg episode reward: [(0, '22.703')] +[2023-02-24 15:30:55,447][00176] Fps is (10 sec: 3277.3, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 5177344. Throughput: 0: 947.2. Samples: 1294670. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:30:55,449][00176] Avg episode reward: [(0, '22.635')] +[2023-02-24 15:30:59,985][10350] Updated weights for policy 0, policy_version 1270 (0.0019) +[2023-02-24 15:31:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3804.4). Total num frames: 5201920. Throughput: 0: 977.8. Samples: 1298178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:00,454][00176] Avg episode reward: [(0, '22.031')] +[2023-02-24 15:31:05,447][00176] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 5222400. Throughput: 0: 1004.7. Samples: 1305148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:05,458][00176] Avg episode reward: [(0, '20.744')] +[2023-02-24 15:31:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 5238784. Throughput: 0: 945.3. Samples: 1309624. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:31:10,449][00176] Avg episode reward: [(0, '20.218')] +[2023-02-24 15:31:11,503][10350] Updated weights for policy 0, policy_version 1280 (0.0023) +[2023-02-24 15:31:15,447][00176] Fps is (10 sec: 3686.6, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 5259264. Throughput: 0: 940.6. Samples: 1312100. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:31:15,449][00176] Avg episode reward: [(0, '22.566')] +[2023-02-24 15:31:20,325][10350] Updated weights for policy 0, policy_version 1290 (0.0021) +[2023-02-24 15:31:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 5283840. Throughput: 0: 1002.3. Samples: 1319354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:31:20,449][00176] Avg episode reward: [(0, '22.433')] +[2023-02-24 15:31:25,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 5300224. Throughput: 0: 986.7. Samples: 1325292. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:31:25,453][00176] Avg episode reward: [(0, '23.025')] +[2023-02-24 15:31:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 5316608. Throughput: 0: 958.2. Samples: 1327538. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:31:30,456][00176] Avg episode reward: [(0, '22.359')] +[2023-02-24 15:31:32,405][10350] Updated weights for policy 0, policy_version 1300 (0.0019) +[2023-02-24 15:31:35,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 5337088. Throughput: 0: 963.6. Samples: 1332984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:31:35,449][00176] Avg episode reward: [(0, '23.732')] +[2023-02-24 15:31:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3832.3). Total num frames: 5361664. Throughput: 0: 1006.9. Samples: 1339980. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:40,449][00176] Avg episode reward: [(0, '22.680')] +[2023-02-24 15:31:41,482][10350] Updated weights for policy 0, policy_version 1310 (0.0022) +[2023-02-24 15:31:45,453][00176] Fps is (10 sec: 4093.6, 60 sec: 3890.9, 300 sec: 3818.2). Total num frames: 5378048. Throughput: 0: 993.6. Samples: 1342898. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:45,456][00176] Avg episode reward: [(0, '22.364')] +[2023-02-24 15:31:50,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 5390336. Throughput: 0: 935.6. Samples: 1347248. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:50,449][00176] Avg episode reward: [(0, '23.030')] +[2023-02-24 15:31:53,721][10350] Updated weights for policy 0, policy_version 1320 (0.0014) +[2023-02-24 15:31:55,447][00176] Fps is (10 sec: 3278.7, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 5410816. Throughput: 0: 969.9. Samples: 1353270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:31:55,448][00176] Avg episode reward: [(0, '22.457')] +[2023-02-24 15:32:00,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 5435392. Throughput: 0: 991.5. Samples: 1356716. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:32:00,451][00176] Avg episode reward: [(0, '22.320')] +[2023-02-24 15:32:04,037][10350] Updated weights for policy 0, policy_version 1330 (0.0012) +[2023-02-24 15:32:05,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.6). Total num frames: 5447680. Throughput: 0: 946.5. Samples: 1361948. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:32:05,453][00176] Avg episode reward: [(0, '22.208')] +[2023-02-24 15:32:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 5464064. Throughput: 0: 911.0. Samples: 1366286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:32:10,450][00176] Avg episode reward: [(0, '21.646')] +[2023-02-24 15:32:15,309][10350] Updated weights for policy 0, policy_version 1340 (0.0023) +[2023-02-24 15:32:15,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 5488640. Throughput: 0: 935.3. Samples: 1369628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:32:15,449][00176] Avg episode reward: [(0, '21.216')] +[2023-02-24 15:32:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 5505024. Throughput: 0: 965.0. Samples: 1376408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:32:20,450][00176] Avg episode reward: [(0, '21.252')] +[2023-02-24 15:32:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 5521408. Throughput: 0: 902.2. Samples: 1380578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:32:25,452][00176] Avg episode reward: [(0, '22.518')] +[2023-02-24 15:32:28,094][10350] Updated weights for policy 0, policy_version 1350 (0.0018) +[2023-02-24 15:32:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 5537792. Throughput: 0: 882.0. Samples: 1382584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:32:30,449][00176] Avg episode reward: [(0, '22.510')] +[2023-02-24 15:32:30,465][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001352_5537792.pth... +[2023-02-24 15:32:30,600][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001133_4640768.pth +[2023-02-24 15:32:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 5562368. Throughput: 0: 929.2. Samples: 1389062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:32:35,454][00176] Avg episode reward: [(0, '21.793')] +[2023-02-24 15:32:37,156][10350] Updated weights for policy 0, policy_version 1360 (0.0015) +[2023-02-24 15:32:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3790.5). Total num frames: 5578752. Throughput: 0: 931.2. Samples: 1395172. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:32:40,457][00176] Avg episode reward: [(0, '20.646')] +[2023-02-24 15:32:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.5, 300 sec: 3762.8). Total num frames: 5595136. Throughput: 0: 902.4. Samples: 1397326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:32:45,450][00176] Avg episode reward: [(0, '20.105')] +[2023-02-24 15:32:48,965][10350] Updated weights for policy 0, policy_version 1370 (0.0018) +[2023-02-24 15:32:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 5615616. Throughput: 0: 906.3. Samples: 1402730. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:32:50,457][00176] Avg episode reward: [(0, '21.111')] +[2023-02-24 15:32:55,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 5640192. Throughput: 0: 967.8. Samples: 1409836. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:32:55,456][00176] Avg episode reward: [(0, '21.046')] +[2023-02-24 15:32:58,225][10350] Updated weights for policy 0, policy_version 1380 (0.0015) +[2023-02-24 15:33:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 5656576. Throughput: 0: 967.8. Samples: 1413178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:00,452][00176] Avg episode reward: [(0, '20.584')] +[2023-02-24 15:33:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 5672960. Throughput: 0: 919.1. Samples: 1417766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:05,453][00176] Avg episode reward: [(0, '21.200')] +[2023-02-24 15:33:09,437][10350] Updated weights for policy 0, policy_version 1390 (0.0017) +[2023-02-24 15:33:10,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 5697536. Throughput: 0: 962.3. Samples: 1423880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:10,454][00176] Avg episode reward: [(0, '23.609')] +[2023-02-24 15:33:15,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 5718016. Throughput: 0: 996.9. Samples: 1427444. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:33:15,460][00176] Avg episode reward: [(0, '23.503')] +[2023-02-24 15:33:19,634][10350] Updated weights for policy 0, policy_version 1400 (0.0022) +[2023-02-24 15:33:20,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 5734400. Throughput: 0: 983.4. Samples: 1433316. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:20,452][00176] Avg episode reward: [(0, '24.498')] +[2023-02-24 15:33:25,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 5750784. Throughput: 0: 950.9. Samples: 1437962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:33:25,458][00176] Avg episode reward: [(0, '24.555')] +[2023-02-24 15:33:30,236][10350] Updated weights for policy 0, policy_version 1410 (0.0015) +[2023-02-24 15:33:30,454][00176] Fps is (10 sec: 4093.0, 60 sec: 3959.0, 300 sec: 3790.4). Total num frames: 5775360. Throughput: 0: 974.0. Samples: 1441162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:30,456][00176] Avg episode reward: [(0, '24.709')] +[2023-02-24 15:33:35,451][00176] Fps is (10 sec: 4913.2, 60 sec: 3959.2, 300 sec: 3832.1). Total num frames: 5799936. Throughput: 0: 1014.6. Samples: 1448392. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:33:35,457][00176] Avg episode reward: [(0, '22.913')] +[2023-02-24 15:33:40,447][00176] Fps is (10 sec: 3689.1, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 5812224. Throughput: 0: 970.5. Samples: 1453508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:33:40,453][00176] Avg episode reward: [(0, '22.837')] +[2023-02-24 15:33:40,991][10350] Updated weights for policy 0, policy_version 1420 (0.0011) +[2023-02-24 15:33:45,447][00176] Fps is (10 sec: 2868.4, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 5828608. Throughput: 0: 944.5. Samples: 1455682. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:33:45,453][00176] Avg episode reward: [(0, '20.973')] +[2023-02-24 15:33:50,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 5853184. Throughput: 0: 984.6. Samples: 1462072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:50,456][00176] Avg episode reward: [(0, '21.231')] +[2023-02-24 15:33:50,928][10350] Updated weights for policy 0, policy_version 1430 (0.0022) +[2023-02-24 15:33:55,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 5873664. Throughput: 0: 998.9. Samples: 1468830. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:33:55,452][00176] Avg episode reward: [(0, '22.009')] +[2023-02-24 15:34:00,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 5885952. Throughput: 0: 967.6. Samples: 1470986. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:00,453][00176] Avg episode reward: [(0, '21.958')] +[2023-02-24 15:34:03,218][10350] Updated weights for policy 0, policy_version 1440 (0.0025) +[2023-02-24 15:34:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 5906432. Throughput: 0: 936.0. Samples: 1475438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:05,454][00176] Avg episode reward: [(0, '22.495')] +[2023-02-24 15:34:10,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5931008. Throughput: 0: 993.9. Samples: 1482686. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:34:10,449][00176] Avg episode reward: [(0, '23.272')] +[2023-02-24 15:34:11,767][10350] Updated weights for policy 0, policy_version 1450 (0.0019) +[2023-02-24 15:34:15,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5951488. Throughput: 0: 1002.4. Samples: 1486264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:34:15,449][00176] Avg episode reward: [(0, '22.754')] +[2023-02-24 15:34:20,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 5967872. Throughput: 0: 946.0. Samples: 1490956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:34:20,452][00176] Avg episode reward: [(0, '22.302')] +[2023-02-24 15:34:23,565][10350] Updated weights for policy 0, policy_version 1460 (0.0025) +[2023-02-24 15:34:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 5988352. Throughput: 0: 957.2. Samples: 1496584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:25,453][00176] Avg episode reward: [(0, '21.515')] +[2023-02-24 15:34:30,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.7, 300 sec: 3860.0). Total num frames: 6008832. Throughput: 0: 986.8. Samples: 1500090. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:34:30,454][00176] Avg episode reward: [(0, '21.126')] +[2023-02-24 15:34:30,473][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth... +[2023-02-24 15:34:30,587][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001241_5083136.pth +[2023-02-24 15:34:32,822][10350] Updated weights for policy 0, policy_version 1470 (0.0012) +[2023-02-24 15:34:35,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3754.9, 300 sec: 3846.1). Total num frames: 6025216. Throughput: 0: 982.7. Samples: 1506292. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:35,450][00176] Avg episode reward: [(0, '20.610')] +[2023-02-24 15:34:40,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 6041600. Throughput: 0: 934.1. Samples: 1510864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:40,451][00176] Avg episode reward: [(0, '20.546')] +[2023-02-24 15:34:44,390][10350] Updated weights for policy 0, policy_version 1480 (0.0014) +[2023-02-24 15:34:45,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3846.2). Total num frames: 6066176. Throughput: 0: 952.7. Samples: 1513858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:45,449][00176] Avg episode reward: [(0, '20.452')] +[2023-02-24 15:34:50,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6086656. Throughput: 0: 1009.6. Samples: 1520872. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:34:50,449][00176] Avg episode reward: [(0, '21.399')] +[2023-02-24 15:34:55,054][10350] Updated weights for policy 0, policy_version 1490 (0.0013) +[2023-02-24 15:34:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 6103040. Throughput: 0: 963.3. Samples: 1526034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:34:55,450][00176] Avg episode reward: [(0, '22.360')] +[2023-02-24 15:35:00,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 6115328. Throughput: 0: 931.0. Samples: 1528160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:35:00,455][00176] Avg episode reward: [(0, '21.215')] +[2023-02-24 15:35:05,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 6139904. Throughput: 0: 953.0. Samples: 1533842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:35:05,449][00176] Avg episode reward: [(0, '21.333')] +[2023-02-24 15:35:05,993][10350] Updated weights for policy 0, policy_version 1500 (0.0016) +[2023-02-24 15:35:10,451][00176] Fps is (10 sec: 4503.8, 60 sec: 3822.7, 300 sec: 3859.9). Total num frames: 6160384. Throughput: 0: 978.1. Samples: 1540604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:35:10,453][00176] Avg episode reward: [(0, '23.437')] +[2023-02-24 15:35:15,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 6176768. Throughput: 0: 950.9. Samples: 1542880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:35:15,451][00176] Avg episode reward: [(0, '24.864')] +[2023-02-24 15:35:18,044][10350] Updated weights for policy 0, policy_version 1510 (0.0024) +[2023-02-24 15:35:20,447][00176] Fps is (10 sec: 3278.1, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 6193152. Throughput: 0: 905.2. Samples: 1547028. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:35:20,449][00176] Avg episode reward: [(0, '24.934')] +[2023-02-24 15:35:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 6213632. Throughput: 0: 948.8. Samples: 1553562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:35:25,454][00176] Avg episode reward: [(0, '25.808')] +[2023-02-24 15:35:27,610][10350] Updated weights for policy 0, policy_version 1520 (0.0014) +[2023-02-24 15:35:30,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 6234112. Throughput: 0: 958.9. Samples: 1557010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:35:30,456][00176] Avg episode reward: [(0, '24.965')] +[2023-02-24 15:35:35,451][00176] Fps is (10 sec: 3684.9, 60 sec: 3754.4, 300 sec: 3818.3). Total num frames: 6250496. Throughput: 0: 913.7. Samples: 1561994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:35:35,457][00176] Avg episode reward: [(0, '24.620')] +[2023-02-24 15:35:40,052][10350] Updated weights for policy 0, policy_version 1530 (0.0015) +[2023-02-24 15:35:40,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 6266880. Throughput: 0: 905.5. Samples: 1566780. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:35:40,456][00176] Avg episode reward: [(0, '23.192')] +[2023-02-24 15:35:45,447][00176] Fps is (10 sec: 3687.9, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 6287360. Throughput: 0: 932.9. Samples: 1570142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:35:45,448][00176] Avg episode reward: [(0, '22.241')] +[2023-02-24 15:35:49,421][10350] Updated weights for policy 0, policy_version 1540 (0.0035) +[2023-02-24 15:35:50,454][00176] Fps is (10 sec: 4093.2, 60 sec: 3686.0, 300 sec: 3832.1). Total num frames: 6307840. Throughput: 0: 952.8. Samples: 1576724. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:35:50,456][00176] Avg episode reward: [(0, '22.874')] +[2023-02-24 15:35:55,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 6324224. Throughput: 0: 902.3. Samples: 1581202. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:35:55,449][00176] Avg episode reward: [(0, '23.110')] +[2023-02-24 15:36:00,447][00176] Fps is (10 sec: 3688.9, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 6344704. Throughput: 0: 903.8. Samples: 1583552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:36:00,454][00176] Avg episode reward: [(0, '24.584')] +[2023-02-24 15:36:01,196][10350] Updated weights for policy 0, policy_version 1550 (0.0025) +[2023-02-24 15:36:05,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 6365184. Throughput: 0: 964.9. Samples: 1590450. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:36:05,457][00176] Avg episode reward: [(0, '24.218')] +[2023-02-24 15:36:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3804.4). Total num frames: 6381568. Throughput: 0: 946.4. Samples: 1596148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:36:10,454][00176] Avg episode reward: [(0, '24.737')] +[2023-02-24 15:36:12,205][10350] Updated weights for policy 0, policy_version 1560 (0.0020) +[2023-02-24 15:36:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3776.7). Total num frames: 6397952. Throughput: 0: 917.1. Samples: 1598278. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:36:15,451][00176] Avg episode reward: [(0, '25.958')] +[2023-02-24 15:36:20,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 6418432. Throughput: 0: 920.0. Samples: 1603392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:36:20,458][00176] Avg episode reward: [(0, '24.901')] +[2023-02-24 15:36:22,824][10350] Updated weights for policy 0, policy_version 1570 (0.0021) +[2023-02-24 15:36:25,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 6438912. Throughput: 0: 962.0. Samples: 1610072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:36:25,454][00176] Avg episode reward: [(0, '25.388')] +[2023-02-24 15:36:30,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 6455296. Throughput: 0: 948.9. Samples: 1612844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:36:30,452][00176] Avg episode reward: [(0, '24.371')] +[2023-02-24 15:36:30,466][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001576_6455296.pth... +[2023-02-24 15:36:30,600][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001352_5537792.pth +[2023-02-24 15:36:35,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3618.4, 300 sec: 3748.9). Total num frames: 6467584. Throughput: 0: 897.3. Samples: 1617098. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:36:35,448][00176] Avg episode reward: [(0, '24.945')] +[2023-02-24 15:36:35,515][10350] Updated weights for policy 0, policy_version 1580 (0.0015) +[2023-02-24 15:36:40,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 6492160. Throughput: 0: 929.2. Samples: 1623016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:36:40,454][00176] Avg episode reward: [(0, '24.455')] +[2023-02-24 15:36:44,627][10350] Updated weights for policy 0, policy_version 1590 (0.0012) +[2023-02-24 15:36:45,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 6512640. Throughput: 0: 950.3. Samples: 1626314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:36:45,449][00176] Avg episode reward: [(0, '23.732')] +[2023-02-24 15:36:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3686.8, 300 sec: 3790.5). Total num frames: 6529024. Throughput: 0: 917.2. Samples: 1631726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:36:50,452][00176] Avg episode reward: [(0, '23.206')] +[2023-02-24 15:36:55,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3762.8). Total num frames: 6545408. Throughput: 0: 892.3. Samples: 1636302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:36:55,449][00176] Avg episode reward: [(0, '21.278')] +[2023-02-24 15:36:56,703][10350] Updated weights for policy 0, policy_version 1600 (0.0020) +[2023-02-24 15:37:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 6569984. Throughput: 0: 924.1. Samples: 1639864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:37:00,455][00176] Avg episode reward: [(0, '20.819')] +[2023-02-24 15:37:05,451][00176] Fps is (10 sec: 4503.7, 60 sec: 3754.4, 300 sec: 3818.3). Total num frames: 6590464. Throughput: 0: 969.6. Samples: 1647026. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:37:05,454][00176] Avg episode reward: [(0, '21.277')] +[2023-02-24 15:37:05,904][10350] Updated weights for policy 0, policy_version 1610 (0.0012) +[2023-02-24 15:37:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 6606848. Throughput: 0: 930.4. Samples: 1651940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:37:10,449][00176] Avg episode reward: [(0, '21.839')] +[2023-02-24 15:37:15,447][00176] Fps is (10 sec: 3278.1, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 6623232. Throughput: 0: 919.3. Samples: 1654214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:37:15,454][00176] Avg episode reward: [(0, '21.127')] +[2023-02-24 15:37:17,375][10350] Updated weights for policy 0, policy_version 1620 (0.0029) +[2023-02-24 15:37:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 6647808. Throughput: 0: 971.4. Samples: 1660810. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:37:20,449][00176] Avg episode reward: [(0, '23.382')] +[2023-02-24 15:37:25,449][00176] Fps is (10 sec: 4504.9, 60 sec: 3822.8, 300 sec: 3832.2). Total num frames: 6668288. Throughput: 0: 982.9. Samples: 1667250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:37:25,451][00176] Avg episode reward: [(0, '23.770')] +[2023-02-24 15:37:27,887][10350] Updated weights for policy 0, policy_version 1630 (0.0029) +[2023-02-24 15:37:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 6680576. Throughput: 0: 959.4. Samples: 1669486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:37:30,453][00176] Avg episode reward: [(0, '23.636')] +[2023-02-24 15:37:35,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 6701056. Throughput: 0: 949.8. Samples: 1674468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:37:35,449][00176] Avg episode reward: [(0, '23.823')] +[2023-02-24 15:37:38,083][10350] Updated weights for policy 0, policy_version 1640 (0.0015) +[2023-02-24 15:37:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 6725632. Throughput: 0: 1004.3. Samples: 1681494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:37:40,448][00176] Avg episode reward: [(0, '24.427')] +[2023-02-24 15:37:45,448][00176] Fps is (10 sec: 4095.3, 60 sec: 3822.8, 300 sec: 3818.3). Total num frames: 6742016. Throughput: 0: 1002.5. Samples: 1684976. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:37:45,451][00176] Avg episode reward: [(0, '23.706')] +[2023-02-24 15:37:49,827][10350] Updated weights for policy 0, policy_version 1650 (0.0025) +[2023-02-24 15:37:50,447][00176] Fps is (10 sec: 3276.6, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 6758400. Throughput: 0: 937.1. Samples: 1689194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:37:50,456][00176] Avg episode reward: [(0, '23.527')] +[2023-02-24 15:37:55,447][00176] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 6778880. Throughput: 0: 958.8. Samples: 1695088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:37:55,454][00176] Avg episode reward: [(0, '23.307')] +[2023-02-24 15:37:59,188][10350] Updated weights for policy 0, policy_version 1660 (0.0019) +[2023-02-24 15:38:00,447][00176] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 6803456. 
Throughput: 0: 986.1. Samples: 1698586. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:38:00,454][00176] Avg episode reward: [(0, '25.093')] +[2023-02-24 15:38:05,447][00176] Fps is (10 sec: 4095.8, 60 sec: 3823.2, 300 sec: 3804.4). Total num frames: 6819840. Throughput: 0: 972.3. Samples: 1704566. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:38:05,454][00176] Avg episode reward: [(0, '24.437')] +[2023-02-24 15:38:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 6836224. Throughput: 0: 927.1. Samples: 1708968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:38:10,457][00176] Avg episode reward: [(0, '23.325')] +[2023-02-24 15:38:11,061][10350] Updated weights for policy 0, policy_version 1670 (0.0021) +[2023-02-24 15:38:15,447][00176] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 6860800. Throughput: 0: 951.9. Samples: 1712322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:38:15,453][00176] Avg episode reward: [(0, '25.512')] +[2023-02-24 15:38:19,733][10350] Updated weights for policy 0, policy_version 1680 (0.0024) +[2023-02-24 15:38:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 6881280. Throughput: 0: 999.8. Samples: 1719458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:38:20,449][00176] Avg episode reward: [(0, '26.373')] +[2023-02-24 15:38:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3804.5). Total num frames: 6897664. Throughput: 0: 959.6. Samples: 1724674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:38:25,459][00176] Avg episode reward: [(0, '27.117')] +[2023-02-24 15:38:25,465][10336] Saving new best policy, reward=27.117! +[2023-02-24 15:38:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 6914048. Throughput: 0: 931.0. Samples: 1726868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:38:30,458][00176] Avg episode reward: [(0, '26.777')] +[2023-02-24 15:38:30,470][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001688_6914048.pth... +[2023-02-24 15:38:30,613][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001467_6008832.pth +[2023-02-24 15:38:31,522][10350] Updated weights for policy 0, policy_version 1690 (0.0026) +[2023-02-24 15:38:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 6938624. Throughput: 0: 982.7. Samples: 1733416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:38:35,452][00176] Avg episode reward: [(0, '26.390')] +[2023-02-24 15:38:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 6959104. Throughput: 0: 1002.4. Samples: 1740194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:38:40,451][00176] Avg episode reward: [(0, '25.245')] +[2023-02-24 15:38:41,193][10350] Updated weights for policy 0, policy_version 1700 (0.0022) +[2023-02-24 15:38:45,448][00176] Fps is (10 sec: 3686.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 6975488. Throughput: 0: 972.7. Samples: 1742360. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:38:45,455][00176] Avg episode reward: [(0, '22.517')] +[2023-02-24 15:38:50,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 6991872. Throughput: 0: 947.3. Samples: 1747194. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:38:50,450][00176] Avg episode reward: [(0, '22.308')] +[2023-02-24 15:38:52,467][10350] Updated weights for policy 0, policy_version 1710 (0.0028) +[2023-02-24 15:38:55,447][00176] Fps is (10 sec: 4096.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7016448. Throughput: 0: 1001.8. Samples: 1754048. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:38:55,449][00176] Avg episode reward: [(0, '22.191')] +[2023-02-24 15:39:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 7032832. Throughput: 0: 1006.6. Samples: 1757620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:39:00,453][00176] Avg episode reward: [(0, '21.057')] +[2023-02-24 15:39:03,397][10350] Updated weights for policy 0, policy_version 1720 (0.0024) +[2023-02-24 15:39:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 7049216. Throughput: 0: 945.8. Samples: 1762020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:39:05,450][00176] Avg episode reward: [(0, '21.072')] +[2023-02-24 15:39:10,450][00176] Fps is (10 sec: 3685.3, 60 sec: 3891.0, 300 sec: 3790.5). Total num frames: 7069696. Throughput: 0: 958.2. Samples: 1767794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:39:10,452][00176] Avg episode reward: [(0, '22.798')] +[2023-02-24 15:39:13,228][10350] Updated weights for policy 0, policy_version 1730 (0.0021) +[2023-02-24 15:39:15,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 7094272. Throughput: 0: 989.0. Samples: 1771372. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:39:15,449][00176] Avg episode reward: [(0, '22.124')] +[2023-02-24 15:39:20,447][00176] Fps is (10 sec: 4097.2, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 7110656. Throughput: 0: 981.4. Samples: 1777578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:39:20,451][00176] Avg episode reward: [(0, '21.341')] +[2023-02-24 15:39:24,760][10350] Updated weights for policy 0, policy_version 1740 (0.0015) +[2023-02-24 15:39:25,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 7127040. Throughput: 0: 929.8. Samples: 1782036. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:39:25,454][00176] Avg episode reward: [(0, '22.354')] +[2023-02-24 15:39:30,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 7151616. Throughput: 0: 949.7. Samples: 1785096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:39:30,449][00176] Avg episode reward: [(0, '23.713')] +[2023-02-24 15:39:33,900][10350] Updated weights for policy 0, policy_version 1750 (0.0018) +[2023-02-24 15:39:35,447][00176] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 7172096. Throughput: 0: 999.7. Samples: 1792182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:39:35,455][00176] Avg episode reward: [(0, '24.257')] +[2023-02-24 15:39:40,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 7188480. Throughput: 0: 961.8. Samples: 1797330. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:39:40,453][00176] Avg episode reward: [(0, '24.761')] +[2023-02-24 15:39:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 7204864. Throughput: 0: 931.4. Samples: 1799534. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:39:45,449][00176] Avg episode reward: [(0, '24.859')] +[2023-02-24 15:39:45,838][10350] Updated weights for policy 0, policy_version 1760 (0.0023) +[2023-02-24 15:39:50,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 7229440. Throughput: 0: 975.7. Samples: 1805928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:39:50,449][00176] Avg episode reward: [(0, '26.462')] +[2023-02-24 15:39:54,658][10350] Updated weights for policy 0, policy_version 1770 (0.0017) +[2023-02-24 15:39:55,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7249920. Throughput: 0: 1002.8. Samples: 1812918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:39:55,450][00176] Avg episode reward: [(0, '25.067')] +[2023-02-24 15:40:00,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 7266304. Throughput: 0: 972.9. Samples: 1815154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:40:00,448][00176] Avg episode reward: [(0, '25.203')] +[2023-02-24 15:40:05,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3818.4). Total num frames: 7286784. Throughput: 0: 943.9. Samples: 1820054. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:05,457][00176] Avg episode reward: [(0, '23.563')] +[2023-02-24 15:40:06,222][10350] Updated weights for policy 0, policy_version 1780 (0.0028) +[2023-02-24 15:40:10,447][00176] Fps is (10 sec: 4505.6, 60 sec: 4027.9, 300 sec: 3846.1). Total num frames: 7311360. Throughput: 0: 1006.6. Samples: 1827334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:10,449][00176] Avg episode reward: [(0, '23.854')] +[2023-02-24 15:40:15,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3891.1, 300 sec: 3846.0). Total num frames: 7327744. Throughput: 0: 1018.0. Samples: 1830908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:15,453][00176] Avg episode reward: [(0, '21.939')] +[2023-02-24 15:40:15,754][10350] Updated weights for policy 0, policy_version 1790 (0.0011) +[2023-02-24 15:40:20,447][00176] Fps is (10 sec: 3276.5, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 7344128. Throughput: 0: 965.8. Samples: 1835642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:20,450][00176] Avg episode reward: [(0, '23.375')] +[2023-02-24 15:40:25,447][00176] Fps is (10 sec: 3687.1, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7364608. Throughput: 0: 976.7. Samples: 1841280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:40:25,455][00176] Avg episode reward: [(0, '23.396')] +[2023-02-24 15:40:26,605][10350] Updated weights for policy 0, policy_version 1800 (0.0013) +[2023-02-24 15:40:30,447][00176] Fps is (10 sec: 4505.9, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 7389184. Throughput: 0: 1006.9. Samples: 1844844. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:40:30,454][00176] Avg episode reward: [(0, '23.713')] +[2023-02-24 15:40:30,474][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001804_7389184.pth... +[2023-02-24 15:40:30,600][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001576_6455296.pth +[2023-02-24 15:40:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7405568. Throughput: 0: 1002.8. Samples: 1851052. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:35,450][00176] Avg episode reward: [(0, '22.452')] +[2023-02-24 15:40:38,048][10350] Updated weights for policy 0, policy_version 1810 (0.0017) +[2023-02-24 15:40:40,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 7417856. Throughput: 0: 935.5. Samples: 1855014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:40:40,456][00176] Avg episode reward: [(0, '23.899')] +[2023-02-24 15:40:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3846.2). Total num frames: 7442432. Throughput: 0: 953.0. Samples: 1858038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:40:45,454][00176] Avg episode reward: [(0, '21.782')] +[2023-02-24 15:40:48,033][10350] Updated weights for policy 0, policy_version 1820 (0.0023) +[2023-02-24 15:40:50,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7462912. Throughput: 0: 994.6. Samples: 1864810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:50,454][00176] Avg episode reward: [(0, '21.765')] +[2023-02-24 15:40:55,451][00176] Fps is (10 sec: 3275.5, 60 sec: 3754.4, 300 sec: 3832.1). Total num frames: 7475200. Throughput: 0: 940.5. Samples: 1869662. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:40:55,456][00176] Avg episode reward: [(0, '21.905')] +[2023-02-24 15:41:00,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 7491584. Throughput: 0: 905.1. Samples: 1871634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:00,456][00176] Avg episode reward: [(0, '22.514')] +[2023-02-24 15:41:00,875][10350] Updated weights for policy 0, policy_version 1830 (0.0026) +[2023-02-24 15:41:05,447][00176] Fps is (10 sec: 3687.8, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 7512064. Throughput: 0: 925.4. Samples: 1877286. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:05,449][00176] Avg episode reward: [(0, '24.045')] +[2023-02-24 15:41:10,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 7532544. Throughput: 0: 941.1. Samples: 1883630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:41:10,454][00176] Avg episode reward: [(0, '25.345')] +[2023-02-24 15:41:11,018][10350] Updated weights for policy 0, policy_version 1840 (0.0015) +[2023-02-24 15:41:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3818.3). Total num frames: 7544832. Throughput: 0: 905.0. Samples: 1885570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:15,453][00176] Avg episode reward: [(0, '23.906')] +[2023-02-24 15:41:20,447][00176] Fps is (10 sec: 2867.3, 60 sec: 3618.2, 300 sec: 3804.4). Total num frames: 7561216. Throughput: 0: 858.0. Samples: 1889660. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-24 15:41:20,450][00176] Avg episode reward: [(0, '24.477')] +[2023-02-24 15:41:23,482][10350] Updated weights for policy 0, policy_version 1850 (0.0037) +[2023-02-24 15:41:25,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3686.3, 300 sec: 3832.2). Total num frames: 7585792. Throughput: 0: 912.2. Samples: 1896064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:41:25,457][00176] Avg episode reward: [(0, '24.944')] +[2023-02-24 15:41:30,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3846.1). Total num frames: 7602176. Throughput: 0: 918.5. Samples: 1899372. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:30,452][00176] Avg episode reward: [(0, '24.647')] +[2023-02-24 15:41:35,109][10350] Updated weights for policy 0, policy_version 1860 (0.0025) +[2023-02-24 15:41:35,447][00176] Fps is (10 sec: 3277.4, 60 sec: 3549.9, 300 sec: 3818.3). Total num frames: 7618560. Throughput: 0: 871.7. Samples: 1904036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:35,454][00176] Avg episode reward: [(0, '24.165')] +[2023-02-24 15:41:40,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3804.4). Total num frames: 7634944. Throughput: 0: 874.3. Samples: 1909000. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:41:40,456][00176] Avg episode reward: [(0, '24.212')] +[2023-02-24 15:41:45,195][10350] Updated weights for policy 0, policy_version 1870 (0.0018) +[2023-02-24 15:41:45,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 7659520. Throughput: 0: 904.4. Samples: 1912330. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:41:45,451][00176] Avg episode reward: [(0, '24.596')] +[2023-02-24 15:41:50,447][00176] Fps is (10 sec: 4095.7, 60 sec: 3549.8, 300 sec: 3832.2). Total num frames: 7675904. Throughput: 0: 926.1. Samples: 1918960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:41:50,451][00176] Avg episode reward: [(0, '24.582')] +[2023-02-24 15:41:55,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3618.4, 300 sec: 3804.4). Total num frames: 7692288. Throughput: 0: 884.4. Samples: 1923428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:41:55,453][00176] Avg episode reward: [(0, '24.260')] +[2023-02-24 15:41:57,252][10350] Updated weights for policy 0, policy_version 1880 (0.0013) +[2023-02-24 15:42:00,447][00176] Fps is (10 sec: 3686.7, 60 sec: 3686.4, 300 sec: 3804.5). Total num frames: 7712768. Throughput: 0: 895.9. Samples: 1925884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:42:00,450][00176] Avg episode reward: [(0, '25.651')] +[2023-02-24 15:42:05,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 7737344. Throughput: 0: 960.7. Samples: 1932890. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:42:05,449][00176] Avg episode reward: [(0, '25.249')] +[2023-02-24 15:42:06,189][10350] Updated weights for policy 0, policy_version 1890 (0.0016) +[2023-02-24 15:42:10,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 7753728. Throughput: 0: 950.8. Samples: 1938846. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:42:10,453][00176] Avg episode reward: [(0, '24.928')] +[2023-02-24 15:42:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 7770112. Throughput: 0: 926.6. Samples: 1941068. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:42:15,449][00176] Avg episode reward: [(0, '26.066')] +[2023-02-24 15:42:17,753][10350] Updated weights for policy 0, policy_version 1900 (0.0028) +[2023-02-24 15:42:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 7794688. Throughput: 0: 954.7. Samples: 1946998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:42:20,450][00176] Avg episode reward: [(0, '24.642')] +[2023-02-24 15:42:25,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3823.0, 300 sec: 3846.1). Total num frames: 7815168. Throughput: 0: 1001.0. Samples: 1954044. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:42:25,450][00176] Avg episode reward: [(0, '24.821')] +[2023-02-24 15:42:27,201][10350] Updated weights for policy 0, policy_version 1910 (0.0012) +[2023-02-24 15:42:30,448][00176] Fps is (10 sec: 3685.8, 60 sec: 3822.8, 300 sec: 3832.2). Total num frames: 7831552. Throughput: 0: 982.9. Samples: 1956562. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:42:30,451][00176] Avg episode reward: [(0, '24.870')] +[2023-02-24 15:42:30,466][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001912_7831552.pth... +[2023-02-24 15:42:30,604][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001688_6914048.pth +[2023-02-24 15:42:35,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 7847936. Throughput: 0: 933.9. Samples: 1960984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:42:35,449][00176] Avg episode reward: [(0, '25.118')] +[2023-02-24 15:42:38,676][10350] Updated weights for policy 0, policy_version 1920 (0.0017) +[2023-02-24 15:42:40,447][00176] Fps is (10 sec: 4096.7, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7872512. Throughput: 0: 983.6. Samples: 1967688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:42:40,449][00176] Avg episode reward: [(0, '25.008')] +[2023-02-24 15:42:45,447][00176] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7892992. Throughput: 0: 1008.7. Samples: 1971276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:42:45,450][00176] Avg episode reward: [(0, '24.434')] +[2023-02-24 15:42:48,752][10350] Updated weights for policy 0, policy_version 1930 (0.0016) +[2023-02-24 15:42:50,450][00176] Fps is (10 sec: 3685.3, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 7909376. Throughput: 0: 974.2. Samples: 1976730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:42:50,457][00176] Avg episode reward: [(0, '25.113')] +[2023-02-24 15:42:55,447][00176] Fps is (10 sec: 3277.0, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 7925760. Throughput: 0: 946.1. Samples: 1981420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:42:55,452][00176] Avg episode reward: [(0, '25.061')] +[2023-02-24 15:42:59,294][10350] Updated weights for policy 0, policy_version 1940 (0.0014) +[2023-02-24 15:43:00,447][00176] Fps is (10 sec: 4097.2, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 7950336. Throughput: 0: 976.5. Samples: 1985010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:43:00,453][00176] Avg episode reward: [(0, '23.560')] +[2023-02-24 15:43:05,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7970816. Throughput: 0: 1003.2. Samples: 1992142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:43:05,449][00176] Avg episode reward: [(0, '23.483')] +[2023-02-24 15:43:10,449][00176] Fps is (10 sec: 3276.2, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 7983104. Throughput: 0: 947.2. Samples: 1996668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:43:10,454][00176] Avg episode reward: [(0, '24.075')] +[2023-02-24 15:43:10,476][10350] Updated weights for policy 0, policy_version 1950 (0.0037) +[2023-02-24 15:43:15,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 8003584. Throughput: 0: 942.6. Samples: 1998978. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:43:15,450][00176] Avg episode reward: [(0, '24.368')] +[2023-02-24 15:43:19,814][10350] Updated weights for policy 0, policy_version 1960 (0.0027) +[2023-02-24 15:43:20,447][00176] Fps is (10 sec: 4506.5, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 8028160. Throughput: 0: 999.8. Samples: 2005974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:43:20,457][00176] Avg episode reward: [(0, '24.972')] +[2023-02-24 15:43:25,447][00176] Fps is (10 sec: 4505.3, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8048640. Throughput: 0: 992.2. Samples: 2012338. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:43:25,456][00176] Avg episode reward: [(0, '26.806')] +[2023-02-24 15:43:30,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3818.3). Total num frames: 8065024. Throughput: 0: 963.3. Samples: 2014626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:43:30,454][00176] Avg episode reward: [(0, '25.967')] +[2023-02-24 15:43:31,760][10350] Updated weights for policy 0, policy_version 1970 (0.0031) +[2023-02-24 15:43:35,447][00176] Fps is (10 sec: 3686.6, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 8085504. Throughput: 0: 961.9. Samples: 2020012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:43:35,449][00176] Avg episode reward: [(0, '26.212')] +[2023-02-24 15:43:40,405][10350] Updated weights for policy 0, policy_version 1980 (0.0018) +[2023-02-24 15:43:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 8110080. Throughput: 0: 1013.6. Samples: 2027034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:43:40,450][00176] Avg episode reward: [(0, '26.478')] +[2023-02-24 15:43:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 8122368. Throughput: 0: 999.5. Samples: 2029988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:43:45,454][00176] Avg episode reward: [(0, '26.925')] +[2023-02-24 15:43:50,448][00176] Fps is (10 sec: 2866.9, 60 sec: 3823.1, 300 sec: 3804.4). Total num frames: 8138752. Throughput: 0: 936.1. Samples: 2034266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:43:50,458][00176] Avg episode reward: [(0, '26.656')] +[2023-02-24 15:43:52,712][10350] Updated weights for policy 0, policy_version 1990 (0.0016) +[2023-02-24 15:43:55,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 8163328. Throughput: 0: 972.9. Samples: 2040446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:43:55,449][00176] Avg episode reward: [(0, '24.165')] +[2023-02-24 15:44:00,447][00176] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8183808. Throughput: 0: 998.6. Samples: 2043914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:44:00,449][00176] Avg episode reward: [(0, '25.339')] +[2023-02-24 15:44:01,875][10350] Updated weights for policy 0, policy_version 2000 (0.0011) +[2023-02-24 15:44:05,451][00176] Fps is (10 sec: 3685.0, 60 sec: 3822.7, 300 sec: 3832.2). Total num frames: 8200192. Throughput: 0: 975.0. Samples: 2049854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:05,453][00176] Avg episode reward: [(0, '24.866')] +[2023-02-24 15:44:10,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3804.4). Total num frames: 8216576. Throughput: 0: 934.1. Samples: 2054374. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:10,456][00176] Avg episode reward: [(0, '23.993')] +[2023-02-24 15:44:13,358][10350] Updated weights for policy 0, policy_version 2010 (0.0017) +[2023-02-24 15:44:15,447][00176] Fps is (10 sec: 4097.6, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 8241152. Throughput: 0: 958.7. Samples: 2057766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:15,455][00176] Avg episode reward: [(0, '22.257')] +[2023-02-24 15:44:20,447][00176] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 8265728. Throughput: 0: 1000.3. Samples: 2065024. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:44:20,451][00176] Avg episode reward: [(0, '23.098')] +[2023-02-24 15:44:23,336][10350] Updated weights for policy 0, policy_version 2020 (0.0012) +[2023-02-24 15:44:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3818.3). Total num frames: 8278016. Throughput: 0: 952.1. Samples: 2069878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:25,449][00176] Avg episode reward: [(0, '24.243')] +[2023-02-24 15:44:30,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 8294400. Throughput: 0: 935.6. Samples: 2072088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:30,449][00176] Avg episode reward: [(0, '22.454')] +[2023-02-24 15:44:30,470][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002025_8294400.pth... +[2023-02-24 15:44:30,594][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001804_7389184.pth +[2023-02-24 15:44:34,360][10350] Updated weights for policy 0, policy_version 2030 (0.0016) +[2023-02-24 15:44:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 8318976. Throughput: 0: 978.0. Samples: 2078274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:35,449][00176] Avg episode reward: [(0, '23.480')] +[2023-02-24 15:44:40,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 8335360. Throughput: 0: 980.6. Samples: 2084572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:40,450][00176] Avg episode reward: [(0, '24.612')] +[2023-02-24 15:44:45,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 8351744. Throughput: 0: 953.7. Samples: 2086830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:44:45,462][00176] Avg episode reward: [(0, '23.702')] +[2023-02-24 15:44:46,011][10350] Updated weights for policy 0, policy_version 2040 (0.0018) +[2023-02-24 15:44:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3804.4). Total num frames: 8372224. Throughput: 0: 930.3. Samples: 2091712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:44:50,449][00176] Avg episode reward: [(0, '23.407')] +[2023-02-24 15:44:55,225][10350] Updated weights for policy 0, policy_version 2050 (0.0019) +[2023-02-24 15:44:55,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 8396800. Throughput: 0: 987.3. Samples: 2098804. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:44:55,453][00176] Avg episode reward: [(0, '24.623')] +[2023-02-24 15:45:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 8413184. Throughput: 0: 989.9. Samples: 2102312. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:00,455][00176] Avg episode reward: [(0, '23.374')] +[2023-02-24 15:45:05,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3823.2, 300 sec: 3790.5). Total num frames: 8429568. Throughput: 0: 930.7. Samples: 2106906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:45:05,451][00176] Avg episode reward: [(0, '23.206')] +[2023-02-24 15:45:07,293][10350] Updated weights for policy 0, policy_version 2060 (0.0018) +[2023-02-24 15:45:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 8450048. Throughput: 0: 952.0. Samples: 2112718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:10,456][00176] Avg episode reward: [(0, '23.047')] +[2023-02-24 15:45:15,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 8474624. Throughput: 0: 980.4. Samples: 2116208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:15,454][00176] Avg episode reward: [(0, '24.105')] +[2023-02-24 15:45:15,860][10350] Updated weights for policy 0, policy_version 2070 (0.0016) +[2023-02-24 15:45:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 8491008. Throughput: 0: 985.0. Samples: 2122600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:20,455][00176] Avg episode reward: [(0, '22.854')] +[2023-02-24 15:45:25,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 8507392. Throughput: 0: 939.6. Samples: 2126856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:25,457][00176] Avg episode reward: [(0, '22.503')] +[2023-02-24 15:45:28,159][10350] Updated weights for policy 0, policy_version 2080 (0.0014) +[2023-02-24 15:45:30,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 8527872. Throughput: 0: 953.2. Samples: 2129722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:30,449][00176] Avg episode reward: [(0, '23.969')] +[2023-02-24 15:45:35,447][00176] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8552448. Throughput: 0: 997.1. Samples: 2136580. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:45:35,449][00176] Avg episode reward: [(0, '23.997')] +[2023-02-24 15:45:37,870][10350] Updated weights for policy 0, policy_version 2090 (0.0022) +[2023-02-24 15:45:40,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 8564736. Throughput: 0: 955.8. Samples: 2141814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:45:40,449][00176] Avg episode reward: [(0, '24.302')] +[2023-02-24 15:45:45,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 8581120. Throughput: 0: 925.8. Samples: 2143972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:45:45,453][00176] Avg episode reward: [(0, '23.056')] +[2023-02-24 15:45:49,392][10350] Updated weights for policy 0, policy_version 2100 (0.0014) +[2023-02-24 15:45:50,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 8605696. Throughput: 0: 956.8. Samples: 2149964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:45:50,449][00176] Avg episode reward: [(0, '23.600')] +[2023-02-24 15:45:55,449][00176] Fps is (10 sec: 4504.7, 60 sec: 3822.8, 300 sec: 3846.1). Total num frames: 8626176. Throughput: 0: 972.0. Samples: 2156460. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:45:55,452][00176] Avg episode reward: [(0, '24.053')] +[2023-02-24 15:46:00,454][00176] Fps is (10 sec: 3274.5, 60 sec: 3754.2, 300 sec: 3818.2). Total num frames: 8638464. Throughput: 0: 939.2. Samples: 2158478. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:00,457][00176] Avg episode reward: [(0, '23.421')] +[2023-02-24 15:46:01,546][10350] Updated weights for policy 0, policy_version 2110 (0.0015) +[2023-02-24 15:46:05,447][00176] Fps is (10 sec: 2867.7, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 8654848. Throughput: 0: 891.8. Samples: 2162732. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:46:05,456][00176] Avg episode reward: [(0, '23.415')] +[2023-02-24 15:46:10,447][00176] Fps is (10 sec: 4098.9, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8679424. Throughput: 0: 947.6. Samples: 2169498. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:10,454][00176] Avg episode reward: [(0, '23.778')] +[2023-02-24 15:46:11,275][10350] Updated weights for policy 0, policy_version 2120 (0.0016) +[2023-02-24 15:46:15,449][00176] Fps is (10 sec: 4095.1, 60 sec: 3686.3, 300 sec: 3846.0). Total num frames: 8695808. Throughput: 0: 958.4. Samples: 2172854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:46:15,453][00176] Avg episode reward: [(0, '24.230')] +[2023-02-24 15:46:20,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 8712192. Throughput: 0: 915.0. Samples: 2177754. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:46:20,455][00176] Avg episode reward: [(0, '22.937')] +[2023-02-24 15:46:23,218][10350] Updated weights for policy 0, policy_version 2130 (0.0020) +[2023-02-24 15:46:25,447][00176] Fps is (10 sec: 3687.2, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 8732672. Throughput: 0: 920.2. Samples: 2183222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:25,449][00176] Avg episode reward: [(0, '23.490')] +[2023-02-24 15:46:30,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8757248. Throughput: 0: 951.8. Samples: 2186802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:46:30,456][00176] Avg episode reward: [(0, '23.095')] +[2023-02-24 15:46:30,472][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002138_8757248.pth... +[2023-02-24 15:46:30,595][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001912_7831552.pth +[2023-02-24 15:46:32,054][10350] Updated weights for policy 0, policy_version 2140 (0.0017) +[2023-02-24 15:46:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 8773632. Throughput: 0: 958.8. Samples: 2193108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:35,450][00176] Avg episode reward: [(0, '24.446')] +[2023-02-24 15:46:40,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 8790016. Throughput: 0: 912.4. Samples: 2197516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:40,456][00176] Avg episode reward: [(0, '23.920')] +[2023-02-24 15:46:43,910][10350] Updated weights for policy 0, policy_version 2150 (0.0045) +[2023-02-24 15:46:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8810496. Throughput: 0: 931.9. Samples: 2200408. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:45,450][00176] Avg episode reward: [(0, '25.218')] +[2023-02-24 15:46:50,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 8835072. Throughput: 0: 991.0. Samples: 2207326. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:46:50,450][00176] Avg episode reward: [(0, '26.535')] +[2023-02-24 15:46:54,242][10350] Updated weights for policy 0, policy_version 2160 (0.0013) +[2023-02-24 15:46:55,452][00176] Fps is (10 sec: 3684.6, 60 sec: 3686.2, 300 sec: 3846.0). Total num frames: 8847360. Throughput: 0: 960.5. Samples: 2212726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:46:55,454][00176] Avg episode reward: [(0, '25.956')] +[2023-02-24 15:47:00,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3755.1, 300 sec: 3818.3). Total num frames: 8863744. Throughput: 0: 933.1. Samples: 2214842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:00,450][00176] Avg episode reward: [(0, '26.081')] +[2023-02-24 15:47:05,341][10350] Updated weights for policy 0, policy_version 2170 (0.0030) +[2023-02-24 15:47:05,447][00176] Fps is (10 sec: 4098.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8888320. Throughput: 0: 949.3. Samples: 2220472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:47:05,450][00176] Avg episode reward: [(0, '25.599')] +[2023-02-24 15:47:10,451][00176] Fps is (10 sec: 4503.8, 60 sec: 3822.7, 300 sec: 3859.9). Total num frames: 8908800. Throughput: 0: 974.5. Samples: 2227080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:10,454][00176] Avg episode reward: [(0, '26.117')] +[2023-02-24 15:47:15,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3754.8, 300 sec: 3818.3). Total num frames: 8921088. Throughput: 0: 948.4. Samples: 2229482. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:15,449][00176] Avg episode reward: [(0, '25.090')] +[2023-02-24 15:47:17,397][10350] Updated weights for policy 0, policy_version 2180 (0.0013) +[2023-02-24 15:47:20,449][00176] Fps is (10 sec: 2867.7, 60 sec: 3754.5, 300 sec: 3804.4). Total num frames: 8937472. Throughput: 0: 904.9. Samples: 2233832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:20,451][00176] Avg episode reward: [(0, '23.891')] +[2023-02-24 15:47:25,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 8962048. Throughput: 0: 948.8. Samples: 2240210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:25,450][00176] Avg episode reward: [(0, '23.700')] +[2023-02-24 15:47:27,309][10350] Updated weights for policy 0, policy_version 2190 (0.0022) +[2023-02-24 15:47:30,448][00176] Fps is (10 sec: 4506.2, 60 sec: 3754.6, 300 sec: 3846.1). Total num frames: 8982528. Throughput: 0: 956.5. Samples: 2243452. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:47:30,451][00176] Avg episode reward: [(0, '23.957')] +[2023-02-24 15:47:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 8994816. Throughput: 0: 916.4. Samples: 2248564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:35,457][00176] Avg episode reward: [(0, '23.586')] +[2023-02-24 15:47:39,582][10350] Updated weights for policy 0, policy_version 2200 (0.0035) +[2023-02-24 15:47:40,447][00176] Fps is (10 sec: 3277.1, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 9015296. Throughput: 0: 904.9. Samples: 2253440. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:47:40,449][00176] Avg episode reward: [(0, '21.691')] +[2023-02-24 15:47:45,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 9039872. Throughput: 0: 936.7. Samples: 2256992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:45,449][00176] Avg episode reward: [(0, '22.918')] +[2023-02-24 15:47:47,936][10350] Updated weights for policy 0, policy_version 2210 (0.0021) +[2023-02-24 15:47:50,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 9060352. Throughput: 0: 975.5. Samples: 2264368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:47:50,451][00176] Avg episode reward: [(0, '23.076')] +[2023-02-24 15:47:55,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3755.0, 300 sec: 3804.4). Total num frames: 9072640. Throughput: 0: 930.1. Samples: 2268932. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:47:55,450][00176] Avg episode reward: [(0, '23.007')] +[2023-02-24 15:47:59,621][10350] Updated weights for policy 0, policy_version 2220 (0.0025) +[2023-02-24 15:48:00,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 9097216. Throughput: 0: 930.7. Samples: 2271364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:48:00,451][00176] Avg episode reward: [(0, '22.353')] +[2023-02-24 15:48:05,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 9117696. Throughput: 0: 994.2. Samples: 2278568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:48:05,449][00176] Avg episode reward: [(0, '22.329')] +[2023-02-24 15:48:08,381][10350] Updated weights for policy 0, policy_version 2230 (0.0020) +[2023-02-24 15:48:10,447][00176] Fps is (10 sec: 4095.9, 60 sec: 3823.2, 300 sec: 3846.1). Total num frames: 9138176. Throughput: 0: 996.0. Samples: 2285028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:48:10,453][00176] Avg episode reward: [(0, '21.680')] +[2023-02-24 15:48:15,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 9154560. Throughput: 0: 972.5. Samples: 2287216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:48:15,454][00176] Avg episode reward: [(0, '20.437')] +[2023-02-24 15:48:19,746][10350] Updated weights for policy 0, policy_version 2240 (0.0032) +[2023-02-24 15:48:20,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3818.3). Total num frames: 9175040. Throughput: 0: 980.8. Samples: 2292698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:48:20,456][00176] Avg episode reward: [(0, '21.258')] +[2023-02-24 15:48:25,447][00176] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 9199616. Throughput: 0: 1034.6. Samples: 2299996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:48:25,449][00176] Avg episode reward: [(0, '20.419')] +[2023-02-24 15:48:29,503][10350] Updated weights for policy 0, policy_version 2250 (0.0012) +[2023-02-24 15:48:30,449][00176] Fps is (10 sec: 4095.2, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 9216000. Throughput: 0: 1023.2. Samples: 2303036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:48:30,453][00176] Avg episode reward: [(0, '20.873')] +[2023-02-24 15:48:30,467][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002250_9216000.pth... 
+[2023-02-24 15:48:30,632][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002025_8294400.pth +[2023-02-24 15:48:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3804.4). Total num frames: 9232384. Throughput: 0: 957.4. Samples: 2307450. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:48:35,453][00176] Avg episode reward: [(0, '21.148')] +[2023-02-24 15:48:40,282][10350] Updated weights for policy 0, policy_version 2260 (0.0027) +[2023-02-24 15:48:40,447][00176] Fps is (10 sec: 4096.8, 60 sec: 4027.7, 300 sec: 3846.1). Total num frames: 9256960. Throughput: 0: 995.3. Samples: 2313722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:48:40,455][00176] Avg episode reward: [(0, '21.700')] +[2023-02-24 15:48:45,447][00176] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3860.0). Total num frames: 9277440. Throughput: 0: 1018.7. Samples: 2317206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:48:45,450][00176] Avg episode reward: [(0, '22.385')] +[2023-02-24 15:48:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 9293824. Throughput: 0: 989.2. Samples: 2323084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:48:50,449][00176] Avg episode reward: [(0, '23.205')] +[2023-02-24 15:48:51,098][10350] Updated weights for policy 0, policy_version 2270 (0.0013) +[2023-02-24 15:48:55,447][00176] Fps is (10 sec: 3276.9, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 9310208. Throughput: 0: 946.9. Samples: 2327638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:48:55,449][00176] Avg episode reward: [(0, '22.796')] +[2023-02-24 15:49:00,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 9334784. Throughput: 0: 977.0. Samples: 2331180. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:49:00,455][00176] Avg episode reward: [(0, '22.602')] +[2023-02-24 15:49:00,959][10350] Updated weights for policy 0, policy_version 2280 (0.0020) +[2023-02-24 15:49:05,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 9355264. Throughput: 0: 1011.2. Samples: 2338204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:49:05,449][00176] Avg episode reward: [(0, '23.241')] +[2023-02-24 15:49:10,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 9371648. Throughput: 0: 959.5. Samples: 2343174. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-24 15:49:10,457][00176] Avg episode reward: [(0, '23.061')] +[2023-02-24 15:49:12,519][10350] Updated weights for policy 0, policy_version 2290 (0.0018) +[2023-02-24 15:49:15,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 9392128. Throughput: 0: 943.0. Samples: 2345470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:49:15,455][00176] Avg episode reward: [(0, '22.023')] +[2023-02-24 15:49:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3860.0). Total num frames: 9416704. Throughput: 0: 999.2. Samples: 2352416. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-24 15:49:20,454][00176] Avg episode reward: [(0, '22.200')] +[2023-02-24 15:49:21,360][10350] Updated weights for policy 0, policy_version 2300 (0.0029) +[2023-02-24 15:49:25,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 9437184. Throughput: 0: 1009.7. Samples: 2359160. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:49:25,450][00176] Avg episode reward: [(0, '22.111')] +[2023-02-24 15:49:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3832.2). Total num frames: 9449472. Throughput: 0: 982.2. Samples: 2361404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:49:30,449][00176] Avg episode reward: [(0, '21.794')] +[2023-02-24 15:49:33,040][10350] Updated weights for policy 0, policy_version 2310 (0.0017) +[2023-02-24 15:49:35,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 9469952. Throughput: 0: 966.3. Samples: 2366568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:49:35,452][00176] Avg episode reward: [(0, '21.789')] +[2023-02-24 15:49:40,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 9494528. Throughput: 0: 1027.4. Samples: 2373872. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-24 15:49:40,453][00176] Avg episode reward: [(0, '21.855')] +[2023-02-24 15:49:41,704][10350] Updated weights for policy 0, policy_version 2320 (0.0012) +[2023-02-24 15:49:45,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 9510912. Throughput: 0: 1020.0. Samples: 2377078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:49:45,449][00176] Avg episode reward: [(0, '22.830')] +[2023-02-24 15:49:50,448][00176] Fps is (10 sec: 3276.5, 60 sec: 3891.1, 300 sec: 3832.2). Total num frames: 9527296. Throughput: 0: 957.8. Samples: 2381308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:49:50,454][00176] Avg episode reward: [(0, '22.800')] +[2023-02-24 15:49:54,031][10350] Updated weights for policy 0, policy_version 2330 (0.0014) +[2023-02-24 15:49:55,447][00176] Fps is (10 sec: 3686.2, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 9547776. Throughput: 0: 969.0. Samples: 2386778. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-24 15:49:55,449][00176] Avg episode reward: [(0, '22.547')] +[2023-02-24 15:50:00,447][00176] Fps is (10 sec: 4506.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 9572352. Throughput: 0: 993.2. Samples: 2390162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:50:00,449][00176] Avg episode reward: [(0, '23.268')] +[2023-02-24 15:50:04,444][10350] Updated weights for policy 0, policy_version 2340 (0.0014) +[2023-02-24 15:50:05,448][00176] Fps is (10 sec: 3686.1, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 9584640. Throughput: 0: 967.5. Samples: 2395956. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) +[2023-02-24 15:50:05,450][00176] Avg episode reward: [(0, '21.232')] +[2023-02-24 15:50:10,447][00176] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 9601024. Throughput: 0: 912.9. Samples: 2400242. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:50:10,456][00176] Avg episode reward: [(0, '21.579')] +[2023-02-24 15:50:15,330][10350] Updated weights for policy 0, policy_version 2350 (0.0011) +[2023-02-24 15:50:15,447][00176] Fps is (10 sec: 4096.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 9625600. Throughput: 0: 938.0. Samples: 2403616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:50:15,449][00176] Avg episode reward: [(0, '22.401')] +[2023-02-24 15:50:20,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 9646080. Throughput: 0: 983.1. Samples: 2410806. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-24 15:50:20,449][00176] Avg episode reward: [(0, '22.420')] +[2023-02-24 15:50:25,449][00176] Fps is (10 sec: 3685.5, 60 sec: 3754.5, 300 sec: 3846.0). Total num frames: 9662464. Throughput: 0: 938.7. Samples: 2416116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:50:25,456][00176] Avg episode reward: [(0, '21.895')] +[2023-02-24 15:50:25,937][10350] Updated weights for policy 0, policy_version 2360 (0.0016) +[2023-02-24 15:50:30,447][00176] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 9678848. Throughput: 0: 918.7. Samples: 2418418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:50:30,455][00176] Avg episode reward: [(0, '22.123')] +[2023-02-24 15:50:30,476][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002364_9682944.pth... +[2023-02-24 15:50:30,601][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002138_8757248.pth +[2023-02-24 15:50:35,447][00176] Fps is (10 sec: 4097.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 9703424. Throughput: 0: 969.4. Samples: 2424932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:50:35,449][00176] Avg episode reward: [(0, '19.803')] +[2023-02-24 15:50:35,634][10350] Updated weights for policy 0, policy_version 2370 (0.0021) +[2023-02-24 15:50:40,447][00176] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 9728000. Throughput: 0: 1004.9. Samples: 2431996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:50:40,450][00176] Avg episode reward: [(0, '18.273')] +[2023-02-24 15:50:45,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 9740288. Throughput: 0: 981.0. Samples: 2434306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:50:45,456][00176] Avg episode reward: [(0, '18.393')] +[2023-02-24 15:50:46,788][10350] Updated weights for policy 0, policy_version 2380 (0.0011) +[2023-02-24 15:50:50,447][00176] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 9760768. Throughput: 0: 961.2. Samples: 2439210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:50:50,449][00176] Avg episode reward: [(0, '19.665')] +[2023-02-24 15:50:55,447][00176] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.8). Total num frames: 9785344. Throughput: 0: 1027.4. Samples: 2446474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:50:55,449][00176] Avg episode reward: [(0, '20.621')] +[2023-02-24 15:50:55,762][10350] Updated weights for policy 0, policy_version 2390 (0.0018) +[2023-02-24 15:51:00,452][00176] Fps is (10 sec: 4503.5, 60 sec: 3890.9, 300 sec: 3901.5). Total num frames: 9805824. Throughput: 0: 1031.0. Samples: 2450018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:51:00,454][00176] Avg episode reward: [(0, '20.705')] +[2023-02-24 15:51:05,447][00176] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 9822208. Throughput: 0: 982.4. Samples: 2455016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:51:05,457][00176] Avg episode reward: [(0, '21.493')] +[2023-02-24 15:51:07,483][10350] Updated weights for policy 0, policy_version 2400 (0.0015) +[2023-02-24 15:51:10,447][00176] Fps is (10 sec: 3688.2, 60 sec: 4027.7, 300 sec: 3887.8). Total num frames: 9842688. Throughput: 0: 986.8. Samples: 2460518. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:51:10,456][00176] Avg episode reward: [(0, '20.776')] +[2023-02-24 15:51:15,447][00176] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 9863168. Throughput: 0: 1009.1. Samples: 2463828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:51:15,448][00176] Avg episode reward: [(0, '22.349')] +[2023-02-24 15:51:16,290][10350] Updated weights for policy 0, policy_version 2410 (0.0013) +[2023-02-24 15:51:20,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 9883648. Throughput: 0: 1011.2. Samples: 2470436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:51:20,453][00176] Avg episode reward: [(0, '24.572')] +[2023-02-24 15:51:25,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3873.8). Total num frames: 9900032. Throughput: 0: 949.8. Samples: 2474736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:51:25,452][00176] Avg episode reward: [(0, '24.008')] +[2023-02-24 15:51:28,566][10350] Updated weights for policy 0, policy_version 2420 (0.0017) +[2023-02-24 15:51:30,447][00176] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3887.7). Total num frames: 9920512. Throughput: 0: 958.0. Samples: 2477418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-24 15:51:30,449][00176] Avg episode reward: [(0, '25.073')] +[2023-02-24 15:51:35,447][00176] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 9940992. Throughput: 0: 995.2. Samples: 2483992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:51:35,451][00176] Avg episode reward: [(0, '25.775')] +[2023-02-24 15:51:38,770][10350] Updated weights for policy 0, policy_version 2430 (0.0012) +[2023-02-24 15:51:40,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 9957376. Throughput: 0: 954.3. Samples: 2489418. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:51:40,455][00176] Avg episode reward: [(0, '26.642')] +[2023-02-24 15:51:45,449][00176] Fps is (10 sec: 2866.6, 60 sec: 3822.8, 300 sec: 3846.0). Total num frames: 9969664. Throughput: 0: 922.1. Samples: 2491512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-24 15:51:45,455][00176] Avg episode reward: [(0, '25.594')] +[2023-02-24 15:51:50,154][10350] Updated weights for policy 0, policy_version 2440 (0.0018) +[2023-02-24 15:51:50,447][00176] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 9994240. Throughput: 0: 939.6. Samples: 2497296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-24 15:51:50,452][00176] Avg episode reward: [(0, '24.406')] +[2023-02-24 15:51:52,717][00176] Component Batcher_0 stopped! +[2023-02-24 15:51:52,714][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth... +[2023-02-24 15:51:52,717][10336] Stopping Batcher_0... +[2023-02-24 15:51:52,726][10336] Loop batcher_evt_loop terminating... +[2023-02-24 15:51:52,768][10350] Weights refcount: 2 0 +[2023-02-24 15:51:52,772][00176] Component InferenceWorker_p0-w0 stopped! +[2023-02-24 15:51:52,771][10350] Stopping InferenceWorker_p0-w0... +[2023-02-24 15:51:52,779][10350] Loop inference_proc0-0_evt_loop terminating... +[2023-02-24 15:51:52,792][00176] Component RolloutWorker_w3 stopped! +[2023-02-24 15:51:52,791][10355] Stopping RolloutWorker_w3... +[2023-02-24 15:51:52,799][10355] Loop rollout_proc3_evt_loop terminating... 
+[2023-02-24 15:51:52,826][00176] Component RolloutWorker_w4 stopped!
+[2023-02-24 15:51:52,828][00176] Component RolloutWorker_w1 stopped!
+[2023-02-24 15:51:52,826][10352] Stopping RolloutWorker_w1...
+[2023-02-24 15:51:52,834][10352] Loop rollout_proc1_evt_loop terminating...
+[2023-02-24 15:51:52,833][10354] Stopping RolloutWorker_w4...
+[2023-02-24 15:51:52,835][10354] Loop rollout_proc4_evt_loop terminating...
+[2023-02-24 15:51:52,841][10357] Stopping RolloutWorker_w7...
+[2023-02-24 15:51:52,842][10357] Loop rollout_proc7_evt_loop terminating...
+[2023-02-24 15:51:52,841][00176] Component RolloutWorker_w7 stopped!
+[2023-02-24 15:51:52,854][00176] Component RolloutWorker_w0 stopped!
+[2023-02-24 15:51:52,856][10351] Stopping RolloutWorker_w0...
+[2023-02-24 15:51:52,859][00176] Component RolloutWorker_w2 stopped!
+[2023-02-24 15:51:52,861][10353] Stopping RolloutWorker_w2...
+[2023-02-24 15:51:52,862][10353] Loop rollout_proc2_evt_loop terminating...
+[2023-02-24 15:51:52,869][00176] Component RolloutWorker_w6 stopped!
+[2023-02-24 15:51:52,873][10358] Stopping RolloutWorker_w6...
+[2023-02-24 15:51:52,877][10351] Loop rollout_proc0_evt_loop terminating...
+[2023-02-24 15:51:52,881][10358] Loop rollout_proc6_evt_loop terminating...
+[2023-02-24 15:51:52,885][10356] Stopping RolloutWorker_w5...
+[2023-02-24 15:51:52,885][00176] Component RolloutWorker_w5 stopped!
+[2023-02-24 15:51:52,886][10356] Loop rollout_proc5_evt_loop terminating...
+[2023-02-24 15:51:52,922][10336] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002250_9216000.pth
+[2023-02-24 15:51:52,938][10336] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth...
+[2023-02-24 15:51:53,107][00176] Component LearnerWorker_p0 stopped!
+[2023-02-24 15:51:53,116][00176] Waiting for process learner_proc0 to stop...
+[2023-02-24 15:51:53,122][10336] Stopping LearnerWorker_p0...
+[2023-02-24 15:51:53,123][10336] Loop learner_proc0_evt_loop terminating...
+[2023-02-24 15:51:55,102][00176] Waiting for process inference_proc0-0 to join...
+[2023-02-24 15:51:55,448][00176] Waiting for process rollout_proc0 to join...
+[2023-02-24 15:51:56,057][00176] Waiting for process rollout_proc1 to join...
+[2023-02-24 15:51:56,062][00176] Waiting for process rollout_proc2 to join...
+[2023-02-24 15:51:56,064][00176] Waiting for process rollout_proc3 to join...
+[2023-02-24 15:51:56,066][00176] Waiting for process rollout_proc4 to join...
+[2023-02-24 15:51:56,070][00176] Waiting for process rollout_proc5 to join...
+[2023-02-24 15:51:56,071][00176] Waiting for process rollout_proc6 to join...
+[2023-02-24 15:51:56,079][00176] Waiting for process rollout_proc7 to join...
+[2023-02-24 15:51:56,081][00176] Batcher 0 profile tree view:
+batching: 61.9047, releasing_batches: 0.0641
+[2023-02-24 15:51:56,083][00176] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+  wait_policy_total: 1324.5226
+update_model: 19.0534
+  weight_update: 0.0023
+one_step: 0.0075
+  handle_policy_step: 1246.7841
+  deserialize: 36.3188, stack: 7.2857, obs_to_device_normalize: 280.5484, forward: 596.0528, send_messages: 64.8147
+  prepare_outputs: 199.0643
+  to_cpu: 122.1314
+[2023-02-24 15:51:56,084][00176] Learner 0 profile tree view:
+misc: 0.0164, prepare_batch: 33.0225
+train: 184.7091
+  epoch_init: 0.0184, minibatch_init: 0.0190, losses_postprocess: 1.5733, kl_divergence: 1.4860, after_optimizer: 81.6772
+  calculate_losses: 65.3079
+  losses_init: 0.0188, forward_head: 4.0963, bptt_initial: 43.1596, tail: 2.6864, advantages_returns: 0.7219, losses: 8.4585
+  bptt: 5.3761
+  bptt_forward_core: 5.1987
+  update: 33.0925
+  clip: 3.4745
+[2023-02-24 15:51:56,086][00176] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.8548, enqueue_policy_requests: 358.1753, env_step: 2037.5542, overhead: 53.3986, complete_rollouts: 17.0397
+save_policy_outputs: 49.7973
+  split_output_tensors: 24.1047
+[2023-02-24 15:51:56,087][00176] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.7938, enqueue_policy_requests: 368.7011, env_step: 2029.1945, overhead: 50.1468, complete_rollouts: 17.2880
+save_policy_outputs: 49.5113
+  split_output_tensors: 23.7072
+[2023-02-24 15:51:56,090][00176] Loop Runner_EvtLoop terminating...
+[2023-02-24 15:51:56,092][00176] Runner profile tree view:
+main_loop: 2718.0707
+[2023-02-24 15:51:56,093][00176] Collected {0: 10006528}, FPS: 3681.5
+[2023-02-24 15:51:56,333][00176] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-24 15:51:56,337][00176] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-24 15:51:56,340][00176] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-24 15:51:56,341][00176] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-24 15:51:56,343][00176] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-24 15:51:56,345][00176] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-24 15:51:56,347][00176] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-24 15:51:56,348][00176] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-24 15:51:56,349][00176] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-24 15:51:56,351][00176] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-24 15:51:56,352][00176] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-24 15:51:56,353][00176] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-24 15:51:56,354][00176] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-24 15:51:56,355][00176] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-24 15:51:56,357][00176] Using frameskip 1 and render_action_repeat=4 for evaluation +[2023-02-24 15:51:56,389][00176] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-24 15:51:56,392][00176] RunningMeanStd input shape: (3, 72, 128) +[2023-02-24 15:51:56,395][00176] RunningMeanStd input shape: (1,) +[2023-02-24 15:51:56,417][00176] ConvEncoder: input_channels=3 +[2023-02-24 15:51:57,242][00176] Conv encoder output size: 512 +[2023-02-24 15:51:57,247][00176] Policy head output size: 512 +[2023-02-24 15:52:00,038][00176] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth... +[2023-02-24 15:52:01,395][00176] Num frames 100... +[2023-02-24 15:52:01,507][00176] Num frames 200... +[2023-02-24 15:52:01,619][00176] Num frames 300... +[2023-02-24 15:52:01,743][00176] Num frames 400... +[2023-02-24 15:52:01,857][00176] Num frames 500... +[2023-02-24 15:52:01,971][00176] Num frames 600... +[2023-02-24 15:52:02,088][00176] Num frames 700... +[2023-02-24 15:52:02,209][00176] Num frames 800... +[2023-02-24 15:52:02,273][00176] Avg episode rewards: #0: 15.050, true rewards: #0: 8.050 +[2023-02-24 15:52:02,275][00176] Avg episode reward: 15.050, avg true_objective: 8.050 +[2023-02-24 15:52:02,383][00176] Num frames 900... +[2023-02-24 15:52:02,505][00176] Num frames 1000... +[2023-02-24 15:52:02,627][00176] Num frames 1100... +[2023-02-24 15:52:02,745][00176] Num frames 1200... +[2023-02-24 15:52:02,865][00176] Num frames 1300... +[2023-02-24 15:52:02,977][00176] Num frames 1400... +[2023-02-24 15:52:03,083][00176] Avg episode rewards: #0: 12.725, true rewards: #0: 7.225 +[2023-02-24 15:52:03,087][00176] Avg episode reward: 12.725, avg true_objective: 7.225 +[2023-02-24 15:52:03,153][00176] Num frames 1500... +[2023-02-24 15:52:03,266][00176] Num frames 1600... +[2023-02-24 15:52:03,382][00176] Num frames 1700... +[2023-02-24 15:52:03,492][00176] Num frames 1800... +[2023-02-24 15:52:03,604][00176] Num frames 1900... +[2023-02-24 15:52:03,726][00176] Num frames 2000... +[2023-02-24 15:52:03,838][00176] Num frames 2100... +[2023-02-24 15:52:03,948][00176] Num frames 2200... +[2023-02-24 15:52:04,066][00176] Num frames 2300... +[2023-02-24 15:52:04,177][00176] Num frames 2400... +[2023-02-24 15:52:04,296][00176] Num frames 2500... +[2023-02-24 15:52:04,418][00176] Num frames 2600... +[2023-02-24 15:52:04,528][00176] Num frames 2700... +[2023-02-24 15:52:04,649][00176] Num frames 2800... +[2023-02-24 15:52:04,784][00176] Num frames 2900... +[2023-02-24 15:52:04,900][00176] Num frames 3000... +[2023-02-24 15:52:05,028][00176] Num frames 3100... +[2023-02-24 15:52:05,163][00176] Avg episode rewards: #0: 22.570, true rewards: #0: 10.570 +[2023-02-24 15:52:05,165][00176] Avg episode reward: 22.570, avg true_objective: 10.570 +[2023-02-24 15:52:05,200][00176] Num frames 3200... +[2023-02-24 15:52:05,307][00176] Num frames 3300... +[2023-02-24 15:52:05,422][00176] Num frames 3400... +[2023-02-24 15:52:05,532][00176] Num frames 3500... +[2023-02-24 15:52:05,645][00176] Num frames 3600... +[2023-02-24 15:52:05,768][00176] Num frames 3700... +[2023-02-24 15:52:05,892][00176] Num frames 3800... +[2023-02-24 15:52:06,010][00176] Num frames 3900... +[2023-02-24 15:52:06,136][00176] Num frames 4000... +[2023-02-24 15:52:06,238][00176] Avg episode rewards: #0: 21.337, true rewards: #0: 10.087 +[2023-02-24 15:52:06,241][00176] Avg episode reward: 21.337, avg true_objective: 10.087 +[2023-02-24 15:52:06,328][00176] Num frames 4100... 
+[2023-02-24 15:52:06,448][00176] Num frames 4200... +[2023-02-24 15:52:06,566][00176] Num frames 4300... +[2023-02-24 15:52:06,688][00176] Num frames 4400... +[2023-02-24 15:52:06,809][00176] Num frames 4500... +[2023-02-24 15:52:06,929][00176] Num frames 4600... +[2023-02-24 15:52:07,042][00176] Num frames 4700... +[2023-02-24 15:52:07,153][00176] Num frames 4800... +[2023-02-24 15:52:07,273][00176] Num frames 4900... +[2023-02-24 15:52:07,369][00176] Avg episode rewards: #0: 21.068, true rewards: #0: 9.868 +[2023-02-24 15:52:07,372][00176] Avg episode reward: 21.068, avg true_objective: 9.868 +[2023-02-24 15:52:07,450][00176] Num frames 5000... +[2023-02-24 15:52:07,564][00176] Num frames 5100... +[2023-02-24 15:52:07,676][00176] Num frames 5200... +[2023-02-24 15:52:07,800][00176] Num frames 5300... +[2023-02-24 15:52:07,899][00176] Avg episode rewards: #0: 19.062, true rewards: #0: 8.895 +[2023-02-24 15:52:07,901][00176] Avg episode reward: 19.062, avg true_objective: 8.895 +[2023-02-24 15:52:07,976][00176] Num frames 5400... +[2023-02-24 15:52:08,097][00176] Num frames 5500... +[2023-02-24 15:52:08,210][00176] Num frames 5600... +[2023-02-24 15:52:08,329][00176] Num frames 5700... +[2023-02-24 15:52:08,449][00176] Num frames 5800... +[2023-02-24 15:52:08,563][00176] Num frames 5900... +[2023-02-24 15:52:08,702][00176] Num frames 6000... +[2023-02-24 15:52:08,873][00176] Num frames 6100... +[2023-02-24 15:52:09,033][00176] Num frames 6200... +[2023-02-24 15:52:09,196][00176] Num frames 6300... +[2023-02-24 15:52:09,350][00176] Num frames 6400... +[2023-02-24 15:52:09,507][00176] Num frames 6500... +[2023-02-24 15:52:09,671][00176] Num frames 6600... +[2023-02-24 15:52:09,831][00176] Num frames 6700... +[2023-02-24 15:52:09,993][00176] Num frames 6800... +[2023-02-24 15:52:10,154][00176] Num frames 6900... +[2023-02-24 15:52:10,341][00176] Avg episode rewards: #0: 21.691, true rewards: #0: 9.977 +[2023-02-24 15:52:10,344][00176] Avg episode reward: 21.691, avg true_objective: 9.977 +[2023-02-24 15:52:10,380][00176] Num frames 7000... +[2023-02-24 15:52:10,545][00176] Num frames 7100... +[2023-02-24 15:52:10,700][00176] Num frames 7200... +[2023-02-24 15:52:10,864][00176] Num frames 7300... +[2023-02-24 15:52:11,035][00176] Num frames 7400... +[2023-02-24 15:52:11,199][00176] Num frames 7500... +[2023-02-24 15:52:11,361][00176] Num frames 7600... +[2023-02-24 15:52:11,563][00176] Avg episode rewards: #0: 20.735, true rewards: #0: 9.610 +[2023-02-24 15:52:11,566][00176] Avg episode reward: 20.735, avg true_objective: 9.610 +[2023-02-24 15:52:11,599][00176] Num frames 7700... +[2023-02-24 15:52:11,713][00176] Num frames 7800... +[2023-02-24 15:52:11,832][00176] Num frames 7900... +[2023-02-24 15:52:11,950][00176] Num frames 8000... +[2023-02-24 15:52:12,067][00176] Num frames 8100... +[2023-02-24 15:52:12,178][00176] Num frames 8200... +[2023-02-24 15:52:12,297][00176] Num frames 8300... +[2023-02-24 15:52:12,406][00176] Num frames 8400... +[2023-02-24 15:52:12,523][00176] Num frames 8500... +[2023-02-24 15:52:12,637][00176] Num frames 8600... +[2023-02-24 15:52:12,750][00176] Num frames 8700... +[2023-02-24 15:52:12,867][00176] Num frames 8800... +[2023-02-24 15:52:12,984][00176] Num frames 8900... +[2023-02-24 15:52:13,116][00176] Avg episode rewards: #0: 21.853, true rewards: #0: 9.964 +[2023-02-24 15:52:13,118][00176] Avg episode reward: 21.853, avg true_objective: 9.964 +[2023-02-24 15:52:13,159][00176] Num frames 9000... +[2023-02-24 15:52:13,272][00176] Num frames 9100... 
+[2023-02-24 15:52:13,384][00176] Num frames 9200... +[2023-02-24 15:52:13,499][00176] Num frames 9300... +[2023-02-24 15:52:13,633][00176] Num frames 9400... +[2023-02-24 15:52:13,748][00176] Num frames 9500... +[2023-02-24 15:52:13,869][00176] Num frames 9600... +[2023-02-24 15:52:13,995][00176] Num frames 9700... +[2023-02-24 15:52:14,110][00176] Num frames 9800... +[2023-02-24 15:52:14,232][00176] Num frames 9900... +[2023-02-24 15:52:14,348][00176] Num frames 10000... +[2023-02-24 15:52:14,468][00176] Num frames 10100... +[2023-02-24 15:52:14,626][00176] Avg episode rewards: #0: 22.384, true rewards: #0: 10.184 +[2023-02-24 15:52:14,629][00176] Avg episode reward: 22.384, avg true_objective: 10.184 +[2023-02-24 15:53:14,955][00176] Replay video saved to /content/train_dir/default_experiment/replay.mp4! +[2023-02-24 15:53:15,645][00176] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json +[2023-02-24 15:53:15,647][00176] Overriding arg 'num_workers' with value 1 passed from command line +[2023-02-24 15:53:15,650][00176] Adding new argument 'no_render'=True that is not in the saved config file! +[2023-02-24 15:53:15,652][00176] Adding new argument 'save_video'=True that is not in the saved config file! +[2023-02-24 15:53:15,655][00176] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2023-02-24 15:53:15,660][00176] Adding new argument 'video_name'=None that is not in the saved config file! +[2023-02-24 15:53:15,662][00176] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! +[2023-02-24 15:53:15,664][00176] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2023-02-24 15:53:15,665][00176] Adding new argument 'push_to_hub'=True that is not in the saved config file! +[2023-02-24 15:53:15,666][00176] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! +[2023-02-24 15:53:15,669][00176] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2023-02-24 15:53:15,670][00176] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2023-02-24 15:53:15,671][00176] Adding new argument 'train_script'=None that is not in the saved config file! +[2023-02-24 15:53:15,672][00176] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2023-02-24 15:53:15,673][00176] Using frameskip 1 and render_action_repeat=4 for evaluation +[2023-02-24 15:53:15,708][00176] RunningMeanStd input shape: (3, 72, 128) +[2023-02-24 15:53:15,710][00176] RunningMeanStd input shape: (1,) +[2023-02-24 15:53:15,732][00176] ConvEncoder: input_channels=3 +[2023-02-24 15:53:15,806][00176] Conv encoder output size: 512 +[2023-02-24 15:53:15,810][00176] Policy head output size: 512 +[2023-02-24 15:53:15,840][00176] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth... +[2023-02-24 15:53:16,553][00176] Num frames 100... +[2023-02-24 15:53:16,709][00176] Num frames 200... +[2023-02-24 15:53:16,867][00176] Num frames 300... +[2023-02-24 15:53:17,018][00176] Num frames 400... +[2023-02-24 15:53:17,174][00176] Num frames 500... +[2023-02-24 15:53:17,299][00176] Avg episode rewards: #0: 7.440, true rewards: #0: 5.440 +[2023-02-24 15:53:17,301][00176] Avg episode reward: 7.440, avg true_objective: 5.440 +[2023-02-24 15:53:17,422][00176] Num frames 600... 
+[2023-02-24 15:53:17,611][00176] Num frames 700... +[2023-02-24 15:53:17,785][00176] Num frames 800... +[2023-02-24 15:53:17,964][00176] Num frames 900... +[2023-02-24 15:53:18,128][00176] Num frames 1000... +[2023-02-24 15:53:18,291][00176] Num frames 1100... +[2023-02-24 15:53:18,467][00176] Num frames 1200... +[2023-02-24 15:53:18,656][00176] Num frames 1300... +[2023-02-24 15:53:18,860][00176] Num frames 1400... +[2023-02-24 15:53:19,046][00176] Num frames 1500... +[2023-02-24 15:53:19,223][00176] Avg episode rewards: #0: 15.860, true rewards: #0: 7.860 +[2023-02-24 15:53:19,226][00176] Avg episode reward: 15.860, avg true_objective: 7.860 +[2023-02-24 15:53:19,284][00176] Num frames 1600... +[2023-02-24 15:53:19,469][00176] Num frames 1700... +[2023-02-24 15:53:19,641][00176] Num frames 1800... +[2023-02-24 15:53:19,813][00176] Num frames 1900... +[2023-02-24 15:53:19,990][00176] Num frames 2000... +[2023-02-24 15:53:20,169][00176] Num frames 2100... +[2023-02-24 15:53:20,355][00176] Num frames 2200... +[2023-02-24 15:53:20,553][00176] Num frames 2300... +[2023-02-24 15:53:20,729][00176] Num frames 2400... +[2023-02-24 15:53:20,895][00176] Num frames 2500... +[2023-02-24 15:53:21,070][00176] Num frames 2600... +[2023-02-24 15:53:21,250][00176] Num frames 2700... +[2023-02-24 15:53:21,435][00176] Num frames 2800... +[2023-02-24 15:53:21,631][00176] Num frames 2900... +[2023-02-24 15:53:21,717][00176] Avg episode rewards: #0: 22.053, true rewards: #0: 9.720 +[2023-02-24 15:53:21,719][00176] Avg episode reward: 22.053, avg true_objective: 9.720 +[2023-02-24 15:53:21,875][00176] Num frames 3000... +[2023-02-24 15:53:22,037][00176] Num frames 3100... +[2023-02-24 15:53:22,217][00176] Num frames 3200... +[2023-02-24 15:53:22,430][00176] Num frames 3300... +[2023-02-24 15:53:22,645][00176] Num frames 3400... +[2023-02-24 15:53:22,839][00176] Num frames 3500... +[2023-02-24 15:53:23,028][00176] Num frames 3600... +[2023-02-24 15:53:23,229][00176] Num frames 3700... +[2023-02-24 15:53:23,353][00176] Avg episode rewards: #0: 20.850, true rewards: #0: 9.350 +[2023-02-24 15:53:23,355][00176] Avg episode reward: 20.850, avg true_objective: 9.350 +[2023-02-24 15:53:23,460][00176] Num frames 3800... +[2023-02-24 15:53:23,635][00176] Num frames 3900... +[2023-02-24 15:53:23,881][00176] Avg episode rewards: #0: 17.192, true rewards: #0: 7.992 +[2023-02-24 15:53:23,884][00176] Avg episode reward: 17.192, avg true_objective: 7.992 +[2023-02-24 15:53:23,894][00176] Num frames 4000... +[2023-02-24 15:53:24,064][00176] Num frames 4100... +[2023-02-24 15:53:24,234][00176] Num frames 4200... +[2023-02-24 15:53:24,410][00176] Num frames 4300... +[2023-02-24 15:53:24,594][00176] Num frames 4400... +[2023-02-24 15:53:24,788][00176] Num frames 4500... +[2023-02-24 15:53:24,963][00176] Num frames 4600... +[2023-02-24 15:53:25,118][00176] Num frames 4700... +[2023-02-24 15:53:25,279][00176] Num frames 4800... +[2023-02-24 15:53:25,439][00176] Num frames 4900... +[2023-02-24 15:53:25,612][00176] Num frames 5000... +[2023-02-24 15:53:25,766][00176] Avg episode rewards: #0: 18.253, true rewards: #0: 8.420 +[2023-02-24 15:53:25,769][00176] Avg episode reward: 18.253, avg true_objective: 8.420 +[2023-02-24 15:53:25,847][00176] Num frames 5100... +[2023-02-24 15:53:26,027][00176] Num frames 5200... +[2023-02-24 15:53:26,211][00176] Num frames 5300... +[2023-02-24 15:53:26,378][00176] Num frames 5400... +[2023-02-24 15:53:26,492][00176] Num frames 5500... +[2023-02-24 15:53:26,607][00176] Num frames 5600... 
+[2023-02-24 15:53:26,721][00176] Num frames 5700... +[2023-02-24 15:53:26,834][00176] Num frames 5800... +[2023-02-24 15:53:26,952][00176] Num frames 5900... +[2023-02-24 15:53:27,068][00176] Num frames 6000... +[2023-02-24 15:53:27,183][00176] Num frames 6100... +[2023-02-24 15:53:27,297][00176] Num frames 6200... +[2023-02-24 15:53:27,411][00176] Num frames 6300... +[2023-02-24 15:53:27,526][00176] Num frames 6400... +[2023-02-24 15:53:27,644][00176] Num frames 6500... +[2023-02-24 15:53:27,756][00176] Num frames 6600... +[2023-02-24 15:53:27,870][00176] Num frames 6700... +[2023-02-24 15:53:27,986][00176] Num frames 6800... +[2023-02-24 15:53:28,125][00176] Avg episode rewards: #0: 23.251, true rewards: #0: 9.823 +[2023-02-24 15:53:28,127][00176] Avg episode reward: 23.251, avg true_objective: 9.823 +[2023-02-24 15:53:28,160][00176] Num frames 6900... +[2023-02-24 15:53:28,274][00176] Num frames 7000... +[2023-02-24 15:53:28,383][00176] Num frames 7100... +[2023-02-24 15:53:28,492][00176] Num frames 7200... +[2023-02-24 15:53:28,607][00176] Num frames 7300... +[2023-02-24 15:53:28,727][00176] Num frames 7400... +[2023-02-24 15:53:28,842][00176] Num frames 7500... +[2023-02-24 15:53:28,954][00176] Num frames 7600... +[2023-02-24 15:53:29,074][00176] Num frames 7700... +[2023-02-24 15:53:29,187][00176] Num frames 7800... +[2023-02-24 15:53:29,300][00176] Num frames 7900... +[2023-02-24 15:53:29,411][00176] Num frames 8000... +[2023-02-24 15:53:29,527][00176] Num frames 8100... +[2023-02-24 15:53:29,649][00176] Num frames 8200... +[2023-02-24 15:53:29,769][00176] Num frames 8300... +[2023-02-24 15:53:29,887][00176] Num frames 8400... +[2023-02-24 15:53:30,000][00176] Num frames 8500... +[2023-02-24 15:53:30,132][00176] Num frames 8600... +[2023-02-24 15:53:30,289][00176] Num frames 8700... +[2023-02-24 15:53:30,451][00176] Num frames 8800... +[2023-02-24 15:53:30,615][00176] Num frames 8900... +[2023-02-24 15:53:30,709][00176] Avg episode rewards: #0: 27.030, true rewards: #0: 11.155 +[2023-02-24 15:53:30,711][00176] Avg episode reward: 27.030, avg true_objective: 11.155 +[2023-02-24 15:53:30,855][00176] Num frames 9000... +[2023-02-24 15:53:31,014][00176] Num frames 9100... +[2023-02-24 15:53:31,178][00176] Num frames 9200... +[2023-02-24 15:53:31,353][00176] Num frames 9300... +[2023-02-24 15:53:31,529][00176] Num frames 9400... +[2023-02-24 15:53:31,718][00176] Num frames 9500... +[2023-02-24 15:53:31,905][00176] Num frames 9600... +[2023-02-24 15:53:32,080][00176] Num frames 9700... +[2023-02-24 15:53:32,252][00176] Num frames 9800... +[2023-02-24 15:53:32,426][00176] Num frames 9900... +[2023-02-24 15:53:32,607][00176] Num frames 10000... +[2023-02-24 15:53:32,796][00176] Num frames 10100... +[2023-02-24 15:53:32,989][00176] Num frames 10200... +[2023-02-24 15:53:33,170][00176] Num frames 10300... +[2023-02-24 15:53:33,333][00176] Num frames 10400... +[2023-02-24 15:53:33,502][00176] Num frames 10500... +[2023-02-24 15:53:33,720][00176] Avg episode rewards: #0: 28.994, true rewards: #0: 11.772 +[2023-02-24 15:53:33,723][00176] Avg episode reward: 28.994, avg true_objective: 11.772 +[2023-02-24 15:53:33,734][00176] Num frames 10600... +[2023-02-24 15:53:33,895][00176] Num frames 10700... +[2023-02-24 15:53:34,069][00176] Num frames 10800... +[2023-02-24 15:53:34,234][00176] Num frames 10900... +[2023-02-24 15:53:34,403][00176] Num frames 11000... +[2023-02-24 15:53:34,537][00176] Num frames 11100... +[2023-02-24 15:53:34,650][00176] Num frames 11200... 
+[2023-02-24 15:53:34,748][00176] Avg episode rewards: #0: 27.135, true rewards: #0: 11.235 +[2023-02-24 15:53:34,749][00176] Avg episode reward: 27.135, avg true_objective: 11.235 +[2023-02-24 15:54:38,715][00176] Replay video saved to /content/train_dir/default_experiment/replay.mp4! +[2023-02-24 16:00:49,305][00176] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json +[2023-02-24 16:00:49,308][00176] Overriding arg 'num_workers' with value 1 passed from command line +[2023-02-24 16:00:49,310][00176] Adding new argument 'no_render'=True that is not in the saved config file! +[2023-02-24 16:00:49,312][00176] Adding new argument 'save_video'=True that is not in the saved config file! +[2023-02-24 16:00:49,315][00176] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! +[2023-02-24 16:00:49,317][00176] Adding new argument 'video_name'=None that is not in the saved config file! +[2023-02-24 16:00:49,320][00176] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! +[2023-02-24 16:00:49,321][00176] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! +[2023-02-24 16:00:49,322][00176] Adding new argument 'push_to_hub'=True that is not in the saved config file! +[2023-02-24 16:00:49,326][00176] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! +[2023-02-24 16:00:49,327][00176] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2023-02-24 16:00:49,328][00176] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2023-02-24 16:00:49,329][00176] Adding new argument 'train_script'=None that is not in the saved config file! +[2023-02-24 16:00:49,330][00176] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2023-02-24 16:00:49,331][00176] Using frameskip 1 and render_action_repeat=4 for evaluation +[2023-02-24 16:00:49,361][00176] RunningMeanStd input shape: (3, 72, 128) +[2023-02-24 16:00:49,365][00176] RunningMeanStd input shape: (1,) +[2023-02-24 16:00:49,384][00176] ConvEncoder: input_channels=3 +[2023-02-24 16:00:49,442][00176] Conv encoder output size: 512 +[2023-02-24 16:00:49,445][00176] Policy head output size: 512 +[2023-02-24 16:00:49,473][00176] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002443_10006528.pth... +[2023-02-24 16:00:50,143][00176] Num frames 100... +[2023-02-24 16:00:50,297][00176] Num frames 200... +[2023-02-24 16:00:50,447][00176] Num frames 300... +[2023-02-24 16:00:50,772][00176] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840 +[2023-02-24 16:00:50,775][00176] Avg episode reward: 3.840, avg true_objective: 3.840 +[2023-02-24 16:00:50,839][00176] Num frames 400... +[2023-02-24 16:00:51,173][00176] Num frames 500... +[2023-02-24 16:00:51,465][00176] Num frames 600... +[2023-02-24 16:00:51,828][00176] Num frames 700... +[2023-02-24 16:00:52,142][00176] Num frames 800... +[2023-02-24 16:00:52,368][00176] Num frames 900... +[2023-02-24 16:00:52,578][00176] Num frames 1000... +[2023-02-24 16:00:52,779][00176] Num frames 1100... +[2023-02-24 16:00:53,017][00176] Num frames 1200... +[2023-02-24 16:00:53,205][00176] Num frames 1300... +[2023-02-24 16:00:53,421][00176] Num frames 1400... +[2023-02-24 16:00:53,654][00176] Num frames 1500... +[2023-02-24 16:00:53,833][00176] Num frames 1600... 
+[2023-02-24 16:00:54,208][00176] Num frames 1700... +[2023-02-24 16:00:54,434][00176] Num frames 1800... +[2023-02-24 16:00:54,607][00176] Num frames 1900... +[2023-02-24 16:00:54,895][00176] Num frames 2000... +[2023-02-24 16:00:55,104][00176] Num frames 2100... +[2023-02-24 16:00:55,225][00176] Num frames 2200... +[2023-02-24 16:00:55,335][00176] Num frames 2300... +[2023-02-24 16:00:55,432][00176] Avg episode rewards: #0: 28.179, true rewards: #0: 11.680 +[2023-02-24 16:00:55,434][00176] Avg episode reward: 28.179, avg true_objective: 11.680 +[2023-02-24 16:00:55,514][00176] Num frames 2400... +[2023-02-24 16:00:55,638][00176] Num frames 2500... +[2023-02-24 16:00:55,755][00176] Num frames 2600... +[2023-02-24 16:00:55,874][00176] Num frames 2700... +[2023-02-24 16:00:56,016][00176] Num frames 2800... +[2023-02-24 16:00:56,132][00176] Num frames 2900... +[2023-02-24 16:00:56,249][00176] Num frames 3000... +[2023-02-24 16:00:56,357][00176] Num frames 3100... +[2023-02-24 16:00:56,480][00176] Num frames 3200... +[2023-02-24 16:00:56,591][00176] Num frames 3300... +[2023-02-24 16:00:56,713][00176] Num frames 3400... +[2023-02-24 16:00:56,838][00176] Num frames 3500... +[2023-02-24 16:00:56,960][00176] Num frames 3600... +[2023-02-24 16:00:57,107][00176] Avg episode rewards: #0: 29.266, true rewards: #0: 12.267 +[2023-02-24 16:00:57,109][00176] Avg episode reward: 29.266, avg true_objective: 12.267 +[2023-02-24 16:00:57,136][00176] Num frames 3700... +[2023-02-24 16:00:57,256][00176] Num frames 3800... +[2023-02-24 16:00:57,370][00176] Num frames 3900... +[2023-02-24 16:00:57,484][00176] Num frames 4000... +[2023-02-24 16:00:57,613][00176] Num frames 4100... +[2023-02-24 16:00:57,737][00176] Num frames 4200... +[2023-02-24 16:00:57,864][00176] Num frames 4300... +[2023-02-24 16:00:57,990][00176] Num frames 4400... +[2023-02-24 16:00:58,122][00176] Num frames 4500... +[2023-02-24 16:00:58,249][00176] Num frames 4600... +[2023-02-24 16:00:58,368][00176] Num frames 4700... +[2023-02-24 16:00:58,489][00176] Num frames 4800... +[2023-02-24 16:00:58,614][00176] Num frames 4900... +[2023-02-24 16:00:58,735][00176] Num frames 5000... +[2023-02-24 16:00:58,858][00176] Num frames 5100... +[2023-02-24 16:00:58,985][00176] Num frames 5200... +[2023-02-24 16:00:59,106][00176] Num frames 5300... +[2023-02-24 16:00:59,230][00176] Num frames 5400... +[2023-02-24 16:00:59,352][00176] Num frames 5500... +[2023-02-24 16:00:59,468][00176] Num frames 5600... +[2023-02-24 16:00:59,600][00176] Num frames 5700... +[2023-02-24 16:00:59,748][00176] Avg episode rewards: #0: 35.450, true rewards: #0: 14.450 +[2023-02-24 16:00:59,750][00176] Avg episode reward: 35.450, avg true_objective: 14.450 +[2023-02-24 16:00:59,782][00176] Num frames 5800... +[2023-02-24 16:00:59,906][00176] Num frames 5900... +[2023-02-24 16:01:00,032][00176] Num frames 6000... +[2023-02-24 16:01:00,147][00176] Num frames 6100... +[2023-02-24 16:01:00,273][00176] Num frames 6200... +[2023-02-24 16:01:00,397][00176] Num frames 6300... +[2023-02-24 16:01:00,526][00176] Num frames 6400... +[2023-02-24 16:01:00,650][00176] Num frames 6500... +[2023-02-24 16:01:00,773][00176] Num frames 6600... +[2023-02-24 16:01:00,888][00176] Num frames 6700... +[2023-02-24 16:01:01,025][00176] Num frames 6800... +[2023-02-24 16:01:01,138][00176] Num frames 6900... +[2023-02-24 16:01:01,251][00176] Num frames 7000... +[2023-02-24 16:01:01,366][00176] Num frames 7100... +[2023-02-24 16:01:01,479][00176] Num frames 7200... 
+[2023-02-24 16:01:01,557][00176] Avg episode rewards: #0: 35.634, true rewards: #0: 14.434 +[2023-02-24 16:01:01,559][00176] Avg episode reward: 35.634, avg true_objective: 14.434 +[2023-02-24 16:01:01,658][00176] Num frames 7300... +[2023-02-24 16:01:01,788][00176] Num frames 7400... +[2023-02-24 16:01:01,914][00176] Num frames 7500... +[2023-02-24 16:01:02,058][00176] Num frames 7600... +[2023-02-24 16:01:02,223][00176] Num frames 7700... +[2023-02-24 16:01:02,419][00176] Num frames 7800... +[2023-02-24 16:01:02,612][00176] Num frames 7900... +[2023-02-24 16:01:02,797][00176] Num frames 8000... +[2023-02-24 16:01:02,971][00176] Num frames 8100... +[2023-02-24 16:01:03,144][00176] Num frames 8200... +[2023-02-24 16:01:03,311][00176] Num frames 8300... +[2023-02-24 16:01:03,478][00176] Num frames 8400... +[2023-02-24 16:01:03,642][00176] Num frames 8500... +[2023-02-24 16:01:03,812][00176] Num frames 8600... +[2023-02-24 16:01:03,970][00176] Num frames 8700... +[2023-02-24 16:01:04,147][00176] Num frames 8800... +[2023-02-24 16:01:04,215][00176] Avg episode rewards: #0: 36.340, true rewards: #0: 14.673 +[2023-02-24 16:01:04,218][00176] Avg episode reward: 36.340, avg true_objective: 14.673 +[2023-02-24 16:01:04,384][00176] Num frames 8900... +[2023-02-24 16:01:04,561][00176] Num frames 9000... +[2023-02-24 16:01:04,736][00176] Num frames 9100... +[2023-02-24 16:01:04,915][00176] Num frames 9200... +[2023-02-24 16:01:05,083][00176] Num frames 9300... +[2023-02-24 16:01:05,258][00176] Num frames 9400... +[2023-02-24 16:01:05,414][00176] Num frames 9500... +[2023-02-24 16:01:05,576][00176] Num frames 9600... +[2023-02-24 16:01:05,665][00176] Avg episode rewards: #0: 33.455, true rewards: #0: 13.741 +[2023-02-24 16:01:05,667][00176] Avg episode reward: 33.455, avg true_objective: 13.741 +[2023-02-24 16:01:05,812][00176] Num frames 9700... +[2023-02-24 16:01:05,956][00176] Num frames 9800... +[2023-02-24 16:01:06,078][00176] Num frames 9900... +[2023-02-24 16:01:06,198][00176] Num frames 10000... +[2023-02-24 16:01:06,317][00176] Num frames 10100... +[2023-02-24 16:01:06,437][00176] Num frames 10200... +[2023-02-24 16:01:06,548][00176] Num frames 10300... +[2023-02-24 16:01:06,663][00176] Num frames 10400... +[2023-02-24 16:01:06,787][00176] Num frames 10500... +[2023-02-24 16:01:06,904][00176] Num frames 10600... +[2023-02-24 16:01:07,026][00176] Num frames 10700... +[2023-02-24 16:01:07,141][00176] Num frames 10800... +[2023-02-24 16:01:07,261][00176] Num frames 10900... +[2023-02-24 16:01:07,376][00176] Num frames 11000... +[2023-02-24 16:01:07,488][00176] Num frames 11100... +[2023-02-24 16:01:07,588][00176] Avg episode rewards: #0: 33.922, true rewards: #0: 13.922 +[2023-02-24 16:01:07,589][00176] Avg episode reward: 33.922, avg true_objective: 13.922 +[2023-02-24 16:01:07,671][00176] Num frames 11200... +[2023-02-24 16:01:07,787][00176] Num frames 11300... +[2023-02-24 16:01:07,857][00176] Avg episode rewards: #0: 30.345, true rewards: #0: 12.568 +[2023-02-24 16:01:07,859][00176] Avg episode reward: 30.345, avg true_objective: 12.568 +[2023-02-24 16:01:07,964][00176] Num frames 11400... +[2023-02-24 16:01:08,078][00176] Num frames 11500... +[2023-02-24 16:01:08,201][00176] Num frames 11600... +[2023-02-24 16:01:08,314][00176] Num frames 11700... +[2023-02-24 16:01:08,429][00176] Num frames 11800... +[2023-02-24 16:01:08,540][00176] Num frames 11900... +[2023-02-24 16:01:08,665][00176] Num frames 12000... +[2023-02-24 16:01:08,789][00176] Num frames 12100... 
+[2023-02-24 16:01:08,903][00176] Num frames 12200... +[2023-02-24 16:01:09,018][00176] Num frames 12300... +[2023-02-24 16:01:09,137][00176] Num frames 12400... +[2023-02-24 16:01:09,253][00176] Num frames 12500... +[2023-02-24 16:01:09,371][00176] Num frames 12600... +[2023-02-24 16:01:09,480][00176] Num frames 12700... +[2023-02-24 16:01:09,591][00176] Num frames 12800... +[2023-02-24 16:01:09,719][00176] Num frames 12900... +[2023-02-24 16:01:09,839][00176] Num frames 13000... +[2023-02-24 16:01:09,948][00176] Num frames 13100... +[2023-02-24 16:01:10,044][00176] Avg episode rewards: #0: 32.035, true rewards: #0: 13.135 +[2023-02-24 16:01:10,046][00176] Avg episode reward: 32.035, avg true_objective: 13.135 +[2023-02-24 16:02:29,126][00176] Replay video saved to /content/train_dir/default_experiment/replay.mp4!