diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,984 @@
+[2023-02-27 11:28:58,437][00107] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-27 11:28:58,442][00107] Rollout worker 0 uses device cpu
+[2023-02-27 11:28:58,443][00107] Rollout worker 1 uses device cpu
+[2023-02-27 11:28:58,445][00107] Rollout worker 2 uses device cpu
+[2023-02-27 11:28:58,446][00107] Rollout worker 3 uses device cpu
+[2023-02-27 11:28:58,448][00107] Rollout worker 4 uses device cpu
+[2023-02-27 11:28:58,449][00107] Rollout worker 5 uses device cpu
+[2023-02-27 11:28:58,450][00107] Rollout worker 6 uses device cpu
+[2023-02-27 11:28:58,451][00107] Rollout worker 7 uses device cpu
+[2023-02-27 11:28:58,642][00107] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 11:28:58,644][00107] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-27 11:28:58,675][00107] Starting all processes...
+[2023-02-27 11:28:58,676][00107] Starting process learner_proc0
+[2023-02-27 11:28:58,727][00107] Starting all processes...
+[2023-02-27 11:28:58,738][00107] Starting process inference_proc0-0
+[2023-02-27 11:28:58,738][00107] Starting process rollout_proc0
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc1
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc2
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc3
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc4
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc5
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc6
+[2023-02-27 11:28:58,743][00107] Starting process rollout_proc7
+[2023-02-27 11:29:08,913][20173] Worker 1 uses CPU cores [1]
+[2023-02-27 11:29:09,326][20174] Worker 2 uses CPU cores [0]
+[2023-02-27 11:29:09,663][20157] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 11:29:09,668][20157] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-27 11:29:09,741][20171] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 11:29:09,742][20171] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-27 11:29:09,813][20178] Worker 5 uses CPU cores [1]
+[2023-02-27 11:29:09,823][20176] Worker 6 uses CPU cores [0]
+[2023-02-27 11:29:09,841][20175] Worker 3 uses CPU cores [1]
+[2023-02-27 11:29:09,883][20172] Worker 0 uses CPU cores [0]
+[2023-02-27 11:29:09,899][20177] Worker 4 uses CPU cores [0]
+[2023-02-27 11:29:09,928][20179] Worker 7 uses CPU cores [1]
+[2023-02-27 11:29:10,366][20157] Num visible devices: 1
+[2023-02-27 11:29:10,367][20171] Num visible devices: 1
+[2023-02-27 11:29:10,378][20157] Starting seed is not provided
+[2023-02-27 11:29:10,378][20157] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 11:29:10,379][20157] Initializing actor-critic model on device cuda:0
+[2023-02-27 11:29:10,380][20157] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:29:10,382][20157] RunningMeanStd input shape: (1,)
+[2023-02-27 11:29:10,394][20157] ConvEncoder: input_channels=3
+[2023-02-27 11:29:10,688][20157] Conv encoder output size: 512
+[2023-02-27 11:29:10,689][20157] Policy head output size: 512
+[2023-02-27 11:29:10,741][20157] Created Actor Critic model with architecture:
+[2023-02-27 11:29:10,741][20157] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-27 11:29:18,021][20157] Using optimizer
+[2023-02-27 11:29:18,022][20157] No checkpoints found
+[2023-02-27 11:29:18,022][20157] Did not load from checkpoint, starting from scratch!
+[2023-02-27 11:29:18,022][20157] Initialized policy 0 weights for model version 0
+[2023-02-27 11:29:18,026][20157] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 11:29:18,034][20157] LearnerWorker_p0 finished initialization!
+[2023-02-27 11:29:18,221][20171] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:29:18,222][20171] RunningMeanStd input shape: (1,)
+[2023-02-27 11:29:18,234][20171] ConvEncoder: input_channels=3
+[2023-02-27 11:29:18,331][20171] Conv encoder output size: 512
+[2023-02-27 11:29:18,331][20171] Policy head output size: 512
+[2023-02-27 11:29:18,634][00107] Heartbeat connected on Batcher_0
+[2023-02-27 11:29:18,641][00107] Heartbeat connected on LearnerWorker_p0
+[2023-02-27 11:29:18,652][00107] Heartbeat connected on RolloutWorker_w0
+[2023-02-27 11:29:18,657][00107] Heartbeat connected on RolloutWorker_w1
+[2023-02-27 11:29:18,659][00107] Heartbeat connected on RolloutWorker_w2
+[2023-02-27 11:29:18,663][00107] Heartbeat connected on RolloutWorker_w3
+[2023-02-27 11:29:18,668][00107] Heartbeat connected on RolloutWorker_w4
+[2023-02-27 11:29:18,673][00107] Heartbeat connected on RolloutWorker_w5
+[2023-02-27 11:29:18,677][00107] Heartbeat connected on RolloutWorker_w6
+[2023-02-27 11:29:18,681][00107] Heartbeat connected on RolloutWorker_w7
+[2023-02-27 11:29:19,228][00107] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-27 11:29:20,599][00107] Inference worker 0-0 is ready!
+[2023-02-27 11:29:20,601][00107] All inference workers are ready! Signal rollout workers to start!
+[2023-02-27 11:29:20,606][00107] Heartbeat connected on InferenceWorker_p0-w0 +[2023-02-27 11:29:20,726][20178] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,742][20173] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,755][20172] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,764][20179] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,770][20176] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,767][20174] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,776][20177] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:20,775][20175] Doom resolution: 160x120, resize resolution: (128, 72) +[2023-02-27 11:29:21,292][20176] Decorrelating experience for 0 frames... +[2023-02-27 11:29:21,631][20176] Decorrelating experience for 32 frames... +[2023-02-27 11:29:22,057][20176] Decorrelating experience for 64 frames... +[2023-02-27 11:29:22,175][20179] Decorrelating experience for 0 frames... +[2023-02-27 11:29:22,179][20178] Decorrelating experience for 0 frames... +[2023-02-27 11:29:22,180][20173] Decorrelating experience for 0 frames... +[2023-02-27 11:29:22,185][20175] Decorrelating experience for 0 frames... +[2023-02-27 11:29:22,954][20176] Decorrelating experience for 96 frames... +[2023-02-27 11:29:23,190][20173] Decorrelating experience for 32 frames... +[2023-02-27 11:29:23,192][20179] Decorrelating experience for 32 frames... +[2023-02-27 11:29:23,201][20178] Decorrelating experience for 32 frames... +[2023-02-27 11:29:23,266][20172] Decorrelating experience for 0 frames... +[2023-02-27 11:29:23,286][20177] Decorrelating experience for 0 frames... +[2023-02-27 11:29:24,228][00107] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-27 11:29:24,321][20172] Decorrelating experience for 32 frames... +[2023-02-27 11:29:24,387][20177] Decorrelating experience for 32 frames... +[2023-02-27 11:29:24,429][20175] Decorrelating experience for 32 frames... +[2023-02-27 11:29:24,675][20173] Decorrelating experience for 64 frames... +[2023-02-27 11:29:24,686][20178] Decorrelating experience for 64 frames... +[2023-02-27 11:29:25,263][20179] Decorrelating experience for 64 frames... +[2023-02-27 11:29:25,658][20174] Decorrelating experience for 0 frames... +[2023-02-27 11:29:25,760][20172] Decorrelating experience for 64 frames... +[2023-02-27 11:29:25,871][20177] Decorrelating experience for 64 frames... +[2023-02-27 11:29:26,522][20174] Decorrelating experience for 32 frames... +[2023-02-27 11:29:26,652][20172] Decorrelating experience for 96 frames... +[2023-02-27 11:29:27,093][20174] Decorrelating experience for 64 frames... +[2023-02-27 11:29:27,201][20173] Decorrelating experience for 96 frames... +[2023-02-27 11:29:27,325][20179] Decorrelating experience for 96 frames... +[2023-02-27 11:29:28,105][20178] Decorrelating experience for 96 frames... +[2023-02-27 11:29:28,678][20175] Decorrelating experience for 64 frames... +[2023-02-27 11:29:28,834][20177] Decorrelating experience for 96 frames... +[2023-02-27 11:29:29,228][00107] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 1.6. Samples: 16. 
Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-27 11:29:29,231][00107] Avg episode reward: [(0, '1.216')] +[2023-02-27 11:29:29,601][20174] Decorrelating experience for 96 frames... +[2023-02-27 11:29:32,953][20157] Signal inference workers to stop experience collection... +[2023-02-27 11:29:32,980][20171] InferenceWorker_p0-w0: stopping experience collection +[2023-02-27 11:29:32,995][20175] Decorrelating experience for 96 frames... +[2023-02-27 11:29:34,228][00107] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 161.2. Samples: 2418. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-02-27 11:29:34,231][00107] Avg episode reward: [(0, '2.637')] +[2023-02-27 11:29:35,455][20157] Signal inference workers to resume experience collection... +[2023-02-27 11:29:35,457][20171] InferenceWorker_p0-w0: resuming experience collection +[2023-02-27 11:29:39,228][00107] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 266.6. Samples: 5332. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0) +[2023-02-27 11:29:39,230][00107] Avg episode reward: [(0, '3.396')] +[2023-02-27 11:29:43,148][20171] Updated weights for policy 0, policy_version 10 (0.0358) +[2023-02-27 11:29:44,228][00107] Fps is (10 sec: 4505.6, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 351.3. Samples: 8782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:29:44,230][00107] Avg episode reward: [(0, '4.097')] +[2023-02-27 11:29:49,228][00107] Fps is (10 sec: 4095.9, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 61440. Throughput: 0: 506.5. Samples: 15196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:29:49,231][00107] Avg episode reward: [(0, '4.517')] +[2023-02-27 11:29:54,229][00107] Fps is (10 sec: 3276.6, 60 sec: 2223.5, 300 sec: 2223.5). Total num frames: 77824. Throughput: 0: 567.6. Samples: 19866. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:29:54,234][00107] Avg episode reward: [(0, '4.523')] +[2023-02-27 11:29:55,100][20171] Updated weights for policy 0, policy_version 20 (0.0037) +[2023-02-27 11:29:59,228][00107] Fps is (10 sec: 4096.1, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 576.7. Samples: 23070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:29:59,230][00107] Avg episode reward: [(0, '4.281')] +[2023-02-27 11:29:59,237][20157] Saving new best policy, reward=4.281! +[2023-02-27 11:30:03,427][20171] Updated weights for policy 0, policy_version 30 (0.0015) +[2023-02-27 11:30:04,228][00107] Fps is (10 sec: 4505.9, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 122880. Throughput: 0: 676.8. Samples: 30456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:04,230][00107] Avg episode reward: [(0, '4.317')] +[2023-02-27 11:30:04,317][20157] Saving new best policy, reward=4.317! +[2023-02-27 11:30:09,228][00107] Fps is (10 sec: 3686.4, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 139264. Throughput: 0: 796.8. Samples: 35856. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:30:09,236][00107] Avg episode reward: [(0, '4.425')] +[2023-02-27 11:30:09,241][20157] Saving new best policy, reward=4.425! +[2023-02-27 11:30:14,229][00107] Fps is (10 sec: 3276.4, 60 sec: 2829.9, 300 sec: 2829.9). Total num frames: 155648. Throughput: 0: 845.4. Samples: 38058. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:14,232][00107] Avg episode reward: [(0, '4.430')] +[2023-02-27 11:30:14,268][20157] Saving new best policy, reward=4.430! +[2023-02-27 11:30:15,130][20171] Updated weights for policy 0, policy_version 40 (0.0024) +[2023-02-27 11:30:19,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 180224. Throughput: 0: 937.6. Samples: 44612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:19,229][00107] Avg episode reward: [(0, '4.392')] +[2023-02-27 11:30:23,484][20171] Updated weights for policy 0, policy_version 50 (0.0013) +[2023-02-27 11:30:24,228][00107] Fps is (10 sec: 4915.7, 60 sec: 3413.3, 300 sec: 3150.8). Total num frames: 204800. Throughput: 0: 1035.9. Samples: 51946. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:24,235][00107] Avg episode reward: [(0, '4.502')] +[2023-02-27 11:30:24,257][20157] Saving new best policy, reward=4.502! +[2023-02-27 11:30:29,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3159.8). Total num frames: 221184. Throughput: 0: 1009.4. Samples: 54204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:30:29,235][00107] Avg episode reward: [(0, '4.469')] +[2023-02-27 11:30:34,228][00107] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3222.2). Total num frames: 241664. Throughput: 0: 973.0. Samples: 58982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:30:34,235][00107] Avg episode reward: [(0, '4.244')] +[2023-02-27 11:30:34,967][20171] Updated weights for policy 0, policy_version 60 (0.0028) +[2023-02-27 11:30:39,228][00107] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3276.8). Total num frames: 262144. Throughput: 0: 1033.4. Samples: 66370. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:30:39,233][00107] Avg episode reward: [(0, '4.426')] +[2023-02-27 11:30:44,220][20171] Updated weights for policy 0, policy_version 70 (0.0023) +[2023-02-27 11:30:44,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3373.2). Total num frames: 286720. Throughput: 0: 1042.7. Samples: 69992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:44,230][00107] Avg episode reward: [(0, '4.514')] +[2023-02-27 11:30:44,242][20157] Saving new best policy, reward=4.514! +[2023-02-27 11:30:49,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3322.3). Total num frames: 299008. Throughput: 0: 986.7. Samples: 74858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:49,231][00107] Avg episode reward: [(0, '4.438')] +[2023-02-27 11:30:54,228][00107] Fps is (10 sec: 3276.8, 60 sec: 4027.8, 300 sec: 3363.0). Total num frames: 319488. Throughput: 0: 995.0. Samples: 80630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:30:54,234][00107] Avg episode reward: [(0, '4.558')] +[2023-02-27 11:30:54,242][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000078_319488.pth... +[2023-02-27 11:30:54,365][20157] Saving new best policy, reward=4.558! +[2023-02-27 11:30:55,319][20171] Updated weights for policy 0, policy_version 80 (0.0020) +[2023-02-27 11:30:59,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3440.6). Total num frames: 344064. Throughput: 0: 1022.6. Samples: 84074. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:30:59,230][00107] Avg episode reward: [(0, '4.303')] +[2023-02-27 11:31:04,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3471.9). Total num frames: 364544. 
Throughput: 0: 1023.3. Samples: 90662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:31:04,232][00107] Avg episode reward: [(0, '4.342')] +[2023-02-27 11:31:05,363][20171] Updated weights for policy 0, policy_version 90 (0.0024) +[2023-02-27 11:31:09,231][00107] Fps is (10 sec: 3685.2, 60 sec: 4027.5, 300 sec: 3462.9). Total num frames: 380928. Throughput: 0: 965.2. Samples: 95382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:31:09,236][00107] Avg episode reward: [(0, '4.420')] +[2023-02-27 11:31:14,228][00107] Fps is (10 sec: 3686.4, 60 sec: 4096.1, 300 sec: 3490.5). Total num frames: 401408. Throughput: 0: 985.0. Samples: 98528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:31:14,236][00107] Avg episode reward: [(0, '4.564')] +[2023-02-27 11:31:14,251][20157] Saving new best policy, reward=4.564! +[2023-02-27 11:31:15,498][20171] Updated weights for policy 0, policy_version 100 (0.0037) +[2023-02-27 11:31:19,228][00107] Fps is (10 sec: 4507.1, 60 sec: 4096.0, 300 sec: 3549.9). Total num frames: 425984. Throughput: 0: 1037.5. Samples: 105668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:31:19,235][00107] Avg episode reward: [(0, '4.645')] +[2023-02-27 11:31:19,238][20157] Saving new best policy, reward=4.645! +[2023-02-27 11:31:24,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3538.9). Total num frames: 442368. Throughput: 0: 997.2. Samples: 111244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:31:24,238][00107] Avg episode reward: [(0, '4.526')] +[2023-02-27 11:31:26,261][20171] Updated weights for policy 0, policy_version 110 (0.0024) +[2023-02-27 11:31:29,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3528.9). Total num frames: 458752. Throughput: 0: 967.9. Samples: 113546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:31:29,234][00107] Avg episode reward: [(0, '4.519')] +[2023-02-27 11:31:34,228][00107] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3580.2). Total num frames: 483328. Throughput: 0: 1002.5. Samples: 119972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:31:34,236][00107] Avg episode reward: [(0, '4.550')] +[2023-02-27 11:31:35,544][20171] Updated weights for policy 0, policy_version 120 (0.0037) +[2023-02-27 11:31:39,228][00107] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3627.9). Total num frames: 507904. Throughput: 0: 1038.0. Samples: 127340. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:31:39,234][00107] Avg episode reward: [(0, '4.727')] +[2023-02-27 11:31:39,237][20157] Saving new best policy, reward=4.727! +[2023-02-27 11:31:44,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3587.5). Total num frames: 520192. Throughput: 0: 1012.7. Samples: 129648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:31:44,230][00107] Avg episode reward: [(0, '4.917')] +[2023-02-27 11:31:44,241][20157] Saving new best policy, reward=4.917! +[2023-02-27 11:31:46,974][20171] Updated weights for policy 0, policy_version 130 (0.0011) +[2023-02-27 11:31:49,228][00107] Fps is (10 sec: 3276.8, 60 sec: 4027.7, 300 sec: 3604.5). Total num frames: 540672. Throughput: 0: 969.9. Samples: 134308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:31:49,230][00107] Avg episode reward: [(0, '5.034')] +[2023-02-27 11:31:49,237][20157] Saving new best policy, reward=5.034! +[2023-02-27 11:31:54,228][00107] Fps is (10 sec: 4505.7, 60 sec: 4096.0, 300 sec: 3646.8). Total num frames: 565248. 
Throughput: 0: 1026.1. Samples: 141554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:31:54,231][00107] Avg episode reward: [(0, '4.929')] +[2023-02-27 11:31:55,461][20171] Updated weights for policy 0, policy_version 140 (0.0026) +[2023-02-27 11:31:59,228][00107] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3660.8). Total num frames: 585728. Throughput: 0: 1034.6. Samples: 145086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:31:59,231][00107] Avg episode reward: [(0, '4.837')] +[2023-02-27 11:32:04,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3649.2). Total num frames: 602112. Throughput: 0: 991.6. Samples: 150290. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:32:04,230][00107] Avg episode reward: [(0, '4.588')] +[2023-02-27 11:32:07,119][20171] Updated weights for policy 0, policy_version 150 (0.0016) +[2023-02-27 11:32:09,228][00107] Fps is (10 sec: 3686.5, 60 sec: 4028.0, 300 sec: 3662.3). Total num frames: 622592. Throughput: 0: 992.2. Samples: 155892. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:32:09,234][00107] Avg episode reward: [(0, '4.564')] +[2023-02-27 11:32:14,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3698.1). Total num frames: 647168. Throughput: 0: 1020.2. Samples: 159456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:32:14,233][00107] Avg episode reward: [(0, '4.757')] +[2023-02-27 11:32:15,648][20171] Updated weights for policy 0, policy_version 160 (0.0013) +[2023-02-27 11:32:19,234][00107] Fps is (10 sec: 4502.6, 60 sec: 4027.3, 300 sec: 3709.0). Total num frames: 667648. Throughput: 0: 1027.0. Samples: 166192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:19,241][00107] Avg episode reward: [(0, '4.886')] +[2023-02-27 11:32:24,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3675.3). Total num frames: 679936. Throughput: 0: 963.5. Samples: 170698. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:32:24,230][00107] Avg episode reward: [(0, '4.722')] +[2023-02-27 11:32:27,440][20171] Updated weights for policy 0, policy_version 170 (0.0069) +[2023-02-27 11:32:29,228][00107] Fps is (10 sec: 3688.8, 60 sec: 4096.0, 300 sec: 3708.0). Total num frames: 704512. Throughput: 0: 979.6. Samples: 173730. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:32:29,230][00107] Avg episode reward: [(0, '4.685')] +[2023-02-27 11:32:34,228][00107] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3738.9). Total num frames: 729088. Throughput: 0: 1037.4. Samples: 180992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:34,233][00107] Avg episode reward: [(0, '4.644')] +[2023-02-27 11:32:36,158][20171] Updated weights for policy 0, policy_version 180 (0.0014) +[2023-02-27 11:32:39,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3727.4). Total num frames: 745472. Throughput: 0: 1003.5. Samples: 186710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:39,231][00107] Avg episode reward: [(0, '4.746')] +[2023-02-27 11:32:44,228][00107] Fps is (10 sec: 3276.8, 60 sec: 4027.8, 300 sec: 3716.4). Total num frames: 761856. Throughput: 0: 975.1. Samples: 188964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:44,233][00107] Avg episode reward: [(0, '4.730')] +[2023-02-27 11:32:47,465][20171] Updated weights for policy 0, policy_version 190 (0.0017) +[2023-02-27 11:32:49,228][00107] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 3744.9). Total num frames: 786432. 
Throughput: 0: 997.9. Samples: 195196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:32:49,235][00107] Avg episode reward: [(0, '4.642')] +[2023-02-27 11:32:54,229][00107] Fps is (10 sec: 4505.0, 60 sec: 4027.6, 300 sec: 3753.1). Total num frames: 806912. Throughput: 0: 1031.8. Samples: 202324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:54,234][00107] Avg episode reward: [(0, '4.838')] +[2023-02-27 11:32:54,244][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000197_806912.pth... +[2023-02-27 11:32:57,347][20171] Updated weights for policy 0, policy_version 200 (0.0028) +[2023-02-27 11:32:59,232][00107] Fps is (10 sec: 3684.8, 60 sec: 3959.2, 300 sec: 3742.2). Total num frames: 823296. Throughput: 0: 1006.8. Samples: 204764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:32:59,236][00107] Avg episode reward: [(0, '4.815')] +[2023-02-27 11:33:04,228][00107] Fps is (10 sec: 3277.2, 60 sec: 3959.5, 300 sec: 3731.9). Total num frames: 839680. Throughput: 0: 959.9. Samples: 209380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:33:04,236][00107] Avg episode reward: [(0, '4.510')] +[2023-02-27 11:33:07,686][20171] Updated weights for policy 0, policy_version 210 (0.0015) +[2023-02-27 11:33:09,228][00107] Fps is (10 sec: 4097.7, 60 sec: 4027.7, 300 sec: 3757.6). Total num frames: 864256. Throughput: 0: 1020.2. Samples: 216606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:33:09,231][00107] Avg episode reward: [(0, '4.476')] +[2023-02-27 11:33:14,228][00107] Fps is (10 sec: 4915.1, 60 sec: 4027.7, 300 sec: 3782.3). Total num frames: 888832. Throughput: 0: 1035.0. Samples: 220306. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:33:14,233][00107] Avg episode reward: [(0, '4.711')] +[2023-02-27 11:33:18,026][20171] Updated weights for policy 0, policy_version 220 (0.0020) +[2023-02-27 11:33:19,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3891.6, 300 sec: 3754.7). Total num frames: 901120. Throughput: 0: 986.9. Samples: 225402. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:33:19,231][00107] Avg episode reward: [(0, '4.794')] +[2023-02-27 11:33:24,234][00107] Fps is (10 sec: 3274.9, 60 sec: 4027.3, 300 sec: 3761.5). Total num frames: 921600. Throughput: 0: 981.5. Samples: 230884. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:33:24,236][00107] Avg episode reward: [(0, '4.688')] +[2023-02-27 11:33:28,202][20171] Updated weights for policy 0, policy_version 230 (0.0033) +[2023-02-27 11:33:29,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3784.7). Total num frames: 946176. Throughput: 0: 1009.6. Samples: 234394. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:33:29,230][00107] Avg episode reward: [(0, '4.772')] +[2023-02-27 11:33:34,228][00107] Fps is (10 sec: 4098.5, 60 sec: 3891.2, 300 sec: 3774.7). Total num frames: 962560. Throughput: 0: 1013.8. Samples: 240818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:33:34,234][00107] Avg episode reward: [(0, '4.842')] +[2023-02-27 11:33:39,229][00107] Fps is (10 sec: 3276.6, 60 sec: 3891.2, 300 sec: 3765.2). Total num frames: 978944. Throughput: 0: 956.1. Samples: 245348. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:33:39,233][00107] Avg episode reward: [(0, '4.561')] +[2023-02-27 11:33:39,794][20171] Updated weights for policy 0, policy_version 240 (0.0012) +[2023-02-27 11:33:44,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3771.4). Total num frames: 999424. Throughput: 0: 967.3. Samples: 248290. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:33:44,230][00107] Avg episode reward: [(0, '4.736')] +[2023-02-27 11:33:48,610][20171] Updated weights for policy 0, policy_version 250 (0.0018) +[2023-02-27 11:33:49,228][00107] Fps is (10 sec: 4505.9, 60 sec: 3959.5, 300 sec: 3792.6). Total num frames: 1024000. Throughput: 0: 1021.6. Samples: 255350. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:33:49,236][00107] Avg episode reward: [(0, '4.644')] +[2023-02-27 11:33:54,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3891.3, 300 sec: 3783.2). Total num frames: 1040384. Throughput: 0: 988.9. Samples: 261106. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:33:54,230][00107] Avg episode reward: [(0, '4.643')] +[2023-02-27 11:33:59,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3891.5, 300 sec: 3774.2). Total num frames: 1056768. Throughput: 0: 957.6. Samples: 263400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:33:59,236][00107] Avg episode reward: [(0, '4.691')] +[2023-02-27 11:34:00,569][20171] Updated weights for policy 0, policy_version 260 (0.0030) +[2023-02-27 11:34:04,228][00107] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3794.2). Total num frames: 1081344. Throughput: 0: 974.3. Samples: 269244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:34:04,231][00107] Avg episode reward: [(0, '4.633')] +[2023-02-27 11:34:09,168][20171] Updated weights for policy 0, policy_version 270 (0.0023) +[2023-02-27 11:34:09,228][00107] Fps is (10 sec: 4915.3, 60 sec: 4027.7, 300 sec: 3813.5). Total num frames: 1105920. Throughput: 0: 1014.5. Samples: 276532. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:34:09,230][00107] Avg episode reward: [(0, '4.689')] +[2023-02-27 11:34:14,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3790.5). Total num frames: 1118208. Throughput: 0: 995.0. Samples: 279170. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:34:14,230][00107] Avg episode reward: [(0, '4.692')] +[2023-02-27 11:34:19,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1138688. Throughput: 0: 954.7. Samples: 283780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:34:19,231][00107] Avg episode reward: [(0, '4.671')] +[2023-02-27 11:34:20,750][20171] Updated weights for policy 0, policy_version 280 (0.0014) +[2023-02-27 11:34:24,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3959.9, 300 sec: 3929.4). Total num frames: 1159168. Throughput: 0: 1004.5. Samples: 290548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:34:24,230][00107] Avg episode reward: [(0, '4.650')] +[2023-02-27 11:34:29,228][00107] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 1183744. Throughput: 0: 1020.5. Samples: 294214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:34:29,230][00107] Avg episode reward: [(0, '4.674')] +[2023-02-27 11:34:29,957][20171] Updated weights for policy 0, policy_version 290 (0.0025) +[2023-02-27 11:34:34,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 1196032. Throughput: 0: 979.8. 
Samples: 299442. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:34:34,230][00107] Avg episode reward: [(0, '4.587')] +[2023-02-27 11:34:39,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1216512. Throughput: 0: 966.7. Samples: 304608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:34:39,236][00107] Avg episode reward: [(0, '4.783')] +[2023-02-27 11:34:41,496][20171] Updated weights for policy 0, policy_version 300 (0.0035) +[2023-02-27 11:34:44,228][00107] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 1241088. Throughput: 0: 993.4. Samples: 308104. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:34:44,231][00107] Avg episode reward: [(0, '4.958')] +[2023-02-27 11:34:49,228][00107] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 1261568. Throughput: 0: 1016.6. Samples: 314992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:34:49,232][00107] Avg episode reward: [(0, '4.823')] +[2023-02-27 11:34:51,743][20171] Updated weights for policy 0, policy_version 310 (0.0013) +[2023-02-27 11:34:54,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 1273856. Throughput: 0: 951.2. Samples: 319338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:34:54,237][00107] Avg episode reward: [(0, '4.804')] +[2023-02-27 11:34:54,251][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000311_1273856.pth... +[2023-02-27 11:34:54,399][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000078_319488.pth +[2023-02-27 11:34:59,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1294336. Throughput: 0: 949.8. Samples: 321912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:34:59,231][00107] Avg episode reward: [(0, '4.857')] +[2023-02-27 11:35:02,121][20171] Updated weights for policy 0, policy_version 320 (0.0022) +[2023-02-27 11:35:04,228][00107] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 1318912. Throughput: 0: 1003.6. Samples: 328940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:35:04,233][00107] Avg episode reward: [(0, '4.684')] +[2023-02-27 11:35:09,232][00107] Fps is (10 sec: 4503.7, 60 sec: 3890.9, 300 sec: 4012.7). Total num frames: 1339392. Throughput: 0: 985.9. Samples: 334916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:35:09,237][00107] Avg episode reward: [(0, '4.540')] +[2023-02-27 11:35:13,346][20171] Updated weights for policy 0, policy_version 330 (0.0028) +[2023-02-27 11:35:14,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 1351680. Throughput: 0: 953.2. Samples: 337110. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:35:14,234][00107] Avg episode reward: [(0, '4.626')] +[2023-02-27 11:35:19,228][00107] Fps is (10 sec: 3688.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1376256. Throughput: 0: 967.9. Samples: 342998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:35:19,230][00107] Avg episode reward: [(0, '4.471')] +[2023-02-27 11:35:22,520][20171] Updated weights for policy 0, policy_version 340 (0.0012) +[2023-02-27 11:35:24,228][00107] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1396736. Throughput: 0: 1009.6. Samples: 350038. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:35:24,231][00107] Avg episode reward: [(0, '4.464')] +[2023-02-27 11:35:29,230][00107] Fps is (10 sec: 3685.6, 60 sec: 3822.8, 300 sec: 3971.0). Total num frames: 1413120. Throughput: 0: 993.4. Samples: 352810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:35:29,232][00107] Avg episode reward: [(0, '4.501')] +[2023-02-27 11:35:34,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 1429504. Throughput: 0: 941.5. Samples: 357358. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:35:34,230][00107] Avg episode reward: [(0, '4.582')] +[2023-02-27 11:35:34,303][20171] Updated weights for policy 0, policy_version 350 (0.0030) +[2023-02-27 11:35:39,228][00107] Fps is (10 sec: 4096.9, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 1454080. Throughput: 0: 998.0. Samples: 364250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:35:39,230][00107] Avg episode reward: [(0, '4.870')] +[2023-02-27 11:35:42,768][20171] Updated weights for policy 0, policy_version 360 (0.0023) +[2023-02-27 11:35:44,229][00107] Fps is (10 sec: 4914.6, 60 sec: 3959.4, 300 sec: 3998.8). Total num frames: 1478656. Throughput: 0: 1019.0. Samples: 367768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:35:44,233][00107] Avg episode reward: [(0, '4.758')] +[2023-02-27 11:35:49,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3971.0). Total num frames: 1490944. Throughput: 0: 984.2. Samples: 373228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:35:49,237][00107] Avg episode reward: [(0, '4.610')] +[2023-02-27 11:35:54,228][00107] Fps is (10 sec: 3277.2, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 1511424. Throughput: 0: 964.8. Samples: 378328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:35:54,233][00107] Avg episode reward: [(0, '4.513')] +[2023-02-27 11:35:54,587][20171] Updated weights for policy 0, policy_version 370 (0.0026) +[2023-02-27 11:35:59,229][00107] Fps is (10 sec: 4505.0, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 1536000. Throughput: 0: 994.4. Samples: 381860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:35:59,234][00107] Avg episode reward: [(0, '4.589')] +[2023-02-27 11:36:03,648][20171] Updated weights for policy 0, policy_version 380 (0.0019) +[2023-02-27 11:36:04,228][00107] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3985.0). Total num frames: 1556480. Throughput: 0: 1019.6. Samples: 388880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:36:04,236][00107] Avg episode reward: [(0, '4.620')] +[2023-02-27 11:36:09,228][00107] Fps is (10 sec: 3686.9, 60 sec: 3891.5, 300 sec: 3971.0). Total num frames: 1572864. Throughput: 0: 965.9. Samples: 393502. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:36:09,230][00107] Avg episode reward: [(0, '4.591')] +[2023-02-27 11:36:14,228][00107] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 1593344. Throughput: 0: 957.1. Samples: 395876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:14,231][00107] Avg episode reward: [(0, '4.578')] +[2023-02-27 11:36:15,211][20171] Updated weights for policy 0, policy_version 390 (0.0024) +[2023-02-27 11:36:19,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1613824. Throughput: 0: 1010.9. Samples: 402850. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:36:19,234][00107] Avg episode reward: [(0, '4.400')] +[2023-02-27 11:36:24,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3959.4, 300 sec: 3984.9). Total num frames: 1634304. Throughput: 0: 992.9. Samples: 408932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:24,235][00107] Avg episode reward: [(0, '4.330')] +[2023-02-27 11:36:25,416][20171] Updated weights for policy 0, policy_version 400 (0.0015) +[2023-02-27 11:36:29,228][00107] Fps is (10 sec: 3276.9, 60 sec: 3891.3, 300 sec: 3943.3). Total num frames: 1646592. Throughput: 0: 964.3. Samples: 411160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:29,234][00107] Avg episode reward: [(0, '4.424')] +[2023-02-27 11:36:34,228][00107] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 1671168. Throughput: 0: 966.5. Samples: 416722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:34,234][00107] Avg episode reward: [(0, '4.633')] +[2023-02-27 11:36:35,753][20171] Updated weights for policy 0, policy_version 410 (0.0013) +[2023-02-27 11:36:39,228][00107] Fps is (10 sec: 4915.2, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1695744. Throughput: 0: 1011.9. Samples: 423862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:39,233][00107] Avg episode reward: [(0, '4.783')] +[2023-02-27 11:36:44,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3957.2). Total num frames: 1708032. Throughput: 0: 994.9. Samples: 426630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:44,232][00107] Avg episode reward: [(0, '4.564')] +[2023-02-27 11:36:48,002][20171] Updated weights for policy 0, policy_version 420 (0.0015) +[2023-02-27 11:36:49,228][00107] Fps is (10 sec: 2457.5, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1720320. Throughput: 0: 922.4. Samples: 430390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:49,237][00107] Avg episode reward: [(0, '4.606')] +[2023-02-27 11:36:54,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 1740800. Throughput: 0: 938.0. Samples: 435710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:54,231][00107] Avg episode reward: [(0, '4.694')] +[2023-02-27 11:36:54,241][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000425_1740800.pth... +[2023-02-27 11:36:54,357][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000197_806912.pth +[2023-02-27 11:36:58,893][20171] Updated weights for policy 0, policy_version 430 (0.0031) +[2023-02-27 11:36:59,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 1761280. Throughput: 0: 950.9. Samples: 438668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:36:59,230][00107] Avg episode reward: [(0, '4.887')] +[2023-02-27 11:37:04,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3901.6). Total num frames: 1773568. Throughput: 0: 904.7. Samples: 443560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:37:04,231][00107] Avg episode reward: [(0, '4.990')] +[2023-02-27 11:37:09,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3873.8). Total num frames: 1789952. Throughput: 0: 869.2. Samples: 448048. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2023-02-27 11:37:09,230][00107] Avg episode reward: [(0, '5.034')] +[2023-02-27 11:37:11,335][20171] Updated weights for policy 0, policy_version 440 (0.0014) +[2023-02-27 11:37:14,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3887.8). Total num frames: 1814528. Throughput: 0: 898.8. Samples: 451606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:37:14,235][00107] Avg episode reward: [(0, '4.827')] +[2023-02-27 11:37:19,228][00107] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3915.5). Total num frames: 1835008. Throughput: 0: 933.0. Samples: 458706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:37:19,235][00107] Avg episode reward: [(0, '4.677')] +[2023-02-27 11:37:20,690][20171] Updated weights for policy 0, policy_version 450 (0.0019) +[2023-02-27 11:37:24,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3887.7). Total num frames: 1851392. Throughput: 0: 880.5. Samples: 463486. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:37:24,238][00107] Avg episode reward: [(0, '4.651')] +[2023-02-27 11:37:29,229][00107] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1871872. Throughput: 0: 870.3. Samples: 465792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:37:29,232][00107] Avg episode reward: [(0, '4.571')] +[2023-02-27 11:37:31,841][20171] Updated weights for policy 0, policy_version 460 (0.0020) +[2023-02-27 11:37:34,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3887.7). Total num frames: 1892352. Throughput: 0: 937.8. Samples: 472592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:37:34,233][00107] Avg episode reward: [(0, '4.738')] +[2023-02-27 11:37:39,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3901.6). Total num frames: 1912832. Throughput: 0: 968.3. Samples: 479282. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:37:39,234][00107] Avg episode reward: [(0, '4.861')] +[2023-02-27 11:37:42,096][20171] Updated weights for policy 0, policy_version 470 (0.0018) +[2023-02-27 11:37:44,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3873.8). Total num frames: 1929216. Throughput: 0: 953.0. Samples: 481552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:37:44,235][00107] Avg episode reward: [(0, '4.635')] +[2023-02-27 11:37:49,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3873.9). Total num frames: 1949696. Throughput: 0: 951.0. Samples: 486354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:37:49,233][00107] Avg episode reward: [(0, '4.587')] +[2023-02-27 11:37:52,942][20171] Updated weights for policy 0, policy_version 480 (0.0019) +[2023-02-27 11:37:54,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3887.8). Total num frames: 1970176. Throughput: 0: 998.0. Samples: 492956. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:37:54,234][00107] Avg episode reward: [(0, '4.774')] +[2023-02-27 11:37:59,229][00107] Fps is (10 sec: 3685.9, 60 sec: 3754.6, 300 sec: 3887.7). Total num frames: 1986560. Throughput: 0: 988.2. Samples: 496074. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) +[2023-02-27 11:37:59,237][00107] Avg episode reward: [(0, '4.704')] +[2023-02-27 11:38:04,228][00107] Fps is (10 sec: 2867.1, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 1998848. Throughput: 0: 919.4. Samples: 500080. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:38:04,235][00107] Avg episode reward: [(0, '4.625')] +[2023-02-27 11:38:05,666][20171] Updated weights for policy 0, policy_version 490 (0.0020) +[2023-02-27 11:38:09,228][00107] Fps is (10 sec: 3277.2, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2019328. Throughput: 0: 932.2. Samples: 505436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:38:09,230][00107] Avg episode reward: [(0, '4.698')] +[2023-02-27 11:38:14,228][00107] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 2043904. Throughput: 0: 951.7. Samples: 508620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:38:14,233][00107] Avg episode reward: [(0, '4.694')] +[2023-02-27 11:38:15,157][20171] Updated weights for policy 0, policy_version 500 (0.0035) +[2023-02-27 11:38:19,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3846.2). Total num frames: 2056192. Throughput: 0: 925.5. Samples: 514242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:38:19,234][00107] Avg episode reward: [(0, '4.745')] +[2023-02-27 11:38:24,232][00107] Fps is (10 sec: 2456.7, 60 sec: 3617.9, 300 sec: 3804.4). Total num frames: 2068480. Throughput: 0: 858.7. Samples: 517926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:38:24,234][00107] Avg episode reward: [(0, '4.648')] +[2023-02-27 11:38:28,991][20171] Updated weights for policy 0, policy_version 510 (0.0012) +[2023-02-27 11:38:29,228][00107] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3818.3). Total num frames: 2088960. Throughput: 0: 862.5. Samples: 520364. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:38:29,230][00107] Avg episode reward: [(0, '4.744')] +[2023-02-27 11:38:34,228][00107] Fps is (10 sec: 4097.5, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 2109440. Throughput: 0: 888.4. Samples: 526330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:38:34,230][00107] Avg episode reward: [(0, '4.513')] +[2023-02-27 11:38:39,231][00107] Fps is (10 sec: 3275.8, 60 sec: 3481.4, 300 sec: 3804.4). Total num frames: 2121728. Throughput: 0: 847.3. Samples: 531088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:38:39,234][00107] Avg episode reward: [(0, '4.467')] +[2023-02-27 11:38:41,737][20171] Updated weights for policy 0, policy_version 520 (0.0036) +[2023-02-27 11:38:44,228][00107] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3762.8). Total num frames: 2134016. Throughput: 0: 819.2. Samples: 532936. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:38:44,235][00107] Avg episode reward: [(0, '4.506')] +[2023-02-27 11:38:49,228][00107] Fps is (10 sec: 3277.8, 60 sec: 3413.3, 300 sec: 3776.7). Total num frames: 2154496. Throughput: 0: 837.0. Samples: 537744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:38:49,234][00107] Avg episode reward: [(0, '4.559')] +[2023-02-27 11:38:53,491][20171] Updated weights for policy 0, policy_version 530 (0.0013) +[2023-02-27 11:38:54,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3776.7). Total num frames: 2170880. Throughput: 0: 847.4. Samples: 543570. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:38:54,234][00107] Avg episode reward: [(0, '4.729')] +[2023-02-27 11:38:54,253][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000530_2170880.pth... 
+[2023-02-27 11:38:54,383][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000311_1273856.pth +[2023-02-27 11:38:59,229][00107] Fps is (10 sec: 2866.9, 60 sec: 3276.8, 300 sec: 3735.0). Total num frames: 2183168. Throughput: 0: 827.4. Samples: 545854. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:38:59,235][00107] Avg episode reward: [(0, '4.726')] +[2023-02-27 11:39:04,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3707.2). Total num frames: 2199552. Throughput: 0: 787.0. Samples: 549658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:04,238][00107] Avg episode reward: [(0, '4.895')] +[2023-02-27 11:39:07,159][20171] Updated weights for policy 0, policy_version 540 (0.0018) +[2023-02-27 11:39:09,228][00107] Fps is (10 sec: 3277.2, 60 sec: 3276.8, 300 sec: 3721.1). Total num frames: 2215936. Throughput: 0: 823.6. Samples: 554986. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:39:09,231][00107] Avg episode reward: [(0, '4.948')] +[2023-02-27 11:39:14,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3721.1). Total num frames: 2236416. Throughput: 0: 832.2. Samples: 557812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:39:14,231][00107] Avg episode reward: [(0, '4.681')] +[2023-02-27 11:39:19,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3693.3). Total num frames: 2248704. Throughput: 0: 805.3. Samples: 562570. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:19,233][00107] Avg episode reward: [(0, '4.733')] +[2023-02-27 11:39:19,837][20171] Updated weights for policy 0, policy_version 550 (0.0013) +[2023-02-27 11:39:24,228][00107] Fps is (10 sec: 2457.6, 60 sec: 3208.7, 300 sec: 3651.7). Total num frames: 2260992. Throughput: 0: 779.2. Samples: 566148. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:24,233][00107] Avg episode reward: [(0, '4.671')] +[2023-02-27 11:39:29,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3665.6). Total num frames: 2277376. Throughput: 0: 791.2. Samples: 568540. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:39:29,230][00107] Avg episode reward: [(0, '4.807')] +[2023-02-27 11:39:32,368][20171] Updated weights for policy 0, policy_version 560 (0.0036) +[2023-02-27 11:39:34,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3665.6). Total num frames: 2297856. Throughput: 0: 815.0. Samples: 574420. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:34,230][00107] Avg episode reward: [(0, '4.614')] +[2023-02-27 11:39:39,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3208.7, 300 sec: 3637.8). Total num frames: 2314240. Throughput: 0: 790.5. Samples: 579142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:39:39,234][00107] Avg episode reward: [(0, '4.696')] +[2023-02-27 11:39:44,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3610.0). Total num frames: 2326528. Throughput: 0: 779.2. Samples: 580918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:39:44,235][00107] Avg episode reward: [(0, '4.866')] +[2023-02-27 11:39:46,088][20171] Updated weights for policy 0, policy_version 570 (0.0027) +[2023-02-27 11:39:49,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3637.8). Total num frames: 2347008. Throughput: 0: 800.4. Samples: 585674. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:39:49,231][00107] Avg episode reward: [(0, '4.905')] +[2023-02-27 11:39:54,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3208.5, 300 sec: 3623.9). Total num frames: 2363392. Throughput: 0: 812.4. Samples: 591542. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:54,231][00107] Avg episode reward: [(0, '4.740')] +[2023-02-27 11:39:57,980][20171] Updated weights for policy 0, policy_version 580 (0.0021) +[2023-02-27 11:39:59,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3208.6, 300 sec: 3582.3). Total num frames: 2375680. Throughput: 0: 797.7. Samples: 593708. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:39:59,231][00107] Avg episode reward: [(0, '4.677')] +[2023-02-27 11:40:04,228][00107] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3554.5). Total num frames: 2387968. Throughput: 0: 775.7. Samples: 597478. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:40:04,234][00107] Avg episode reward: [(0, '4.674')] +[2023-02-27 11:40:09,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3596.2). Total num frames: 2412544. Throughput: 0: 819.8. Samples: 603038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:40:09,236][00107] Avg episode reward: [(0, '4.719')] +[2023-02-27 11:40:10,291][20171] Updated weights for policy 0, policy_version 590 (0.0029) +[2023-02-27 11:40:14,228][00107] Fps is (10 sec: 4505.6, 60 sec: 3276.8, 300 sec: 3582.3). Total num frames: 2433024. Throughput: 0: 836.4. Samples: 606176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:40:14,231][00107] Avg episode reward: [(0, '4.879')] +[2023-02-27 11:40:19,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3554.5). Total num frames: 2445312. Throughput: 0: 815.2. Samples: 611104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:40:19,231][00107] Avg episode reward: [(0, '4.941')] +[2023-02-27 11:40:23,646][20171] Updated weights for policy 0, policy_version 600 (0.0026) +[2023-02-27 11:40:24,228][00107] Fps is (10 sec: 2457.5, 60 sec: 3276.8, 300 sec: 3540.6). Total num frames: 2457600. Throughput: 0: 797.1. Samples: 615010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:40:24,237][00107] Avg episode reward: [(0, '4.995')] +[2023-02-27 11:40:29,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3554.5). Total num frames: 2478080. Throughput: 0: 824.9. Samples: 618040. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:40:29,230][00107] Avg episode reward: [(0, '5.163')] +[2023-02-27 11:40:29,237][20157] Saving new best policy, reward=5.163! +[2023-02-27 11:40:33,775][20171] Updated weights for policy 0, policy_version 610 (0.0021) +[2023-02-27 11:40:34,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3345.0, 300 sec: 3540.6). Total num frames: 2498560. Throughput: 0: 855.1. Samples: 624152. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:40:34,235][00107] Avg episode reward: [(0, '5.328')] +[2023-02-27 11:40:34,246][20157] Saving new best policy, reward=5.328! +[2023-02-27 11:40:39,232][00107] Fps is (10 sec: 3275.3, 60 sec: 3276.6, 300 sec: 3498.9). Total num frames: 2510848. Throughput: 0: 815.1. Samples: 628224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:40:39,235][00107] Avg episode reward: [(0, '5.403')] +[2023-02-27 11:40:39,244][20157] Saving new best policy, reward=5.403! +[2023-02-27 11:40:44,228][00107] Fps is (10 sec: 2457.7, 60 sec: 3276.8, 300 sec: 3499.0). Total num frames: 2523136. 
Throughput: 0: 806.3. Samples: 629992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:40:44,238][00107] Avg episode reward: [(0, '4.901')] +[2023-02-27 11:40:47,790][20171] Updated weights for policy 0, policy_version 620 (0.0014) +[2023-02-27 11:40:49,228][00107] Fps is (10 sec: 3278.1, 60 sec: 3276.8, 300 sec: 3499.0). Total num frames: 2543616. Throughput: 0: 840.1. Samples: 635284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:40:49,236][00107] Avg episode reward: [(0, '4.795')] +[2023-02-27 11:40:54,228][00107] Fps is (10 sec: 4095.8, 60 sec: 3345.0, 300 sec: 3485.1). Total num frames: 2564096. Throughput: 0: 848.3. Samples: 641214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:40:54,232][00107] Avg episode reward: [(0, '4.856')] +[2023-02-27 11:40:54,252][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000626_2564096.pth... +[2023-02-27 11:40:54,385][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000425_1740800.pth +[2023-02-27 11:40:59,228][00107] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 2576384. Throughput: 0: 821.3. Samples: 643136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:40:59,233][00107] Avg episode reward: [(0, '4.940')] +[2023-02-27 11:41:00,266][20171] Updated weights for policy 0, policy_version 630 (0.0026) +[2023-02-27 11:41:04,228][00107] Fps is (10 sec: 2457.7, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 2588672. Throughput: 0: 797.9. Samples: 647008. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:41:04,233][00107] Avg episode reward: [(0, '4.868')] +[2023-02-27 11:41:09,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 2609152. Throughput: 0: 840.8. Samples: 652846. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:41:09,234][00107] Avg episode reward: [(0, '4.728')] +[2023-02-27 11:41:11,662][20171] Updated weights for policy 0, policy_version 640 (0.0023) +[2023-02-27 11:41:14,228][00107] Fps is (10 sec: 4095.9, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 2629632. Throughput: 0: 842.8. Samples: 655968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:41:14,238][00107] Avg episode reward: [(0, '4.613')] +[2023-02-27 11:41:19,230][00107] Fps is (10 sec: 3276.2, 60 sec: 3276.7, 300 sec: 3415.6). Total num frames: 2641920. Throughput: 0: 801.9. Samples: 660238. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:41:19,235][00107] Avg episode reward: [(0, '4.720')] +[2023-02-27 11:41:24,228][00107] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 2654208. Throughput: 0: 801.1. Samples: 664270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:41:24,231][00107] Avg episode reward: [(0, '4.558')] +[2023-02-27 11:41:25,562][20171] Updated weights for policy 0, policy_version 650 (0.0031) +[2023-02-27 11:41:29,230][00107] Fps is (10 sec: 3276.6, 60 sec: 3276.7, 300 sec: 3401.7). Total num frames: 2674688. Throughput: 0: 827.3. Samples: 667222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:41:29,236][00107] Avg episode reward: [(0, '4.668')] +[2023-02-27 11:41:34,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3208.6, 300 sec: 3374.0). Total num frames: 2691072. Throughput: 0: 838.9. Samples: 673032. 
+[2023-02-27 11:41:37,869][20171] Updated weights for policy 0, policy_version 660 (0.0013)
+[2023-02-27 11:41:39,230][00107] Fps is (10 sec: 2867.2, 60 sec: 3208.7, 300 sec: 3374.0). Total num frames: 2703360. Throughput: 0: 792.9. Samples: 676896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:41:39,233][00107] Avg episode reward: [(0, '4.777')]
+[2023-02-27 11:41:44,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2719744. Throughput: 0: 792.4. Samples: 678792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:41:44,235][00107] Avg episode reward: [(0, '4.806')]
+[2023-02-27 11:41:49,228][00107] Fps is (10 sec: 3687.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2740224. Throughput: 0: 828.5. Samples: 684290. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:41:49,230][00107] Avg episode reward: [(0, '4.985')]
+[2023-02-27 11:41:49,978][20171] Updated weights for policy 0, policy_version 670 (0.0013)
+[2023-02-27 11:41:54,235][00107] Fps is (10 sec: 3683.8, 60 sec: 3208.2, 300 sec: 3373.9). Total num frames: 2756608. Throughput: 0: 819.8. Samples: 689742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:41:54,239][00107] Avg episode reward: [(0, '5.034')]
+[2023-02-27 11:41:59,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3374.0). Total num frames: 2768896. Throughput: 0: 791.4. Samples: 691580. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:41:59,232][00107] Avg episode reward: [(0, '4.948')]
+[2023-02-27 11:42:04,209][20171] Updated weights for policy 0, policy_version 680 (0.0028)
+[2023-02-27 11:42:04,228][00107] Fps is (10 sec: 2869.2, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 2785280. Throughput: 0: 778.4. Samples: 695264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:04,231][00107] Avg episode reward: [(0, '4.890')]
+[2023-02-27 11:42:09,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3346.2). Total num frames: 2801664. Throughput: 0: 821.3. Samples: 701230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:09,236][00107] Avg episode reward: [(0, '4.760')]
+[2023-02-27 11:42:14,228][00107] Fps is (10 sec: 3686.2, 60 sec: 3208.5, 300 sec: 3346.2). Total num frames: 2822144. Throughput: 0: 822.1. Samples: 704216. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:42:14,232][00107] Avg episode reward: [(0, '4.722')]
+[2023-02-27 11:42:15,869][20171] Updated weights for policy 0, policy_version 690 (0.0043)
+[2023-02-27 11:42:19,232][00107] Fps is (10 sec: 3275.4, 60 sec: 3208.4, 300 sec: 3332.3). Total num frames: 2834432. Throughput: 0: 785.3. Samples: 708376. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:19,235][00107] Avg episode reward: [(0, '4.562')]
+[2023-02-27 11:42:24,228][00107] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3304.6). Total num frames: 2846720. Throughput: 0: 795.7. Samples: 712700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:42:24,231][00107] Avg episode reward: [(0, '4.731')]
+[2023-02-27 11:42:28,322][20171] Updated weights for policy 0, policy_version 700 (0.0021)
+[2023-02-27 11:42:29,228][00107] Fps is (10 sec: 3278.3, 60 sec: 3208.7, 300 sec: 3304.6). Total num frames: 2867200. Throughput: 0: 819.9. Samples: 715686. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:29,233][00107] Avg episode reward: [(0, '4.781')]
+[2023-02-27 11:42:34,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3208.5, 300 sec: 3290.7). Total num frames: 2883584. Throughput: 0: 828.2. Samples: 721560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:34,231][00107] Avg episode reward: [(0, '4.869')]
+[2023-02-27 11:42:39,228][00107] Fps is (10 sec: 3276.6, 60 sec: 3276.9, 300 sec: 3290.7). Total num frames: 2899968. Throughput: 0: 794.9. Samples: 725508. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:42:39,238][00107] Avg episode reward: [(0, '4.738')]
+[2023-02-27 11:42:41,926][20171] Updated weights for policy 0, policy_version 710 (0.0025)
+[2023-02-27 11:42:44,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 2916352. Throughput: 0: 797.6. Samples: 727472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:44,230][00107] Avg episode reward: [(0, '4.746')]
+[2023-02-27 11:42:49,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 2936832. Throughput: 0: 848.4. Samples: 733440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:42:49,238][00107] Avg episode reward: [(0, '4.772')]
+[2023-02-27 11:42:51,815][20171] Updated weights for policy 0, policy_version 720 (0.0014)
+[2023-02-27 11:42:54,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3277.2, 300 sec: 3276.8). Total num frames: 2953216. Throughput: 0: 837.8. Samples: 738932. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:42:54,235][00107] Avg episode reward: [(0, '4.840')]
+[2023-02-27 11:42:54,249][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000721_2953216.pth...
+[2023-02-27 11:42:54,405][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000530_2170880.pth
+[2023-02-27 11:42:59,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 2965504. Throughput: 0: 812.9. Samples: 740794. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 11:42:59,234][00107] Avg episode reward: [(0, '4.839')]
+[2023-02-27 11:43:04,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3262.9). Total num frames: 2981888. Throughput: 0: 814.8. Samples: 745038. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:43:04,230][00107] Avg episode reward: [(0, '4.811')]
+[2023-02-27 11:43:05,612][20171] Updated weights for policy 0, policy_version 730 (0.0013)
+[2023-02-27 11:43:09,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3249.0). Total num frames: 3002368. Throughput: 0: 856.5. Samples: 751244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:09,231][00107] Avg episode reward: [(0, '4.680')]
+[2023-02-27 11:43:14,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3276.8, 300 sec: 3262.9). Total num frames: 3018752. Throughput: 0: 856.3. Samples: 754220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:14,231][00107] Avg episode reward: [(0, '4.518')]
+[2023-02-27 11:43:17,949][20171] Updated weights for policy 0, policy_version 740 (0.0029)
+[2023-02-27 11:43:19,228][00107] Fps is (10 sec: 2867.1, 60 sec: 3277.0, 300 sec: 3263.0). Total num frames: 3031040. Throughput: 0: 813.3. Samples: 758160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:19,232][00107] Avg episode reward: [(0, '4.558')]
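A useful cross-check on the checkpoint names: the two numbers are the policy version and the cumulative env-frame count, and in this run they stay in a fixed ratio of 4096 frames per policy version, consistent with the `policy_version` counters in the `Updated weights` lines ticking up steadily. A tiny verification over the checkpoints that appear in this log:

```python
# checkpoint_<version>_<frames>.pth pairs seen in this log:
# each policy version corresponds to exactly 4096 env frames here.
for version, frames in [(626, 2564096), (721, 2953216), (916, 3751936), (978, 4005888)]:
    assert frames == version * 4096
```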
+[2023-02-27 11:43:24,228][00107] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3262.9). Total num frames: 3051520. Throughput: 0: 839.5. Samples: 763284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:43:24,231][00107] Avg episode reward: [(0, '4.751')]
+[2023-02-27 11:43:28,876][20171] Updated weights for policy 0, policy_version 750 (0.0019)
+[2023-02-27 11:43:29,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3262.9). Total num frames: 3072000. Throughput: 0: 863.2. Samples: 766316. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:29,230][00107] Avg episode reward: [(0, '4.705')]
+[2023-02-27 11:43:34,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3276.8). Total num frames: 3088384. Throughput: 0: 850.6. Samples: 771718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:43:34,232][00107] Avg episode reward: [(0, '4.545')]
+[2023-02-27 11:43:39,228][00107] Fps is (10 sec: 2867.1, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3100672. Throughput: 0: 814.9. Samples: 775602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:43:39,235][00107] Avg episode reward: [(0, '4.567')]
+[2023-02-27 11:43:42,314][20171] Updated weights for policy 0, policy_version 760 (0.0014)
+[2023-02-27 11:43:44,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3262.9). Total num frames: 3117056. Throughput: 0: 827.5. Samples: 778030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:44,237][00107] Avg episode reward: [(0, '4.761')]
+[2023-02-27 11:43:49,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3137536. Throughput: 0: 869.3. Samples: 784158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:43:49,234][00107] Avg episode reward: [(0, '4.970')]
+[2023-02-27 11:43:53,670][20171] Updated weights for policy 0, policy_version 770 (0.0033)
+[2023-02-27 11:43:54,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3345.1, 300 sec: 3290.7). Total num frames: 3153920. Throughput: 0: 841.4. Samples: 789106. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 11:43:54,237][00107] Avg episode reward: [(0, '5.021')]
+[2023-02-27 11:43:59,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 3166208. Throughput: 0: 818.4. Samples: 791046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:43:59,230][00107] Avg episode reward: [(0, '4.923')]
+[2023-02-27 11:44:04,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3290.7). Total num frames: 3186688. Throughput: 0: 840.3. Samples: 795974. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 11:44:04,237][00107] Avg episode reward: [(0, '4.679')]
+[2023-02-27 11:44:06,033][20171] Updated weights for policy 0, policy_version 780 (0.0019)
+[2023-02-27 11:44:09,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3290.7). Total num frames: 3207168. Throughput: 0: 863.6. Samples: 802146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:44:09,231][00107] Avg episode reward: [(0, '4.830')]
+[2023-02-27 11:44:14,230][00107] Fps is (10 sec: 3276.1, 60 sec: 3344.9, 300 sec: 3290.7). Total num frames: 3219456. Throughput: 0: 851.0. Samples: 804614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:44:14,236][00107] Avg episode reward: [(0, '4.651')]
+[2023-02-27 11:44:18,967][20171] Updated weights for policy 0, policy_version 790 (0.0014)
+[2023-02-27 11:44:19,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3235840. Throughput: 0: 818.4. Samples: 808544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:44:19,236][00107] Avg episode reward: [(0, '4.629')]
+[2023-02-27 11:44:24,228][00107] Fps is (10 sec: 3277.6, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3252224. Throughput: 0: 854.8. Samples: 814070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:44:24,237][00107] Avg episode reward: [(0, '4.881')]
+[2023-02-27 11:44:29,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3272704. Throughput: 0: 868.8. Samples: 817126. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:44:29,234][00107] Avg episode reward: [(0, '4.851')]
+[2023-02-27 11:44:29,506][20171] Updated weights for policy 0, policy_version 800 (0.0020)
+[2023-02-27 11:44:34,231][00107] Fps is (10 sec: 3276.0, 60 sec: 3276.7, 300 sec: 3290.7). Total num frames: 3284992. Throughput: 0: 836.7. Samples: 821810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:44:34,236][00107] Avg episode reward: [(0, '4.515')]
+[2023-02-27 11:44:39,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3301376. Throughput: 0: 812.4. Samples: 825666. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 11:44:39,231][00107] Avg episode reward: [(0, '4.578')]
+[2023-02-27 11:44:43,111][20171] Updated weights for policy 0, policy_version 810 (0.0017)
+[2023-02-27 11:44:44,228][00107] Fps is (10 sec: 3687.4, 60 sec: 3413.3, 300 sec: 3304.6). Total num frames: 3321856. Throughput: 0: 833.1. Samples: 828536. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:44:44,231][00107] Avg episode reward: [(0, '4.483')]
+[2023-02-27 11:44:49,228][00107] Fps is (10 sec: 3686.3, 60 sec: 3345.0, 300 sec: 3304.6). Total num frames: 3338240. Throughput: 0: 853.9. Samples: 834400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:44:49,235][00107] Avg episode reward: [(0, '4.653')]
+[2023-02-27 11:44:54,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3354624. Throughput: 0: 815.3. Samples: 838836. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:44:54,230][00107] Avg episode reward: [(0, '4.662')]
+[2023-02-27 11:44:54,251][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000819_3354624.pth...
+[2023-02-27 11:44:54,412][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000626_2564096.pth
+[2023-02-27 11:44:55,723][20171] Updated weights for policy 0, policy_version 820 (0.0019)
+[2023-02-27 11:44:59,228][00107] Fps is (10 sec: 2867.3, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3366912. Throughput: 0: 802.5. Samples: 840726. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 11:44:59,230][00107] Avg episode reward: [(0, '4.817')]
+[2023-02-27 11:45:04,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3387392. Throughput: 0: 835.1. Samples: 846122. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:45:04,231][00107] Avg episode reward: [(0, '5.066')]
+[2023-02-27 11:45:07,051][20171] Updated weights for policy 0, policy_version 830 (0.0023)
+[2023-02-27 11:45:09,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3290.7). Total num frames: 3403776. Throughput: 0: 845.7. Samples: 852128. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:45:09,232][00107] Avg episode reward: [(0, '4.968')]
+[2023-02-27 11:45:14,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3304.6). Total num frames: 3420160. Throughput: 0: 820.6. Samples: 854054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:45:14,236][00107] Avg episode reward: [(0, '4.836')]
+[2023-02-27 11:45:19,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3432448. Throughput: 0: 803.5. Samples: 857966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 11:45:19,236][00107] Avg episode reward: [(0, '4.581')]
+[2023-02-27 11:45:20,461][20171] Updated weights for policy 0, policy_version 840 (0.0012)
+[2023-02-27 11:45:24,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3452928. Throughput: 0: 852.4. Samples: 864024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:45:24,232][00107] Avg episode reward: [(0, '4.706')]
+[2023-02-27 11:45:29,230][00107] Fps is (10 sec: 4095.0, 60 sec: 3344.9, 300 sec: 3304.6). Total num frames: 3473408. Throughput: 0: 855.8. Samples: 867050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:45:29,233][00107] Avg episode reward: [(0, '4.814')]
+[2023-02-27 11:45:31,993][20171] Updated weights for policy 0, policy_version 850 (0.0014)
+[2023-02-27 11:45:34,231][00107] Fps is (10 sec: 3275.7, 60 sec: 3345.0, 300 sec: 3304.6). Total num frames: 3485696. Throughput: 0: 820.2. Samples: 871310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:45:34,234][00107] Avg episode reward: [(0, '4.877')]
+[2023-02-27 11:45:39,228][00107] Fps is (10 sec: 2867.9, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3502080. Throughput: 0: 820.7. Samples: 875768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:45:39,235][00107] Avg episode reward: [(0, '4.698')]
+[2023-02-27 11:45:44,026][20171] Updated weights for policy 0, policy_version 860 (0.0018)
+[2023-02-27 11:45:44,228][00107] Fps is (10 sec: 3687.6, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3522560. Throughput: 0: 846.2. Samples: 878804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:45:44,231][00107] Avg episode reward: [(0, '4.568')]
+[2023-02-27 11:45:49,229][00107] Fps is (10 sec: 3685.9, 60 sec: 3345.0, 300 sec: 3304.6). Total num frames: 3538944. Throughput: 0: 859.7. Samples: 884810. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:45:49,236][00107] Avg episode reward: [(0, '4.699')]
+[2023-02-27 11:45:54,228][00107] Fps is (10 sec: 2867.0, 60 sec: 3276.8, 300 sec: 3304.6). Total num frames: 3551232. Throughput: 0: 812.7. Samples: 888700. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:45:54,234][00107] Avg episode reward: [(0, '4.612')]
+[2023-02-27 11:45:57,502][20171] Updated weights for policy 0, policy_version 870 (0.0014)
+[2023-02-27 11:45:59,228][00107] Fps is (10 sec: 2867.6, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3567616. Throughput: 0: 814.7. Samples: 890714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:45:59,236][00107] Avg episode reward: [(0, '4.638')]
+[2023-02-27 11:46:04,228][00107] Fps is (10 sec: 3686.6, 60 sec: 3345.1, 300 sec: 3318.5). Total num frames: 3588096. Throughput: 0: 861.5. Samples: 896734. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:46:04,235][00107] Avg episode reward: [(0, '4.775')]
+[2023-02-27 11:46:07,936][20171] Updated weights for policy 0, policy_version 880 (0.0029)
+[2023-02-27 11:46:09,230][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3304.6). Total num frames: 3604480. Throughput: 0: 847.0. Samples: 902138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:46:09,235][00107] Avg episode reward: [(0, '4.814')]
+[2023-02-27 11:46:14,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3345.0, 300 sec: 3318.5). Total num frames: 3620864. Throughput: 0: 822.4. Samples: 904056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:46:14,236][00107] Avg episode reward: [(0, '4.721')]
+[2023-02-27 11:46:19,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3332.3). Total num frames: 3637248. Throughput: 0: 829.7. Samples: 908642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:46:19,230][00107] Avg episode reward: [(0, '4.727')]
+[2023-02-27 11:46:20,955][20171] Updated weights for policy 0, policy_version 890 (0.0029)
+[2023-02-27 11:46:24,228][00107] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3332.4). Total num frames: 3657728. Throughput: 0: 867.4. Samples: 914800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:46:24,231][00107] Avg episode reward: [(0, '4.755')]
+[2023-02-27 11:46:29,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3332.3). Total num frames: 3674112. Throughput: 0: 863.2. Samples: 917650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:46:29,234][00107] Avg episode reward: [(0, '4.487')]
+[2023-02-27 11:46:33,708][20171] Updated weights for policy 0, policy_version 900 (0.0015)
+[2023-02-27 11:46:34,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.3, 300 sec: 3332.4). Total num frames: 3686400. Throughput: 0: 811.8. Samples: 921342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:46:34,231][00107] Avg episode reward: [(0, '4.531')]
+[2023-02-27 11:46:39,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3702784. Throughput: 0: 838.1. Samples: 926412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:46:39,231][00107] Avg episode reward: [(0, '4.382')]
+[2023-02-27 11:46:44,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3723264. Throughput: 0: 861.2. Samples: 929470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:46:44,231][00107] Avg episode reward: [(0, '4.490')]
+[2023-02-27 11:46:44,548][20171] Updated weights for policy 0, policy_version 910 (0.0014)
+[2023-02-27 11:46:49,234][00107] Fps is (10 sec: 3684.1, 60 sec: 3344.8, 300 sec: 3332.3). Total num frames: 3739648. Throughput: 0: 845.3. Samples: 934776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:46:49,237][00107] Avg episode reward: [(0, '4.709')]
+[2023-02-27 11:46:54,228][00107] Fps is (10 sec: 2867.1, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3751936. Throughput: 0: 811.8. Samples: 938668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:46:54,231][00107] Avg episode reward: [(0, '4.710')]
+[2023-02-27 11:46:54,250][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000916_3751936.pth...
+[2023-02-27 11:46:54,425][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000721_2953216.pth
+[2023-02-27 11:46:58,123][20171] Updated weights for policy 0, policy_version 920 (0.0021)
+[2023-02-27 11:46:59,228][00107] Fps is (10 sec: 3278.9, 60 sec: 3413.3, 300 sec: 3346.2). Total num frames: 3772416. Throughput: 0: 826.9. Samples: 941266. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:46:59,231][00107] Avg episode reward: [(0, '4.699')]
+[2023-02-27 11:47:04,228][00107] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3792896. Throughput: 0: 859.8. Samples: 947334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:47:04,236][00107] Avg episode reward: [(0, '4.495')]
+[2023-02-27 11:47:09,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3332.3). Total num frames: 3805184. Throughput: 0: 829.6. Samples: 952134. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:47:09,231][00107] Avg episode reward: [(0, '4.757')]
+[2023-02-27 11:47:09,697][20171] Updated weights for policy 0, policy_version 930 (0.0018)
+[2023-02-27 11:47:14,228][00107] Fps is (10 sec: 2457.5, 60 sec: 3276.8, 300 sec: 3332.4). Total num frames: 3817472. Throughput: 0: 809.5. Samples: 954078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:47:14,234][00107] Avg episode reward: [(0, '4.677')]
+[2023-02-27 11:47:19,228][00107] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3842048. Throughput: 0: 844.4. Samples: 959338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:47:19,238][00107] Avg episode reward: [(0, '4.591')]
+[2023-02-27 11:47:21,265][20171] Updated weights for policy 0, policy_version 940 (0.0025)
+[2023-02-27 11:47:24,228][00107] Fps is (10 sec: 4096.2, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3858432. Throughput: 0: 869.1. Samples: 965522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:47:24,230][00107] Avg episode reward: [(0, '4.483')]
+[2023-02-27 11:47:29,228][00107] Fps is (10 sec: 3276.7, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3874816. Throughput: 0: 850.7. Samples: 967752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:47:29,231][00107] Avg episode reward: [(0, '4.558')]
+[2023-02-27 11:47:34,228][00107] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3887104. Throughput: 0: 816.6. Samples: 971520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:47:34,236][00107] Avg episode reward: [(0, '4.764')]
+[2023-02-27 11:47:34,925][20171] Updated weights for policy 0, policy_version 950 (0.0034)
+[2023-02-27 11:47:39,230][00107] Fps is (10 sec: 3276.2, 60 sec: 3413.2, 300 sec: 3360.1). Total num frames: 3907584. Throughput: 0: 856.7. Samples: 977220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:47:39,235][00107] Avg episode reward: [(0, '4.707')]
+[2023-02-27 11:47:44,229][00107] Fps is (10 sec: 4095.5, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3928064. Throughput: 0: 866.2. Samples: 980248. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:47:44,237][00107] Avg episode reward: [(0, '4.581')]
+[2023-02-27 11:47:45,663][20171] Updated weights for policy 0, policy_version 960 (0.0015)
+[2023-02-27 11:47:49,228][00107] Fps is (10 sec: 3277.5, 60 sec: 3345.4, 300 sec: 3346.2). Total num frames: 3940352. Throughput: 0: 834.4. Samples: 984884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:47:49,232][00107] Avg episode reward: [(0, '4.657')]
+[2023-02-27 11:47:54,228][00107] Fps is (10 sec: 2457.9, 60 sec: 3345.1, 300 sec: 3346.2). Total num frames: 3952640. Throughput: 0: 817.9. Samples: 988938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:47:54,235][00107] Avg episode reward: [(0, '4.588')]
+[2023-02-27 11:47:58,492][20171] Updated weights for policy 0, policy_version 970 (0.0012)
+[2023-02-27 11:47:59,228][00107] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3973120. Throughput: 0: 843.0. Samples: 992014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:47:59,237][00107] Avg episode reward: [(0, '4.560')]
+[2023-02-27 11:48:04,228][00107] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3360.1). Total num frames: 3993600. Throughput: 0: 861.9. Samples: 998122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:48:04,237][00107] Avg episode reward: [(0, '4.636')]
+[2023-02-27 11:48:08,267][20157] Stopping Batcher_0...
+[2023-02-27 11:48:08,269][20157] Loop batcher_evt_loop terminating...
+[2023-02-27 11:48:08,270][00107] Component Batcher_0 stopped!
+[2023-02-27 11:48:08,289][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:48:08,365][20171] Weights refcount: 2 0
+[2023-02-27 11:48:08,367][00107] Component InferenceWorker_p0-w0 stopped!
+[2023-02-27 11:48:08,370][20171] Stopping InferenceWorker_p0-w0...
+[2023-02-27 11:48:08,370][20171] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-27 11:48:08,430][00107] Component RolloutWorker_w6 stopped!
+[2023-02-27 11:48:08,433][20176] Stopping RolloutWorker_w6...
+[2023-02-27 11:48:08,439][00107] Component RolloutWorker_w7 stopped!
+[2023-02-27 11:48:08,443][00107] Component RolloutWorker_w0 stopped!
+[2023-02-27 11:48:08,445][20172] Stopping RolloutWorker_w0...
+[2023-02-27 11:48:08,439][20179] Stopping RolloutWorker_w7...
+[2023-02-27 11:48:08,445][20172] Loop rollout_proc0_evt_loop terminating...
+[2023-02-27 11:48:08,446][20179] Loop rollout_proc7_evt_loop terminating...
+[2023-02-27 11:48:08,434][20176] Loop rollout_proc6_evt_loop terminating...
+[2023-02-27 11:48:08,459][00107] Component RolloutWorker_w4 stopped!
+[2023-02-27 11:48:08,461][20177] Stopping RolloutWorker_w4...
+[2023-02-27 11:48:08,464][20177] Loop rollout_proc4_evt_loop terminating...
+[2023-02-27 11:48:08,474][20173] Stopping RolloutWorker_w1...
+[2023-02-27 11:48:08,474][20173] Loop rollout_proc1_evt_loop terminating...
+[2023-02-27 11:48:08,473][00107] Component RolloutWorker_w1 stopped!
+[2023-02-27 11:48:08,488][00107] Component RolloutWorker_w2 stopped!
+[2023-02-27 11:48:08,491][20174] Stopping RolloutWorker_w2...
+[2023-02-27 11:48:08,491][20174] Loop rollout_proc2_evt_loop terminating...
+[2023-02-27 11:48:08,520][20157] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000819_3354624.pth
+[2023-02-27 11:48:08,537][20157] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:48:08,567][00107] Component RolloutWorker_w3 stopped!
+[2023-02-27 11:48:08,573][20175] Stopping RolloutWorker_w3...
+[2023-02-27 11:48:08,574][20175] Loop rollout_proc3_evt_loop terminating...
+[2023-02-27 11:48:08,613][00107] Component RolloutWorker_w5 stopped!
+[2023-02-27 11:48:08,619][20178] Stopping RolloutWorker_w5...
+[2023-02-27 11:48:08,620][20178] Loop rollout_proc5_evt_loop terminating...
+[2023-02-27 11:48:08,808][00107] Component LearnerWorker_p0 stopped!
+[2023-02-27 11:48:08,813][00107] Waiting for process learner_proc0 to stop...
+[2023-02-27 11:48:08,818][20157] Stopping LearnerWorker_p0...
+[2023-02-27 11:48:08,819][20157] Loop learner_proc0_evt_loop terminating...
+[2023-02-27 11:48:11,632][00107] Waiting for process inference_proc0-0 to join...
+[2023-02-27 11:48:12,110][00107] Waiting for process rollout_proc0 to join...
+[2023-02-27 11:48:12,773][00107] Waiting for process rollout_proc1 to join...
+[2023-02-27 11:48:12,776][00107] Waiting for process rollout_proc2 to join...
+[2023-02-27 11:48:12,778][00107] Waiting for process rollout_proc3 to join...
+[2023-02-27 11:48:12,780][00107] Waiting for process rollout_proc4 to join...
+[2023-02-27 11:48:12,781][00107] Waiting for process rollout_proc5 to join...
+[2023-02-27 11:48:12,784][00107] Waiting for process rollout_proc6 to join...
+[2023-02-27 11:48:12,785][00107] Waiting for process rollout_proc7 to join...
+[2023-02-27 11:48:12,786][00107] Batcher 0 profile tree view:
+batching: 26.8972, releasing_batches: 0.0254
+[2023-02-27 11:48:12,788][00107] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0101
+  wait_policy_total: 527.4529
+update_model: 8.5874
+  weight_update: 0.0036
+one_step: 0.0026
+  handle_policy_step: 546.2621
+    deserialize: 15.8587, stack: 3.1238, obs_to_device_normalize: 120.9882, forward: 263.7075, send_messages: 28.1778
+    prepare_outputs: 87.1048
+      to_cpu: 53.7354
+[2023-02-27 11:48:12,789][00107] Learner 0 profile tree view:
+misc: 0.0060, prepare_batch: 17.0107
+train: 76.7285
+  epoch_init: 0.0125, minibatch_init: 0.0064, losses_postprocess: 0.5033, kl_divergence: 0.6250, after_optimizer: 32.4975
+  calculate_losses: 27.7643
+    losses_init: 0.0204, forward_head: 1.8180, bptt_initial: 18.3318, tail: 1.1289, advantages_returns: 0.2516, losses: 3.4604
+    bptt: 2.3867
+      bptt_forward_core: 2.2594
+  update: 14.6643
+    clip: 1.4372
+[2023-02-27 11:48:12,791][00107] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3484, enqueue_policy_requests: 145.5039, env_step: 856.1247, overhead: 22.8647, complete_rollouts: 7.1126
+save_policy_outputs: 20.5167
+  split_output_tensors: 10.0193
+[2023-02-27 11:48:12,794][00107] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3580, enqueue_policy_requests: 140.6013, env_step: 858.5151, overhead: 22.8853, complete_rollouts: 6.9748
+save_policy_outputs: 21.2746
+  split_output_tensors: 10.6132
+[2023-02-27 11:48:12,796][00107] Loop Runner_EvtLoop terminating...
+[2023-02-27 11:48:12,799][00107] Runner profile tree view:
+main_loop: 1154.1242
+[2023-02-27 11:48:12,804][00107] Collected {0: 4005888}, FPS: 3470.9
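The final summary is internally consistent: the reported FPS is just the total number of env frames divided by the main-loop wall time from the runner profile. A quick arithmetic check (not part of the log):

```python
# 4,005,888 frames collected over a 1154.1242 s main loop:
print(4005888 / 1154.1242)  # ~3470.9, matching the reported FPS
```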
+[2023-02-27 11:48:47,205][00107] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-27 11:48:47,211][00107] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-27 11:48:47,214][00107] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-27 11:48:47,218][00107] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-27 11:48:47,222][00107] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:48:47,224][00107] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-27 11:48:47,226][00107] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:48:47,227][00107] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-27 11:48:47,230][00107] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-27 11:48:47,231][00107] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-27 11:48:47,234][00107] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-27 11:48:47,235][00107] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-27 11:48:47,242][00107] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-27 11:48:47,249][00107] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-27 11:48:47,257][00107] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-27 11:48:47,293][00107] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 11:48:47,300][00107] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:48:47,303][00107] RunningMeanStd input shape: (1,)
+[2023-02-27 11:48:47,329][00107] ConvEncoder: input_channels=3
+[2023-02-27 11:48:48,131][00107] Conv encoder output size: 512
+[2023-02-27 11:48:48,133][00107] Policy head output size: 512
+[2023-02-27 11:48:51,087][00107] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:48:52,438][00107] Num frames 100...
+[2023-02-27 11:48:52,574][00107] Num frames 200...
+[2023-02-27 11:48:52,710][00107] Num frames 300...
+[2023-02-27 11:48:52,842][00107] Num frames 400...
+[2023-02-27 11:48:52,974][00107] Num frames 500...
+[2023-02-27 11:48:53,048][00107] Avg episode rewards: #0: 7.120, true rewards: #0: 5.120
+[2023-02-27 11:48:53,050][00107] Avg episode reward: 7.120, avg true_objective: 5.120
+[2023-02-27 11:48:53,174][00107] Num frames 600...
+[2023-02-27 11:48:53,314][00107] Num frames 700...
+[2023-02-27 11:48:53,443][00107] Num frames 800...
+[2023-02-27 11:48:53,572][00107] Num frames 900...
+[2023-02-27 11:48:53,713][00107] Avg episode rewards: #0: 6.300, true rewards: #0: 4.800
+[2023-02-27 11:48:53,714][00107] Avg episode reward: 6.300, avg true_objective: 4.800
+[2023-02-27 11:48:53,771][00107] Num frames 1000...
+[2023-02-27 11:48:53,906][00107] Num frames 1100...
+[2023-02-27 11:48:54,047][00107] Num frames 1200...
+[2023-02-27 11:48:54,174][00107] Num frames 1300...
+[2023-02-27 11:48:54,330][00107] Avg episode rewards: #0: 5.920, true rewards: #0: 4.587
+[2023-02-27 11:48:54,332][00107] Avg episode reward: 5.920, avg true_objective: 4.587
+[2023-02-27 11:48:54,367][00107] Num frames 1400...
+[2023-02-27 11:48:54,489][00107] Num frames 1500...
+[2023-02-27 11:48:54,614][00107] Num frames 1600...
+[2023-02-27 11:48:54,711][00107] Avg episode rewards: #0: 5.080, true rewards: #0: 4.080
+[2023-02-27 11:48:54,713][00107] Avg episode reward: 5.080, avg true_objective: 4.080
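The `Avg episode rewards` lines during evaluation report a running mean over the episodes completed so far, so per-episode returns can be recovered by differencing the running totals. An illustrative back-calculation from the four episode summaries above (the code is for explanation only, not part of the tooling):

```python
# Running averages reported after episodes 1-4 of this eval run.
avgs = [7.120, 6.300, 5.920, 5.080]
totals = [a * (i + 1) for i, a in enumerate(avgs)]
per_episode = [totals[0]] + [totals[i] - totals[i - 1] for i in range(1, len(totals))]
print(per_episode)  # [7.12, 5.48, 5.16, 2.56], up to float rounding
```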
+[2023-02-27 11:48:54,817][00107] Num frames 1700...
+[2023-02-27 11:48:54,955][00107] Num frames 1800...
+[2023-02-27 11:48:55,085][00107] Num frames 1900...
+[2023-02-27 11:48:55,214][00107] Num frames 2000...
+[2023-02-27 11:48:55,373][00107] Avg episode rewards: #0: 5.160, true rewards: #0: 4.160
+[2023-02-27 11:48:55,374][00107] Avg episode reward: 5.160, avg true_objective: 4.160
+[2023-02-27 11:48:55,403][00107] Num frames 2100...
+[2023-02-27 11:48:55,535][00107] Num frames 2200...
+[2023-02-27 11:48:55,661][00107] Num frames 2300...
+[2023-02-27 11:48:55,789][00107] Num frames 2400...
+[2023-02-27 11:48:55,922][00107] Num frames 2500...
+[2023-02-27 11:48:56,020][00107] Avg episode rewards: #0: 5.213, true rewards: #0: 4.213
+[2023-02-27 11:48:56,022][00107] Avg episode reward: 5.213, avg true_objective: 4.213
+[2023-02-27 11:48:56,124][00107] Num frames 2600...
+[2023-02-27 11:48:56,252][00107] Num frames 2700...
+[2023-02-27 11:48:56,381][00107] Num frames 2800...
+[2023-02-27 11:48:56,511][00107] Num frames 2900...
+[2023-02-27 11:48:56,642][00107] Num frames 3000...
+[2023-02-27 11:48:56,798][00107] Avg episode rewards: #0: 5.531, true rewards: #0: 4.389
+[2023-02-27 11:48:56,800][00107] Avg episode reward: 5.531, avg true_objective: 4.389
+[2023-02-27 11:48:56,840][00107] Num frames 3100...
+[2023-02-27 11:48:56,968][00107] Num frames 3200...
+[2023-02-27 11:48:57,110][00107] Num frames 3300...
+[2023-02-27 11:48:57,246][00107] Num frames 3400...
+[2023-02-27 11:48:57,383][00107] Num frames 3500...
+[2023-02-27 11:48:57,510][00107] Num frames 3600...
+[2023-02-27 11:48:57,588][00107] Avg episode rewards: #0: 5.770, true rewards: #0: 4.520
+[2023-02-27 11:48:57,593][00107] Avg episode reward: 5.770, avg true_objective: 4.520
+[2023-02-27 11:48:57,712][00107] Num frames 3700...
+[2023-02-27 11:48:57,843][00107] Num frames 3800...
+[2023-02-27 11:48:57,977][00107] Num frames 3900...
+[2023-02-27 11:48:58,116][00107] Num frames 4000...
+[2023-02-27 11:48:58,214][00107] Avg episode rewards: #0: 5.702, true rewards: #0: 4.480
+[2023-02-27 11:48:58,216][00107] Avg episode reward: 5.702, avg true_objective: 4.480
+[2023-02-27 11:48:58,307][00107] Num frames 4100...
+[2023-02-27 11:48:58,444][00107] Num frames 4200...
+[2023-02-27 11:48:58,574][00107] Num frames 4300...
+[2023-02-27 11:48:58,699][00107] Num frames 4400...
+[2023-02-27 11:48:58,778][00107] Avg episode rewards: #0: 5.516, true rewards: #0: 4.416
+[2023-02-27 11:48:58,781][00107] Avg episode reward: 5.516, avg true_objective: 4.416
+[2023-02-27 11:49:22,050][00107] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
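This first evaluation pass (local video, no Hub push) is what Sample Factory's `enjoy` entry point produces when the training config is reloaded with the overrides logged above. A hedged sketch of an invocation that would yield those overrides, assuming a `parse_vizdoom_cfg` helper like the one used in the Hugging Face Deep RL course notebook (the helper name is an assumption, not a Sample Factory API):

```python
from sample_factory.enjoy import enjoy

# parse_vizdoom_cfg is assumed to be the course-notebook helper that
# registers the VizDoom envs and parses args in evaluation mode.
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",  # env inferred from the repo id below
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # writes replay.mp4 under the experiment's train_dir
```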
+[2023-02-27 11:51:32,107][00107] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-27 11:51:32,109][00107] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-27 11:51:32,112][00107] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-27 11:51:32,115][00107] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-27 11:51:32,117][00107] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:51:32,120][00107] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-27 11:51:32,123][00107] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-27 11:51:32,125][00107] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-27 11:51:32,127][00107] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-27 11:51:32,128][00107] Adding new argument 'hf_repository'='KoRiF/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-27 11:51:32,129][00107] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-27 11:51:32,131][00107] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-27 11:51:32,133][00107] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-27 11:51:32,134][00107] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-27 11:51:32,135][00107] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-27 11:51:32,163][00107] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:51:32,166][00107] RunningMeanStd input shape: (1,)
+[2023-02-27 11:51:32,181][00107] ConvEncoder: input_channels=3
+[2023-02-27 11:51:32,223][00107] Conv encoder output size: 512
+[2023-02-27 11:51:32,228][00107] Policy head output size: 512
+[2023-02-27 11:51:32,250][00107] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:51:32,711][00107] Num frames 100...
+[2023-02-27 11:51:32,839][00107] Num frames 200...
+[2023-02-27 11:51:32,963][00107] Num frames 300...
+[2023-02-27 11:51:33,137][00107] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-27 11:51:33,139][00107] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-27 11:51:33,163][00107] Num frames 400...
+[2023-02-27 11:51:33,287][00107] Num frames 500...
+[2023-02-27 11:51:33,412][00107] Num frames 600...
+[2023-02-27 11:51:33,546][00107] Num frames 700...
+[2023-02-27 11:51:33,678][00107] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-27 11:51:33,680][00107] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-27 11:51:33,721][00107] Num frames 800...
+[2023-02-27 11:51:33,847][00107] Num frames 900...
+[2023-02-27 11:51:33,983][00107] Num frames 1000...
+[2023-02-27 11:51:34,102][00107] Num frames 1100...
+[2023-02-27 11:51:34,225][00107] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-27 11:51:34,226][00107] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-27 11:51:34,294][00107] Num frames 1200...
+[2023-02-27 11:51:34,421][00107] Num frames 1300...
+[2023-02-27 11:51:34,567][00107] Num frames 1400...
+[2023-02-27 11:51:34,692][00107] Num frames 1500...
+[2023-02-27 11:51:34,831][00107] Avg episode rewards: #0: 4.170, true rewards: #0: 3.920
+[2023-02-27 11:51:34,833][00107] Avg episode reward: 4.170, avg true_objective: 3.920
+[2023-02-27 11:51:34,879][00107] Num frames 1600...
+[2023-02-27 11:51:34,998][00107] Num frames 1700...
+[2023-02-27 11:51:35,126][00107] Num frames 1800...
+[2023-02-27 11:51:35,253][00107] Num frames 1900...
+[2023-02-27 11:51:35,377][00107] Num frames 2000...
+[2023-02-27 11:51:35,487][00107] Avg episode rewards: #0: 4.496, true rewards: #0: 4.096
+[2023-02-27 11:51:35,490][00107] Avg episode reward: 4.496, avg true_objective: 4.096
+[2023-02-27 11:51:35,568][00107] Num frames 2100...
+[2023-02-27 11:51:35,699][00107] Num frames 2200...
+[2023-02-27 11:51:35,820][00107] Num frames 2300...
+[2023-02-27 11:51:35,958][00107] Num frames 2400...
+[2023-02-27 11:51:36,093][00107] Avg episode rewards: #0: 4.440, true rewards: #0: 4.107
+[2023-02-27 11:51:36,095][00107] Avg episode reward: 4.440, avg true_objective: 4.107
+[2023-02-27 11:51:36,140][00107] Num frames 2500...
+[2023-02-27 11:51:36,268][00107] Num frames 2600...
+[2023-02-27 11:51:36,385][00107] Num frames 2700...
+[2023-02-27 11:51:36,500][00107] Num frames 2800...
+[2023-02-27 11:51:36,623][00107] Num frames 2900...
+[2023-02-27 11:51:36,767][00107] Avg episode rewards: #0: 4.823, true rewards: #0: 4.251
+[2023-02-27 11:51:36,771][00107] Avg episode reward: 4.823, avg true_objective: 4.251
+[2023-02-27 11:51:36,811][00107] Num frames 3000...
+[2023-02-27 11:51:36,935][00107] Num frames 3100...
+[2023-02-27 11:51:37,056][00107] Num frames 3200...
+[2023-02-27 11:51:37,181][00107] Num frames 3300...
+[2023-02-27 11:51:37,318][00107] Avg episode rewards: #0: 4.700, true rewards: #0: 4.200
+[2023-02-27 11:51:37,321][00107] Avg episode reward: 4.700, avg true_objective: 4.200
+[2023-02-27 11:51:37,374][00107] Num frames 3400...
+[2023-02-27 11:51:37,503][00107] Num frames 3500...
+[2023-02-27 11:51:37,633][00107] Num frames 3600...
+[2023-02-27 11:51:37,763][00107] Num frames 3700...
+[2023-02-27 11:51:37,881][00107] Avg episode rewards: #0: 4.604, true rewards: #0: 4.160
+[2023-02-27 11:51:37,882][00107] Avg episode reward: 4.604, avg true_objective: 4.160
+[2023-02-27 11:51:37,954][00107] Num frames 3800...
+[2023-02-27 11:51:38,075][00107] Num frames 3900...
+[2023-02-27 11:51:38,196][00107] Num frames 4000...
+[2023-02-27 11:51:38,325][00107] Num frames 4100...
+[2023-02-27 11:51:38,412][00107] Avg episode rewards: #0: 4.528, true rewards: #0: 4.128
+[2023-02-27 11:51:38,414][00107] Avg episode reward: 4.528, avg true_objective: 4.128
+[2023-02-27 11:51:59,229][00107] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
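The log ends as the second replay video is written; with `push_to_hub=True` and the `hf_repository` shown above, the enjoy script is then expected to upload the model, config, and replay to the Hub repo `KoRiF/rl_course_vizdoom_health_gathering_supreme`. A hedged sketch of the invocation behind this second run, under the same assumed `parse_vizdoom_cfg` helper as above:

```python
from sample_factory.enjoy import enjoy

# Same assumed parse_vizdoom_cfg helper; only the Hub overrides differ.
cfg = parse_vizdoom_cfg(
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
        "--max_num_frames=100000",
        "--push_to_hub",
        "--hf_repository=KoRiF/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```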