[2024-10-10 13:26:18,084][00343] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-10-10 13:26:18,086][00343] Rollout worker 0 uses device cpu
[2024-10-10 13:26:18,088][00343] Rollout worker 1 uses device cpu
[2024-10-10 13:26:18,089][00343] Rollout worker 2 uses device cpu
[2024-10-10 13:26:18,090][00343] Rollout worker 3 uses device cpu
[2024-10-10 13:26:18,091][00343] Rollout worker 4 uses device cpu
[2024-10-10 13:26:18,092][00343] Rollout worker 5 uses device cpu
[2024-10-10 13:26:18,093][00343] Rollout worker 6 uses device cpu
[2024-10-10 13:26:18,094][00343] Rollout worker 7 uses device cpu
[2024-10-10 13:26:18,251][00343] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-10 13:26:18,253][00343] InferenceWorker_p0-w0: min num requests: 2
[2024-10-10 13:26:18,286][00343] Starting all processes...
[2024-10-10 13:26:18,288][00343] Starting process learner_proc0
[2024-10-10 13:26:18,336][00343] Starting all processes...
[2024-10-10 13:26:18,347][00343] Starting process inference_proc0-0
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc0
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc1
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc2
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc3
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc4
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc5
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc6
[2024-10-10 13:26:18,350][00343] Starting process rollout_proc7
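Note: the startup above is Sample Factory's default APPO topology: one learner process, one inference worker, and eight rollout workers. A minimal sketch of how a run like this is typically launched, assuming the sf_examples VizDoom helpers from Sample Factory 2.x; the scenario name is an assumption, since this log never prints which env was trained:

from sample_factory.train import run_rl
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components, parse_vizdoom_cfg

register_vizdoom_components()

# --num_workers=8 matches the eight rollout processes above; the env name below
# is an assumption, and the train_dir/experiment values come from this log.
cfg = parse_vizdoom_cfg(argv=[
    "--env=doom_health_gathering_supreme",
    "--num_workers=8",
    "--num_envs_per_worker=4",
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
])
run_rl(cfg)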
[2024-10-10 13:26:28,457][03659] Worker 2 uses CPU cores [0]
[2024-10-10 13:26:28,961][03643] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-10 13:26:28,975][03643] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-10-10 13:26:29,066][03643] Num visible devices: 1
[2024-10-10 13:26:29,112][03643] Starting seed is not provided
[2024-10-10 13:26:29,113][03643] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-10 13:26:29,113][03643] Initializing actor-critic model on device cuda:0
[2024-10-10 13:26:29,114][03643] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:26:29,116][03643] RunningMeanStd input shape: (1,)
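Note: the two shapes above are the observation normalizer ((3, 72, 128) images) and the returns normalizer (scalar). A hedged sketch of the running mean/std idea behind these modules, using the standard batched parallel-variance merge; Sample Factory's actual modules are in-place TorchScript variants:

import numpy as np

class RunningMeanStd:
    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps  # avoids division by zero before the first update

    def update(self, batch):
        # merge running stats with batch stats (parallel-variance formula)
        batch_mean, batch_var = batch.mean(axis=0), batch.var(axis=0)
        n = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        m_a = self.var * self.count
        m_b = batch_var * n
        self.var = (m_a + m_b + delta**2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)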
[2024-10-10 13:26:29,244][03643] ConvEncoder: input_channels=3
[2024-10-10 13:26:29,436][03661] Worker 4 uses CPU cores [0]
[2024-10-10 13:26:29,589][03656] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-10 13:26:29,592][03656] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-10-10 13:26:29,721][03658] Worker 1 uses CPU cores [1]
[2024-10-10 13:26:29,748][03663] Worker 6 uses CPU cores [0]
[2024-10-10 13:26:29,748][03656] Num visible devices: 1
[2024-10-10 13:26:29,801][03662] Worker 5 uses CPU cores [1]
[2024-10-10 13:26:29,823][03657] Worker 0 uses CPU cores [0]
[2024-10-10 13:26:29,848][03664] Worker 7 uses CPU cores [1]
[2024-10-10 13:26:29,974][03660] Worker 3 uses CPU cores [1]
[2024-10-10 13:26:30,018][03643] Conv encoder output size: 512
[2024-10-10 13:26:30,019][03643] Policy head output size: 512
[2024-10-10 13:26:30,035][03643] Created Actor Critic model with architecture:
[2024-10-10 13:26:30,035][03643] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
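Note: a hedged PyTorch sketch of the architecture printed above, with shapes taken from the log (3x72x128 observations, 512-d encoder and core outputs, 5 discrete actions). The conv kernel sizes and strides are assumptions based on Sample Factory's default simple convnet; the log does not record them:

import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        self.conv_head = nn.Sequential(  # assumed default "simple" encoder layout
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # with 3x72x128 inputs the feature map is 128x3x6 -> 2304 flat features
        self.mlp_layers = nn.Sequential(nn.Linear(128 * 3 * 6, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)  # ModelCoreRNN: GRU(512, 512)
        self.critic_linear = nn.Linear(hidden, 1)
        self.distribution_linear = nn.Linear(hidden, num_actions)

    def forward(self, obs, rnn_state):
        x = self.conv_head(obs)            # obs: (B, 3, 72, 128), pre-normalized
        x = self.mlp_layers(x.flatten(1))  # (B, 512)
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state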
[2024-10-10 13:26:34,743][03643] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-10-10 13:26:34,744][03643] No checkpoints found
[2024-10-10 13:26:34,744][03643] Did not load from checkpoint, starting from scratch!
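Note: the two lines above are the learner's resume-or-fresh decision at startup. A minimal sketch of that logic, assuming the checkpoint_*.pth naming that appears later in this log; the state-dict keys are illustrative:

from pathlib import Path
import torch

def maybe_load_latest(ckpt_dir: str, model: torch.nn.Module) -> int:
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))  # zero-padded names sort correctly
    if not ckpts:
        print("No checkpoints found")
        return 0  # start from scratch at model version 0
    state = torch.load(ckpts[-1], map_location="cpu")
    model.load_state_dict(state["model"])  # key names are an assumption
    return state.get("policy_version", 0)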
[2024-10-10 13:26:34,745][03643] Initialized policy 0 weights for model version 0
[2024-10-10 13:26:34,749][03643] LearnerWorker_p0 finished initialization!
[2024-10-10 13:26:34,751][03643] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-10 13:26:34,966][03656] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:26:34,968][03656] RunningMeanStd input shape: (1,)
[2024-10-10 13:26:34,984][03656] ConvEncoder: input_channels=3
[2024-10-10 13:26:35,086][03656] Conv encoder output size: 512
[2024-10-10 13:26:35,086][03656] Policy head output size: 512
[2024-10-10 13:26:36,594][00343] Inference worker 0-0 is ready!
[2024-10-10 13:26:36,595][00343] All inference workers are ready! Signal rollout workers to start!
[2024-10-10 13:26:36,695][03660] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,718][03664] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,711][03662] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,723][03658] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,736][03663] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,752][03661] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,757][03657] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:26:36,761][03659] Doom resolution: 160x120, resize resolution: (128, 72)
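Note: every rollout worker reports the same observation pipeline: VizDoom renders at 160x120 and frames are resized to 128x72 for the network, matching the (3, 72, 128) normalizer shape above. A hedged sketch using OpenCV; the interpolation mode is an assumption:

import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    # frame: 120x160x3 uint8 from VizDoom; cv2.resize takes (width, height)
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return resized.transpose(2, 0, 1)  # HWC -> CHW, i.e. (3, 72, 128)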
[2024-10-10 13:26:37,962][03662] Decorrelating experience for 0 frames...
[2024-10-10 13:26:37,963][03664] Decorrelating experience for 0 frames...
[2024-10-10 13:26:38,166][03657] Decorrelating experience for 0 frames...
[2024-10-10 13:26:38,170][03659] Decorrelating experience for 0 frames...
[2024-10-10 13:26:38,177][03661] Decorrelating experience for 0 frames...
[2024-10-10 13:26:38,186][03663] Decorrelating experience for 0 frames...
[2024-10-10 13:26:38,243][00343] Heartbeat connected on Batcher_0
[2024-10-10 13:26:38,249][00343] Heartbeat connected on LearnerWorker_p0
[2024-10-10 13:26:38,293][00343] Heartbeat connected on InferenceWorker_p0-w0
[2024-10-10 13:26:38,930][03657] Decorrelating experience for 32 frames...
[2024-10-10 13:26:38,938][03661] Decorrelating experience for 32 frames...
[2024-10-10 13:26:38,957][00343] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-10 13:26:39,095][03660] Decorrelating experience for 0 frames...
[2024-10-10 13:26:39,105][03658] Decorrelating experience for 0 frames...
[2024-10-10 13:26:39,490][03662] Decorrelating experience for 32 frames...
[2024-10-10 13:26:40,821][03664] Decorrelating experience for 32 frames...
[2024-10-10 13:26:40,866][03659] Decorrelating experience for 32 frames...
[2024-10-10 13:26:40,925][03658] Decorrelating experience for 32 frames...
[2024-10-10 13:26:41,155][03657] Decorrelating experience for 64 frames...
[2024-10-10 13:26:41,165][03661] Decorrelating experience for 64 frames...
[2024-10-10 13:26:41,472][03660] Decorrelating experience for 32 frames...
[2024-10-10 13:26:42,622][03662] Decorrelating experience for 64 frames...
[2024-10-10 13:26:42,801][03658] Decorrelating experience for 64 frames...
[2024-10-10 13:26:42,803][03664] Decorrelating experience for 64 frames...
[2024-10-10 13:26:42,942][03663] Decorrelating experience for 32 frames...
[2024-10-10 13:26:43,313][03661] Decorrelating experience for 96 frames...
[2024-10-10 13:26:43,316][03657] Decorrelating experience for 96 frames...
[2024-10-10 13:26:43,658][00343] Heartbeat connected on RolloutWorker_w4
[2024-10-10 13:26:43,661][00343] Heartbeat connected on RolloutWorker_w0
[2024-10-10 13:26:43,957][00343] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-10 13:26:44,675][03658] Decorrelating experience for 96 frames...
[2024-10-10 13:26:44,873][03659] Decorrelating experience for 64 frames...
[2024-10-10 13:26:44,963][00343] Heartbeat connected on RolloutWorker_w1
[2024-10-10 13:26:45,048][03660] Decorrelating experience for 64 frames...
[2024-10-10 13:26:45,053][03662] Decorrelating experience for 96 frames...
[2024-10-10 13:26:45,188][03663] Decorrelating experience for 64 frames...
[2024-10-10 13:26:45,441][00343] Heartbeat connected on RolloutWorker_w5
[2024-10-10 13:26:46,052][03664] Decorrelating experience for 96 frames...
[2024-10-10 13:26:46,191][00343] Heartbeat connected on RolloutWorker_w7
[2024-10-10 13:26:46,444][03659] Decorrelating experience for 96 frames...
[2024-10-10 13:26:46,587][03663] Decorrelating experience for 96 frames...
[2024-10-10 13:26:46,602][00343] Heartbeat connected on RolloutWorker_w2
[2024-10-10 13:26:46,757][00343] Heartbeat connected on RolloutWorker_w6
[2024-10-10 13:26:47,199][03660] Decorrelating experience for 96 frames...
[2024-10-10 13:26:47,305][00343] Heartbeat connected on RolloutWorker_w3
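Note: the interleaved lines above show each worker stepping its environments to a different total (0/32/64/96 frames) before training begins, so episodes start out of phase across workers. A rough sketch of the idea, assuming a Gym-style env API:

def decorrelate(env, num_chunks: int, chunk: int = 32):
    # print the cumulative frame count, then advance another chunk with random actions
    for i in range(num_chunks):
        print(f"Decorrelating experience for {i * chunk} frames...")
        for _ in range(chunk):
            env.step(env.action_space.sample())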
[2024-10-10 13:26:48,957][00343] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 118.2. Samples: 1182. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-10 13:26:48,958][00343] Avg episode reward: [(0, '1.476')]
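Note: each report gives FPS over 10/60/300-second sliding windows plus cumulative frames, sample throughput, and policy lag. A hedged sketch of windowed FPS accounting; it returns nan until a window holds two samples, matching the first reports above:

import time
from collections import deque

class FpsMeter:
    def __init__(self):
        self.samples = deque()  # (timestamp, total_frames) pairs

    def record(self, total_frames: int) -> None:
        now = time.time()
        self.samples.append((now, total_frames))
        while self.samples and now - self.samples[0][0] > 300:
            self.samples.popleft()  # keep nothing older than the largest window

    def fps(self, window: float) -> float:
        now = time.time()
        recent = [(t, f) for t, f in self.samples if now - t <= window]
        if len(recent) < 2:
            return float("nan")
        (t0, f0), (t1, f1) = recent[0], recent[-1]
        return (f1 - f0) / max(t1 - t0, 1e-9)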
[2024-10-10 13:26:50,300][03643] Signal inference workers to stop experience collection...
[2024-10-10 13:26:50,314][03656] InferenceWorker_p0-w0: stopping experience collection
[2024-10-10 13:26:52,685][03643] Signal inference workers to resume experience collection...
[2024-10-10 13:26:52,688][03656] InferenceWorker_p0-w0: resuming experience collection
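Note: the learner pauses collection around its first policy update, then resumes it. A real implementation signals across processes; a threading.Event conveys the handshake in a single process:

import threading

collect_enabled = threading.Event()
collect_enabled.set()

def first_policy_update(do_update):
    collect_enabled.clear()  # "stop experience collection"
    do_update()              # first SGD step(s) on the initial rollouts
    collect_enabled.set()    # "resume experience collection"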
[2024-10-10 13:26:53,957][00343] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 162.9. Samples: 2444. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-10-10 13:26:53,959][00343] Avg episode reward: [(0, '2.624')]
[2024-10-10 13:26:58,957][00343] Fps is (10 sec: 2867.2, 60 sec: 1433.6, 300 sec: 1433.6). Total num frames: 28672. Throughput: 0: 342.3. Samples: 6846. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:26:58,961][00343] Avg episode reward: [(0, '3.701')]
[2024-10-10 13:27:02,928][03656] Updated weights for policy 0, policy_version 10 (0.0373)
[2024-10-10 13:27:03,961][00343] Fps is (10 sec: 3685.0, 60 sec: 1638.2, 300 sec: 1638.2). Total num frames: 40960. Throughput: 0: 439.1. Samples: 10980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:27:03,968][00343] Avg episode reward: [(0, '3.958')]
[2024-10-10 13:27:08,957][00343] Fps is (10 sec: 3276.8, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 61440. Throughput: 0: 462.1. Samples: 13862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:27:08,963][00343] Avg episode reward: [(0, '4.396')]
[2024-10-10 13:27:13,509][03656] Updated weights for policy 0, policy_version 20 (0.0027)
[2024-10-10 13:27:13,957][00343] Fps is (10 sec: 4097.5, 60 sec: 2340.6, 300 sec: 2340.6). Total num frames: 81920. Throughput: 0: 572.6. Samples: 20040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:27:13,959][00343] Avg episode reward: [(0, '4.331')]
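Note: policy lag in these reports is how many versions behind the learner the collected samples are (min/avg/max over the batch; -1.0 before any samples arrive). A hedged sketch:

def lag_stats(current_version: int, sample_versions: list[int]) -> tuple[float, float, float]:
    if not sample_versions:
        return (-1.0, -1.0, -1.0)  # matches the reports before training starts
    lags = [current_version - v for v in sample_versions]
    return (float(min(lags)), sum(lags) / len(lags), float(max(lags)))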
[2024-10-10 13:27:18,958][00343] Fps is (10 sec: 3686.0, 60 sec: 2457.5, 300 sec: 2457.5). Total num frames: 98304. Throughput: 0: 622.6. Samples: 24906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:27:18,960][00343] Avg episode reward: [(0, '4.300')]
[2024-10-10 13:27:23,957][00343] Fps is (10 sec: 3276.8, 60 sec: 2548.6, 300 sec: 2548.6). Total num frames: 114688. Throughput: 0: 600.8. Samples: 27036. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:27:23,961][00343] Avg episode reward: [(0, '4.372')]
[2024-10-10 13:27:23,966][03643] Saving new best policy, reward=4.372!
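Note: best-policy snapshots are driven by the average episode reward in the reports above. A minimal sketch of that bookkeeping; the save path and state-dict layout are assumptions:

import torch

class BestPolicyTracker:
    def __init__(self):
        self.best_reward = float("-inf")

    def update(self, avg_reward: float, model: torch.nn.Module, path: str) -> None:
        if avg_reward > self.best_reward:
            self.best_reward = avg_reward
            torch.save({"model": model.state_dict()}, path)  # keys are illustrative
            print(f"Saving new best policy, reward={avg_reward:.3f}!")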
[2024-10-10 13:27:25,463][03656] Updated weights for policy 0, policy_version 30 (0.0015)
[2024-10-10 13:27:28,957][00343] Fps is (10 sec: 3686.8, 60 sec: 2703.4, 300 sec: 2703.4). Total num frames: 135168. Throughput: 0: 747.3. Samples: 33630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:27:28,959][00343] Avg episode reward: [(0, '4.281')]
[2024-10-10 13:27:33,957][00343] Fps is (10 sec: 4096.0, 60 sec: 2830.0, 300 sec: 2830.0). Total num frames: 155648. Throughput: 0: 848.2. Samples: 39352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:27:33,961][00343] Avg episode reward: [(0, '4.311')]
[2024-10-10 13:27:36,808][03656] Updated weights for policy 0, policy_version 40 (0.0028)
[2024-10-10 13:27:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2798.9). Total num frames: 167936. Throughput: 0: 864.9. Samples: 41366. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:27:38,963][00343] Avg episode reward: [(0, '4.445')]
[2024-10-10 13:27:38,966][03643] Saving new best policy, reward=4.445!
[2024-10-10 13:27:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3140.3, 300 sec: 2898.7). Total num frames: 188416. Throughput: 0: 892.5. Samples: 47008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:27:43,965][00343] Avg episode reward: [(0, '4.419')]
[2024-10-10 13:27:46,859][03656] Updated weights for policy 0, policy_version 50 (0.0014)
[2024-10-10 13:27:48,957][00343] Fps is (10 sec: 4505.4, 60 sec: 3549.8, 300 sec: 3042.7). Total num frames: 212992. Throughput: 0: 948.4. Samples: 53656. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:27:48,960][00343] Avg episode reward: [(0, '4.265')]
[2024-10-10 13:27:53,961][00343] Fps is (10 sec: 3684.9, 60 sec: 3686.2, 300 sec: 3003.6). Total num frames: 225280. Throughput: 0: 929.9. Samples: 55710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:27:53,963][00343] Avg episode reward: [(0, '4.507')]
[2024-10-10 13:27:53,977][03643] Saving new best policy, reward=4.507!
[2024-10-10 13:27:58,901][03656] Updated weights for policy 0, policy_version 60 (0.0027)
[2024-10-10 13:27:58,957][00343] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3072.0). Total num frames: 245760. Throughput: 0: 903.4. Samples: 60694. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:27:58,959][00343] Avg episode reward: [(0, '4.757')]
[2024-10-10 13:27:58,966][03643] Saving new best policy, reward=4.757!
[2024-10-10 13:28:03,957][00343] Fps is (10 sec: 4097.6, 60 sec: 3754.9, 300 sec: 3132.2). Total num frames: 266240. Throughput: 0: 939.1. Samples: 67166. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:28:03,965][00343] Avg episode reward: [(0, '4.694')]
[2024-10-10 13:28:08,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3140.3). Total num frames: 282624. Throughput: 0: 956.9. Samples: 70098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:28:08,959][00343] Avg episode reward: [(0, '4.632')]
[2024-10-10 13:28:09,578][03656] Updated weights for policy 0, policy_version 70 (0.0014)
[2024-10-10 13:28:13,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3147.5). Total num frames: 299008. Throughput: 0: 900.8. Samples: 74164. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:28:13,963][00343] Avg episode reward: [(0, '4.721')]
[2024-10-10 13:28:13,973][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth...
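Note: the checkpoint filename encodes the zero-padded policy version and the cumulative env frame count. A sketch that reproduces the name above:

def checkpoint_name(policy_version: int, env_steps: int) -> str:
    return f"checkpoint_{policy_version:09d}_{env_steps}.pth"

# checkpoint_name(73, 299008) -> "checkpoint_000000073_299008.pth"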
[2024-10-10 13:28:18,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3194.9). Total num frames: 319488. Throughput: 0: 914.2. Samples: 80490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:28:18,961][00343] Avg episode reward: [(0, '4.926')]
[2024-10-10 13:28:18,964][03643] Saving new best policy, reward=4.926!
[2024-10-10 13:28:20,517][03656] Updated weights for policy 0, policy_version 80 (0.0016)
[2024-10-10 13:28:23,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3237.8). Total num frames: 339968. Throughput: 0: 942.7. Samples: 83788. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:28:23,963][00343] Avg episode reward: [(0, '4.975')]
[2024-10-10 13:28:23,972][03643] Saving new best policy, reward=4.975!
[2024-10-10 13:28:28,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3202.3). Total num frames: 352256. Throughput: 0: 920.4. Samples: 88426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:28:28,962][00343] Avg episode reward: [(0, '5.049')]
[2024-10-10 13:28:28,973][03643] Saving new best policy, reward=5.049!
[2024-10-10 13:28:32,431][03656] Updated weights for policy 0, policy_version 90 (0.0026)
[2024-10-10 13:28:33,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3241.2). Total num frames: 372736. Throughput: 0: 896.9. Samples: 94018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:28:33,961][00343] Avg episode reward: [(0, '5.123')]
[2024-10-10 13:28:33,970][03643] Saving new best policy, reward=5.123!
[2024-10-10 13:28:38,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3310.9). Total num frames: 397312. Throughput: 0: 924.2. Samples: 97296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:28:38,963][00343] Avg episode reward: [(0, '5.222')]
[2024-10-10 13:28:38,967][03643] Saving new best policy, reward=5.222!
[2024-10-10 13:28:43,018][03656] Updated weights for policy 0, policy_version 100 (0.0023)
[2024-10-10 13:28:43,962][00343] Fps is (10 sec: 3684.6, 60 sec: 3686.1, 300 sec: 3276.7). Total num frames: 409600. Throughput: 0: 936.0. Samples: 102818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:28:43,964][00343] Avg episode reward: [(0, '5.429')]
[2024-10-10 13:28:43,975][03643] Saving new best policy, reward=5.429!
[2024-10-10 13:28:48,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 425984. Throughput: 0: 895.2. Samples: 107450. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:28:48,963][00343] Avg episode reward: [(0, '5.220')]
[2024-10-10 13:28:53,596][03656] Updated weights for policy 0, policy_version 110 (0.0019)
[2024-10-10 13:28:53,957][00343] Fps is (10 sec: 4098.0, 60 sec: 3754.9, 300 sec: 3337.5). Total num frames: 450560. Throughput: 0: 904.0. Samples: 110780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:28:53,959][00343] Avg episode reward: [(0, '5.262')]
[2024-10-10 13:28:58,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3335.3). Total num frames: 466944. Throughput: 0: 959.4. Samples: 117338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:28:58,959][00343] Avg episode reward: [(0, '5.259')]
[2024-10-10 13:29:03,958][00343] Fps is (10 sec: 3276.5, 60 sec: 3618.1, 300 sec: 3333.3). Total num frames: 483328. Throughput: 0: 908.2. Samples: 121362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:03,962][00343] Avg episode reward: [(0, '5.233')]
[2024-10-10 13:29:05,674][03656] Updated weights for policy 0, policy_version 120 (0.0031)
[2024-10-10 13:29:08,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3358.7). Total num frames: 503808. Throughput: 0: 902.3. Samples: 124390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:08,964][00343] Avg episode reward: [(0, '5.325')]
[2024-10-10 13:29:13,957][00343] Fps is (10 sec: 4096.3, 60 sec: 3754.7, 300 sec: 3382.5). Total num frames: 524288. Throughput: 0: 943.5. Samples: 130882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:29:13,964][00343] Avg episode reward: [(0, '5.353')]
[2024-10-10 13:29:15,884][03656] Updated weights for policy 0, policy_version 130 (0.0013)
[2024-10-10 13:29:18,960][00343] Fps is (10 sec: 3275.8, 60 sec: 3618.0, 300 sec: 3353.5). Total num frames: 536576. Throughput: 0: 918.9. Samples: 135370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:18,974][00343] Avg episode reward: [(0, '5.301')]
[2024-10-10 13:29:23,958][00343] Fps is (10 sec: 3276.4, 60 sec: 3618.0, 300 sec: 3376.1). Total num frames: 557056. Throughput: 0: 896.9. Samples: 137656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:29:23,963][00343] Avg episode reward: [(0, '5.626')]
[2024-10-10 13:29:23,977][03643] Saving new best policy, reward=5.626!
[2024-10-10 13:29:27,253][03656] Updated weights for policy 0, policy_version 140 (0.0020)
[2024-10-10 13:29:28,957][00343] Fps is (10 sec: 4097.2, 60 sec: 3754.7, 300 sec: 3397.3). Total num frames: 577536. Throughput: 0: 919.8. Samples: 144206. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:29:28,964][00343] Avg episode reward: [(0, '5.845')]
[2024-10-10 13:29:28,967][03643] Saving new best policy, reward=5.845!
[2024-10-10 13:29:33,957][00343] Fps is (10 sec: 4096.6, 60 sec: 3754.7, 300 sec: 3417.2). Total num frames: 598016. Throughput: 0: 942.0. Samples: 149840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:33,963][00343] Avg episode reward: [(0, '6.030')]
[2024-10-10 13:29:33,971][03643] Saving new best policy, reward=6.030!
[2024-10-10 13:29:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3390.6). Total num frames: 610304. Throughput: 0: 913.0. Samples: 151864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:38,959][00343] Avg episode reward: [(0, '5.865')]
[2024-10-10 13:29:39,305][03656] Updated weights for policy 0, policy_version 150 (0.0029)
[2024-10-10 13:29:43,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3686.7, 300 sec: 3409.6). Total num frames: 630784. Throughput: 0: 899.8. Samples: 157830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:29:43,961][00343] Avg episode reward: [(0, '5.574')]
[2024-10-10 13:29:48,557][03656] Updated weights for policy 0, policy_version 160 (0.0020)
[2024-10-10 13:29:48,963][00343] Fps is (10 sec: 4502.9, 60 sec: 3822.5, 300 sec: 3449.2). Total num frames: 655360. Throughput: 0: 954.8. Samples: 164334. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:29:48,967][00343] Avg episode reward: [(0, '5.768')]
[2024-10-10 13:29:53,957][00343] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3423.8). Total num frames: 667648. Throughput: 0: 934.0. Samples: 166422. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:29:53,966][00343] Avg episode reward: [(0, '5.717')]
[2024-10-10 13:29:58,957][00343] Fps is (10 sec: 3278.8, 60 sec: 3686.4, 300 sec: 3440.6). Total num frames: 688128. Throughput: 0: 904.5. Samples: 171584. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:29:58,959][00343] Avg episode reward: [(0, '5.556')]
[2024-10-10 13:30:00,187][03656] Updated weights for policy 0, policy_version 170 (0.0023)
[2024-10-10 13:30:03,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3456.6). Total num frames: 708608. Throughput: 0: 951.8. Samples: 178198. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:30:03,967][00343] Avg episode reward: [(0, '5.679')]
[2024-10-10 13:30:08,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3452.3). Total num frames: 724992. Throughput: 0: 961.3. Samples: 180912. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:30:08,959][00343] Avg episode reward: [(0, '5.918')]
[2024-10-10 13:30:12,528][03656] Updated weights for policy 0, policy_version 180 (0.0014)
[2024-10-10 13:30:13,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3448.3). Total num frames: 741376. Throughput: 0: 910.0. Samples: 185154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:30:13,961][00343] Avg episode reward: [(0, '6.082')]
[2024-10-10 13:30:13,973][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000181_741376.pth...
[2024-10-10 13:30:14,107][03643] Saving new best policy, reward=6.082!
[2024-10-10 13:30:18,960][00343] Fps is (10 sec: 3685.3, 60 sec: 3754.7, 300 sec: 3462.9). Total num frames: 761856. Throughput: 0: 925.9. Samples: 191510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:30:18,966][00343] Avg episode reward: [(0, '5.684')]
[2024-10-10 13:30:21,774][03656] Updated weights for policy 0, policy_version 190 (0.0032)
[2024-10-10 13:30:23,957][00343] Fps is (10 sec: 4096.1, 60 sec: 3754.8, 300 sec: 3477.1). Total num frames: 782336. Throughput: 0: 955.8. Samples: 194874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:30:23,959][00343] Avg episode reward: [(0, '5.374')]
[2024-10-10 13:30:28,957][00343] Fps is (10 sec: 3277.8, 60 sec: 3618.1, 300 sec: 3454.9). Total num frames: 794624. Throughput: 0: 917.9. Samples: 199136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:30:28,963][00343] Avg episode reward: [(0, '5.404')]
[2024-10-10 13:30:33,899][03656] Updated weights for policy 0, policy_version 200 (0.0024)
[2024-10-10 13:30:33,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3486.0). Total num frames: 819200. Throughput: 0: 907.1. Samples: 205148. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:30:33,964][00343] Avg episode reward: [(0, '5.448')]
[2024-10-10 13:30:38,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3498.7). Total num frames: 839680. Throughput: 0: 934.1. Samples: 208458. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:30:38,960][00343] Avg episode reward: [(0, '5.700')]
[2024-10-10 13:30:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3477.4). Total num frames: 851968. Throughput: 0: 933.4. Samples: 213586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:30:43,959][00343] Avg episode reward: [(0, '5.859')]
[2024-10-10 13:30:45,693][03656] Updated weights for policy 0, policy_version 210 (0.0013)
[2024-10-10 13:30:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.5, 300 sec: 3489.8). Total num frames: 872448. Throughput: 0: 900.0. Samples: 218698. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:30:48,964][00343] Avg episode reward: [(0, '6.126')]
[2024-10-10 13:30:48,969][03643] Saving new best policy, reward=6.126!
[2024-10-10 13:30:53,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3501.7). Total num frames: 892928. Throughput: 0: 912.3. Samples: 221964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:30:53,959][00343] Avg episode reward: [(0, '6.575')]
[2024-10-10 13:30:53,970][03643] Saving new best policy, reward=6.575!
[2024-10-10 13:30:55,211][03656] Updated weights for policy 0, policy_version 220 (0.0013)
[2024-10-10 13:30:58,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3513.1). Total num frames: 913408. Throughput: 0: 954.3. Samples: 228096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:30:58,964][00343] Avg episode reward: [(0, '6.700')]
[2024-10-10 13:30:58,967][03643] Saving new best policy, reward=6.700!
[2024-10-10 13:31:03,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3493.2). Total num frames: 925696. Throughput: 0: 902.9. Samples: 232138. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:31:03,961][00343] Avg episode reward: [(0, '7.195')]
[2024-10-10 13:31:03,975][03643] Saving new best policy, reward=7.195!
[2024-10-10 13:31:07,350][03656] Updated weights for policy 0, policy_version 230 (0.0033)
[2024-10-10 13:31:08,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3504.4). Total num frames: 946176. Throughput: 0: 897.6. Samples: 235266. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:31:08,964][00343] Avg episode reward: [(0, '7.742')]
[2024-10-10 13:31:08,968][03643] Saving new best policy, reward=7.742!
[2024-10-10 13:31:13,958][00343] Fps is (10 sec: 4505.2, 60 sec: 3822.9, 300 sec: 3530.0). Total num frames: 970752. Throughput: 0: 951.8. Samples: 241968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:31:13,964][00343] Avg episode reward: [(0, '8.102')]
[2024-10-10 13:31:13,973][03643] Saving new best policy, reward=8.102!
[2024-10-10 13:31:18,798][03656] Updated weights for policy 0, policy_version 240 (0.0024)
[2024-10-10 13:31:18,961][00343] Fps is (10 sec: 3684.9, 60 sec: 3686.3, 300 sec: 3510.8). Total num frames: 983040. Throughput: 0: 911.9. Samples: 246188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:31:18,963][00343] Avg episode reward: [(0, '7.935')]
[2024-10-10 13:31:23,957][00343] Fps is (10 sec: 2867.5, 60 sec: 3618.1, 300 sec: 3506.8). Total num frames: 999424. Throughput: 0: 892.0. Samples: 248596. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:31:23,961][00343] Avg episode reward: [(0, '7.937')]
[2024-10-10 13:31:28,576][03656] Updated weights for policy 0, policy_version 250 (0.0020)
[2024-10-10 13:31:28,957][00343] Fps is (10 sec: 4097.6, 60 sec: 3822.9, 300 sec: 3531.0). Total num frames: 1024000. Throughput: 0: 929.0. Samples: 255392. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:31:28,961][00343] Avg episode reward: [(0, '8.673')]
[2024-10-10 13:31:28,965][03643] Saving new best policy, reward=8.673!
[2024-10-10 13:31:33,960][00343] Fps is (10 sec: 4094.7, 60 sec: 3686.2, 300 sec: 3526.7). Total num frames: 1040384. Throughput: 0: 933.9. Samples: 260728. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:31:33,964][00343] Avg episode reward: [(0, '8.642')]
[2024-10-10 13:31:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 1056768. Throughput: 0: 906.7. Samples: 262766. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:31:38,959][00343] Avg episode reward: [(0, '8.586')]
[2024-10-10 13:31:40,551][03656] Updated weights for policy 0, policy_version 260 (0.0024)
[2024-10-10 13:31:43,957][00343] Fps is (10 sec: 3687.5, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1077248. Throughput: 0: 912.0. Samples: 269136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:31:43,959][00343] Avg episode reward: [(0, '7.862')]
[2024-10-10 13:31:48,958][00343] Fps is (10 sec: 4095.5, 60 sec: 3754.6, 300 sec: 3707.2). Total num frames: 1097728. Throughput: 0: 956.9. Samples: 275198. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:31:48,960][00343] Avg episode reward: [(0, '8.189')]
[2024-10-10 13:31:51,260][03656] Updated weights for policy 0, policy_version 270 (0.0025)
[2024-10-10 13:31:53,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 1110016. Throughput: 0: 932.8. Samples: 277242. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:31:53,959][00343] Avg episode reward: [(0, '7.652')]
[2024-10-10 13:31:58,957][00343] Fps is (10 sec: 3277.1, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 1130496. Throughput: 0: 905.6. Samples: 282720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:31:58,960][00343] Avg episode reward: [(0, '8.056')]
[2024-10-10 13:32:01,707][03656] Updated weights for policy 0, policy_version 280 (0.0018)
[2024-10-10 13:32:03,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1155072. Throughput: 0: 958.9. Samples: 289334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:32:03,959][00343] Avg episode reward: [(0, '8.408')]
[2024-10-10 13:32:08,957][00343] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 1171456. Throughput: 0: 962.0. Samples: 291884. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:32:08,965][00343] Avg episode reward: [(0, '9.036')]
[2024-10-10 13:32:08,966][03643] Saving new best policy, reward=9.036!
[2024-10-10 13:32:13,639][03656] Updated weights for policy 0, policy_version 290 (0.0029)
[2024-10-10 13:32:13,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3693.4). Total num frames: 1187840. Throughput: 0: 910.0. Samples: 296344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:32:13,959][00343] Avg episode reward: [(0, '7.909')]
[2024-10-10 13:32:13,973][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000290_1187840.pth...
[2024-10-10 13:32:14,091][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth
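Note: each periodic save is paired with the removal of the oldest checkpoint; in this log two checkpoints are kept at any time. A minimal sketch of that rotation:

from pathlib import Path

def prune_checkpoints(ckpt_dir: str, keep_last: int = 2) -> None:
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep_last]:
        print(f"Removing {old}")
        old.unlink()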
[2024-10-10 13:32:18,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3754.9, 300 sec: 3707.2). Total num frames: 1208320. Throughput: 0: 932.6. Samples: 302694. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:32:18,959][00343] Avg episode reward: [(0, '7.713')]
[2024-10-10 13:32:23,747][03656] Updated weights for policy 0, policy_version 300 (0.0022)
[2024-10-10 13:32:23,958][00343] Fps is (10 sec: 4095.7, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1228800. Throughput: 0: 963.9. Samples: 306142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:32:23,960][00343] Avg episode reward: [(0, '7.648')]
[2024-10-10 13:32:28,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1241088. Throughput: 0: 916.2. Samples: 310364. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:32:28,959][00343] Avg episode reward: [(0, '7.909')]
[2024-10-10 13:32:33,957][00343] Fps is (10 sec: 3686.7, 60 sec: 3754.9, 300 sec: 3721.1). Total num frames: 1265664. Throughput: 0: 921.2. Samples: 316652. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:32:33,959][00343] Avg episode reward: [(0, '8.920')]
[2024-10-10 13:32:34,881][03656] Updated weights for policy 0, policy_version 310 (0.0012)
[2024-10-10 13:32:38,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1286144. Throughput: 0: 951.1. Samples: 320042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:32:38,959][00343] Avg episode reward: [(0, '9.947')]
[2024-10-10 13:32:38,967][03643] Saving new best policy, reward=9.947!
[2024-10-10 13:32:43,958][00343] Fps is (10 sec: 3276.5, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1298432. Throughput: 0: 938.9. Samples: 324970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:32:43,963][00343] Avg episode reward: [(0, '10.666')]
[2024-10-10 13:32:43,970][03643] Saving new best policy, reward=10.666!
[2024-10-10 13:32:46,924][03656] Updated weights for policy 0, policy_version 320 (0.0035)
[2024-10-10 13:32:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3707.3). Total num frames: 1318912. Throughput: 0: 907.2. Samples: 330158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:32:48,959][00343] Avg episode reward: [(0, '10.713')]
[2024-10-10 13:32:48,965][03643] Saving new best policy, reward=10.713!
[2024-10-10 13:32:53,957][00343] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1339392. Throughput: 0: 921.4. Samples: 333346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:32:53,959][00343] Avg episode reward: [(0, '10.542')]
[2024-10-10 13:32:56,103][03656] Updated weights for policy 0, policy_version 330 (0.0014)
[2024-10-10 13:32:58,957][00343] Fps is (10 sec: 3686.2, 60 sec: 3754.6, 300 sec: 3693.3). Total num frames: 1355776. Throughput: 0: 956.4. Samples: 339384. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:32:58,960][00343] Avg episode reward: [(0, '10.073')]
[2024-10-10 13:33:03,958][00343] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1372160. Throughput: 0: 913.0. Samples: 343780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:33:03,961][00343] Avg episode reward: [(0, '9.957')]
[2024-10-10 13:33:07,906][03656] Updated weights for policy 0, policy_version 340 (0.0025)
[2024-10-10 13:33:08,957][00343] Fps is (10 sec: 4096.3, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1396736. Throughput: 0: 912.2. Samples: 347192. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:33:08,960][00343] Avg episode reward: [(0, '9.546')]
[2024-10-10 13:33:13,961][00343] Fps is (10 sec: 4504.2, 60 sec: 3822.7, 300 sec: 3721.1). Total num frames: 1417216. Throughput: 0: 964.7. Samples: 353778. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:33:13,967][00343] Avg episode reward: [(0, '9.753')]
[2024-10-10 13:33:18,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1429504. Throughput: 0: 921.2. Samples: 358106. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:33:18,959][00343] Avg episode reward: [(0, '9.488')]
[2024-10-10 13:33:19,651][03656] Updated weights for policy 0, policy_version 350 (0.0022)
[2024-10-10 13:33:23,957][00343] Fps is (10 sec: 3278.2, 60 sec: 3686.5, 300 sec: 3721.1). Total num frames: 1449984. Throughput: 0: 904.8. Samples: 360758. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:33:23,962][00343] Avg episode reward: [(0, '9.489')]
[2024-10-10 13:33:28,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1470464. Throughput: 0: 942.9. Samples: 367400. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:33:28,959][00343] Avg episode reward: [(0, '10.669')]
[2024-10-10 13:33:29,328][03656] Updated weights for policy 0, policy_version 360 (0.0012)
[2024-10-10 13:33:33,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1486848. Throughput: 0: 947.2. Samples: 372780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:33:33,961][00343] Avg episode reward: [(0, '10.988')]
[2024-10-10 13:33:33,974][03643] Saving new best policy, reward=10.988!
[2024-10-10 13:33:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.3). Total num frames: 1503232. Throughput: 0: 921.5. Samples: 374814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:33:38,959][00343] Avg episode reward: [(0, '12.612')]
[2024-10-10 13:33:38,963][03643] Saving new best policy, reward=12.612!
[2024-10-10 13:33:41,095][03656] Updated weights for policy 0, policy_version 370 (0.0035)
[2024-10-10 13:33:43,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3735.0). Total num frames: 1527808. Throughput: 0: 926.6. Samples: 381082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:33:43,959][00343] Avg episode reward: [(0, '12.908')]
[2024-10-10 13:33:43,966][03643] Saving new best policy, reward=12.908!
[2024-10-10 13:33:48,961][00343] Fps is (10 sec: 4094.3, 60 sec: 3754.4, 300 sec: 3707.2). Total num frames: 1544192. Throughput: 0: 961.6. Samples: 387054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:33:48,971][00343] Avg episode reward: [(0, '13.417')]
[2024-10-10 13:33:48,973][03643] Saving new best policy, reward=13.417!
[2024-10-10 13:33:52,558][03656] Updated weights for policy 0, policy_version 380 (0.0013)
[2024-10-10 13:33:53,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1556480. Throughput: 0: 927.8. Samples: 388942. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:33:53,962][00343] Avg episode reward: [(0, '13.098')]
[2024-10-10 13:33:58,959][00343] Fps is (10 sec: 3687.0, 60 sec: 3754.5, 300 sec: 3721.1). Total num frames: 1581056. Throughput: 0: 906.0. Samples: 394548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:33:58,970][00343] Avg episode reward: [(0, '12.734')]
[2024-10-10 13:34:02,488][03656] Updated weights for policy 0, policy_version 390 (0.0020)
[2024-10-10 13:34:03,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3823.0, 300 sec: 3721.1). Total num frames: 1601536. Throughput: 0: 959.4. Samples: 401278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:03,960][00343] Avg episode reward: [(0, '13.999')]
[2024-10-10 13:34:03,968][03643] Saving new best policy, reward=13.999!
[2024-10-10 13:34:08,962][00343] Fps is (10 sec: 3685.6, 60 sec: 3686.1, 300 sec: 3707.2). Total num frames: 1617920. Throughput: 0: 951.9. Samples: 403596. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:34:08,964][00343] Avg episode reward: [(0, '14.208')]
[2024-10-10 13:34:08,971][03643] Saving new best policy, reward=14.208!
[2024-10-10 13:34:13,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3618.4, 300 sec: 3721.1). Total num frames: 1634304. Throughput: 0: 905.1. Samples: 408132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:13,959][00343] Avg episode reward: [(0, '15.044')]
[2024-10-10 13:34:13,968][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000399_1634304.pth...
[2024-10-10 13:34:14,080][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000181_741376.pth
[2024-10-10 13:34:14,101][03643] Saving new best policy, reward=15.044!
[2024-10-10 13:34:14,591][03656] Updated weights for policy 0, policy_version 400 (0.0024)
[2024-10-10 13:34:18,957][00343] Fps is (10 sec: 3688.2, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1654784. Throughput: 0: 927.2. Samples: 414506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:18,959][00343] Avg episode reward: [(0, '15.028')]
[2024-10-10 13:34:23,957][00343] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1671168. Throughput: 0: 952.0. Samples: 417652. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:23,959][00343] Avg episode reward: [(0, '14.265')]
[2024-10-10 13:34:25,598][03656] Updated weights for policy 0, policy_version 410 (0.0015)
[2024-10-10 13:34:28,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1687552. Throughput: 0: 907.3. Samples: 421910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:34:28,962][00343] Avg episode reward: [(0, '13.312')]
[2024-10-10 13:34:33,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1712128. Throughput: 0: 916.7. Samples: 428300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:33,960][00343] Avg episode reward: [(0, '12.892')]
[2024-10-10 13:34:35,776][03656] Updated weights for policy 0, policy_version 420 (0.0014)
[2024-10-10 13:34:38,957][00343] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 1732608. Throughput: 0: 949.0. Samples: 431648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:34:38,959][00343] Avg episode reward: [(0, '13.777')]
[2024-10-10 13:34:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 1744896. Throughput: 0: 933.4. Samples: 436548. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:34:43,959][00343] Avg episode reward: [(0, '13.730')]
[2024-10-10 13:34:47,700][03656] Updated weights for policy 0, policy_version 430 (0.0030)
[2024-10-10 13:34:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.7, 300 sec: 3721.1). Total num frames: 1765376. Throughput: 0: 904.2. Samples: 441968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:48,959][00343] Avg episode reward: [(0, '13.823')]
[2024-10-10 13:34:53,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1785856. Throughput: 0: 926.4. Samples: 445280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:34:53,959][00343] Avg episode reward: [(0, '13.544')]
[2024-10-10 13:34:57,783][03656] Updated weights for policy 0, policy_version 440 (0.0012)
[2024-10-10 13:34:58,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3707.2). Total num frames: 1802240. Throughput: 0: 952.3. Samples: 450984. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:34:58,959][00343] Avg episode reward: [(0, '13.808')]
[2024-10-10 13:35:03,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 1818624. Throughput: 0: 913.8. Samples: 455628. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:35:03,959][00343] Avg episode reward: [(0, '14.550')]
[2024-10-10 13:35:08,717][03656] Updated weights for policy 0, policy_version 450 (0.0013)
[2024-10-10 13:35:08,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3755.0, 300 sec: 3735.0). Total num frames: 1843200. Throughput: 0: 918.4. Samples: 458978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:35:08,960][00343] Avg episode reward: [(0, '16.665')]
[2024-10-10 13:35:08,965][03643] Saving new best policy, reward=16.665!
[2024-10-10 13:35:13,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3823.0, 300 sec: 3735.0). Total num frames: 1863680. Throughput: 0: 971.2. Samples: 465614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:35:13,959][00343] Avg episode reward: [(0, '18.369')]
[2024-10-10 13:35:13,966][03643] Saving new best policy, reward=18.369!
[2024-10-10 13:35:18,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1875968. Throughput: 0: 914.9. Samples: 469472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:35:18,960][00343] Avg episode reward: [(0, '17.937')]
[2024-10-10 13:35:21,001][03656] Updated weights for policy 0, policy_version 460 (0.0023)
[2024-10-10 13:35:23,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1896448. Throughput: 0: 902.9. Samples: 472280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:35:23,959][00343] Avg episode reward: [(0, '18.836')]
[2024-10-10 13:35:23,968][03643] Saving new best policy, reward=18.836!
[2024-10-10 13:35:28,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3721.1). Total num frames: 1916928. Throughput: 0: 936.5. Samples: 478692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:35:28,959][00343] Avg episode reward: [(0, '17.666')]
[2024-10-10 13:35:31,323][03656] Updated weights for policy 0, policy_version 470 (0.0013)
[2024-10-10 13:35:33,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1929216. Throughput: 0: 923.3. Samples: 483516. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:35:33,964][00343] Avg episode reward: [(0, '17.629')]
[2024-10-10 13:35:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3721.1). Total num frames: 1949696. Throughput: 0: 895.0. Samples: 485554. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:35:38,958][00343] Avg episode reward: [(0, '16.641')]
[2024-10-10 13:35:42,512][03656] Updated weights for policy 0, policy_version 480 (0.0022)
[2024-10-10 13:35:43,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1970176. Throughput: 0: 912.9. Samples: 492064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:35:43,961][00343] Avg episode reward: [(0, '17.749')]
[2024-10-10 13:35:48,958][00343] Fps is (10 sec: 3686.0, 60 sec: 3686.3, 300 sec: 3707.2). Total num frames: 1986560. Throughput: 0: 936.2. Samples: 497758. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:35:48,966][00343] Avg episode reward: [(0, '17.718')]
[2024-10-10 13:35:53,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 1998848. Throughput: 0: 905.4. Samples: 499722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:35:53,963][00343] Avg episode reward: [(0, '18.207')]
[2024-10-10 13:35:55,007][03656] Updated weights for policy 0, policy_version 490 (0.0026)
[2024-10-10 13:35:58,957][00343] Fps is (10 sec: 3686.8, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2023424. Throughput: 0: 881.9. Samples: 505300. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:35:58,959][00343] Avg episode reward: [(0, '18.780')]
[2024-10-10 13:36:03,957][00343] Fps is (10 sec: 4505.7, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2043904. Throughput: 0: 938.1. Samples: 511686. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:36:03,965][00343] Avg episode reward: [(0, '18.860')]
[2024-10-10 13:36:03,992][03643] Saving new best policy, reward=18.860!
[2024-10-10 13:36:04,836][03656] Updated weights for policy 0, policy_version 500 (0.0012)
[2024-10-10 13:36:08,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 2056192. Throughput: 0: 921.8. Samples: 513762. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:36:08,959][00343] Avg episode reward: [(0, '19.410')]
[2024-10-10 13:36:08,964][03643] Saving new best policy, reward=19.410!
[2024-10-10 13:36:13,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3707.3). Total num frames: 2076672. Throughput: 0: 882.3. Samples: 518394. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:36:13,959][00343] Avg episode reward: [(0, '20.088')]
[2024-10-10 13:36:13,971][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000507_2076672.pth...
[2024-10-10 13:36:14,085][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000290_1187840.pth
[2024-10-10 13:36:14,099][03643] Saving new best policy, reward=20.088!
[2024-10-10 13:36:16,812][03656] Updated weights for policy 0, policy_version 510 (0.0015)
[2024-10-10 13:36:18,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2097152. Throughput: 0: 911.5. Samples: 524534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:36:18,965][00343] Avg episode reward: [(0, '20.519')]
[2024-10-10 13:36:18,968][03643] Saving new best policy, reward=20.519!
[2024-10-10 13:36:23,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2113536. Throughput: 0: 930.5. Samples: 527428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:36:23,964][00343] Avg episode reward: [(0, '21.066')]
[2024-10-10 13:36:23,974][03643] Saving new best policy, reward=21.066!
[2024-10-10 13:36:28,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3679.5). Total num frames: 2125824. Throughput: 0: 874.3. Samples: 531406. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:36:28,963][00343] Avg episode reward: [(0, '20.493')]
[2024-10-10 13:36:29,202][03656] Updated weights for policy 0, policy_version 520 (0.0020)
[2024-10-10 13:36:33,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2146304. Throughput: 0: 887.8. Samples: 537708. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:36:33,959][00343] Avg episode reward: [(0, '21.202')]
[2024-10-10 13:36:34,038][03643] Saving new best policy, reward=21.202!
[2024-10-10 13:36:38,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2166784. Throughput: 0: 912.2. Samples: 540770. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:36:38,966][00343] Avg episode reward: [(0, '20.037')]
[2024-10-10 13:36:39,345][03656] Updated weights for policy 0, policy_version 530 (0.0036)
[2024-10-10 13:36:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 2179072. Throughput: 0: 887.6. Samples: 545240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:36:43,959][00343] Avg episode reward: [(0, '19.800')]
[2024-10-10 13:36:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 2199552. Throughput: 0: 866.4. Samples: 550676. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:36:48,964][00343] Avg episode reward: [(0, '19.252')]
[2024-10-10 13:36:51,315][03656] Updated weights for policy 0, policy_version 540 (0.0015)
[2024-10-10 13:36:53,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2220032. Throughput: 0: 892.3. Samples: 553916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:36:53,959][00343] Avg episode reward: [(0, '20.421')]
[2024-10-10 13:36:58,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2236416. Throughput: 0: 909.1. Samples: 559302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:36:58,965][00343] Avg episode reward: [(0, '20.203')]
[2024-10-10 13:37:03,292][03656] Updated weights for policy 0, policy_version 550 (0.0027)
[2024-10-10 13:37:03,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 2252800. Throughput: 0: 877.5. Samples: 564020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:37:03,965][00343] Avg episode reward: [(0, '20.128')]
[2024-10-10 13:37:08,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2273280. Throughput: 0: 883.4. Samples: 567180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:37:08,963][00343] Avg episode reward: [(0, '22.326')]
[2024-10-10 13:37:08,965][03643] Saving new best policy, reward=22.326!
[2024-10-10 13:37:13,627][03656] Updated weights for policy 0, policy_version 560 (0.0025)
[2024-10-10 13:37:13,957][00343] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2293760. Throughput: 0: 932.1. Samples: 573350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:37:13,960][00343] Avg episode reward: [(0, '22.316')]
[2024-10-10 13:37:18,959][00343] Fps is (10 sec: 3276.1, 60 sec: 3481.5, 300 sec: 3651.7). Total num frames: 2306048. Throughput: 0: 879.3. Samples: 577278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:37:18,961][00343] Avg episode reward: [(0, '22.445')]
[2024-10-10 13:37:18,963][03643] Saving new best policy, reward=22.445!
[2024-10-10 13:37:23,960][00343] Fps is (10 sec: 3275.9, 60 sec: 3549.7, 300 sec: 3679.4). Total num frames: 2326528. Throughput: 0: 871.2. Samples: 579978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:37:23,966][00343] Avg episode reward: [(0, '22.367')]
[2024-10-10 13:37:25,725][03656] Updated weights for policy 0, policy_version 570 (0.0031)
[2024-10-10 13:37:28,957][00343] Fps is (10 sec: 4096.8, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2347008. Throughput: 0: 913.3. Samples: 586338. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:37:28,959][00343] Avg episode reward: [(0, '23.574')]
[2024-10-10 13:37:28,965][03643] Saving new best policy, reward=23.574!
[2024-10-10 13:37:33,963][00343] Fps is (10 sec: 3275.7, 60 sec: 3549.5, 300 sec: 3637.7). Total num frames: 2359296. Throughput: 0: 896.4. Samples: 591020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:37:33,970][00343] Avg episode reward: [(0, '23.641')]
[2024-10-10 13:37:33,988][03643] Saving new best policy, reward=23.641!
[2024-10-10 13:37:38,259][03656] Updated weights for policy 0, policy_version 580 (0.0021)
[2024-10-10 13:37:38,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 2375680. Throughput: 0: 867.1. Samples: 592934. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:37:38,960][00343] Avg episode reward: [(0, '23.104')]
[2024-10-10 13:37:43,957][00343] Fps is (10 sec: 4098.5, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2400256. Throughput: 0: 888.4. Samples: 599278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:37:43,959][00343] Avg episode reward: [(0, '22.793')]
[2024-10-10 13:37:48,275][03656] Updated weights for policy 0, policy_version 590 (0.0027)
[2024-10-10 13:37:48,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2416640. Throughput: 0: 911.5. Samples: 605036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:37:48,960][00343] Avg episode reward: [(0, '21.786')]
[2024-10-10 13:37:53,958][00343] Fps is (10 sec: 2866.7, 60 sec: 3481.5, 300 sec: 3637.8). Total num frames: 2428928. Throughput: 0: 882.9. Samples: 606914. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:37:53,966][00343] Avg episode reward: [(0, '21.874')]
[2024-10-10 13:37:58,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3651.7). Total num frames: 2449408. Throughput: 0: 868.7. Samples: 612440. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:37:58,959][00343] Avg episode reward: [(0, '20.818')]
[2024-10-10 13:37:59,864][03656] Updated weights for policy 0, policy_version 600 (0.0012)
[2024-10-10 13:38:03,957][00343] Fps is (10 sec: 4506.3, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 2473984. Throughput: 0: 926.3. Samples: 618958. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:38:03,964][00343] Avg episode reward: [(0, '20.947')]
[2024-10-10 13:38:08,957][00343] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3624.0). Total num frames: 2486272. Throughput: 0: 913.3. Samples: 621072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:08,960][00343] Avg episode reward: [(0, '20.748')]
[2024-10-10 13:38:12,283][03656] Updated weights for policy 0, policy_version 610 (0.0016)
[2024-10-10 13:38:13,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3637.8). Total num frames: 2502656. Throughput: 0: 874.0. Samples: 625668. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:38:13,962][00343] Avg episode reward: [(0, '20.974')]
[2024-10-10 13:38:14,032][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000612_2506752.pth...
[2024-10-10 13:38:14,168][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000399_1634304.pth
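
The save/remove pair above is a keep-latest-N checkpoint rotation. A minimal sketch, assuming filenames encode policy version and env frames as in the log (hypothetical helper, not the library's actual code):

    import glob
    import os

    import torch

    def save_and_rotate(ckpt_dir, model, version, env_frames, keep_n=2):
        # filename layout mirrors the log: checkpoint_<version>_<env_frames>.pth
        path = os.path.join(ckpt_dir, f"checkpoint_{version:09d}_{env_frames}.pth")
        torch.save({"model": model.state_dict()}, path)
        # drop the oldest checkpoints once more than keep_n remain
        # (zero-padded versions make lexicographic sort chronological)
        ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
        for old in ckpts[:-keep_n]:
            os.remove(old)
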
[2024-10-10 13:38:18,957][00343] Fps is (10 sec: 4096.1, 60 sec: 3686.5, 300 sec: 3651.7). Total num frames: 2527232. Throughput: 0: 915.0. Samples: 632190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:38:18,968][00343] Avg episode reward: [(0, '23.118')]
[2024-10-10 13:38:21,877][03656] Updated weights for policy 0, policy_version 620 (0.0012)
[2024-10-10 13:38:23,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3618.3, 300 sec: 3637.8). Total num frames: 2543616. Throughput: 0: 936.8. Samples: 635090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:23,962][00343] Avg episode reward: [(0, '23.502')]
[2024-10-10 13:38:28,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 2555904. Throughput: 0: 884.8. Samples: 639092. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:38:28,963][00343] Avg episode reward: [(0, '23.972')]
[2024-10-10 13:38:28,968][03643] Saving new best policy, reward=23.972!
[2024-10-10 13:38:33,953][03656] Updated weights for policy 0, policy_version 630 (0.0029)
[2024-10-10 13:38:33,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.8, 300 sec: 3651.7). Total num frames: 2580480. Throughput: 0: 896.8. Samples: 645390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:33,966][00343] Avg episode reward: [(0, '23.848')]
[2024-10-10 13:38:38,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2600960. Throughput: 0: 927.2. Samples: 648638. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:38,960][00343] Avg episode reward: [(0, '22.927')]
[2024-10-10 13:38:43,963][00343] Fps is (10 sec: 3274.8, 60 sec: 3549.5, 300 sec: 3623.9). Total num frames: 2613248. Throughput: 0: 907.5. Samples: 653284. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:43,966][00343] Avg episode reward: [(0, '22.013')]
[2024-10-10 13:38:46,017][03656] Updated weights for policy 0, policy_version 640 (0.0021)
[2024-10-10 13:38:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2633728. Throughput: 0: 883.2. Samples: 658704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:38:48,959][00343] Avg episode reward: [(0, '21.828')]
[2024-10-10 13:38:53,957][00343] Fps is (10 sec: 4098.5, 60 sec: 3754.8, 300 sec: 3637.8). Total num frames: 2654208. Throughput: 0: 904.5. Samples: 661776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:38:53,965][00343] Avg episode reward: [(0, '21.510')]
[2024-10-10 13:38:55,809][03656] Updated weights for policy 0, policy_version 650 (0.0022)
[2024-10-10 13:38:58,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 2670592. Throughput: 0: 925.8. Samples: 667330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:38:58,958][00343] Avg episode reward: [(0, '22.895')]
[2024-10-10 13:39:03,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3610.1). Total num frames: 2682880. Throughput: 0: 882.6. Samples: 671906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:39:03,960][00343] Avg episode reward: [(0, '24.063')]
[2024-10-10 13:39:04,040][03643] Saving new best policy, reward=24.063!
[2024-10-10 13:39:07,875][03656] Updated weights for policy 0, policy_version 660 (0.0012)
[2024-10-10 13:39:08,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2707456. Throughput: 0: 887.9. Samples: 675044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:39:08,965][00343] Avg episode reward: [(0, '22.994')]
[2024-10-10 13:39:13,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 2723840. Throughput: 0: 941.4. Samples: 681454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:39:13,959][00343] Avg episode reward: [(0, '24.334')]
[2024-10-10 13:39:14,050][03643] Saving new best policy, reward=24.334!
[2024-10-10 13:39:18,961][00343] Fps is (10 sec: 3275.6, 60 sec: 3549.7, 300 sec: 3623.9). Total num frames: 2740224. Throughput: 0: 887.9. Samples: 685348. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:39:18,965][00343] Avg episode reward: [(0, '24.024')]
[2024-10-10 13:39:20,039][03656] Updated weights for policy 0, policy_version 670 (0.0024)
[2024-10-10 13:39:23,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2760704. Throughput: 0: 878.4. Samples: 688164. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:39:23,959][00343] Avg episode reward: [(0, '23.492')]
[2024-10-10 13:39:28,957][00343] Fps is (10 sec: 4097.5, 60 sec: 3754.7, 300 sec: 3623.9). Total num frames: 2781184. Throughput: 0: 919.1. Samples: 694640. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:39:28,962][00343] Avg episode reward: [(0, '22.977')]
[2024-10-10 13:39:29,970][03656] Updated weights for policy 0, policy_version 680 (0.0016)
[2024-10-10 13:39:33,959][00343] Fps is (10 sec: 3276.1, 60 sec: 3549.7, 300 sec: 3596.1). Total num frames: 2793472. Throughput: 0: 902.7. Samples: 699328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:39:33,962][00343] Avg episode reward: [(0, '23.545')]
[2024-10-10 13:39:38,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 2809856. Throughput: 0: 881.4. Samples: 701440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:39:38,960][00343] Avg episode reward: [(0, '24.504')]
[2024-10-10 13:39:38,981][03643] Saving new best policy, reward=24.504!
[2024-10-10 13:39:41,824][03656] Updated weights for policy 0, policy_version 690 (0.0015)
[2024-10-10 13:39:43,957][00343] Fps is (10 sec: 4096.8, 60 sec: 3686.8, 300 sec: 3623.9). Total num frames: 2834432. Throughput: 0: 900.3. Samples: 707842. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:39:43,959][00343] Avg episode reward: [(0, '25.513')]
[2024-10-10 13:39:43,970][03643] Saving new best policy, reward=25.513!
[2024-10-10 13:39:48,960][00343] Fps is (10 sec: 4094.8, 60 sec: 3618.0, 300 sec: 3610.0). Total num frames: 2850816. Throughput: 0: 919.7. Samples: 713294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:39:48,964][00343] Avg episode reward: [(0, '25.431')]
[2024-10-10 13:39:53,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 2867200. Throughput: 0: 894.0. Samples: 715274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:39:53,959][00343] Avg episode reward: [(0, '25.458')]
[2024-10-10 13:39:53,968][03656] Updated weights for policy 0, policy_version 700 (0.0015)
[2024-10-10 13:39:58,957][00343] Fps is (10 sec: 3687.5, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 2887680. Throughput: 0: 881.2. Samples: 721108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:39:58,959][00343] Avg episode reward: [(0, '24.825')]
[2024-10-10 13:40:03,699][03656] Updated weights for policy 0, policy_version 710 (0.0015)
[2024-10-10 13:40:03,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3610.0). Total num frames: 2908160. Throughput: 0: 940.0. Samples: 727644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:40:03,959][00343] Avg episode reward: [(0, '24.601')]
[2024-10-10 13:40:08,958][00343] Fps is (10 sec: 3276.3, 60 sec: 3549.8, 300 sec: 3582.2). Total num frames: 2920448. Throughput: 0: 922.1. Samples: 729662. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:08,963][00343] Avg episode reward: [(0, '24.731')]
[2024-10-10 13:40:13,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 2940928. Throughput: 0: 887.8. Samples: 734592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:13,967][00343] Avg episode reward: [(0, '24.461')]
[2024-10-10 13:40:13,978][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000718_2940928.pth...
[2024-10-10 13:40:14,106][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000507_2076672.pth
[2024-10-10 13:40:15,721][03656] Updated weights for policy 0, policy_version 720 (0.0020)
[2024-10-10 13:40:18,957][00343] Fps is (10 sec: 4096.6, 60 sec: 3686.6, 300 sec: 3610.0). Total num frames: 2961408. Throughput: 0: 927.3. Samples: 741056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:18,962][00343] Avg episode reward: [(0, '24.072')]
[2024-10-10 13:40:23,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2973696. Throughput: 0: 938.6. Samples: 743676. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:23,961][00343] Avg episode reward: [(0, '25.104')]
[2024-10-10 13:40:27,973][03656] Updated weights for policy 0, policy_version 730 (0.0027)
[2024-10-10 13:40:28,959][00343] Fps is (10 sec: 2866.6, 60 sec: 3481.5, 300 sec: 3596.1). Total num frames: 2990080. Throughput: 0: 884.6. Samples: 747652. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:40:28,961][00343] Avg episode reward: [(0, '25.302')]
[2024-10-10 13:40:33,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3610.0). Total num frames: 3014656. Throughput: 0: 908.4. Samples: 754170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:40:33,960][00343] Avg episode reward: [(0, '23.614')]
[2024-10-10 13:40:37,705][03656] Updated weights for policy 0, policy_version 740 (0.0022)
[2024-10-10 13:40:38,957][00343] Fps is (10 sec: 4096.8, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 3031040. Throughput: 0: 936.3. Samples: 757406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:38,959][00343] Avg episode reward: [(0, '23.113')]
[2024-10-10 13:40:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.2). Total num frames: 3047424. Throughput: 0: 900.4. Samples: 761626. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:40:43,960][00343] Avg episode reward: [(0, '23.187')]
[2024-10-10 13:40:48,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3623.9). Total num frames: 3067904. Throughput: 0: 885.7. Samples: 767500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:40:48,959][00343] Avg episode reward: [(0, '25.039')]
[2024-10-10 13:40:49,462][03656] Updated weights for policy 0, policy_version 750 (0.0012)
[2024-10-10 13:40:53,957][00343] Fps is (10 sec: 4095.9, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 3088384. Throughput: 0: 908.6. Samples: 770546. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:40:53,964][00343] Avg episode reward: [(0, '24.635')]
[2024-10-10 13:40:58,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3100672. Throughput: 0: 908.4. Samples: 775472. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:40:58,961][00343] Avg episode reward: [(0, '23.892')]
[2024-10-10 13:41:01,840][03656] Updated weights for policy 0, policy_version 760 (0.0023)
[2024-10-10 13:41:03,957][00343] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 3121152. Throughput: 0: 879.4. Samples: 780628. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:41:03,964][00343] Avg episode reward: [(0, '24.668')]
[2024-10-10 13:41:08,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3610.0). Total num frames: 3141632. Throughput: 0: 891.6. Samples: 783800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:41:08,959][00343] Avg episode reward: [(0, '24.561')]
[2024-10-10 13:41:11,831][03656] Updated weights for policy 0, policy_version 770 (0.0023)
[2024-10-10 13:41:13,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3158016. Throughput: 0: 931.2. Samples: 789556. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:41:13,962][00343] Avg episode reward: [(0, '24.965')]
[2024-10-10 13:41:18,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.2). Total num frames: 3174400. Throughput: 0: 885.4. Samples: 794012. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:41:18,960][00343] Avg episode reward: [(0, '25.515')]
[2024-10-10 13:41:18,962][03643] Saving new best policy, reward=25.515!
[2024-10-10 13:41:23,725][03656] Updated weights for policy 0, policy_version 780 (0.0026)
[2024-10-10 13:41:23,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3194880. Throughput: 0: 878.6. Samples: 796942. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:41:23,963][00343] Avg episode reward: [(0, '24.713')]
[2024-10-10 13:41:28,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3623.9). Total num frames: 3215360. Throughput: 0: 929.1. Samples: 803436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:41:28,964][00343] Avg episode reward: [(0, '25.506')]
[2024-10-10 13:41:33,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3227648. Throughput: 0: 885.8. Samples: 807360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:41:33,959][00343] Avg episode reward: [(0, '25.911')]
[2024-10-10 13:41:33,968][03643] Saving new best policy, reward=25.911!
[2024-10-10 13:41:35,911][03656] Updated weights for policy 0, policy_version 790 (0.0012)
[2024-10-10 13:41:38,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3248128. Throughput: 0: 880.0. Samples: 810146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:41:38,959][00343] Avg episode reward: [(0, '27.547')]
[2024-10-10 13:41:38,964][03643] Saving new best policy, reward=27.547!
[2024-10-10 13:41:43,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3268608. Throughput: 0: 912.3. Samples: 816524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:41:43,961][00343] Avg episode reward: [(0, '28.623')]
[2024-10-10 13:41:43,974][03643] Saving new best policy, reward=28.623!
[2024-10-10 13:41:46,344][03656] Updated weights for policy 0, policy_version 800 (0.0016)
[2024-10-10 13:41:48,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3280896. Throughput: 0: 898.8. Samples: 821072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:41:48,962][00343] Avg episode reward: [(0, '27.640')]
[2024-10-10 13:41:53,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3297280. Throughput: 0: 874.9. Samples: 823170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:41:53,959][00343] Avg episode reward: [(0, '28.532')]
[2024-10-10 13:41:58,024][03656] Updated weights for policy 0, policy_version 810 (0.0013)
[2024-10-10 13:41:58,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3317760. Throughput: 0: 889.9. Samples: 829602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:41:58,959][00343] Avg episode reward: [(0, '29.305')]
[2024-10-10 13:41:59,019][03643] Saving new best policy, reward=29.305!
[2024-10-10 13:42:03,959][00343] Fps is (10 sec: 3685.7, 60 sec: 3549.7, 300 sec: 3596.1). Total num frames: 3334144. Throughput: 0: 910.0. Samples: 834964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:42:03,961][00343] Avg episode reward: [(0, '29.579')]
[2024-10-10 13:42:04,014][03643] Saving new best policy, reward=29.579!
[2024-10-10 13:42:08,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3350528. Throughput: 0: 888.8. Samples: 836940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:42:08,959][00343] Avg episode reward: [(0, '29.103')]
[2024-10-10 13:42:10,214][03656] Updated weights for policy 0, policy_version 820 (0.0041)
[2024-10-10 13:42:13,957][00343] Fps is (10 sec: 3687.1, 60 sec: 3549.9, 300 sec: 3610.1). Total num frames: 3371008. Throughput: 0: 874.4. Samples: 842784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:42:13,965][00343] Avg episode reward: [(0, '28.281')]
[2024-10-10 13:42:13,981][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000824_3375104.pth...
[2024-10-10 13:42:14,100][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000612_2506752.pth
[2024-10-10 13:42:18,959][00343] Fps is (10 sec: 4095.2, 60 sec: 3618.0, 300 sec: 3610.0). Total num frames: 3391488. Throughput: 0: 925.0. Samples: 848988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:42:18,962][00343] Avg episode reward: [(0, '26.473')]
[2024-10-10 13:42:21,027][03656] Updated weights for policy 0, policy_version 830 (0.0016)
[2024-10-10 13:42:23,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3403776. Throughput: 0: 907.4. Samples: 850978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:42:23,960][00343] Avg episode reward: [(0, '25.690')]
[2024-10-10 13:42:28,957][00343] Fps is (10 sec: 3277.5, 60 sec: 3481.6, 300 sec: 3610.1). Total num frames: 3424256. Throughput: 0: 877.5. Samples: 856010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:42:28,959][00343] Avg episode reward: [(0, '23.797')]
[2024-10-10 13:42:32,024][03656] Updated weights for policy 0, policy_version 840 (0.0012)
[2024-10-10 13:42:33,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3448832. Throughput: 0: 922.1. Samples: 862568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:42:33,959][00343] Avg episode reward: [(0, '22.651')]
[2024-10-10 13:42:38,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3461120. Throughput: 0: 930.9. Samples: 865062. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-10-10 13:42:38,959][00343] Avg episode reward: [(0, '22.820')]
[2024-10-10 13:42:43,861][03656] Updated weights for policy 0, policy_version 850 (0.0032)
[2024-10-10 13:42:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 3481600. Throughput: 0: 886.2. Samples: 869482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:42:43,959][00343] Avg episode reward: [(0, '21.710')]
[2024-10-10 13:42:48,957][00343] Fps is (10 sec: 4095.9, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3502080. Throughput: 0: 910.4. Samples: 875930. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:42:48,960][00343] Avg episode reward: [(0, '22.657')]
[2024-10-10 13:42:53,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3518464. Throughput: 0: 938.0. Samples: 879152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:42:53,959][00343] Avg episode reward: [(0, '24.229')]
[2024-10-10 13:42:54,810][03656] Updated weights for policy 0, policy_version 860 (0.0016)
[2024-10-10 13:42:58,957][00343] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3530752. Throughput: 0: 898.3. Samples: 883206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:42:58,962][00343] Avg episode reward: [(0, '24.562')]
[2024-10-10 13:43:03,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3623.9). Total num frames: 3555328. Throughput: 0: 892.7. Samples: 889156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:43:03,959][00343] Avg episode reward: [(0, '25.797')]
[2024-10-10 13:43:05,723][03656] Updated weights for policy 0, policy_version 870 (0.0012)
[2024-10-10 13:43:08,957][00343] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 3575808. Throughput: 0: 919.7. Samples: 892364. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:43:08,964][00343] Avg episode reward: [(0, '25.588')]
[2024-10-10 13:43:13,957][00343] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3588096. Throughput: 0: 914.8. Samples: 897178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:43:13,966][00343] Avg episode reward: [(0, '25.280')]
[2024-10-10 13:43:17,794][03656] Updated weights for policy 0, policy_version 880 (0.0035)
[2024-10-10 13:43:18,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3610.0). Total num frames: 3608576. Throughput: 0: 888.8. Samples: 902562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:43:18,960][00343] Avg episode reward: [(0, '23.780')]
[2024-10-10 13:43:23,957][00343] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 3629056. Throughput: 0: 906.1. Samples: 905836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:43:23,961][00343] Avg episode reward: [(0, '23.402')]
[2024-10-10 13:43:28,573][03656] Updated weights for policy 0, policy_version 890 (0.0028)
[2024-10-10 13:43:28,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 3645440. Throughput: 0: 931.0. Samples: 911376. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-10-10 13:43:28,964][00343] Avg episode reward: [(0, '23.839')]
[2024-10-10 13:43:33,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3661824. Throughput: 0: 887.6. Samples: 915872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:43:33,961][00343] Avg episode reward: [(0, '22.404')]
[2024-10-10 13:43:38,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3624.0). Total num frames: 3682304. Throughput: 0: 886.6. Samples: 919048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:43:38,959][00343] Avg episode reward: [(0, '21.713')]
[2024-10-10 13:43:39,597][03656] Updated weights for policy 0, policy_version 900 (0.0024)
[2024-10-10 13:43:43,957][00343] Fps is (10 sec: 4095.8, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3702784. Throughput: 0: 939.3. Samples: 925476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:43:43,963][00343] Avg episode reward: [(0, '22.362')]
[2024-10-10 13:43:48,958][00343] Fps is (10 sec: 3276.5, 60 sec: 3549.8, 300 sec: 3596.1). Total num frames: 3715072. Throughput: 0: 895.6. Samples: 929460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:43:48,961][00343] Avg episode reward: [(0, '22.216')]
[2024-10-10 13:43:51,333][03656] Updated weights for policy 0, policy_version 910 (0.0021)
[2024-10-10 13:43:53,957][00343] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3735552. Throughput: 0: 895.2. Samples: 932650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:43:53,963][00343] Avg episode reward: [(0, '22.835')]
[2024-10-10 13:43:58,957][00343] Fps is (10 sec: 4096.4, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 3756032. Throughput: 0: 926.6. Samples: 938874. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:43:58,959][00343] Avg episode reward: [(0, '23.199')]
[2024-10-10 13:44:02,208][03656] Updated weights for policy 0, policy_version 920 (0.0012)
[2024-10-10 13:44:03,957][00343] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3772416. Throughput: 0: 908.0. Samples: 943424. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:44:03,963][00343] Avg episode reward: [(0, '23.686')]
[2024-10-10 13:44:08,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 3788800. Throughput: 0: 886.4. Samples: 945722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-10 13:44:08,960][00343] Avg episode reward: [(0, '25.824')]
[2024-10-10 13:44:13,019][03656] Updated weights for policy 0, policy_version 930 (0.0013)
[2024-10-10 13:44:13,960][00343] Fps is (10 sec: 4094.8, 60 sec: 3754.5, 300 sec: 3637.8). Total num frames: 3813376. Throughput: 0: 908.6. Samples: 952266. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-10 13:44:13,963][00343] Avg episode reward: [(0, '26.391')]
[2024-10-10 13:44:13,971][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000931_3813376.pth...
[2024-10-10 13:44:14,110][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000718_2940928.pth
[2024-10-10 13:44:18,957][00343] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3825664. Throughput: 0: 924.2. Samples: 957462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:44:18,961][00343] Avg episode reward: [(0, '27.233')]
[2024-10-10 13:44:23,957][00343] Fps is (10 sec: 2868.2, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3842048. Throughput: 0: 898.8. Samples: 959492. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:44:23,964][00343] Avg episode reward: [(0, '27.380')]
[2024-10-10 13:44:25,311][03656] Updated weights for policy 0, policy_version 940 (0.0016)
[2024-10-10 13:44:28,957][00343] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3866624. Throughput: 0: 891.8. Samples: 965608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:44:28,961][00343] Avg episode reward: [(0, '26.341')]
[2024-10-10 13:44:33,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3883008. Throughput: 0: 936.0. Samples: 971580. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:44:33,962][00343] Avg episode reward: [(0, '25.877')]
[2024-10-10 13:44:36,222][03656] Updated weights for policy 0, policy_version 950 (0.0023)
[2024-10-10 13:44:38,957][00343] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3895296. Throughput: 0: 908.0. Samples: 973512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:44:38,959][00343] Avg episode reward: [(0, '26.398')]
[2024-10-10 13:44:43,957][00343] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.1). Total num frames: 3915776. Throughput: 0: 893.5. Samples: 979080. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:44:43,959][00343] Avg episode reward: [(0, '25.892')]
[2024-10-10 13:44:46,829][03656] Updated weights for policy 0, policy_version 960 (0.0019)
[2024-10-10 13:44:48,957][00343] Fps is (10 sec: 4505.5, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 3940352. Throughput: 0: 937.4. Samples: 985608. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-10-10 13:44:48,960][00343] Avg episode reward: [(0, '26.235')]
[2024-10-10 13:44:53,957][00343] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3952640. Throughput: 0: 935.8. Samples: 987832. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-10 13:44:53,964][00343] Avg episode reward: [(0, '25.880')]
[2024-10-10 13:44:58,623][03656] Updated weights for policy 0, policy_version 970 (0.0015)
[2024-10-10 13:44:58,957][00343] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3973120. Throughput: 0: 897.1. Samples: 992634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-10-10 13:44:58,959][00343] Avg episode reward: [(0, '25.841')]
[2024-10-10 13:45:03,957][00343] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3993600. Throughput: 0: 928.4. Samples: 999240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-10-10 13:45:03,970][00343] Avg episode reward: [(0, '27.746')]
[2024-10-10 13:45:06,181][03643] Stopping Batcher_0...
[2024-10-10 13:45:06,184][03643] Loop batcher_evt_loop terminating...
[2024-10-10 13:45:06,186][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-10-10 13:45:06,201][00343] Component Batcher_0 stopped!
[2024-10-10 13:45:06,273][03656] Weights refcount: 2 0
[2024-10-10 13:45:06,287][03656] Stopping InferenceWorker_p0-w0...
[2024-10-10 13:45:06,288][03656] Loop inference_proc0-0_evt_loop terminating...
[2024-10-10 13:45:06,291][00343] Component InferenceWorker_p0-w0 stopped!
[2024-10-10 13:45:06,379][03643] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000824_3375104.pth
[2024-10-10 13:45:06,412][03643] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-10-10 13:45:06,637][00343] Component LearnerWorker_p0 stopped!
[2024-10-10 13:45:06,635][03643] Stopping LearnerWorker_p0...
[2024-10-10 13:45:06,650][03643] Loop learner_proc0_evt_loop terminating...
[2024-10-10 13:45:07,015][00343] Component RolloutWorker_w6 stopped!
[2024-10-10 13:45:07,020][03663] Stopping RolloutWorker_w6...
[2024-10-10 13:45:07,020][03663] Loop rollout_proc6_evt_loop terminating...
[2024-10-10 13:45:07,029][03662] Stopping RolloutWorker_w5...
[2024-10-10 13:45:07,029][00343] Component RolloutWorker_w5 stopped!
[2024-10-10 13:45:07,035][03662] Loop rollout_proc5_evt_loop terminating...
[2024-10-10 13:45:07,043][00343] Component RolloutWorker_w4 stopped!
[2024-10-10 13:45:07,050][03661] Stopping RolloutWorker_w4...
[2024-10-10 13:45:07,061][00343] Component RolloutWorker_w0 stopped!
[2024-10-10 13:45:07,069][03657] Stopping RolloutWorker_w0...
[2024-10-10 13:45:07,075][03657] Loop rollout_proc0_evt_loop terminating...
[2024-10-10 13:45:07,050][03661] Loop rollout_proc4_evt_loop terminating...
[2024-10-10 13:45:07,103][00343] Component RolloutWorker_w2 stopped!
[2024-10-10 13:45:07,109][03659] Stopping RolloutWorker_w2...
[2024-10-10 13:45:07,110][03659] Loop rollout_proc2_evt_loop terminating...
[2024-10-10 13:45:07,125][03664] Stopping RolloutWorker_w7...
[2024-10-10 13:45:07,126][03664] Loop rollout_proc7_evt_loop terminating...
[2024-10-10 13:45:07,125][00343] Component RolloutWorker_w7 stopped!
[2024-10-10 13:45:07,139][00343] Component RolloutWorker_w1 stopped!
[2024-10-10 13:45:07,147][03658] Stopping RolloutWorker_w1...
[2024-10-10 13:45:07,147][03658] Loop rollout_proc1_evt_loop terminating...
[2024-10-10 13:45:07,183][00343] Component RolloutWorker_w3 stopped!
[2024-10-10 13:45:07,188][00343] Waiting for process learner_proc0 to stop...
[2024-10-10 13:45:07,193][03660] Stopping RolloutWorker_w3...
[2024-10-10 13:45:07,193][03660] Loop rollout_proc3_evt_loop terminating...
[2024-10-10 13:45:08,757][00343] Waiting for process inference_proc0-0 to join...
[2024-10-10 13:45:09,429][00343] Waiting for process rollout_proc0 to join...
[2024-10-10 13:45:11,311][00343] Waiting for process rollout_proc1 to join...
[2024-10-10 13:45:11,322][00343] Waiting for process rollout_proc2 to join...
[2024-10-10 13:45:11,326][00343] Waiting for process rollout_proc3 to join...
[2024-10-10 13:45:11,329][00343] Waiting for process rollout_proc4 to join...
[2024-10-10 13:45:11,334][00343] Waiting for process rollout_proc5 to join...
[2024-10-10 13:45:11,338][00343] Waiting for process rollout_proc6 to join...
[2024-10-10 13:45:11,341][00343] Waiting for process rollout_proc7 to join...
[2024-10-10 13:45:11,345][00343] Batcher 0 profile tree view:
batching: 26.3734, releasing_batches: 0.0299
[2024-10-10 13:45:11,348][00343] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 494.5015
update_model: 8.3388
  weight_update: 0.0015
one_step: 0.0058
  handle_policy_step: 560.5116
    deserialize: 15.7946, stack: 3.0552, obs_to_device_normalize: 119.1453, forward: 281.5172, send_messages: 28.8063
    prepare_outputs: 83.8149
      to_cpu: 50.8172
[2024-10-10 13:45:11,350][00343] Learner 0 profile tree view:
misc: 0.0052, prepare_batch: 15.3273
train: 74.4340
  epoch_init: 0.0070, minibatch_init: 0.0111, losses_postprocess: 0.6001, kl_divergence: 0.6538, after_optimizer: 33.8800
  calculate_losses: 24.6394
    losses_init: 0.0057, forward_head: 1.7118, bptt_initial: 15.8536, tail: 0.9895, advantages_returns: 0.2886, losses: 3.3456
    bptt: 2.1077
      bptt_forward_core: 2.0142
  update: 13.9651
    clip: 1.4722
[2024-10-10 13:45:11,351][00343] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.3369, enqueue_policy_requests: 129.4691, env_step: 849.9385, overhead: 16.0647, complete_rollouts: 6.5671
save_policy_outputs: 27.3215
  split_output_tensors: 9.5191
[2024-10-10 13:45:11,352][00343] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.4094, enqueue_policy_requests: 129.4679, env_step: 847.5567, overhead: 15.6809, complete_rollouts: 7.3085
save_policy_outputs: 26.6914
  split_output_tensors: 9.0876
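
The profile tree views above are accumulated wall-clock timings, nested by call path. A minimal sketch of how such a tree can be collected with a context manager (illustrative only, not the library's profiler):

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    timings = defaultdict(float)  # "parent/child" path -> accumulated seconds
    _stack = []

    @contextmanager
    def timeit(name):
        _stack.append(name)
        key = "/".join(_stack)
        start = time.monotonic()
        try:
            yield
        finally:
            timings[key] += time.monotonic() - start
            _stack.pop()

    # usage: with timeit("train"): ... with timeit("calculate_losses"): ...
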
[2024-10-10 13:45:11,353][00343] Loop Runner_EvtLoop terminating...
[2024-10-10 13:45:11,355][00343] Runner profile tree view:
main_loop: 1133.0690
[2024-10-10 13:45:11,356][00343] Collected {0: 4005888}, FPS: 3535.4
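
The final throughput line is consistent with total collected frames divided by the runner's main-loop wall time reported just above:

    total_frames = 4_005_888        # 'Collected {0: 4005888}'
    main_loop_seconds = 1133.0690   # 'main_loop: 1133.0690'
    print(f"FPS: {total_frames / main_loop_seconds:.1f}")  # -> FPS: 3535.4
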
[2024-10-10 13:45:11,571][00343] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-10-10 13:45:11,572][00343] Overriding arg 'num_workers' with value 1 passed from command line
[2024-10-10 13:45:11,574][00343] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-10-10 13:45:11,576][00343] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-10-10 13:45:11,578][00343] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-10-10 13:45:11,579][00343] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-10-10 13:45:11,581][00343] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2024-10-10 13:45:11,582][00343] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-10-10 13:45:11,583][00343] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2024-10-10 13:45:11,584][00343] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2024-10-10 13:45:11,585][00343] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-10-10 13:45:11,586][00343] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-10-10 13:45:11,587][00343] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-10-10 13:45:11,588][00343] Adding new argument 'enjoy_script'=None that is not in the saved config file!
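
The "Overriding arg"/"Adding new argument" lines above reflect a merge of the saved config.json with command-line values; a hedged sketch of that merge (hypothetical helper, not the library's exact code):

    import json

    def merge_config(config_path, cli_args):
        with open(config_path) as f:
            cfg = json.load(f)
        for key, value in cli_args.items():
            if key not in cfg:
                print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
            elif cfg[key] != value:
                print(f"Overriding arg {key!r} with value {value!r} passed from command line")
            cfg[key] = value
        return cfg
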
[2024-10-10 13:45:11,589][00343] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-10-10 13:45:11,607][00343] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-10 13:45:11,609][00343] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:45:11,611][00343] RunningMeanStd input shape: (1,)
[2024-10-10 13:45:11,626][00343] ConvEncoder: input_channels=3
[2024-10-10 13:45:11,778][00343] Conv encoder output size: 512
[2024-10-10 13:45:11,781][00343] Policy head output size: 512
[2024-10-10 13:45:13,423][00343] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
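
Restoring for evaluation amounts to loading the checkpoint dict and copying the weights into the freshly built actor-critic. A generic sketch (assuming the weights sit under a 'model' key, which may differ from the library's exact layout):

    import torch

    def load_policy_weights(model, ckpt_path):
        state = torch.load(ckpt_path, map_location="cpu")
        model.load_state_dict(state["model"])  # assumption: weights stored under 'model'
        model.eval()
        return model
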
[2024-10-10 13:45:14,285][00343] Num frames 100...
[2024-10-10 13:45:14,405][00343] Num frames 200...
[2024-10-10 13:45:14,524][00343] Num frames 300...
[2024-10-10 13:45:14,648][00343] Num frames 400...
[2024-10-10 13:45:14,766][00343] Num frames 500...
[2024-10-10 13:45:14,885][00343] Num frames 600...
[2024-10-10 13:45:15,008][00343] Num frames 700...
[2024-10-10 13:45:15,133][00343] Num frames 800...
[2024-10-10 13:45:15,252][00343] Num frames 900...
[2024-10-10 13:45:15,375][00343] Num frames 1000...
[2024-10-10 13:45:15,495][00343] Num frames 1100...
[2024-10-10 13:45:15,612][00343] Num frames 1200...
[2024-10-10 13:45:15,740][00343] Num frames 1300...
[2024-10-10 13:45:15,856][00343] Num frames 1400...
[2024-10-10 13:45:15,974][00343] Num frames 1500...
[2024-10-10 13:45:16,102][00343] Num frames 1600...
[2024-10-10 13:45:16,234][00343] Num frames 1700...
[2024-10-10 13:45:16,348][00343] Num frames 1800...
[2024-10-10 13:45:16,469][00343] Avg episode rewards: #0: 46.559, true rewards: #0: 18.560
[2024-10-10 13:45:16,471][00343] Avg episode reward: 46.559, avg true_objective: 18.560
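
"Avg episode rewards" is a running mean over the evaluation episodes finished so far, while "true rewards" tracks what appears to be the unshaped game objective. The numbers are consistent with a plain running mean: a second episode reward of 27.841 gives (46.559 + 27.841) / 2 = 37.200, the value reported below. A minimal sketch:

    episode_rewards = []

    def report(reward):
        episode_rewards.append(reward)
        avg = sum(episode_rewards) / len(episode_rewards)
        print(f"Avg episode rewards: #0: {avg:.3f}")
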
[2024-10-10 13:45:16,524][00343] Num frames 1900...
[2024-10-10 13:45:16,646][00343] Num frames 2000...
[2024-10-10 13:45:16,773][00343] Num frames 2100...
[2024-10-10 13:45:16,892][00343] Num frames 2200...
[2024-10-10 13:45:17,012][00343] Num frames 2300...
[2024-10-10 13:45:17,130][00343] Num frames 2400...
[2024-10-10 13:45:17,254][00343] Num frames 2500...
[2024-10-10 13:45:17,375][00343] Num frames 2600...
[2024-10-10 13:45:17,500][00343] Num frames 2700...
[2024-10-10 13:45:17,617][00343] Num frames 2800...
[2024-10-10 13:45:17,742][00343] Num frames 2900...
[2024-10-10 13:45:17,857][00343] Num frames 3000...
[2024-10-10 13:45:17,960][00343] Avg episode rewards: #0: 37.200, true rewards: #0: 15.200
[2024-10-10 13:45:17,961][00343] Avg episode reward: 37.200, avg true_objective: 15.200
[2024-10-10 13:45:18,037][00343] Num frames 3100...
[2024-10-10 13:45:18,155][00343] Num frames 3200...
[2024-10-10 13:45:18,294][00343] Num frames 3300...
[2024-10-10 13:45:18,414][00343] Num frames 3400...
[2024-10-10 13:45:18,536][00343] Num frames 3500...
[2024-10-10 13:45:18,658][00343] Num frames 3600...
[2024-10-10 13:45:18,762][00343] Avg episode rewards: #0: 28.783, true rewards: #0: 12.117
[2024-10-10 13:45:18,765][00343] Avg episode reward: 28.783, avg true_objective: 12.117
[2024-10-10 13:45:18,843][00343] Num frames 3700...
[2024-10-10 13:45:18,962][00343] Num frames 3800...
[2024-10-10 13:45:19,079][00343] Num frames 3900...
[2024-10-10 13:45:19,203][00343] Num frames 4000...
[2024-10-10 13:45:19,323][00343] Num frames 4100...
[2024-10-10 13:45:19,445][00343] Num frames 4200...
[2024-10-10 13:45:19,580][00343] Num frames 4300...
[2024-10-10 13:45:19,714][00343] Num frames 4400...
[2024-10-10 13:45:19,837][00343] Num frames 4500...
[2024-10-10 13:45:19,963][00343] Num frames 4600...
[2024-10-10 13:45:20,085][00343] Num frames 4700...
[2024-10-10 13:45:20,205][00343] Num frames 4800...
[2024-10-10 13:45:20,331][00343] Num frames 4900...
[2024-10-10 13:45:20,453][00343] Num frames 5000...
[2024-10-10 13:45:20,582][00343] Num frames 5100...
[2024-10-10 13:45:20,709][00343] Num frames 5200...
[2024-10-10 13:45:20,875][00343] Num frames 5300...
[2024-10-10 13:45:21,042][00343] Num frames 5400...
[2024-10-10 13:45:21,202][00343] Num frames 5500...
[2024-10-10 13:45:21,365][00343] Num frames 5600...
[2024-10-10 13:45:21,537][00343] Num frames 5700...
[2024-10-10 13:45:21,653][00343] Avg episode rewards: #0: 34.587, true rewards: #0: 14.338
[2024-10-10 13:45:21,655][00343] Avg episode reward: 34.587, avg true_objective: 14.338
[2024-10-10 13:45:21,770][00343] Num frames 5800...
[2024-10-10 13:45:21,935][00343] Num frames 5900...
[2024-10-10 13:45:22,100][00343] Num frames 6000...
[2024-10-10 13:45:22,315][00343] Avg episode rewards: #0: 28.576, true rewards: #0: 12.176
[2024-10-10 13:45:22,317][00343] Avg episode reward: 28.576, avg true_objective: 12.176
[2024-10-10 13:45:22,343][00343] Num frames 6100...
[2024-10-10 13:45:22,519][00343] Num frames 6200...
[2024-10-10 13:45:22,723][00343] Num frames 6300...
[2024-10-10 13:45:22,892][00343] Num frames 6400...
[2024-10-10 13:45:23,032][00343] Num frames 6500...
[2024-10-10 13:45:23,150][00343] Num frames 6600...
[2024-10-10 13:45:23,251][00343] Avg episode rewards: #0: 25.228, true rewards: #0: 11.062
[2024-10-10 13:45:23,253][00343] Avg episode reward: 25.228, avg true_objective: 11.062
[2024-10-10 13:45:23,331][00343] Num frames 6700...
[2024-10-10 13:45:23,453][00343] Num frames 6800...
[2024-10-10 13:45:23,571][00343] Num frames 6900...
[2024-10-10 13:45:23,712][00343] Num frames 7000...
[2024-10-10 13:45:23,830][00343] Avg episode rewards: #0: 22.504, true rewards: #0: 10.076
[2024-10-10 13:45:23,833][00343] Avg episode reward: 22.504, avg true_objective: 10.076
[2024-10-10 13:45:23,915][00343] Num frames 7100...
[2024-10-10 13:45:24,038][00343] Num frames 7200...
[2024-10-10 13:45:24,160][00343] Num frames 7300...
[2024-10-10 13:45:24,276][00343] Num frames 7400...
[2024-10-10 13:45:24,393][00343] Num frames 7500...
[2024-10-10 13:45:24,516][00343] Num frames 7600...
[2024-10-10 13:45:24,633][00343] Num frames 7700...
[2024-10-10 13:45:24,767][00343] Num frames 7800...
[2024-10-10 13:45:24,883][00343] Num frames 7900...
[2024-10-10 13:45:25,001][00343] Num frames 8000...
[2024-10-10 13:45:25,118][00343] Num frames 8100...
[2024-10-10 13:45:25,235][00343] Num frames 8200...
[2024-10-10 13:45:25,352][00343] Num frames 8300...
[2024-10-10 13:45:25,474][00343] Num frames 8400...
[2024-10-10 13:45:25,595][00343] Num frames 8500...
[2024-10-10 13:45:25,725][00343] Num frames 8600...
[2024-10-10 13:45:25,841][00343] Num frames 8700...
[2024-10-10 13:45:25,918][00343] Avg episode rewards: #0: 24.396, true rewards: #0: 10.896
[2024-10-10 13:45:25,919][00343] Avg episode reward: 24.396, avg true_objective: 10.896
[2024-10-10 13:45:26,019][00343] Num frames 8800...
[2024-10-10 13:45:26,133][00343] Num frames 8900...
[2024-10-10 13:45:26,249][00343] Num frames 9000...
[2024-10-10 13:45:26,368][00343] Num frames 9100...
[2024-10-10 13:45:26,486][00343] Num frames 9200...
[2024-10-10 13:45:26,606][00343] Num frames 9300...
[2024-10-10 13:45:26,740][00343] Num frames 9400...
[2024-10-10 13:45:26,859][00343] Num frames 9500...
[2024-10-10 13:45:26,975][00343] Num frames 9600...
[2024-10-10 13:45:27,096][00343] Num frames 9700...
[2024-10-10 13:45:27,212][00343] Num frames 9800...
[2024-10-10 13:45:27,328][00343] Num frames 9900...
[2024-10-10 13:45:27,501][00343] Avg episode rewards: #0: 25.108, true rewards: #0: 11.108
[2024-10-10 13:45:27,503][00343] Avg episode reward: 25.108, avg true_objective: 11.108
[2024-10-10 13:45:27,510][00343] Num frames 10000...
[2024-10-10 13:45:27,626][00343] Num frames 10100...
[2024-10-10 13:45:27,750][00343] Num frames 10200...
[2024-10-10 13:45:27,871][00343] Num frames 10300...
[2024-10-10 13:45:27,988][00343] Num frames 10400...
[2024-10-10 13:45:28,076][00343] Avg episode rewards: #0: 23.228, true rewards: #0: 10.428
[2024-10-10 13:45:28,078][00343] Avg episode reward: 23.228, avg true_objective: 10.428
[2024-10-10 13:46:26,828][00343] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
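
"Replay video saved" corresponds to writing the rendered evaluation frames to an mp4. A minimal sketch using imageio (an assumption for illustration; the library's actual writer may differ):

    import imageio

    def save_replay(frames, path, fps=35):
        # frames: list of HxWx3 uint8 arrays captured during evaluation
        imageio.mimwrite(path, frames, fps=fps)
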
[2024-10-10 13:46:27,446][00343] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-10-10 13:46:27,448][00343] Overriding arg 'num_workers' with value 1 passed from command line
[2024-10-10 13:46:27,450][00343] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-10-10 13:46:27,452][00343] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-10-10 13:46:27,453][00343] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-10-10 13:46:27,454][00343] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-10-10 13:46:27,455][00343] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-10-10 13:46:27,456][00343] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-10-10 13:46:27,457][00343] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-10-10 13:46:27,458][00343] Adding new argument 'hf_repository'='Juu24/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-10-10 13:46:27,459][00343] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-10-10 13:46:27,461][00343] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-10-10 13:46:27,462][00343] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-10-10 13:46:27,463][00343] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-10-10 13:46:27,464][00343] Using frameskip 1 and render_action_repeat=4 for evaluation
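
"frameskip 1 and render_action_repeat=4" means each policy action is applied for 4 consecutive environment steps while every frame is still rendered, keeping the video smooth. A sketch of the repeat loop (illustrative; a gym-style step signature is assumed):

    def step_with_repeat(env, action, repeat=4):
        total_reward = 0.0
        for _ in range(repeat):
            obs, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
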
[2024-10-10 13:46:27,473][00343] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:46:27,481][00343] RunningMeanStd input shape: (1,)
[2024-10-10 13:46:27,497][00343] ConvEncoder: input_channels=3
[2024-10-10 13:46:27,554][00343] Conv encoder output size: 512
[2024-10-10 13:46:27,556][00343] Policy head output size: 512
[2024-10-10 13:46:27,587][00343] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-10-10 13:46:28,329][00343] Num frames 100...
[2024-10-10 13:46:28,487][00343] Num frames 200...
[2024-10-10 13:46:28,651][00343] Num frames 300...
[2024-10-10 13:46:28,807][00343] Num frames 400...
[2024-10-10 13:46:28,962][00343] Num frames 500...
[2024-10-10 13:46:29,131][00343] Num frames 600...
[2024-10-10 13:46:29,279][00343] Num frames 700...
[2024-10-10 13:46:29,427][00343] Num frames 800...
[2024-10-10 13:46:29,580][00343] Num frames 900...
[2024-10-10 13:46:29,736][00343] Num frames 1000...
[2024-10-10 13:46:29,886][00343] Num frames 1100...
[2024-10-10 13:46:30,041][00343] Num frames 1200...
[2024-10-10 13:46:30,216][00343] Num frames 1300...
[2024-10-10 13:46:30,381][00343] Num frames 1400...
[2024-10-10 13:46:30,564][00343] Num frames 1500...
[2024-10-10 13:46:30,625][00343] Avg episode rewards: #0: 42.020, true rewards: #0: 15.020
[2024-10-10 13:46:30,628][00343] Avg episode reward: 42.020, avg true_objective: 15.020
[2024-10-10 13:46:30,800][00343] Num frames 1600...
[2024-10-10 13:46:30,960][00343] Num frames 1700...
[2024-10-10 13:46:31,122][00343] Num frames 1800...
[2024-10-10 13:46:31,284][00343] Num frames 1900...
[2024-10-10 13:46:31,472][00343] Num frames 2000...
[2024-10-10 13:46:31,642][00343] Num frames 2100...
[2024-10-10 13:46:31,846][00343] Num frames 2200...
[2024-10-10 13:46:32,041][00343] Num frames 2300...
[2024-10-10 13:46:32,223][00343] Num frames 2400...
[2024-10-10 13:46:32,336][00343] Avg episode rewards: #0: 31.650, true rewards: #0: 12.150
[2024-10-10 13:46:32,338][00343] Avg episode reward: 31.650, avg true_objective: 12.150
[2024-10-10 13:46:32,456][00343] Num frames 2500...
[2024-10-10 13:46:32,632][00343] Num frames 2600...
[2024-10-10 13:46:32,809][00343] Num frames 2700...
[2024-10-10 13:46:32,985][00343] Num frames 2800...
[2024-10-10 13:46:33,155][00343] Num frames 2900...
[2024-10-10 13:46:33,353][00343] Num frames 3000...
[2024-10-10 13:46:33,508][00343] Num frames 3100...
[2024-10-10 13:46:33,673][00343] Num frames 3200...
[2024-10-10 13:46:33,843][00343] Num frames 3300...
[2024-10-10 13:46:33,997][00343] Num frames 3400...
[2024-10-10 13:46:34,118][00343] Num frames 3500...
[2024-10-10 13:46:34,240][00343] Num frames 3600...
[2024-10-10 13:46:34,367][00343] Num frames 3700...
[2024-10-10 13:46:34,486][00343] Num frames 3800...
[2024-10-10 13:46:34,606][00343] Num frames 3900...
[2024-10-10 13:46:34,739][00343] Num frames 4000...
[2024-10-10 13:46:34,861][00343] Num frames 4100...
[2024-10-10 13:46:34,982][00343] Num frames 4200...
[2024-10-10 13:46:35,085][00343] Avg episode rewards: #0: 36.130, true rewards: #0: 14.130
[2024-10-10 13:46:35,086][00343] Avg episode reward: 36.130, avg true_objective: 14.130
[2024-10-10 13:46:35,161][00343] Num frames 4300...
[2024-10-10 13:46:35,279][00343] Num frames 4400...
[2024-10-10 13:46:35,406][00343] Num frames 4500...
[2024-10-10 13:46:35,522][00343] Num frames 4600...
[2024-10-10 13:46:35,640][00343] Num frames 4700...
[2024-10-10 13:46:35,768][00343] Num frames 4800...
[2024-10-10 13:46:35,885][00343] Num frames 4900...
[2024-10-10 13:46:36,006][00343] Num frames 5000...
[2024-10-10 13:46:36,123][00343] Num frames 5100...
[2024-10-10 13:46:36,238][00343] Num frames 5200...
[2024-10-10 13:46:36,352][00343] Num frames 5300...
[2024-10-10 13:46:36,497][00343] Num frames 5400...
[2024-10-10 13:46:36,670][00343] Num frames 5500...
[2024-10-10 13:46:36,842][00343] Num frames 5600...
[2024-10-10 13:46:37,009][00343] Num frames 5700...
[2024-10-10 13:46:37,171][00343] Num frames 5800...
[2024-10-10 13:46:37,335][00343] Num frames 5900...
[2024-10-10 13:46:37,498][00343] Num frames 6000...
[2024-10-10 13:46:37,670][00343] Num frames 6100...
[2024-10-10 13:46:37,843][00343] Num frames 6200...
[2024-10-10 13:46:38,016][00343] Num frames 6300...
[2024-10-10 13:46:38,139][00343] Avg episode rewards: #0: 43.097, true rewards: #0: 15.848
[2024-10-10 13:46:38,140][00343] Avg episode reward: 43.097, avg true_objective: 15.848
[2024-10-10 13:46:38,246][00343] Num frames 6400...
[2024-10-10 13:46:38,411][00343] Num frames 6500...
[2024-10-10 13:46:38,587][00343] Num frames 6600...
[2024-10-10 13:46:38,752][00343] Num frames 6700...
[2024-10-10 13:46:38,873][00343] Num frames 6800...
[2024-10-10 13:46:38,993][00343] Num frames 6900...
[2024-10-10 13:46:39,113][00343] Num frames 7000...
[2024-10-10 13:46:39,261][00343] Avg episode rewards: #0: 37.550, true rewards: #0: 14.150
[2024-10-10 13:46:39,263][00343] Avg episode reward: 37.550, avg true_objective: 14.150
[2024-10-10 13:46:39,295][00343] Num frames 7100...
[2024-10-10 13:46:39,414][00343] Num frames 7200...
[2024-10-10 13:46:39,538][00343] Num frames 7300...
[2024-10-10 13:46:39,663][00343] Num frames 7400...
[2024-10-10 13:46:39,788][00343] Num frames 7500...
[2024-10-10 13:46:39,904][00343] Num frames 7600...
[2024-10-10 13:46:40,022][00343] Num frames 7700...
[2024-10-10 13:46:40,146][00343] Num frames 7800...
[2024-10-10 13:46:40,266][00343] Num frames 7900...
[2024-10-10 13:46:40,369][00343] Avg episode rewards: #0: 34.231, true rewards: #0: 13.232
[2024-10-10 13:46:40,371][00343] Avg episode reward: 34.231, avg true_objective: 13.232
[2024-10-10 13:46:40,445][00343] Num frames 8000...
[2024-10-10 13:46:40,568][00343] Num frames 8100...
[2024-10-10 13:46:40,696][00343] Num frames 8200...
[2024-10-10 13:46:40,814][00343] Num frames 8300...
[2024-10-10 13:46:40,933][00343] Num frames 8400...
[2024-10-10 13:46:41,049][00343] Num frames 8500...
[2024-10-10 13:46:41,164][00343] Num frames 8600...
[2024-10-10 13:46:41,282][00343] Num frames 8700...
[2024-10-10 13:46:41,400][00343] Num frames 8800...
[2024-10-10 13:46:41,534][00343] Num frames 8900...
[2024-10-10 13:46:41,664][00343] Num frames 9000...
[2024-10-10 13:46:41,791][00343] Num frames 9100...
[2024-10-10 13:46:41,876][00343] Avg episode rewards: #0: 33.461, true rewards: #0: 13.033
[2024-10-10 13:46:41,878][00343] Avg episode reward: 33.461, avg true_objective: 13.033
[2024-10-10 13:46:41,973][00343] Num frames 9200...
[2024-10-10 13:46:42,107][00343] Num frames 9300...
[2024-10-10 13:46:42,227][00343] Num frames 9400...
[2024-10-10 13:46:42,347][00343] Num frames 9500...
[2024-10-10 13:46:42,465][00343] Num frames 9600...
[2024-10-10 13:46:42,586][00343] Num frames 9700...
[2024-10-10 13:46:42,725][00343] Num frames 9800...
[2024-10-10 13:46:42,848][00343] Num frames 9900...
[2024-10-10 13:46:42,969][00343] Num frames 10000...
[2024-10-10 13:46:43,099][00343] Num frames 10100...
[2024-10-10 13:46:43,220][00343] Num frames 10200...
[2024-10-10 13:46:43,365][00343] Avg episode rewards: #0: 32.719, true rewards: #0: 12.844
[2024-10-10 13:46:43,366][00343] Avg episode reward: 32.719, avg true_objective: 12.844
[2024-10-10 13:46:43,400][00343] Num frames 10300...
[2024-10-10 13:46:43,518][00343] Num frames 10400...
[2024-10-10 13:46:43,644][00343] Num frames 10500...
[2024-10-10 13:46:43,784][00343] Num frames 10600...
[2024-10-10 13:46:43,902][00343] Num frames 10700...
[2024-10-10 13:46:44,023][00343] Num frames 10800...
[2024-10-10 13:46:44,142][00343] Num frames 10900...
[2024-10-10 13:46:44,261][00343] Num frames 11000...
[2024-10-10 13:46:44,381][00343] Num frames 11100...
[2024-10-10 13:46:44,503][00343] Num frames 11200...
[2024-10-10 13:46:44,677][00343] Avg episode rewards: #0: 31.665, true rewards: #0: 12.554
[2024-10-10 13:46:44,679][00343] Avg episode reward: 31.665, avg true_objective: 12.554
[2024-10-10 13:46:44,682][00343] Num frames 11300...
[2024-10-10 13:46:44,801][00343] Num frames 11400...
[2024-10-10 13:46:44,921][00343] Num frames 11500...
[2024-10-10 13:46:45,040][00343] Num frames 11600...
[2024-10-10 13:46:45,157][00343] Num frames 11700...
[2024-10-10 13:46:45,273][00343] Num frames 11800...
[2024-10-10 13:46:45,421][00343] Avg episode rewards: #0: 29.679, true rewards: #0: 11.879
[2024-10-10 13:46:45,422][00343] Avg episode reward: 29.679, avg true_objective: 11.879
[2024-10-10 13:47:51,702][00343] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2024-10-10 13:48:00,136][00343] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-10-10 13:48:00,137][00343] Overriding arg 'num_workers' with value 1 passed from command line
[2024-10-10 13:48:00,140][00343] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-10-10 13:48:00,142][00343] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-10-10 13:48:00,143][00343] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-10-10 13:48:00,146][00343] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-10-10 13:48:00,147][00343] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-10-10 13:48:00,149][00343] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-10-10 13:48:00,151][00343] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-10-10 13:48:00,153][00343] Adding new argument 'hf_repository'='Juu24/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-10-10 13:48:00,155][00343] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-10-10 13:48:00,156][00343] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-10-10 13:48:00,157][00343] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-10-10 13:48:00,158][00343] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-10-10 13:48:00,160][00343] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-10-10 13:48:00,178][00343] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:48:00,179][00343] RunningMeanStd input shape: (1,)
[2024-10-10 13:48:00,197][00343] ConvEncoder: input_channels=3
[2024-10-10 13:48:00,232][00343] Conv encoder output size: 512
[2024-10-10 13:48:00,233][00343] Policy head output size: 512
[2024-10-10 13:48:00,251][00343] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-10-10 13:48:00,738][00343] Num frames 100...
[2024-10-10 13:48:00,860][00343] Num frames 200...
[2024-10-10 13:48:00,978][00343] Num frames 300...
[2024-10-10 13:48:01,095][00343] Num frames 400...
[2024-10-10 13:48:01,226][00343] Num frames 500...
[2024-10-10 13:48:01,351][00343] Num frames 600...
[2024-10-10 13:48:01,474][00343] Num frames 700...
[2024-10-10 13:48:01,598][00343] Num frames 800...
[2024-10-10 13:48:01,728][00343] Num frames 900...
[2024-10-10 13:48:01,851][00343] Num frames 1000...
[2024-10-10 13:48:01,977][00343] Num frames 1100...
[2024-10-10 13:48:02,098][00343] Num frames 1200...
[2024-10-10 13:48:02,223][00343] Num frames 1300...
[2024-10-10 13:48:02,353][00343] Num frames 1400...
[2024-10-10 13:48:02,478][00343] Num frames 1500...
[2024-10-10 13:48:02,604][00343] Num frames 1600...
[2024-10-10 13:48:02,737][00343] Num frames 1700...
[2024-10-10 13:48:02,828][00343] Avg episode rewards: #0: 52.279, true rewards: #0: 17.280
[2024-10-10 13:48:02,830][00343] Avg episode reward: 52.279, avg true_objective: 17.280
[2024-10-10 13:48:02,918][00343] Num frames 1800...
[2024-10-10 13:48:03,038][00343] Num frames 1900...
[2024-10-10 13:48:03,159][00343] Num frames 2000...
[2024-10-10 13:48:03,292][00343] Avg episode rewards: #0: 30.330, true rewards: #0: 10.330
[2024-10-10 13:48:03,294][00343] Avg episode reward: 30.330, avg true_objective: 10.330
[2024-10-10 13:48:03,337][00343] Num frames 2100...
[2024-10-10 13:48:03,461][00343] Num frames 2200...
[2024-10-10 13:48:03,582][00343] Num frames 2300...
[2024-10-10 13:48:03,708][00343] Num frames 2400...
[2024-10-10 13:48:03,856][00343] Num frames 2500...
[2024-10-10 13:48:04,028][00343] Num frames 2600...
[2024-10-10 13:48:04,189][00343] Num frames 2700...
[2024-10-10 13:48:04,369][00343] Num frames 2800...
[2024-10-10 13:48:04,526][00343] Num frames 2900...
[2024-10-10 13:48:04,683][00343] Avg episode rewards: #0: 26.866, true rewards: #0: 9.867
[2024-10-10 13:48:04,685][00343] Avg episode reward: 26.866, avg true_objective: 9.867
[2024-10-10 13:48:04,748][00343] Num frames 3000...
[2024-10-10 13:48:04,915][00343] Num frames 3100...
[2024-10-10 13:48:05,080][00343] Num frames 3200...
[2024-10-10 13:48:05,242][00343] Num frames 3300...
[2024-10-10 13:48:05,420][00343] Num frames 3400...
[2024-10-10 13:48:05,593][00343] Num frames 3500...
[2024-10-10 13:48:05,771][00343] Num frames 3600...
[2024-10-10 13:48:05,939][00343] Num frames 3700...
[2024-10-10 13:48:06,105][00343] Num frames 3800...
[2024-10-10 13:48:06,228][00343] Num frames 3900...
[2024-10-10 13:48:06,348][00343] Num frames 4000...
[2024-10-10 13:48:06,484][00343] Num frames 4100...
[2024-10-10 13:48:06,611][00343] Num frames 4200...
[2024-10-10 13:48:06,742][00343] Num frames 4300...
[2024-10-10 13:48:06,863][00343] Num frames 4400...
[2024-10-10 13:48:06,985][00343] Num frames 4500...
[2024-10-10 13:48:07,109][00343] Num frames 4600...
[2024-10-10 13:48:07,235][00343] Num frames 4700...
[2024-10-10 13:48:07,400][00343] Avg episode rewards: #0: 31.722, true rewards: #0: 11.972
[2024-10-10 13:48:07,402][00343] Avg episode reward: 31.722, avg true_objective: 11.972
[2024-10-10 13:48:07,422][00343] Num frames 4800...
[2024-10-10 13:48:07,540][00343] Num frames 4900...
[2024-10-10 13:48:07,664][00343] Num frames 5000...
[2024-10-10 13:48:07,794][00343] Num frames 5100...
[2024-10-10 13:48:07,914][00343] Num frames 5200...
[2024-10-10 13:48:08,035][00343] Num frames 5300...
[2024-10-10 13:48:08,157][00343] Num frames 5400...
[2024-10-10 13:48:08,284][00343] Avg episode rewards: #0: 27.920, true rewards: #0: 10.920
[2024-10-10 13:48:08,286][00343] Avg episode reward: 27.920, avg true_objective: 10.920
[2024-10-10 13:48:08,335][00343] Num frames 5500...
[2024-10-10 13:48:08,461][00343] Num frames 5600...
[2024-10-10 13:48:08,581][00343] Num frames 5700...
[2024-10-10 13:48:08,708][00343] Num frames 5800...
[2024-10-10 13:48:08,826][00343] Num frames 5900...
[2024-10-10 13:48:08,944][00343] Num frames 6000...
[2024-10-10 13:48:09,062][00343] Num frames 6100...
[2024-10-10 13:48:09,178][00343] Num frames 6200...
[2024-10-10 13:48:09,296][00343] Num frames 6300...
[2024-10-10 13:48:09,412][00343] Num frames 6400...
[2024-10-10 13:48:09,536][00343] Num frames 6500...
[2024-10-10 13:48:09,664][00343] Num frames 6600...
[2024-10-10 13:48:09,783][00343] Num frames 6700...
[2024-10-10 13:48:09,898][00343] Num frames 6800...
[2024-10-10 13:48:10,020][00343] Num frames 6900...
[2024-10-10 13:48:10,114][00343] Avg episode rewards: #0: 29.720, true rewards: #0: 11.553
[2024-10-10 13:48:10,116][00343] Avg episode reward: 29.720, avg true_objective: 11.553
[2024-10-10 13:48:10,198][00343] Num frames 7000...
[2024-10-10 13:48:10,313][00343] Num frames 7100...
[2024-10-10 13:48:10,428][00343] Num frames 7200...
[2024-10-10 13:48:10,550][00343] Num frames 7300...
[2024-10-10 13:48:10,668][00343] Num frames 7400...
[2024-10-10 13:48:10,787][00343] Num frames 7500...
[2024-10-10 13:48:10,902][00343] Num frames 7600...
[2024-10-10 13:48:11,008][00343] Avg episode rewards: #0: 27.347, true rewards: #0: 10.919
[2024-10-10 13:48:11,009][00343] Avg episode reward: 27.347, avg true_objective: 10.919
[2024-10-10 13:48:11,079][00343] Num frames 7700...
[2024-10-10 13:48:11,197][00343] Num frames 7800...
[2024-10-10 13:48:11,314][00343] Num frames 7900...
[2024-10-10 13:48:11,437][00343] Num frames 8000...
[2024-10-10 13:48:11,569][00343] Num frames 8100...
[2024-10-10 13:48:11,693][00343] Num frames 8200...
[2024-10-10 13:48:11,808][00343] Num frames 8300...
[2024-10-10 13:48:11,929][00343] Num frames 8400...
[2024-10-10 13:48:12,045][00343] Num frames 8500...
[2024-10-10 13:48:12,165][00343] Num frames 8600...
[2024-10-10 13:48:12,285][00343] Num frames 8700...
[2024-10-10 13:48:12,403][00343] Num frames 8800...
[2024-10-10 13:48:12,520][00343] Num frames 8900...
[2024-10-10 13:48:12,651][00343] Num frames 9000...
[2024-10-10 13:48:12,779][00343] Num frames 9100...
[2024-10-10 13:48:12,898][00343] Num frames 9200...
[2024-10-10 13:48:13,018][00343] Num frames 9300...
[2024-10-10 13:48:13,142][00343] Num frames 9400...
[2024-10-10 13:48:13,316][00343] Avg episode rewards: #0: 30.498, true rewards: #0: 11.874
[2024-10-10 13:48:13,318][00343] Avg episode reward: 30.498, avg true_objective: 11.874
[2024-10-10 13:48:13,323][00343] Num frames 9500...
[2024-10-10 13:48:13,438][00343] Num frames 9600...
[2024-10-10 13:48:13,560][00343] Num frames 9700...
[2024-10-10 13:48:13,694][00343] Num frames 9800...
[2024-10-10 13:48:13,813][00343] Num frames 9900...
[2024-10-10 13:48:13,932][00343] Num frames 10000...
[2024-10-10 13:48:14,047][00343] Num frames 10100...
[2024-10-10 13:48:14,165][00343] Num frames 10200...
[2024-10-10 13:48:14,287][00343] Num frames 10300...
[2024-10-10 13:48:14,405][00343] Num frames 10400...
[2024-10-10 13:48:14,524][00343] Num frames 10500...
[2024-10-10 13:48:14,651][00343] Num frames 10600...
[2024-10-10 13:48:14,773][00343] Num frames 10700...
[2024-10-10 13:48:14,890][00343] Num frames 10800...
[2024-10-10 13:48:15,009][00343] Num frames 10900...
[2024-10-10 13:48:15,128][00343] Num frames 11000...
[2024-10-10 13:48:15,247][00343] Num frames 11100...
[2024-10-10 13:48:15,369][00343] Num frames 11200...
[2024-10-10 13:48:15,487][00343] Num frames 11300...
[2024-10-10 13:48:15,607][00343] Num frames 11400...
[2024-10-10 13:48:15,733][00343] Avg episode rewards: #0: 32.942, true rewards: #0: 12.720
[2024-10-10 13:48:15,735][00343] Avg episode reward: 32.942, avg true_objective: 12.720
[2024-10-10 13:48:15,797][00343] Num frames 11500...
[2024-10-10 13:48:15,922][00343] Num frames 11600...
[2024-10-10 13:48:16,046][00343] Num frames 11700...
[2024-10-10 13:48:16,200][00343] Num frames 11800...
[2024-10-10 13:48:16,373][00343] Num frames 11900...
[2024-10-10 13:48:16,539][00343] Num frames 12000...
[2024-10-10 13:48:16,773][00343] Avg episode rewards: #0: 30.899, true rewards: #0: 12.099
[2024-10-10 13:48:16,779][00343] Avg episode reward: 30.899, avg true_objective: 12.099
[2024-10-10 13:48:16,783][00343] Num frames 12100...
[2024-10-10 13:49:24,380][00343] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
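
A hedged sketch of how collected frames might be written to an mp4 like the replay above, using imageio with the imageio-ffmpeg backend; the 35 fps value is a guess based on VizDoom's native tick rate, and frame collection itself is omitted:

import numpy as np
import imageio  # mp4 output also needs the imageio-ffmpeg package

def save_replay(frames,
                path="/content/train_dir/default_experiment/replay.mp4",
                fps=35):
    """Write a sequence of HxWx3 uint8 frames to an mp4 file."""
    imageio.mimwrite(path,
                     [np.asarray(f, dtype=np.uint8) for f in frames],
                     fps=fps)
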
[2024-10-10 13:51:19,451][00343] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-10-10 13:51:19,453][00343] Overriding arg 'num_workers' with value 1 passed from command line
[2024-10-10 13:51:19,455][00343] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-10-10 13:51:19,457][00343] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-10-10 13:51:19,459][00343] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-10-10 13:51:19,460][00343] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-10-10 13:51:19,462][00343] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-10-10 13:51:19,465][00343] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-10-10 13:51:19,466][00343] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-10-10 13:51:19,468][00343] Adding new argument 'hf_repository'='Juu24/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-10-10 13:51:19,469][00343] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-10-10 13:51:19,470][00343] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-10-10 13:51:19,471][00343] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-10-10 13:51:19,472][00343] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-10-10 13:51:19,473][00343] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-10-10 13:51:19,490][00343] RunningMeanStd input shape: (3, 72, 128)
[2024-10-10 13:51:19,492][00343] RunningMeanStd input shape: (1,)
[2024-10-10 13:51:19,504][00343] ConvEncoder: input_channels=3
[2024-10-10 13:51:19,539][00343] Conv encoder output size: 512
[2024-10-10 13:51:19,541][00343] Policy head output size: 512
[2024-10-10 13:51:19,559][00343] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-10-10 13:51:20,062][00343] Num frames 100...
[2024-10-10 13:51:20,181][00343] Num frames 200...
[2024-10-10 13:51:20,296][00343] Num frames 300...
[2024-10-10 13:51:20,413][00343] Num frames 400...
[2024-10-10 13:51:20,533][00343] Num frames 500...
[2024-10-10 13:51:20,659][00343] Num frames 600...
[2024-10-10 13:51:20,778][00343] Num frames 700...
[2024-10-10 13:51:20,896][00343] Num frames 800...
[2024-10-10 13:51:21,026][00343] Num frames 900...
[2024-10-10 13:51:21,146][00343] Num frames 1000...
[2024-10-10 13:51:21,267][00343] Avg episode rewards: #0: 25.560, true rewards: #0: 10.560
[2024-10-10 13:51:21,269][00343] Avg episode reward: 25.560, avg true_objective: 10.560
[2024-10-10 13:51:21,324][00343] Num frames 1100...
[2024-10-10 13:51:21,444][00343] Num frames 1200...
[2024-10-10 13:51:21,564][00343] Num frames 1300...
[2024-10-10 13:51:21,703][00343] Num frames 1400...
[2024-10-10 13:51:21,826][00343] Num frames 1500...
[2024-10-10 13:51:21,949][00343] Num frames 1600...
[2024-10-10 13:51:22,073][00343] Num frames 1700...
[2024-10-10 13:51:22,197][00343] Num frames 1800...
[2024-10-10 13:51:22,316][00343] Num frames 1900...
[2024-10-10 13:51:22,436][00343] Num frames 2000...
[2024-10-10 13:51:22,549][00343] Avg episode rewards: #0: 25.240, true rewards: #0: 10.240
[2024-10-10 13:51:22,551][00343] Avg episode reward: 25.240, avg true_objective: 10.240
[2024-10-10 13:51:22,618][00343] Num frames 2100...
[2024-10-10 13:51:22,746][00343] Num frames 2200...
[2024-10-10 13:51:22,866][00343] Num frames 2300...
[2024-10-10 13:51:22,991][00343] Num frames 2400...
[2024-10-10 13:51:23,125][00343] Num frames 2500...
[2024-10-10 13:51:23,247][00343] Num frames 2600...
[2024-10-10 13:51:23,367][00343] Num frames 2700...
[2024-10-10 13:51:23,490][00343] Num frames 2800...
[2024-10-10 13:51:23,612][00343] Num frames 2900...
[2024-10-10 13:51:23,740][00343] Num frames 3000...
[2024-10-10 13:51:23,860][00343] Num frames 3100...
[2024-10-10 13:51:23,985][00343] Num frames 3200...
[2024-10-10 13:51:24,116][00343] Num frames 3300...
[2024-10-10 13:51:24,238][00343] Num frames 3400...
[2024-10-10 13:51:24,359][00343] Num frames 3500...
[2024-10-10 13:51:24,482][00343] Num frames 3600...
[2024-10-10 13:51:24,602][00343] Num frames 3700...
[2024-10-10 13:51:24,774][00343] Avg episode rewards: #0: 32.313, true rewards: #0: 12.647
[2024-10-10 13:51:24,776][00343] Avg episode reward: 32.313, avg true_objective: 12.647
[2024-10-10 13:51:24,788][00343] Num frames 3800...
[2024-10-10 13:51:24,907][00343] Num frames 3900...
[2024-10-10 13:51:25,027][00343] Num frames 4000...
[2024-10-10 13:51:25,177][00343] Num frames 4100...
[2024-10-10 13:51:25,349][00343] Num frames 4200...
[2024-10-10 13:51:25,523][00343] Num frames 4300...
[2024-10-10 13:51:25,690][00343] Num frames 4400...
[2024-10-10 13:51:25,861][00343] Num frames 4500...
[2024-10-10 13:51:26,033][00343] Num frames 4600...
[2024-10-10 13:51:26,197][00343] Num frames 4700...
[2024-10-10 13:51:26,352][00343] Num frames 4800...
[2024-10-10 13:51:26,512][00343] Num frames 4900...
[2024-10-10 13:51:26,681][00343] Num frames 5000...
[2024-10-10 13:51:26,857][00343] Num frames 5100...
[2024-10-10 13:51:27,033][00343] Num frames 5200...
[2024-10-10 13:51:27,217][00343] Num frames 5300...
[2024-10-10 13:51:27,437][00343] Avg episode rewards: #0: 34.235, true rewards: #0: 13.485
[2024-10-10 13:51:27,440][00343] Avg episode reward: 34.235, avg true_objective: 13.485
[2024-10-10 13:51:27,455][00343] Num frames 5400...
[2024-10-10 13:51:27,589][00343] Num frames 5500...
[2024-10-10 13:51:27,719][00343] Num frames 5600...
[2024-10-10 13:51:27,839][00343] Num frames 5700...
[2024-10-10 13:51:27,961][00343] Num frames 5800...
[2024-10-10 13:51:28,084][00343] Num frames 5900...
[2024-10-10 13:51:28,203][00343] Num frames 6000...
[2024-10-10 13:51:28,333][00343] Num frames 6100...
[2024-10-10 13:51:28,482][00343] Avg episode rewards: #0: 30.758, true rewards: #0: 12.358
[2024-10-10 13:51:28,484][00343] Avg episode reward: 30.758, avg true_objective: 12.358
[2024-10-10 13:51:28,512][00343] Num frames 6200...
[2024-10-10 13:51:28,629][00343] Num frames 6300...
[2024-10-10 13:51:28,759][00343] Num frames 6400...
[2024-10-10 13:51:28,878][00343] Num frames 6500...
[2024-10-10 13:51:29,000][00343] Num frames 6600...
[2024-10-10 13:51:29,125][00343] Num frames 6700...
[2024-10-10 13:51:29,257][00343] Num frames 6800...
[2024-10-10 13:51:29,381][00343] Num frames 6900...
[2024-10-10 13:51:29,502][00343] Num frames 7000...
[2024-10-10 13:51:29,624][00343] Num frames 7100...
[2024-10-10 13:51:29,762][00343] Avg episode rewards: #0: 29.438, true rewards: #0: 11.938
[2024-10-10 13:51:29,764][00343] Avg episode reward: 29.438, avg true_objective: 11.938
[2024-10-10 13:51:29,812][00343] Num frames 7200...
[2024-10-10 13:51:29,937][00343] Num frames 7300...
[2024-10-10 13:51:30,057][00343] Num frames 7400...
[2024-10-10 13:51:30,176][00343] Num frames 7500...
[2024-10-10 13:51:30,310][00343] Num frames 7600...
[2024-10-10 13:51:30,428][00343] Num frames 7700...
[2024-10-10 13:51:30,547][00343] Num frames 7800...
[2024-10-10 13:51:30,674][00343] Num frames 7900...
[2024-10-10 13:51:30,796][00343] Num frames 8000...
[2024-10-10 13:51:30,914][00343] Num frames 8100...
[2024-10-10 13:51:31,038][00343] Num frames 8200...
[2024-10-10 13:51:31,159][00343] Num frames 8300...
[2024-10-10 13:51:31,265][00343] Avg episode rewards: #0: 29.060, true rewards: #0: 11.917
[2024-10-10 13:51:31,267][00343] Avg episode reward: 29.060, avg true_objective: 11.917
[2024-10-10 13:51:31,345][00343] Num frames 8400...
[2024-10-10 13:51:31,468][00343] Num frames 8500...
[2024-10-10 13:51:31,587][00343] Num frames 8600...
[2024-10-10 13:51:31,714][00343] Num frames 8700...
[2024-10-10 13:51:31,833][00343] Num frames 8800...
[2024-10-10 13:51:31,952][00343] Num frames 8900...
[2024-10-10 13:51:32,072][00343] Num frames 9000...
[2024-10-10 13:51:32,234][00343] Avg episode rewards: #0: 27.238, true rewards: #0: 11.362
[2024-10-10 13:51:32,236][00343] Avg episode reward: 27.238, avg true_objective: 11.362
[2024-10-10 13:51:32,252][00343] Num frames 9100...
[2024-10-10 13:51:32,377][00343] Num frames 9200...
[2024-10-10 13:51:32,498][00343] Num frames 9300...
[2024-10-10 13:51:32,617][00343] Num frames 9400...
[2024-10-10 13:51:32,743][00343] Num frames 9500...
[2024-10-10 13:51:32,863][00343] Num frames 9600...
[2024-10-10 13:51:32,983][00343] Num frames 9700...
[2024-10-10 13:51:33,101][00343] Num frames 9800...
[2024-10-10 13:51:33,190][00343] Avg episode rewards: #0: 26.029, true rewards: #0: 10.918
[2024-10-10 13:51:33,192][00343] Avg episode reward: 26.029, avg true_objective: 10.918
[2024-10-10 13:51:33,283][00343] Num frames 9900...
[2024-10-10 13:51:33,412][00343] Num frames 10000...
[2024-10-10 13:51:33,535][00343] Num frames 10100...
[2024-10-10 13:51:33,681][00343] Avg episode rewards: #0: 24.178, true rewards: #0: 10.178
[2024-10-10 13:51:33,682][00343] Avg episode reward: 24.178, avg true_objective: 10.178
[2024-10-10 13:52:30,330][00343] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
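
With push_to_hub=True and hf_repository set, the experiment folder (config.json, checkpoints, replay.mp4) is what ends up on the Hub. A minimal upload sketch using huggingface_hub; the exact folder-to-repo mapping is an assumption, not Sample Factory's internal call:

from huggingface_hub import HfApi

api = HfApi()  # assumes you are authenticated, e.g. via `huggingface-cli login`

# Create the repo first with api.create_repo(repo_id=..., exist_ok=True)
# if it does not exist yet.
api.upload_folder(
    repo_id="Juu24/rl_course_vizdoom_health_gathering_supreme",
    folder_path="/content/train_dir/default_experiment",
    commit_message="Upload trained VizDoom agent and replay",
)
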