[2024-10-03 09:08:09,200][00216] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-10-03 09:08:09,202][00216] Rollout worker 0 uses device cpu
[2024-10-03 09:08:09,204][00216] Rollout worker 1 uses device cpu
[2024-10-03 09:08:09,205][00216] Rollout worker 2 uses device cpu
[2024-10-03 09:08:09,207][00216] Rollout worker 3 uses device cpu
[2024-10-03 09:08:09,208][00216] Rollout worker 4 uses device cpu
[2024-10-03 09:08:09,209][00216] Rollout worker 5 uses device cpu
[2024-10-03 09:08:09,210][00216] Rollout worker 6 uses device cpu
[2024-10-03 09:08:09,211][00216] Rollout worker 7 uses device cpu
[2024-10-03 09:08:09,362][00216] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-03 09:08:09,364][00216] InferenceWorker_p0-w0: min num requests: 2
[2024-10-03 09:08:09,396][00216] Starting all processes...
[2024-10-03 09:08:09,398][00216] Starting process learner_proc0
[2024-10-03 09:08:09,446][00216] Starting all processes...
[2024-10-03 09:08:09,456][00216] Starting process inference_proc0-0
[2024-10-03 09:08:09,456][00216] Starting process rollout_proc0
[2024-10-03 09:08:09,458][00216] Starting process rollout_proc1
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc2
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc3
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc4
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc5
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc6
[2024-10-03 09:08:09,459][00216] Starting process rollout_proc7
[2024-10-03 09:08:19,472][09449] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-03 09:08:19,472][09449] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-10-03 09:08:19,545][09466] Worker 3 uses CPU cores [1]
[2024-10-03 09:08:19,600][09449] Num visible devices: 1
[2024-10-03 09:08:19,649][09449] Starting seed is not provided
[2024-10-03 09:08:19,650][09449] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-03 09:08:19,650][09449] Initializing actor-critic model on device cuda:0
[2024-10-03 09:08:19,651][09449] RunningMeanStd input shape: (3, 72, 128)
[2024-10-03 09:08:19,652][09449] RunningMeanStd input shape: (1,)
[2024-10-03 09:08:19,795][09449] ConvEncoder: input_channels=3
[2024-10-03 09:08:19,882][09467] Worker 5 uses CPU cores [1]
[2024-10-03 09:08:19,888][09469] Worker 7 uses CPU cores [1]
[2024-10-03 09:08:20,092][09464] Worker 1 uses CPU cores [1]
[2024-10-03 09:08:20,109][09468] Worker 6 uses CPU cores [0]
[2024-10-03 09:08:20,124][09463] Worker 0 uses CPU cores [0]
[2024-10-03 09:08:20,183][09462] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-03 09:08:20,185][09462] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-10-03 09:08:20,189][09470] Worker 4 uses CPU cores [0]
[2024-10-03 09:08:20,235][09465] Worker 2 uses CPU cores [0]
[2024-10-03 09:08:20,246][09462] Num visible devices: 1
[2024-10-03 09:08:20,376][09449] Conv encoder output size: 512
[2024-10-03 09:08:20,378][09449] Policy head output size: 512
[2024-10-03 09:08:20,403][09449] Created Actor Critic model with architecture:
[2024-10-03 09:08:20,403][09449] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
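The "Conv encoder output size: 512" reported by the learner is the output of the `mlp_layers` Linear, which is fed by the flattened conv head applied to the resized (72, 128) observations. A minimal sketch of the size arithmetic, assuming the Sample Factory default `convnet_simple` filter spec `[[32, 8, 4], [64, 4, 2], [128, 3, 2]]` (the log does not print it, so this is an assumption):

```python
# Size arithmetic behind the conv head of the encoder above. The filter spec
# (channels, kernel, stride) is the assumed Sample Factory default, not
# something printed in this log.

def conv_out(size: int, kernel: int, stride: int) -> int:
    """Output length of a valid (unpadded) convolution along one dimension."""
    return (size - kernel) // stride + 1

def flatten_size(h: int, w: int, filters) -> int:
    """Flattened feature count after applying each (channels, kernel, stride) layer."""
    channels = 0
    for channels, kernel, stride in filters:
        h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
    return channels * h * w

filters = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]
print(flatten_size(72, 128, filters))  # 2304, the input width of the Linear -> 512 layer
```

Under these assumptions the conv head flattens to 128 x 3 x 6 = 2304 features, which the single Linear + ELU in `mlp_layers` projects down to the 512-dimensional embedding consumed by the GRU core.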
|
[2024-10-03 09:08:25,052][09449] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-10-03 09:08:25,053][09449] No checkpoints found
[2024-10-03 09:08:25,054][09449] Did not load from checkpoint, starting from scratch!
[2024-10-03 09:08:25,054][09449] Initialized policy 0 weights for model version 0
[2024-10-03 09:08:25,058][09449] LearnerWorker_p0 finished initialization!
[2024-10-03 09:08:25,059][09449] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-10-03 09:08:25,259][09462] RunningMeanStd input shape: (3, 72, 128)
[2024-10-03 09:08:25,262][09462] RunningMeanStd input shape: (1,)
[2024-10-03 09:08:25,277][09462] ConvEncoder: input_channels=3
[2024-10-03 09:08:25,304][00216] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-03 09:08:25,381][09462] Conv encoder output size: 512
[2024-10-03 09:08:25,382][09462] Policy head output size: 512
[2024-10-03 09:08:26,885][00216] Inference worker 0-0 is ready!
[2024-10-03 09:08:26,888][00216] All inference workers are ready! Signal rollout workers to start!
[2024-10-03 09:08:26,996][09468] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,018][09463] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,035][09467] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,051][09464] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,049][09470] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,052][09469] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,061][09465] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:27,080][09466] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-10-03 09:08:28,005][09465] Decorrelating experience for 0 frames...
[2024-10-03 09:08:28,007][09463] Decorrelating experience for 0 frames...
[2024-10-03 09:08:28,005][09464] Decorrelating experience for 0 frames...
[2024-10-03 09:08:28,747][09463] Decorrelating experience for 32 frames...
[2024-10-03 09:08:28,745][09465] Decorrelating experience for 32 frames...
[2024-10-03 09:08:28,768][09464] Decorrelating experience for 32 frames...
[2024-10-03 09:08:28,806][09469] Decorrelating experience for 0 frames...
[2024-10-03 09:08:29,355][00216] Heartbeat connected on Batcher_0
[2024-10-03 09:08:29,361][00216] Heartbeat connected on LearnerWorker_p0
[2024-10-03 09:08:29,392][00216] Heartbeat connected on InferenceWorker_p0-w0
[2024-10-03 09:08:29,621][09463] Decorrelating experience for 64 frames...
[2024-10-03 09:08:29,623][09465] Decorrelating experience for 64 frames...
[2024-10-03 09:08:29,733][09467] Decorrelating experience for 0 frames...
[2024-10-03 09:08:29,736][09469] Decorrelating experience for 32 frames...
[2024-10-03 09:08:30,304][00216] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-03 09:08:30,482][09464] Decorrelating experience for 64 frames...
[2024-10-03 09:08:30,496][09463] Decorrelating experience for 96 frames...
[2024-10-03 09:08:30,503][09465] Decorrelating experience for 96 frames...
[2024-10-03 09:08:30,584][09467] Decorrelating experience for 32 frames...
[2024-10-03 09:08:30,664][00216] Heartbeat connected on RolloutWorker_w0
[2024-10-03 09:08:30,684][00216] Heartbeat connected on RolloutWorker_w2
[2024-10-03 09:08:31,362][09469] Decorrelating experience for 64 frames...
[2024-10-03 09:08:31,452][09464] Decorrelating experience for 96 frames...
[2024-10-03 09:08:31,543][09467] Decorrelating experience for 64 frames...
[2024-10-03 09:08:31,584][00216] Heartbeat connected on RolloutWorker_w1
[2024-10-03 09:08:32,150][09469] Decorrelating experience for 96 frames...
[2024-10-03 09:08:32,323][00216] Heartbeat connected on RolloutWorker_w7
[2024-10-03 09:08:32,343][09467] Decorrelating experience for 96 frames...
[2024-10-03 09:08:32,439][00216] Heartbeat connected on RolloutWorker_w5
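The "Decorrelating experience" lines show each rollout worker warming its environments up through staggered frame counts (0, 32, 64, 96 here) before regular rollouts begin, so the eight workers do not produce phase-aligned trajectories. A minimal sketch of that warm-up schedule, assuming the 32-frame increments and 96-frame target seen in the log (the real per-worker targets are configuration-dependent):

```python
# Sketch of the decorrelation warm-up implied by the log: a worker steps its
# environments in fixed-size chunks up to a target, logging the cumulative
# count at each chunk boundary. Chunk size and target are assumptions taken
# from the log above, not from Sample Factory source.

def decorrelation_chunks(target_frames: int, chunk: int = 32):
    """Cumulative frame counts a worker would log during warm-up."""
    return list(range(0, target_frames + 1, chunk))

print(decorrelation_chunks(96))  # [0, 32, 64, 96]
```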
|
[2024-10-03 09:08:35,304][00216] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 2.4. Samples: 24. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-10-03 09:08:35,311][00216] Avg episode reward: [(0, '1.156')]
[2024-10-03 09:08:37,571][09449] Signal inference workers to stop experience collection...
[2024-10-03 09:08:37,580][09462] InferenceWorker_p0-w0: stopping experience collection
[2024-10-03 09:08:39,553][09449] Signal inference workers to resume experience collection...
[2024-10-03 09:08:39,555][09462] InferenceWorker_p0-w0: resuming experience collection
[2024-10-03 09:08:40,304][00216] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 150.4. Samples: 2256. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-10-03 09:08:40,306][00216] Avg episode reward: [(0, '3.080')]
[2024-10-03 09:08:45,304][00216] Fps is (10 sec: 2867.2, 60 sec: 1433.6, 300 sec: 1433.6). Total num frames: 28672. Throughput: 0: 358.6. Samples: 7172. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:08:45,306][00216] Avg episode reward: [(0, '4.038')]
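The FPS triples (10 sec / 60 sec / 300 sec) are windowed averages over the total frame counter; when less time than the window has elapsed, the divisor shrinks to the elapsed time, which is why the 60-sec and 300-sec figures at 09:08:45 are both 28672 frames / 20 s = 1433.6. A sketch of that computation, reconstructed from the numbers in this log rather than taken from Sample Factory source:

```python
# Windowed-FPS reconstruction: keep (t, total_frames) samples, and for each
# window divide the frame delta by the actual elapsed span (capped at the
# window length). History values below are taken from the log, with t measured
# in seconds since the first report at 09:08:25.

def windowed_fps(samples, now, total_frames, window):
    """samples: past (t, frames) pairs, oldest first."""
    old_enough = [(t, f) for t, f in samples if now - t >= window]
    t0, f0 = old_enough[-1] if old_enough else samples[0]
    elapsed = max(now - t0, 1e-9)
    return (total_frames - f0) / elapsed

history = [(0, 0), (5, 0), (10, 0), (15, 4096)]
print(round(windowed_fps(history, 20, 28672, 10), 1))  # 2867.2
print(round(windowed_fps(history, 20, 28672, 60), 1))  # 1433.6 (window capped at 20 s elapsed)
```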
|
[2024-10-03 09:08:48,719][09462] Updated weights for policy 0, policy_version 10 (0.0017)
[2024-10-03 09:08:50,304][00216] Fps is (10 sec: 4095.9, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 400.1. Samples: 10002. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:08:50,310][00216] Avg episode reward: [(0, '4.316')]
[2024-10-03 09:08:55,304][00216] Fps is (10 sec: 3276.8, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 61440. Throughput: 0: 492.1. Samples: 14762. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:08:55,306][00216] Avg episode reward: [(0, '4.250')]
[2024-10-03 09:08:59,442][09462] Updated weights for policy 0, policy_version 20 (0.0013)
[2024-10-03 09:09:00,304][00216] Fps is (10 sec: 4096.1, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 86016. Throughput: 0: 609.9. Samples: 21346. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:09:00,311][00216] Avg episode reward: [(0, '4.324')]
[2024-10-03 09:09:05,304][00216] Fps is (10 sec: 4096.0, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 611.9. Samples: 24478. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:05,311][00216] Avg episode reward: [(0, '4.312')]
[2024-10-03 09:09:05,323][09449] Saving new best policy, reward=4.312!
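From here on, "Saving new best policy, reward=..." fires whenever the tracked average episode reward exceeds the previous best (the exact averaging and the gating before the first save are internal to the learner, so the numbers below are illustrative rather than the log's). A minimal sketch of the bookkeeping:

```python
# Best-policy bookkeeping implied by the "Saving new best policy" lines:
# save only when the average episode reward strictly improves. The `saver`
# callback stands in for the learner's checkpoint write; the reward sequence
# is illustrative.

def track_best(rewards, saver):
    best = float("-inf")
    for r in rewards:
        if r > best:
            best = r
            saver(r)  # persist the policy that achieved the new best reward
    return best

saved = []
track_best([4.31, 4.33, 4.30, 4.43], saved.append)
print(saved)  # [4.31, 4.33, 4.43]
```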
|
[2024-10-03 09:09:10,304][00216] Fps is (10 sec: 3276.8, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 639.8. Samples: 28790. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:09:10,313][00216] Avg episode reward: [(0, '4.327')]
[2024-10-03 09:09:10,318][09449] Saving new best policy, reward=4.327!
[2024-10-03 09:09:11,146][09462] Updated weights for policy 0, policy_version 30 (0.0016)
[2024-10-03 09:09:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 139264. Throughput: 0: 781.9. Samples: 35186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:09:15,312][00216] Avg episode reward: [(0, '4.332')]
[2024-10-03 09:09:15,322][09449] Saving new best policy, reward=4.332!
[2024-10-03 09:09:20,304][00216] Fps is (10 sec: 3686.4, 60 sec: 2830.0, 300 sec: 2830.0). Total num frames: 155648. Throughput: 0: 851.8. Samples: 38354. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:20,312][00216] Avg episode reward: [(0, '4.434')]
[2024-10-03 09:09:20,316][09449] Saving new best policy, reward=4.434!
[2024-10-03 09:09:22,082][09462] Updated weights for policy 0, policy_version 40 (0.0012)
[2024-10-03 09:09:25,304][00216] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 172032. Throughput: 0: 896.8. Samples: 42612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:09:25,308][00216] Avg episode reward: [(0, '4.404')]
[2024-10-03 09:09:30,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3024.7). Total num frames: 196608. Throughput: 0: 934.6. Samples: 49228. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:30,308][00216] Avg episode reward: [(0, '4.210')]
[2024-10-03 09:09:32,134][09462] Updated weights for policy 0, policy_version 50 (0.0014)
[2024-10-03 09:09:35,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3042.7). Total num frames: 212992. Throughput: 0: 943.4. Samples: 52456. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:35,307][00216] Avg episode reward: [(0, '4.306')]
[2024-10-03 09:09:40,305][00216] Fps is (10 sec: 3276.3, 60 sec: 3754.6, 300 sec: 3058.3). Total num frames: 229376. Throughput: 0: 938.7. Samples: 57006. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:40,312][00216] Avg episode reward: [(0, '4.425')]
[2024-10-03 09:09:43,747][09462] Updated weights for policy 0, policy_version 60 (0.0032)
[2024-10-03 09:09:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3123.2). Total num frames: 249856. Throughput: 0: 927.8. Samples: 63096. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:45,311][00216] Avg episode reward: [(0, '4.563')]
[2024-10-03 09:09:45,322][09449] Saving new best policy, reward=4.563!
[2024-10-03 09:09:50,304][00216] Fps is (10 sec: 4096.6, 60 sec: 3754.7, 300 sec: 3180.4). Total num frames: 270336. Throughput: 0: 929.4. Samples: 66302. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:50,306][00216] Avg episode reward: [(0, '4.453')]
[2024-10-03 09:09:54,982][09462] Updated weights for policy 0, policy_version 70 (0.0013)
[2024-10-03 09:09:55,306][00216] Fps is (10 sec: 3685.8, 60 sec: 3754.6, 300 sec: 3185.7). Total num frames: 286720. Throughput: 0: 943.8. Samples: 71264. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:09:55,311][00216] Avg episode reward: [(0, '4.546')]
[2024-10-03 09:10:00,304][00216] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3233.7). Total num frames: 307200. Throughput: 0: 933.8. Samples: 77208. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:00,307][00216] Avg episode reward: [(0, '4.396')]
[2024-10-03 09:10:04,520][09462] Updated weights for policy 0, policy_version 80 (0.0012)
[2024-10-03 09:10:05,304][00216] Fps is (10 sec: 4096.6, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 327680. Throughput: 0: 936.3. Samples: 80486. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:10:05,307][00216] Avg episode reward: [(0, '4.500')]
[2024-10-03 09:10:05,314][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000080_327680.pth...
[2024-10-03 09:10:10,304][00216] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 344064. Throughput: 0: 957.2. Samples: 85686. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:10:10,306][00216] Avg episode reward: [(0, '4.482')]
[2024-10-03 09:10:15,304][00216] Fps is (10 sec: 3686.3, 60 sec: 3754.6, 300 sec: 3314.0). Total num frames: 364544. Throughput: 0: 932.2. Samples: 91178. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:10:15,306][00216] Avg episode reward: [(0, '4.511')]
[2024-10-03 09:10:16,075][09462] Updated weights for policy 0, policy_version 90 (0.0020)
[2024-10-03 09:10:20,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3348.0). Total num frames: 385024. Throughput: 0: 932.9. Samples: 94436. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:20,309][00216] Avg episode reward: [(0, '4.502')]
[2024-10-03 09:10:25,306][00216] Fps is (10 sec: 3685.8, 60 sec: 3822.8, 300 sec: 3345.0). Total num frames: 401408. Throughput: 0: 955.6. Samples: 100008. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:25,311][00216] Avg episode reward: [(0, '4.322')]
[2024-10-03 09:10:27,613][09462] Updated weights for policy 0, policy_version 100 (0.0020)
[2024-10-03 09:10:30,308][00216] Fps is (10 sec: 3275.5, 60 sec: 3686.1, 300 sec: 3342.2). Total num frames: 417792. Throughput: 0: 937.2. Samples: 105272. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:30,313][00216] Avg episode reward: [(0, '4.354')]
[2024-10-03 09:10:35,304][00216] Fps is (10 sec: 4096.9, 60 sec: 3822.9, 300 sec: 3402.8). Total num frames: 442368. Throughput: 0: 938.3. Samples: 108524. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:35,310][00216] Avg episode reward: [(0, '4.537')]
[2024-10-03 09:10:36,956][09462] Updated weights for policy 0, policy_version 110 (0.0012)
[2024-10-03 09:10:40,305][00216] Fps is (10 sec: 4097.2, 60 sec: 3822.9, 300 sec: 3398.1). Total num frames: 458752. Throughput: 0: 957.7. Samples: 114362. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:40,308][00216] Avg episode reward: [(0, '4.559')]
[2024-10-03 09:10:45,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3393.8). Total num frames: 475136. Throughput: 0: 933.6. Samples: 119218. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:45,312][00216] Avg episode reward: [(0, '4.551')]
[2024-10-03 09:10:48,595][09462] Updated weights for policy 0, policy_version 120 (0.0021)
[2024-10-03 09:10:50,304][00216] Fps is (10 sec: 3686.9, 60 sec: 3754.7, 300 sec: 3418.0). Total num frames: 495616. Throughput: 0: 933.4. Samples: 122490. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:50,307][00216] Avg episode reward: [(0, '4.589')]
[2024-10-03 09:10:50,388][09449] Saving new best policy, reward=4.589!
[2024-10-03 09:10:55,304][00216] Fps is (10 sec: 4095.9, 60 sec: 3823.0, 300 sec: 3440.6). Total num frames: 516096. Throughput: 0: 954.5. Samples: 128638. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:10:55,309][00216] Avg episode reward: [(0, '4.620')]
[2024-10-03 09:10:55,322][09449] Saving new best policy, reward=4.620!
[2024-10-03 09:11:00,127][09462] Updated weights for policy 0, policy_version 130 (0.0019)
[2024-10-03 09:11:00,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3435.4). Total num frames: 532480. Throughput: 0: 932.4. Samples: 133134. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:00,308][00216] Avg episode reward: [(0, '4.668')]
[2024-10-03 09:11:00,310][09449] Saving new best policy, reward=4.668!
[2024-10-03 09:11:05,304][00216] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3456.0). Total num frames: 552960. Throughput: 0: 931.2. Samples: 136338. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:11:05,312][00216] Avg episode reward: [(0, '4.767')]
[2024-10-03 09:11:05,326][09449] Saving new best policy, reward=4.767!
[2024-10-03 09:11:10,247][09462] Updated weights for policy 0, policy_version 140 (0.0020)
[2024-10-03 09:11:10,305][00216] Fps is (10 sec: 4095.5, 60 sec: 3822.9, 300 sec: 3475.4). Total num frames: 573440. Throughput: 0: 951.8. Samples: 142838. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:11:10,308][00216] Avg episode reward: [(0, '4.815')]
[2024-10-03 09:11:10,310][09449] Saving new best policy, reward=4.815!
[2024-10-03 09:11:15,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3445.5). Total num frames: 585728. Throughput: 0: 929.0. Samples: 147074. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:11:15,306][00216] Avg episode reward: [(0, '4.842')]
[2024-10-03 09:11:15,316][09449] Saving new best policy, reward=4.842!
[2024-10-03 09:11:20,304][00216] Fps is (10 sec: 3686.9, 60 sec: 3754.7, 300 sec: 3487.5). Total num frames: 610304. Throughput: 0: 928.0. Samples: 150284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:11:20,306][00216] Avg episode reward: [(0, '4.730')]
[2024-10-03 09:11:21,173][09462] Updated weights for policy 0, policy_version 150 (0.0026)
[2024-10-03 09:11:25,306][00216] Fps is (10 sec: 4504.6, 60 sec: 3822.9, 300 sec: 3504.3). Total num frames: 630784. Throughput: 0: 947.3. Samples: 156992. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:11:25,309][00216] Avg episode reward: [(0, '4.554')]
[2024-10-03 09:11:30,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.9, 300 sec: 3476.1). Total num frames: 643072. Throughput: 0: 938.5. Samples: 161452. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:30,306][00216] Avg episode reward: [(0, '4.644')]
[2024-10-03 09:11:32,450][09462] Updated weights for policy 0, policy_version 160 (0.0018)
[2024-10-03 09:11:35,304][00216] Fps is (10 sec: 3277.5, 60 sec: 3686.4, 300 sec: 3492.4). Total num frames: 663552. Throughput: 0: 936.7. Samples: 164642. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:35,306][00216] Avg episode reward: [(0, '4.758')]
[2024-10-03 09:11:40,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3823.0, 300 sec: 3528.9). Total num frames: 688128. Throughput: 0: 947.7. Samples: 171284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:40,310][00216] Avg episode reward: [(0, '4.906')]
[2024-10-03 09:11:40,315][09449] Saving new best policy, reward=4.906!
[2024-10-03 09:11:42,839][09462] Updated weights for policy 0, policy_version 170 (0.0021)
[2024-10-03 09:11:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3502.1). Total num frames: 700416. Throughput: 0: 949.5. Samples: 175860. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:45,307][00216] Avg episode reward: [(0, '4.990')]
[2024-10-03 09:11:45,324][09449] Saving new best policy, reward=4.990!
[2024-10-03 09:11:50,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3516.6). Total num frames: 720896. Throughput: 0: 938.7. Samples: 178578. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:11:50,311][00216] Avg episode reward: [(0, '4.936')]
[2024-10-03 09:11:53,409][09462] Updated weights for policy 0, policy_version 180 (0.0013)
[2024-10-03 09:11:55,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3530.4). Total num frames: 741376. Throughput: 0: 942.0. Samples: 185228. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:11:55,307][00216] Avg episode reward: [(0, '5.025')]
[2024-10-03 09:11:55,332][09449] Saving new best policy, reward=5.025!
[2024-10-03 09:12:00,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3524.5). Total num frames: 757760. Throughput: 0: 958.7. Samples: 190214. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:00,311][00216] Avg episode reward: [(0, '5.230')]
[2024-10-03 09:12:00,314][09449] Saving new best policy, reward=5.230!
[2024-10-03 09:12:04,918][09462] Updated weights for policy 0, policy_version 190 (0.0024)
[2024-10-03 09:12:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3537.5). Total num frames: 778240. Throughput: 0: 942.0. Samples: 192676. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:12:05,309][00216] Avg episode reward: [(0, '5.147')]
[2024-10-03 09:12:05,319][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000190_778240.pth...
[2024-10-03 09:12:10,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3549.9). Total num frames: 798720. Throughput: 0: 937.2. Samples: 199166. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:10,309][00216] Avg episode reward: [(0, '5.315')]
[2024-10-03 09:12:10,312][09449] Saving new best policy, reward=5.315!
[2024-10-03 09:12:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3543.9). Total num frames: 815104. Throughput: 0: 951.4. Samples: 204264. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:15,308][00216] Avg episode reward: [(0, '5.283')]
[2024-10-03 09:12:15,956][09462] Updated weights for policy 0, policy_version 200 (0.0013)
[2024-10-03 09:12:20,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3538.2). Total num frames: 831488. Throughput: 0: 931.1. Samples: 206542. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:20,313][00216] Avg episode reward: [(0, '5.223')]
[2024-10-03 09:12:25,304][00216] Fps is (10 sec: 4095.9, 60 sec: 3754.8, 300 sec: 3566.9). Total num frames: 856064. Throughput: 0: 930.7. Samples: 213166. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:12:25,311][00216] Avg episode reward: [(0, '5.361')]
[2024-10-03 09:12:25,318][09449] Saving new best policy, reward=5.361!
[2024-10-03 09:12:25,899][09462] Updated weights for policy 0, policy_version 210 (0.0013)
[2024-10-03 09:12:30,304][00216] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3561.0). Total num frames: 872448. Throughput: 0: 952.0. Samples: 218698. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:30,311][00216] Avg episode reward: [(0, '5.460')]
[2024-10-03 09:12:30,315][09449] Saving new best policy, reward=5.460!
[2024-10-03 09:12:35,304][00216] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3555.3). Total num frames: 888832. Throughput: 0: 936.7. Samples: 220728. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:12:35,310][00216] Avg episode reward: [(0, '5.788')]
[2024-10-03 09:12:35,319][09449] Saving new best policy, reward=5.788!
[2024-10-03 09:12:37,573][09462] Updated weights for policy 0, policy_version 220 (0.0015)
[2024-10-03 09:12:40,304][00216] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3582.0). Total num frames: 913408. Throughput: 0: 932.1. Samples: 227174. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-10-03 09:12:40,309][00216] Avg episode reward: [(0, '5.972')]
[2024-10-03 09:12:40,311][09449] Saving new best policy, reward=5.972!
[2024-10-03 09:12:45,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3576.1). Total num frames: 929792. Throughput: 0: 948.7. Samples: 232906. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:12:45,306][00216] Avg episode reward: [(0, '6.202')]
[2024-10-03 09:12:45,319][09449] Saving new best policy, reward=6.202!
[2024-10-03 09:12:49,189][09462] Updated weights for policy 0, policy_version 230 (0.0017)
[2024-10-03 09:12:50,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3570.5). Total num frames: 946176. Throughput: 0: 938.4. Samples: 234902. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:12:50,310][00216] Avg episode reward: [(0, '5.755')]
[2024-10-03 09:12:55,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3580.2). Total num frames: 966656. Throughput: 0: 930.9. Samples: 241058. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:12:55,312][00216] Avg episode reward: [(0, '6.097')]
[2024-10-03 09:12:58,305][09462] Updated weights for policy 0, policy_version 240 (0.0014)
[2024-10-03 09:13:00,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3589.6). Total num frames: 987136. Throughput: 0: 957.0. Samples: 247330. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:00,308][00216] Avg episode reward: [(0, '6.152')]
[2024-10-03 09:13:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3584.0). Total num frames: 1003520. Throughput: 0: 953.2. Samples: 249434. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:05,307][00216] Avg episode reward: [(0, '6.161')]
[2024-10-03 09:13:09,751][09462] Updated weights for policy 0, policy_version 250 (0.0015)
[2024-10-03 09:13:10,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3593.0). Total num frames: 1024000. Throughput: 0: 938.0. Samples: 255374. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:13:10,307][00216] Avg episode reward: [(0, '6.693')]
[2024-10-03 09:13:10,311][09449] Saving new best policy, reward=6.693!
[2024-10-03 09:13:15,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3601.7). Total num frames: 1044480. Throughput: 0: 953.5. Samples: 261604. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:15,317][00216] Avg episode reward: [(0, '6.989')]
[2024-10-03 09:13:15,330][09449] Saving new best policy, reward=6.989!
[2024-10-03 09:13:20,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3582.3). Total num frames: 1056768. Throughput: 0: 953.3. Samples: 263626. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:13:20,312][00216] Avg episode reward: [(0, '6.311')]
[2024-10-03 09:13:21,456][09462] Updated weights for policy 0, policy_version 260 (0.0017)
[2024-10-03 09:13:25,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1081344. Throughput: 0: 936.7. Samples: 269326. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:25,307][00216] Avg episode reward: [(0, '6.557')]
[2024-10-03 09:13:30,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 1101824. Throughput: 0: 955.1. Samples: 275884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:13:30,310][00216] Avg episode reward: [(0, '6.970')]
[2024-10-03 09:13:30,978][09462] Updated weights for policy 0, policy_version 270 (0.0021)
[2024-10-03 09:13:35,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1114112. Throughput: 0: 960.0. Samples: 278102. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:35,306][00216] Avg episode reward: [(0, '7.120')]
[2024-10-03 09:13:35,315][09449] Saving new best policy, reward=7.120!
[2024-10-03 09:13:40,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 1134592. Throughput: 0: 941.0. Samples: 283402. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:13:40,312][00216] Avg episode reward: [(0, '6.907')]
[2024-10-03 09:13:42,484][09462] Updated weights for policy 0, policy_version 280 (0.0020)
[2024-10-03 09:13:45,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1155072. Throughput: 0: 929.1. Samples: 289140. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:13:45,308][00216] Avg episode reward: [(0, '6.320')]
[2024-10-03 09:13:50,304][00216] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1163264. Throughput: 0: 918.2. Samples: 290752. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:13:50,312][00216] Avg episode reward: [(0, '6.994')]
[2024-10-03 09:13:55,304][00216] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3721.1). Total num frames: 1183744. Throughput: 0: 891.1. Samples: 295472. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-10-03 09:13:55,308][00216] Avg episode reward: [(0, '7.083')]
[2024-10-03 09:13:55,595][09462] Updated weights for policy 0, policy_version 290 (0.0015)
[2024-10-03 09:14:00,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1204224. Throughput: 0: 899.4. Samples: 302078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-10-03 09:14:00,313][00216] Avg episode reward: [(0, '8.313')]
[2024-10-03 09:14:00,355][09449] Saving new best policy, reward=8.313!
[2024-10-03 09:14:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1220608. Throughput: 0: 912.5. Samples: 304690. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-10-03 09:14:05,307][00216] Avg episode reward: [(0, '8.257')]
[2024-10-03 09:14:05,319][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000298_1220608.pth...
[2024-10-03 09:14:05,450][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000080_327680.pth
|
[2024-10-03 09:14:06,970][09462] Updated weights for policy 0, policy_version 300 (0.0025) |
|
[2024-10-03 09:14:10,305][00216] Fps is (10 sec: 3686.0, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1241088. Throughput: 0: 894.7. Samples: 309588. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:14:10,310][00216] Avg episode reward: [(0, '8.398')] |
|
[2024-10-03 09:14:10,318][09449] Saving new best policy, reward=8.398! |
|
[2024-10-03 09:14:15,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3748.9). Total num frames: 1261568. Throughput: 0: 891.9. Samples: 316020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:14:15,307][00216] Avg episode reward: [(0, '7.739')] |
|
[2024-10-03 09:14:16,566][09462] Updated weights for policy 0, policy_version 310 (0.0012) |
|
[2024-10-03 09:14:20,304][00216] Fps is (10 sec: 3686.8, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 1277952. Throughput: 0: 907.5. Samples: 318940. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:14:20,307][00216] Avg episode reward: [(0, '7.273')] |
|
[2024-10-03 09:14:25,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 1298432. Throughput: 0: 894.7. Samples: 323662. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:14:25,308][00216] Avg episode reward: [(0, '8.060')] |
|
[2024-10-03 09:14:27,905][09462] Updated weights for policy 0, policy_version 320 (0.0022) |
|
[2024-10-03 09:14:30,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3748.9). Total num frames: 1318912. Throughput: 0: 915.0. Samples: 330314. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:14:30,311][00216] Avg episode reward: [(0, '8.103')] |
|
[2024-10-03 09:14:35,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 1339392. Throughput: 0: 950.4. Samples: 333518. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:14:35,309][00216] Avg episode reward: [(0, '8.623')] |
|
[2024-10-03 09:14:35,316][09449] Saving new best policy, reward=8.623! |
|
[2024-10-03 09:14:39,333][09462] Updated weights for policy 0, policy_version 330 (0.0012) |
|
[2024-10-03 09:14:40,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 1355776. Throughput: 0: 942.5. Samples: 337886. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:14:40,311][00216] Avg episode reward: [(0, '8.658')] |
|
[2024-10-03 09:14:40,312][09449] Saving new best policy, reward=8.658! |
|
[2024-10-03 09:14:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 1376256. Throughput: 0: 939.8. Samples: 344370. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:14:45,306][00216] Avg episode reward: [(0, '9.174')] |
|
[2024-10-03 09:14:45,314][09449] Saving new best policy, reward=9.174! |
|
[2024-10-03 09:14:48,806][09462] Updated weights for policy 0, policy_version 340 (0.0014) |
|
[2024-10-03 09:14:50,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3762.8). Total num frames: 1396736. Throughput: 0: 953.0. Samples: 347574. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:14:50,313][00216] Avg episode reward: [(0, '9.117')] |
|
[2024-10-03 09:14:55,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1409024. Throughput: 0: 942.8. Samples: 352012. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:14:55,309][00216] Avg episode reward: [(0, '9.115')] |
|
[2024-10-03 09:14:59,978][09462] Updated weights for policy 0, policy_version 350 (0.0013) |
|
[2024-10-03 09:15:00,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1433600. Throughput: 0: 945.8. Samples: 358582. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:15:00,307][00216] Avg episode reward: [(0, '9.457')] |
|
[2024-10-03 09:15:00,313][09449] Saving new best policy, reward=9.457! |
|
[2024-10-03 09:15:05,306][00216] Fps is (10 sec: 4504.9, 60 sec: 3891.1, 300 sec: 3762.7). Total num frames: 1454080. Throughput: 0: 952.6. Samples: 361808. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:15:05,310][00216] Avg episode reward: [(0, '9.606')] |
|
[2024-10-03 09:15:05,323][09449] Saving new best policy, reward=9.606! |
|
[2024-10-03 09:15:10,309][00216] Fps is (10 sec: 3275.2, 60 sec: 3754.4, 300 sec: 3734.9). Total num frames: 1466368. Throughput: 0: 949.2. Samples: 366380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:15:10,311][00216] Avg episode reward: [(0, '9.447')] |
|
[2024-10-03 09:15:11,614][09462] Updated weights for policy 0, policy_version 360 (0.0014) |
|
[2024-10-03 09:15:15,304][00216] Fps is (10 sec: 3687.0, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1490944. Throughput: 0: 940.5. Samples: 372636. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:15:15,306][00216] Avg episode reward: [(0, '8.877')] |
|
[2024-10-03 09:15:20,309][00216] Fps is (10 sec: 4505.6, 60 sec: 3890.9, 300 sec: 3762.7). Total num frames: 1511424. Throughput: 0: 943.4. Samples: 375976. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:15:20,311][00216] Avg episode reward: [(0, '8.887')] |
|
[2024-10-03 09:15:21,682][09462] Updated weights for policy 0, policy_version 370 (0.0012) |
|
[2024-10-03 09:15:25,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1523712. Throughput: 0: 955.0. Samples: 380862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:15:25,312][00216] Avg episode reward: [(0, '8.565')] |
|
[2024-10-03 09:15:30,306][00216] Fps is (10 sec: 3687.5, 60 sec: 3822.8, 300 sec: 3748.9). Total num frames: 1548288. Throughput: 0: 947.6. Samples: 387016. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:15:30,308][00216] Avg episode reward: [(0, '9.642')] |
|
[2024-10-03 09:15:30,313][09449] Saving new best policy, reward=9.642! |
|
[2024-10-03 09:15:32,157][09462] Updated weights for policy 0, policy_version 380 (0.0015) |
|
[2024-10-03 09:15:35,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1568768. Throughput: 0: 948.4. Samples: 390250. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:15:35,312][00216] Avg episode reward: [(0, '11.087')] |
|
[2024-10-03 09:15:35,328][09449] Saving new best policy, reward=11.087! |
|
[2024-10-03 09:15:40,304][00216] Fps is (10 sec: 3277.5, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1581056. Throughput: 0: 962.4. Samples: 395320. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:15:40,311][00216] Avg episode reward: [(0, '11.732')] |
|
[2024-10-03 09:15:40,318][09449] Saving new best policy, reward=11.732! |
|
[2024-10-03 09:15:43,709][09462] Updated weights for policy 0, policy_version 390 (0.0015) |
|
[2024-10-03 09:15:45,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1601536. Throughput: 0: 939.9. Samples: 400878. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:15:45,307][00216] Avg episode reward: [(0, '13.386')] |
|
[2024-10-03 09:15:45,314][09449] Saving new best policy, reward=13.386! |
|
[2024-10-03 09:15:50,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1622016. Throughput: 0: 940.8. Samples: 404142. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:15:50,310][00216] Avg episode reward: [(0, '13.924')] |
|
[2024-10-03 09:15:50,385][09449] Saving new best policy, reward=13.924! |
|
[2024-10-03 09:15:54,254][09462] Updated weights for policy 0, policy_version 400 (0.0019) |
|
[2024-10-03 09:15:55,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 1638400. Throughput: 0: 959.3. Samples: 409546. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:15:55,307][00216] Avg episode reward: [(0, '13.292')] |
|
[2024-10-03 09:16:00,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1658880. Throughput: 0: 941.3. Samples: 414996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:16:00,307][00216] Avg episode reward: [(0, '14.922')] |
|
[2024-10-03 09:16:00,309][09449] Saving new best policy, reward=14.922! |
|
[2024-10-03 09:16:04,640][09462] Updated weights for policy 0, policy_version 410 (0.0029) |
|
[2024-10-03 09:16:05,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3748.9). Total num frames: 1679360. Throughput: 0: 938.3. Samples: 418194. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:05,307][00216] Avg episode reward: [(0, '15.583')] |
|
[2024-10-03 09:16:05,317][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000410_1679360.pth... |
|
[2024-10-03 09:16:05,433][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000190_778240.pth |
|
[2024-10-03 09:16:05,445][09449] Saving new best policy, reward=15.583! |
|
[2024-10-03 09:16:10,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3762.8). Total num frames: 1695744. Throughput: 0: 955.2. Samples: 423844. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:10,314][00216] Avg episode reward: [(0, '15.341')] |
|
[2024-10-03 09:16:15,306][00216] Fps is (10 sec: 3276.3, 60 sec: 3686.3, 300 sec: 3735.0). Total num frames: 1712128. Throughput: 0: 929.8. Samples: 428858. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:15,311][00216] Avg episode reward: [(0, '16.096')] |
|
[2024-10-03 09:16:15,322][09449] Saving new best policy, reward=16.096! |
|
[2024-10-03 09:16:16,425][09462] Updated weights for policy 0, policy_version 420 (0.0026) |
|
[2024-10-03 09:16:20,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3755.0, 300 sec: 3748.9). Total num frames: 1736704. Throughput: 0: 929.5. Samples: 432078. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:20,310][00216] Avg episode reward: [(0, '14.741')] |
|
[2024-10-03 09:16:25,306][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.8, 300 sec: 3762.7). Total num frames: 1753088. Throughput: 0: 952.0. Samples: 438160. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:25,308][00216] Avg episode reward: [(0, '15.173')] |
|
[2024-10-03 09:16:27,425][09462] Updated weights for policy 0, policy_version 430 (0.0014) |
|
[2024-10-03 09:16:30,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3748.9). Total num frames: 1769472. Throughput: 0: 938.8. Samples: 443126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:16:30,307][00216] Avg episode reward: [(0, '15.341')] |
|
[2024-10-03 09:16:35,304][00216] Fps is (10 sec: 4096.6, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1794048. Throughput: 0: 939.9. Samples: 446438. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:35,309][00216] Avg episode reward: [(0, '15.793')] |
|
[2024-10-03 09:16:36,912][09462] Updated weights for policy 0, policy_version 440 (0.0012) |
|
[2024-10-03 09:16:40,305][00216] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1810432. Throughput: 0: 959.3. Samples: 452714. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:40,311][00216] Avg episode reward: [(0, '15.735')] |
|
[2024-10-03 09:16:45,304][00216] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1826816. Throughput: 0: 939.1. Samples: 457256. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:16:45,310][00216] Avg episode reward: [(0, '16.487')] |
|
[2024-10-03 09:16:45,319][09449] Saving new best policy, reward=16.487! |
|
[2024-10-03 09:16:48,505][09462] Updated weights for policy 0, policy_version 450 (0.0017) |
|
[2024-10-03 09:16:50,304][00216] Fps is (10 sec: 4096.3, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1851392. Throughput: 0: 940.5. Samples: 460518. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:16:50,309][00216] Avg episode reward: [(0, '17.300')] |
|
[2024-10-03 09:16:50,311][09449] Saving new best policy, reward=17.300! |
|
[2024-10-03 09:16:55,304][00216] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1867776. Throughput: 0: 961.8. Samples: 467124. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:16:55,312][00216] Avg episode reward: [(0, '18.601')] |
|
[2024-10-03 09:16:55,323][09449] Saving new best policy, reward=18.601! |
|
[2024-10-03 09:16:59,836][09462] Updated weights for policy 0, policy_version 460 (0.0012) |
|
[2024-10-03 09:17:00,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1884160. Throughput: 0: 944.4. Samples: 471356. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:00,310][00216] Avg episode reward: [(0, '20.067')] |
|
[2024-10-03 09:17:00,312][09449] Saving new best policy, reward=20.067! |
|
[2024-10-03 09:17:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1904640. Throughput: 0: 945.3. Samples: 474616. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:05,312][00216] Avg episode reward: [(0, '20.086')] |
|
[2024-10-03 09:17:05,321][09449] Saving new best policy, reward=20.086! |
|
[2024-10-03 09:17:09,229][09462] Updated weights for policy 0, policy_version 470 (0.0014) |
|
[2024-10-03 09:17:10,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1925120. Throughput: 0: 956.9. Samples: 481220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:10,310][00216] Avg episode reward: [(0, '18.913')] |
|
[2024-10-03 09:17:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 1941504. Throughput: 0: 944.5. Samples: 485628. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:17:15,307][00216] Avg episode reward: [(0, '19.199')] |
|
[2024-10-03 09:17:20,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1961984. Throughput: 0: 939.9. Samples: 488732. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:20,310][00216] Avg episode reward: [(0, '20.386')] |
|
[2024-10-03 09:17:20,313][09449] Saving new best policy, reward=20.386! |
|
[2024-10-03 09:17:20,827][09462] Updated weights for policy 0, policy_version 480 (0.0019) |
|
[2024-10-03 09:17:25,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 1982464. Throughput: 0: 943.3. Samples: 495164. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:17:25,309][00216] Avg episode reward: [(0, '19.977')] |
|
[2024-10-03 09:17:30,311][00216] Fps is (10 sec: 3683.8, 60 sec: 3822.5, 300 sec: 3762.7). Total num frames: 1998848. Throughput: 0: 950.8. Samples: 500050. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:17:30,317][00216] Avg episode reward: [(0, '19.001')] |
|
[2024-10-03 09:17:32,170][09462] Updated weights for policy 0, policy_version 490 (0.0016) |
|
[2024-10-03 09:17:35,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2019328. Throughput: 0: 943.0. Samples: 502952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:35,309][00216] Avg episode reward: [(0, '20.880')] |
|
[2024-10-03 09:17:35,320][09449] Saving new best policy, reward=20.880! |
|
[2024-10-03 09:17:40,304][00216] Fps is (10 sec: 4098.9, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 2039808. Throughput: 0: 941.5. Samples: 509490. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:17:40,306][00216] Avg episode reward: [(0, '19.271')] |
|
[2024-10-03 09:17:41,843][09462] Updated weights for policy 0, policy_version 500 (0.0016) |
|
[2024-10-03 09:17:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2056192. Throughput: 0: 958.0. Samples: 514464. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:45,308][00216] Avg episode reward: [(0, '19.572')] |
|
[2024-10-03 09:17:50,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2076672. Throughput: 0: 939.9. Samples: 516910. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:17:50,307][00216] Avg episode reward: [(0, '20.140')] |
|
[2024-10-03 09:17:53,060][09462] Updated weights for policy 0, policy_version 510 (0.0014) |
|
[2024-10-03 09:17:55,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2097152. Throughput: 0: 941.0. Samples: 523564. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:17:55,306][00216] Avg episode reward: [(0, '20.157')] |
|
[2024-10-03 09:18:00,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2113536. Throughput: 0: 963.8. Samples: 528998. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:00,307][00216] Avg episode reward: [(0, '20.204')] |
|
[2024-10-03 09:18:04,401][09462] Updated weights for policy 0, policy_version 520 (0.0015) |
|
[2024-10-03 09:18:05,304][00216] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2134016. Throughput: 0: 942.4. Samples: 531142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:18:05,311][00216] Avg episode reward: [(0, '20.157')] |
|
[2024-10-03 09:18:05,321][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000521_2134016.pth... |
|
[2024-10-03 09:18:05,327][00216] Components not started: RolloutWorker_w3, RolloutWorker_w4, RolloutWorker_w6, wait_time=600.0 seconds |
|
[2024-10-03 09:18:05,431][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000298_1220608.pth |
|
[2024-10-03 09:18:10,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2154496. Throughput: 0: 946.2. Samples: 537744. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:18:10,307][00216] Avg episode reward: [(0, '18.350')] |
|
[2024-10-03 09:18:14,340][09462] Updated weights for policy 0, policy_version 530 (0.0016) |
|
[2024-10-03 09:18:15,306][00216] Fps is (10 sec: 3685.7, 60 sec: 3822.8, 300 sec: 3776.6). Total num frames: 2170880. Throughput: 0: 964.5. Samples: 543446. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:15,310][00216] Avg episode reward: [(0, '17.693')] |
|
[2024-10-03 09:18:20,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2187264. Throughput: 0: 945.2. Samples: 545488. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:20,311][00216] Avg episode reward: [(0, '17.764')] |
|
[2024-10-03 09:18:25,175][09462] Updated weights for policy 0, policy_version 540 (0.0018) |
|
[2024-10-03 09:18:25,304][00216] Fps is (10 sec: 4096.9, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2211840. Throughput: 0: 942.8. Samples: 551918. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:18:25,306][00216] Avg episode reward: [(0, '17.859')] |
|
[2024-10-03 09:18:30,306][00216] Fps is (10 sec: 4095.2, 60 sec: 3823.3, 300 sec: 3776.6). Total num frames: 2228224. Throughput: 0: 966.2. Samples: 557944. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:30,315][00216] Avg episode reward: [(0, '18.598')] |
|
[2024-10-03 09:18:35,308][00216] Fps is (10 sec: 3275.6, 60 sec: 3754.4, 300 sec: 3762.7). Total num frames: 2244608. Throughput: 0: 958.4. Samples: 560042. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:18:35,310][00216] Avg episode reward: [(0, '18.232')] |
|
[2024-10-03 09:18:36,577][09462] Updated weights for policy 0, policy_version 550 (0.0013) |
|
[2024-10-03 09:18:40,304][00216] Fps is (10 sec: 4096.8, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2269184. Throughput: 0: 947.4. Samples: 566196. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:40,311][00216] Avg episode reward: [(0, '18.736')] |
|
[2024-10-03 09:18:45,307][00216] Fps is (10 sec: 4096.2, 60 sec: 3822.7, 300 sec: 3804.4). Total num frames: 2285568. Throughput: 0: 965.0. Samples: 572428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:18:45,309][00216] Avg episode reward: [(0, '20.410')] |
|
[2024-10-03 09:18:46,970][09462] Updated weights for policy 0, policy_version 560 (0.0012) |
|
[2024-10-03 09:18:50,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2301952. Throughput: 0: 962.0. Samples: 574432. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:50,310][00216] Avg episode reward: [(0, '20.426')] |
|
[2024-10-03 09:18:55,304][00216] Fps is (10 sec: 3687.6, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2322432. Throughput: 0: 945.6. Samples: 580296. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:18:55,308][00216] Avg episode reward: [(0, '20.491')] |
|
[2024-10-03 09:18:57,194][09462] Updated weights for policy 0, policy_version 570 (0.0023) |
|
[2024-10-03 09:19:00,306][00216] Fps is (10 sec: 4504.6, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 2347008. Throughput: 0: 965.1. Samples: 586876. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:19:00,311][00216] Avg episode reward: [(0, '19.738')] |
|
[2024-10-03 09:19:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2359296. Throughput: 0: 965.7. Samples: 588944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:19:05,306][00216] Avg episode reward: [(0, '19.079')] |
|
[2024-10-03 09:19:08,595][09462] Updated weights for policy 0, policy_version 580 (0.0012) |
|
[2024-10-03 09:19:10,304][00216] Fps is (10 sec: 3277.4, 60 sec: 3754.6, 300 sec: 3790.5). Total num frames: 2379776. Throughput: 0: 948.7. Samples: 594610. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:19:10,307][00216] Avg episode reward: [(0, '18.534')] |
|
[2024-10-03 09:19:15,304][00216] Fps is (10 sec: 4505.5, 60 sec: 3891.3, 300 sec: 3818.3). Total num frames: 2404352. Throughput: 0: 962.3. Samples: 601244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:19:15,307][00216] Avg episode reward: [(0, '19.052')] |
|
[2024-10-03 09:19:19,371][09462] Updated weights for policy 0, policy_version 590 (0.0014) |
|
[2024-10-03 09:19:20,304][00216] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2416640. Throughput: 0: 963.5. Samples: 603398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:19:20,306][00216] Avg episode reward: [(0, '18.673')] |
|
[2024-10-03 09:19:25,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2437120. Throughput: 0: 946.6. Samples: 608792. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:19:25,307][00216] Avg episode reward: [(0, '19.896')] |
|
[2024-10-03 09:19:29,285][09462] Updated weights for policy 0, policy_version 600 (0.0013) |
|
[2024-10-03 09:19:30,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3891.3, 300 sec: 3804.4). Total num frames: 2461696. Throughput: 0: 955.8. Samples: 615438. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:19:30,312][00216] Avg episode reward: [(0, '21.827')] |
|
[2024-10-03 09:19:30,314][09449] Saving new best policy, reward=21.827! |
|
[2024-10-03 09:19:35,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3790.5). Total num frames: 2473984. Throughput: 0: 963.1. Samples: 617770. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:19:35,309][00216] Avg episode reward: [(0, '22.753')] |
|
[2024-10-03 09:19:35,323][09449] Saving new best policy, reward=22.753! |
|
[2024-10-03 09:19:40,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2494464. Throughput: 0: 948.2. Samples: 622964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:19:40,311][00216] Avg episode reward: [(0, '22.378')] |
|
[2024-10-03 09:19:40,753][09462] Updated weights for policy 0, policy_version 610 (0.0013) |
|
[2024-10-03 09:19:45,307][00216] Fps is (10 sec: 4094.7, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2514944. Throughput: 0: 948.5. Samples: 629558. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:19:45,314][00216] Avg episode reward: [(0, '21.618')] |
|
[2024-10-03 09:19:50,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2531328. Throughput: 0: 958.3. Samples: 632068. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:19:50,306][00216] Avg episode reward: [(0, '21.403')] |
|
[2024-10-03 09:19:52,230][09462] Updated weights for policy 0, policy_version 620 (0.0020) |
|
[2024-10-03 09:19:55,304][00216] Fps is (10 sec: 3687.5, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2551808. Throughput: 0: 944.6. Samples: 637118. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:19:55,306][00216] Avg episode reward: [(0, '20.993')] |
|
[2024-10-03 09:20:00,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3790.6). Total num frames: 2572288. Throughput: 0: 943.2. Samples: 643686. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:00,306][00216] Avg episode reward: [(0, '20.652')] |
|
[2024-10-03 09:20:01,520][09462] Updated weights for policy 0, policy_version 630 (0.0012) |
|
[2024-10-03 09:20:05,304][00216] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3804.5). Total num frames: 2588672. Throughput: 0: 958.7. Samples: 646540. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:05,310][00216] Avg episode reward: [(0, '22.377')] |
|
[2024-10-03 09:20:05,325][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth... |
|
[2024-10-03 09:20:05,474][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000410_1679360.pth |
|
[2024-10-03 09:20:10,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2605056. Throughput: 0: 942.9. Samples: 651222. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:10,306][00216] Avg episode reward: [(0, '22.412')] |
|
[2024-10-03 09:20:12,908][09462] Updated weights for policy 0, policy_version 640 (0.0022) |
|
[2024-10-03 09:20:15,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3790.6). Total num frames: 2629632. Throughput: 0: 941.3. Samples: 657798. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:15,312][00216] Avg episode reward: [(0, '23.839')] |
|
[2024-10-03 09:20:15,326][09449] Saving new best policy, reward=23.839! |
|
[2024-10-03 09:20:20,306][00216] Fps is (10 sec: 4095.1, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 2646016. Throughput: 0: 955.7. Samples: 660780. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:20,309][00216] Avg episode reward: [(0, '23.764')] |
|
[2024-10-03 09:20:24,686][09462] Updated weights for policy 0, policy_version 650 (0.0012) |
|
[2024-10-03 09:20:25,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 2662400. Throughput: 0: 939.7. Samples: 665252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:20:25,306][00216] Avg episode reward: [(0, '23.087')] |
|
[2024-10-03 09:20:30,304][00216] Fps is (10 sec: 4097.0, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2686976. Throughput: 0: 940.9. Samples: 671896. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:20:30,309][00216] Avg episode reward: [(0, '21.061')] |
|
[2024-10-03 09:20:34,321][09462] Updated weights for policy 0, policy_version 660 (0.0014) |
|
[2024-10-03 09:20:35,304][00216] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2703360. Throughput: 0: 959.0. Samples: 675224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:35,306][00216] Avg episode reward: [(0, '19.495')] |
|
[2024-10-03 09:20:40,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2719744. Throughput: 0: 943.2. Samples: 679560. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:20:40,310][00216] Avg episode reward: [(0, '18.909')] |
|
[2024-10-03 09:20:45,150][09462] Updated weights for policy 0, policy_version 670 (0.0019) |
|
[2024-10-03 09:20:45,304][00216] Fps is (10 sec: 4096.1, 60 sec: 3823.1, 300 sec: 3804.4). Total num frames: 2744320. Throughput: 0: 944.4. Samples: 686186. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:20:45,306][00216] Avg episode reward: [(0, '19.452')] |
|
[2024-10-03 09:20:50,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2760704. Throughput: 0: 951.3. Samples: 689350. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:20:50,307][00216] Avg episode reward: [(0, '19.624')] |
|
[2024-10-03 09:20:55,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2777088. Throughput: 0: 948.0. Samples: 693884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:20:55,307][00216] Avg episode reward: [(0, '20.679')] |
|
[2024-10-03 09:20:56,817][09462] Updated weights for policy 0, policy_version 680 (0.0030) |
|
[2024-10-03 09:21:00,305][00216] Fps is (10 sec: 3686.0, 60 sec: 3754.6, 300 sec: 3790.5). Total num frames: 2797568. Throughput: 0: 941.4. Samples: 700164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-10-03 09:21:00,307][00216] Avg episode reward: [(0, '22.470')] |
|
[2024-10-03 09:21:05,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2818048. Throughput: 0: 948.8. Samples: 703476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:21:05,306][00216] Avg episode reward: [(0, '22.061')] |
|
[2024-10-03 09:21:07,006][09462] Updated weights for policy 0, policy_version 690 (0.0013) |
|
[2024-10-03 09:21:10,304][00216] Fps is (10 sec: 3686.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2834432. Throughput: 0: 957.2. Samples: 708328. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:10,311][00216] Avg episode reward: [(0, '22.569')] |
|
[2024-10-03 09:21:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2854912. Throughput: 0: 945.5. Samples: 714444. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:15,312][00216] Avg episode reward: [(0, '22.997')] |
|
[2024-10-03 09:21:17,364][09462] Updated weights for policy 0, policy_version 700 (0.0013) |
|
[2024-10-03 09:21:20,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3891.4, 300 sec: 3818.3). Total num frames: 2879488. Throughput: 0: 943.4. Samples: 717678. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:20,306][00216] Avg episode reward: [(0, '21.942')] |
|
[2024-10-03 09:21:25,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2891776. Throughput: 0: 961.0. Samples: 722804. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:25,306][00216] Avg episode reward: [(0, '21.927')] |
|
[2024-10-03 09:21:28,746][09462] Updated weights for policy 0, policy_version 710 (0.0016) |
|
[2024-10-03 09:21:30,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2912256. Throughput: 0: 943.6. Samples: 728646. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:30,306][00216] Avg episode reward: [(0, '21.844')] |
|
[2024-10-03 09:21:35,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2936832. Throughput: 0: 946.8. Samples: 731958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:21:35,306][00216] Avg episode reward: [(0, '23.004')] |
|
[2024-10-03 09:21:39,250][09462] Updated weights for policy 0, policy_version 720 (0.0020) |
|
[2024-10-03 09:21:40,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2949120. Throughput: 0: 965.5. Samples: 737330. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:21:40,311][00216] Avg episode reward: [(0, '23.458')] |
|
[2024-10-03 09:21:45,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 2969600. Throughput: 0: 950.0. Samples: 742914. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:21:45,310][00216] Avg episode reward: [(0, '22.460')] |
|
[2024-10-03 09:21:49,465][09462] Updated weights for policy 0, policy_version 730 (0.0014) |
|
[2024-10-03 09:21:50,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2990080. Throughput: 0: 947.2. Samples: 746102. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:21:50,307][00216] Avg episode reward: [(0, '19.551')] |
|
[2024-10-03 09:21:55,304][00216] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3006464. Throughput: 0: 963.2. Samples: 751670. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:21:55,310][00216] Avg episode reward: [(0, '19.024')] |
|
[2024-10-03 09:22:00,307][00216] Fps is (10 sec: 3685.3, 60 sec: 3822.8, 300 sec: 3804.4). Total num frames: 3026944. Throughput: 0: 948.2. Samples: 757116. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:00,310][00216] Avg episode reward: [(0, '17.605')] |
|
[2024-10-03 09:22:00,800][09462] Updated weights for policy 0, policy_version 740 (0.0013) |
|
[2024-10-03 09:22:05,304][00216] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3047424. Throughput: 0: 949.6. Samples: 760412. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:05,311][00216] Avg episode reward: [(0, '17.785')] |
|
[2024-10-03 09:22:05,321][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000744_3047424.pth... |
|
[2024-10-03 09:22:05,454][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000521_2134016.pth |
|
[2024-10-03 09:22:10,304][00216] Fps is (10 sec: 3687.5, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3063808. Throughput: 0: 962.3. Samples: 766106. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:10,308][00216] Avg episode reward: [(0, '18.144')] |
|
[2024-10-03 09:22:12,122][09462] Updated weights for policy 0, policy_version 750 (0.0027) |
|
[2024-10-03 09:22:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3084288. Throughput: 0: 948.9. Samples: 771346. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:15,309][00216] Avg episode reward: [(0, '17.990')] |
|
[2024-10-03 09:22:20,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3104768. Throughput: 0: 945.0. Samples: 774484. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:20,311][00216] Avg episode reward: [(0, '19.415')] |
|
[2024-10-03 09:22:21,638][09462] Updated weights for policy 0, policy_version 760 (0.0014) |
|
[2024-10-03 09:22:25,305][00216] Fps is (10 sec: 3685.9, 60 sec: 3822.9, 300 sec: 3804.5). Total num frames: 3121152. Throughput: 0: 960.1. Samples: 780536. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:25,308][00216] Avg episode reward: [(0, '20.506')] |
|
[2024-10-03 09:22:30,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3141632. Throughput: 0: 948.5. Samples: 785596. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:22:30,307][00216] Avg episode reward: [(0, '20.138')] |
|
[2024-10-03 09:22:32,803][09462] Updated weights for policy 0, policy_version 770 (0.0014) |
|
[2024-10-03 09:22:35,304][00216] Fps is (10 sec: 4096.5, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3162112. Throughput: 0: 950.5. Samples: 788876. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:35,306][00216] Avg episode reward: [(0, '20.997')] |
|
[2024-10-03 09:22:40,306][00216] Fps is (10 sec: 4095.2, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 3182592. Throughput: 0: 965.9. Samples: 795136. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:22:40,310][00216] Avg episode reward: [(0, '21.201')] |
|
[2024-10-03 09:22:44,264][09462] Updated weights for policy 0, policy_version 780 (0.0012) |
|
[2024-10-03 09:22:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3198976. Throughput: 0: 950.0. Samples: 799864. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:22:45,310][00216] Avg episode reward: [(0, '22.230')] |
|
[2024-10-03 09:22:50,304][00216] Fps is (10 sec: 3687.1, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3219456. Throughput: 0: 947.2. Samples: 803034. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:22:50,309][00216] Avg episode reward: [(0, '22.684')] |
|
[2024-10-03 09:22:53,528][09462] Updated weights for policy 0, policy_version 790 (0.0018) |
|
[2024-10-03 09:22:55,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3239936. Throughput: 0: 965.4. Samples: 809550. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:22:55,310][00216] Avg episode reward: [(0, '22.893')] |
|
[2024-10-03 09:23:00,304][00216] Fps is (10 sec: 3276.7, 60 sec: 3754.8, 300 sec: 3790.5). Total num frames: 3252224. Throughput: 0: 948.4. Samples: 814026. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:23:00,311][00216] Avg episode reward: [(0, '24.037')] |
|
[2024-10-03 09:23:00,323][09449] Saving new best policy, reward=24.037! |
|
[2024-10-03 09:23:04,913][09462] Updated weights for policy 0, policy_version 800 (0.0022) |
|
[2024-10-03 09:23:05,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3276800. Throughput: 0: 950.9. Samples: 817274. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:23:05,311][00216] Avg episode reward: [(0, '25.507')] |
|
[2024-10-03 09:23:05,321][09449] Saving new best policy, reward=25.507! |
|
[2024-10-03 09:23:10,304][00216] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3297280. Throughput: 0: 960.4. Samples: 823752. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:23:10,309][00216] Avg episode reward: [(0, '24.527')] |
|
[2024-10-03 09:23:15,304][00216] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3309568. Throughput: 0: 948.4. Samples: 828272. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:23:15,307][00216] Avg episode reward: [(0, '25.129')] |
|
[2024-10-03 09:23:16,445][09462] Updated weights for policy 0, policy_version 810 (0.0022) |
|
[2024-10-03 09:23:20,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3334144. Throughput: 0: 944.3. Samples: 831370. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:23:20,310][00216] Avg episode reward: [(0, '24.656')] |
|
[2024-10-03 09:23:25,304][00216] Fps is (10 sec: 4505.7, 60 sec: 3891.3, 300 sec: 3818.3). Total num frames: 3354624. Throughput: 0: 953.0. Samples: 838020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-10-03 09:23:25,311][00216] Avg episode reward: [(0, '25.126')] |
|
[2024-10-03 09:23:26,040][09462] Updated weights for policy 0, policy_version 820 (0.0013) |
|
[2024-10-03 09:23:30,309][00216] Fps is (10 sec: 3275.1, 60 sec: 3754.3, 300 sec: 3804.4). Total num frames: 3366912. Throughput: 0: 954.1. Samples: 842802. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:23:30,312][00216] Avg episode reward: [(0, '24.576')] |
|
[2024-10-03 09:23:35,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3391488. Throughput: 0: 948.3. Samples: 845708. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:23:35,311][00216] Avg episode reward: [(0, '23.205')] |
|
[2024-10-03 09:23:37,222][09462] Updated weights for policy 0, policy_version 830 (0.0016) |
|
[2024-10-03 09:23:40,304][00216] Fps is (10 sec: 4507.9, 60 sec: 3823.1, 300 sec: 3818.3). Total num frames: 3411968. Throughput: 0: 949.8. Samples: 852290. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:23:40,306][00216] Avg episode reward: [(0, '22.461')] |
|
[2024-10-03 09:23:45,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3428352. Throughput: 0: 961.7. Samples: 857302. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:23:45,308][00216] Avg episode reward: [(0, '21.827')] |
|
[2024-10-03 09:23:48,420][09462] Updated weights for policy 0, policy_version 840 (0.0014) |
|
[2024-10-03 09:23:50,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3444736. Throughput: 0: 945.6. Samples: 859828. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:23:50,315][00216] Avg episode reward: [(0, '22.270')] |
|
[2024-10-03 09:23:55,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3469312. Throughput: 0: 950.7. Samples: 866532. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:23:55,310][00216] Avg episode reward: [(0, '21.966')] |
|
[2024-10-03 09:23:58,551][09462] Updated weights for policy 0, policy_version 850 (0.0018) |
|
[2024-10-03 09:24:00,306][00216] Fps is (10 sec: 4095.3, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 3485696. Throughput: 0: 965.9. Samples: 871738. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:24:00,308][00216] Avg episode reward: [(0, '22.879')] |
|
[2024-10-03 09:24:05,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3502080. Throughput: 0: 949.2. Samples: 874086. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:24:05,312][00216] Avg episode reward: [(0, '23.150')] |
|
[2024-10-03 09:24:05,322][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000855_3502080.pth... |
|
[2024-10-03 09:24:05,454][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000632_2588672.pth |
|
[2024-10-03 09:24:09,161][09462] Updated weights for policy 0, policy_version 860 (0.0022) |
|
[2024-10-03 09:24:10,304][00216] Fps is (10 sec: 4096.7, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3526656. Throughput: 0: 947.5. Samples: 880656. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:24:10,310][00216] Avg episode reward: [(0, '23.843')] |
|
[2024-10-03 09:24:15,306][00216] Fps is (10 sec: 4095.1, 60 sec: 3891.1, 300 sec: 3818.3). Total num frames: 3543040. Throughput: 0: 965.4. Samples: 886244. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:24:15,311][00216] Avg episode reward: [(0, '24.499')] |
|
[2024-10-03 09:24:20,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3559424. Throughput: 0: 946.1. Samples: 888282. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:24:20,306][00216] Avg episode reward: [(0, '23.193')] |
|
[2024-10-03 09:24:20,747][09462] Updated weights for policy 0, policy_version 870 (0.0013) |
|
[2024-10-03 09:24:25,304][00216] Fps is (10 sec: 3687.2, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 3579904. Throughput: 0: 945.8. Samples: 894850. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2024-10-03 09:24:25,306][00216] Avg episode reward: [(0, '22.481')] |
|
[2024-10-03 09:24:30,304][00216] Fps is (10 sec: 4095.9, 60 sec: 3891.5, 300 sec: 3818.3). Total num frames: 3600384. Throughput: 0: 966.8. Samples: 900808. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:24:30,307][00216] Avg episode reward: [(0, '20.266')] |
|
[2024-10-03 09:24:30,692][09462] Updated weights for policy 0, policy_version 880 (0.0012) |
|
[2024-10-03 09:24:35,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3616768. Throughput: 0: 958.8. Samples: 902972. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:24:35,307][00216] Avg episode reward: [(0, '18.760')] |
|
[2024-10-03 09:24:40,304][00216] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3641344. Throughput: 0: 951.7. Samples: 909358. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:24:40,306][00216] Avg episode reward: [(0, '18.141')] |
|
[2024-10-03 09:24:41,078][09462] Updated weights for policy 0, policy_version 890 (0.0019) |
|
[2024-10-03 09:24:45,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3661824. Throughput: 0: 973.5. Samples: 915546. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-10-03 09:24:45,307][00216] Avg episode reward: [(0, '18.969')] |
|
[2024-10-03 09:24:50,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3674112. Throughput: 0: 967.0. Samples: 917602. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:24:50,311][00216] Avg episode reward: [(0, '20.090')] |
|
[2024-10-03 09:24:52,536][09462] Updated weights for policy 0, policy_version 900 (0.0012) |
|
[2024-10-03 09:24:55,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3698688. Throughput: 0: 951.6. Samples: 923476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:24:55,311][00216] Avg episode reward: [(0, '22.391')] |
|
[2024-10-03 09:25:00,304][00216] Fps is (10 sec: 4505.5, 60 sec: 3891.3, 300 sec: 3832.2). Total num frames: 3719168. Throughput: 0: 972.8. Samples: 930020. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:25:00,308][00216] Avg episode reward: [(0, '23.342')] |
|
[2024-10-03 09:25:03,005][09462] Updated weights for policy 0, policy_version 910 (0.0013) |
|
[2024-10-03 09:25:05,309][00216] Fps is (10 sec: 3275.1, 60 sec: 3822.6, 300 sec: 3818.2). Total num frames: 3731456. Throughput: 0: 975.1. Samples: 932168. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:05,312][00216] Avg episode reward: [(0, '24.824')] |
|
[2024-10-03 09:25:10,304][00216] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3756032. Throughput: 0: 954.8. Samples: 937814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:25:10,306][00216] Avg episode reward: [(0, '25.795')] |
|
[2024-10-03 09:25:10,311][09449] Saving new best policy, reward=25.795! |
|
[2024-10-03 09:25:13,139][09462] Updated weights for policy 0, policy_version 920 (0.0013) |
|
[2024-10-03 09:25:15,305][00216] Fps is (10 sec: 4507.4, 60 sec: 3891.3, 300 sec: 3832.2). Total num frames: 3776512. Throughput: 0: 967.9. Samples: 944366. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:15,307][00216] Avg episode reward: [(0, '24.036')] |
|
[2024-10-03 09:25:20,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3788800. Throughput: 0: 969.6. Samples: 946604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:25:20,309][00216] Avg episode reward: [(0, '21.926')] |
|
[2024-10-03 09:25:24,692][09462] Updated weights for policy 0, policy_version 930 (0.0014) |
|
[2024-10-03 09:25:25,304][00216] Fps is (10 sec: 3277.2, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 3809280. Throughput: 0: 945.2. Samples: 951894. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:25,306][00216] Avg episode reward: [(0, '21.190')] |
|
[2024-10-03 09:25:30,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3833856. Throughput: 0: 956.3. Samples: 958580. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:25:30,311][00216] Avg episode reward: [(0, '19.512')] |
|
[2024-10-03 09:25:35,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3846144. Throughput: 0: 968.4. Samples: 961182. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:35,307][00216] Avg episode reward: [(0, '19.261')] |
|
[2024-10-03 09:25:35,518][09462] Updated weights for policy 0, policy_version 940 (0.0016) |
|
[2024-10-03 09:25:40,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 3866624. Throughput: 0: 949.8. Samples: 966216. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:25:40,312][00216] Avg episode reward: [(0, '18.296')] |
|
[2024-10-03 09:25:45,125][09462] Updated weights for policy 0, policy_version 950 (0.0013) |
|
[2024-10-03 09:25:45,304][00216] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3891200. Throughput: 0: 953.0. Samples: 972906. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:45,306][00216] Avg episode reward: [(0, '20.107')] |
|
[2024-10-03 09:25:50,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3907584. Throughput: 0: 967.7. Samples: 975710. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-10-03 09:25:50,308][00216] Avg episode reward: [(0, '21.036')] |
|
[2024-10-03 09:25:55,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3923968. Throughput: 0: 948.6. Samples: 980502. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:25:55,306][00216] Avg episode reward: [(0, '21.894')] |
|
[2024-10-03 09:25:56,479][09462] Updated weights for policy 0, policy_version 960 (0.0014) |
|
[2024-10-03 09:26:00,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3948544. Throughput: 0: 950.2. Samples: 987124. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:26:00,306][00216] Avg episode reward: [(0, '21.736')] |
|
[2024-10-03 09:26:05,304][00216] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3832.2). Total num frames: 3964928. Throughput: 0: 968.2. Samples: 990174. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:26:05,307][00216] Avg episode reward: [(0, '22.074')] |
|
[2024-10-03 09:26:05,319][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000968_3964928.pth... |
|
[2024-10-03 09:26:05,444][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000744_3047424.pth |
|
[2024-10-03 09:26:08,092][09462] Updated weights for policy 0, policy_version 970 (0.0019) |
|
[2024-10-03 09:26:10,304][00216] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 3981312. Throughput: 0: 948.1. Samples: 994560. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-10-03 09:26:10,311][00216] Avg episode reward: [(0, '23.573')] |
|
[2024-10-03 09:26:15,304][00216] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 4001792. Throughput: 0: 947.9. Samples: 1001234. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-10-03 09:26:15,310][00216] Avg episode reward: [(0, '23.938')] |
|
[2024-10-03 09:26:15,381][00216] Component Batcher_0 stopped! |
|
[2024-10-03 09:26:15,386][00216] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2024-10-03 09:26:15,389][00216] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2024-10-03 09:26:15,395][00216] Component RolloutWorker_w6 process died already! Don't wait for it. |
|
[2024-10-03 09:26:15,394][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-10-03 09:26:15,394][09449] Stopping Batcher_0... |
|
[2024-10-03 09:26:15,409][09449] Loop batcher_evt_loop terminating... |
|
[2024-10-03 09:26:15,422][09462] Weights refcount: 2 0 |
|
[2024-10-03 09:26:15,424][09462] Stopping InferenceWorker_p0-w0... |
|
[2024-10-03 09:26:15,426][09462] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-10-03 09:26:15,425][00216] Component InferenceWorker_p0-w0 stopped! |
|
[2024-10-03 09:26:15,514][09449] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000855_3502080.pth |
|
[2024-10-03 09:26:15,530][09449] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-10-03 09:26:15,648][00216] Component RolloutWorker_w2 stopped! |
|
[2024-10-03 09:26:15,647][09465] Stopping RolloutWorker_w2... |
|
[2024-10-03 09:26:15,658][09465] Loop rollout_proc2_evt_loop terminating... |
|
[2024-10-03 09:26:15,676][00216] Component RolloutWorker_w0 stopped! |
|
[2024-10-03 09:26:15,681][09463] Stopping RolloutWorker_w0... |
|
[2024-10-03 09:26:15,682][09463] Loop rollout_proc0_evt_loop terminating... |
|
[2024-10-03 09:26:15,709][00216] Component LearnerWorker_p0 stopped! |
|
[2024-10-03 09:26:15,714][09449] Stopping LearnerWorker_p0... |
|
[2024-10-03 09:26:15,715][09449] Loop learner_proc0_evt_loop terminating... |
|
[2024-10-03 09:26:15,837][00216] Component RolloutWorker_w7 stopped! |
|
[2024-10-03 09:26:15,840][09469] Stopping RolloutWorker_w7... |
|
[2024-10-03 09:26:15,846][00216] Component RolloutWorker_w5 stopped! |
|
[2024-10-03 09:26:15,849][09467] Stopping RolloutWorker_w5... |
|
[2024-10-03 09:26:15,850][09469] Loop rollout_proc7_evt_loop terminating... |
|
[2024-10-03 09:26:15,853][09467] Loop rollout_proc5_evt_loop terminating... |
|
[2024-10-03 09:26:15,857][00216] Component RolloutWorker_w1 stopped! |
|
[2024-10-03 09:26:15,860][00216] Waiting for process learner_proc0 to stop... |
|
[2024-10-03 09:26:15,863][09464] Stopping RolloutWorker_w1... |
|
[2024-10-03 09:26:15,870][09464] Loop rollout_proc1_evt_loop terminating... |
|
[2024-10-03 09:26:17,139][00216] Waiting for process inference_proc0-0 to join... |
|
[2024-10-03 09:26:17,267][00216] Waiting for process rollout_proc0 to join... |
|
[2024-10-03 09:26:18,090][00216] Waiting for process rollout_proc1 to join... |
|
[2024-10-03 09:26:18,132][00216] Waiting for process rollout_proc2 to join... |
|
[2024-10-03 09:26:18,138][00216] Waiting for process rollout_proc3 to join... |
|
[2024-10-03 09:26:18,142][00216] Waiting for process rollout_proc4 to join... |
|
[2024-10-03 09:26:18,144][00216] Waiting for process rollout_proc5 to join... |
|
[2024-10-03 09:26:18,146][00216] Waiting for process rollout_proc6 to join... |
|
[2024-10-03 09:26:18,151][00216] Waiting for process rollout_proc7 to join... |
|
[2024-10-03 09:26:18,155][00216] Batcher 0 profile tree view: |
|
batching: 22.5877, releasing_batches: 0.0268 |
|
[2024-10-03 09:26:18,157][00216] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0049 |
|
wait_policy_total: 444.1572 |
|
update_model: 8.3800 |
|
weight_update: 0.0016 |
|
one_step: 0.0030 |
|
handle_policy_step: 570.8451 |
|
deserialize: 15.1433, stack: 3.3922, obs_to_device_normalize: 128.3878, forward: 286.4069, send_messages: 23.2790 |
|
prepare_outputs: 84.6899 |
|
to_cpu: 52.9584 |
|
[2024-10-03 09:26:18,161][00216] Learner 0 profile tree view: |
|
misc: 0.0053, prepare_batch: 15.0969 |
|
train: 69.3472 |
|
epoch_init: 0.0153, minibatch_init: 0.0117, losses_postprocess: 0.5703, kl_divergence: 0.5071, after_optimizer: 32.4746 |
|
calculate_losses: 22.1653 |
|
losses_init: 0.0086, forward_head: 1.5374, bptt_initial: 14.6789, tail: 0.9004, advantages_returns: 0.2286, losses: 2.6571 |
|
bptt: 1.9129 |
|
bptt_forward_core: 1.8519 |
|
update: 13.0868 |
|
clip: 1.4143 |
|
[2024-10-03 09:26:18,162][00216] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.4010, enqueue_policy_requests: 205.8720, env_step: 742.8072, overhead: 16.0736, complete_rollouts: 5.0101 |
|
save_policy_outputs: 27.8503 |
|
split_output_tensors: 9.8061 |
|
[2024-10-03 09:26:18,163][00216] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.4194, enqueue_policy_requests: 112.7153, env_step: 824.0847, overhead: 16.8831, complete_rollouts: 8.1720 |
|
save_policy_outputs: 30.6297 |
|
split_output_tensors: 10.7400 |
|
[2024-10-03 09:26:18,167][00216] Loop Runner_EvtLoop terminating... |
|
[2024-10-03 09:26:18,168][00216] Runner profile tree view: |
|
main_loop: 1088.7722 |
|
[2024-10-03 09:26:18,169][00216] Collected {0: 4005888}, FPS: 3679.3 |
|
[2024-10-03 09:26:33,315][00216] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-10-03 09:26:33,317][00216] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-10-03 09:26:33,320][00216] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-10-03 09:26:33,322][00216] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-10-03 09:26:33,324][00216] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-10-03 09:26:33,326][00216] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-10-03 09:26:33,327][00216] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-10-03 09:26:33,331][00216] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-10-03 09:26:33,332][00216] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-10-03 09:26:33,333][00216] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-10-03 09:26:33,334][00216] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-10-03 09:26:33,335][00216] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-10-03 09:26:33,336][00216] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-10-03 09:26:33,338][00216] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-10-03 09:26:33,339][00216] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-10-03 09:26:33,356][00216] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-10-03 09:26:33,358][00216] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-10-03 09:26:33,360][00216] RunningMeanStd input shape: (1,) |
|
[2024-10-03 09:26:33,374][00216] ConvEncoder: input_channels=3 |
|
[2024-10-03 09:26:33,497][00216] Conv encoder output size: 512 |
|
[2024-10-03 09:26:33,499][00216] Policy head output size: 512 |
|
[2024-10-03 09:26:35,130][00216] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-10-03 09:26:35,970][00216] Num frames 100... |
|
[2024-10-03 09:26:36,096][00216] Num frames 200... |
|
[2024-10-03 09:26:36,217][00216] Num frames 300... |
|
[2024-10-03 09:26:36,333][00216] Num frames 400... |
|
[2024-10-03 09:26:36,453][00216] Num frames 500... |
|
[2024-10-03 09:26:36,572][00216] Num frames 600... |
|
[2024-10-03 09:26:36,694][00216] Num frames 700... |
|
[2024-10-03 09:26:36,811][00216] Num frames 800... |
|
[2024-10-03 09:26:36,929][00216] Num frames 900... |
|
[2024-10-03 09:26:36,992][00216] Avg episode rewards: #0: 18.060, true rewards: #0: 9.060 |
|
[2024-10-03 09:26:36,993][00216] Avg episode reward: 18.060, avg true_objective: 9.060 |
|
[2024-10-03 09:26:37,125][00216] Num frames 1000... |
|
[2024-10-03 09:26:37,246][00216] Num frames 1100... |
|
[2024-10-03 09:26:37,363][00216] Num frames 1200... |
|
[2024-10-03 09:26:37,479][00216] Num frames 1300... |
|
[2024-10-03 09:26:37,595][00216] Num frames 1400... |
|
[2024-10-03 09:26:37,713][00216] Num frames 1500... |
|
[2024-10-03 09:26:37,831][00216] Num frames 1600... |
|
[2024-10-03 09:26:37,948][00216] Num frames 1700... |
|
[2024-10-03 09:26:38,071][00216] Num frames 1800... |
|
[2024-10-03 09:26:38,196][00216] Num frames 1900... |
|
[2024-10-03 09:26:38,319][00216] Num frames 2000... |
|
[2024-10-03 09:26:38,439][00216] Num frames 2100... |
|
[2024-10-03 09:26:38,534][00216] Avg episode rewards: #0: 24.665, true rewards: #0: 10.665 |
|
[2024-10-03 09:26:38,536][00216] Avg episode reward: 24.665, avg true_objective: 10.665 |
|
[2024-10-03 09:26:38,617][00216] Num frames 2200... |
|
[2024-10-03 09:26:38,734][00216] Num frames 2300... |
|
[2024-10-03 09:26:38,854][00216] Num frames 2400... |
|
[2024-10-03 09:26:38,972][00216] Num frames 2500... |
|
[2024-10-03 09:26:39,094][00216] Num frames 2600... |
|
[2024-10-03 09:26:39,222][00216] Num frames 2700... |
|
[2024-10-03 09:26:39,346][00216] Num frames 2800... |
|
[2024-10-03 09:26:39,464][00216] Num frames 2900... |
|
[2024-10-03 09:26:39,579][00216] Num frames 3000... |
|
[2024-10-03 09:26:39,698][00216] Num frames 3100... |
|
[2024-10-03 09:26:39,816][00216] Num frames 3200... |
|
[2024-10-03 09:26:39,934][00216] Num frames 3300... |
|
[2024-10-03 09:26:40,060][00216] Num frames 3400... |
|
[2024-10-03 09:26:40,178][00216] Num frames 3500... |
|
[2024-10-03 09:26:40,306][00216] Num frames 3600... |
|
[2024-10-03 09:26:40,430][00216] Num frames 3700... |
|
[2024-10-03 09:26:40,547][00216] Num frames 3800...
[2024-10-03 09:26:40,666][00216] Num frames 3900...
[2024-10-03 09:26:40,785][00216] Num frames 4000...
[2024-10-03 09:26:40,905][00216] Num frames 4100...
[2024-10-03 09:26:40,983][00216] Avg episode rewards: #0: 34.063, true rewards: #0: 13.730
[2024-10-03 09:26:40,985][00216] Avg episode reward: 34.063, avg true_objective: 13.730
[2024-10-03 09:26:41,121][00216] Num frames 4200...
[2024-10-03 09:26:41,293][00216] Num frames 4300...
[2024-10-03 09:26:41,450][00216] Num frames 4400...
[2024-10-03 09:26:41,609][00216] Num frames 4500...
[2024-10-03 09:26:41,769][00216] Avg episode rewards: #0: 26.917, true rewards: #0: 11.417
[2024-10-03 09:26:41,773][00216] Avg episode reward: 26.917, avg true_objective: 11.417
[2024-10-03 09:26:41,828][00216] Num frames 4600...
[2024-10-03 09:26:41,987][00216] Num frames 4700...
[2024-10-03 09:26:42,152][00216] Num frames 4800...
[2024-10-03 09:26:42,326][00216] Num frames 4900...
[2024-10-03 09:26:42,502][00216] Num frames 5000...
[2024-10-03 09:26:42,667][00216] Num frames 5100...
[2024-10-03 09:26:42,832][00216] Num frames 5200...
[2024-10-03 09:26:43,002][00216] Num frames 5300...
[2024-10-03 09:26:43,184][00216] Num frames 5400...
[2024-10-03 09:26:43,366][00216] Num frames 5500...
[2024-10-03 09:26:43,546][00216] Num frames 5600...
[2024-10-03 09:26:43,674][00216] Num frames 5700...
[2024-10-03 09:26:43,793][00216] Num frames 5800...
[2024-10-03 09:26:43,866][00216] Avg episode rewards: #0: 27.430, true rewards: #0: 11.630
[2024-10-03 09:26:43,868][00216] Avg episode reward: 27.430, avg true_objective: 11.630
[2024-10-03 09:26:43,967][00216] Num frames 5900...
[2024-10-03 09:26:44,090][00216] Num frames 6000...
[2024-10-03 09:26:44,210][00216] Num frames 6100...
[2024-10-03 09:26:44,325][00216] Num frames 6200...
[2024-10-03 09:26:44,452][00216] Num frames 6300...
[2024-10-03 09:26:44,572][00216] Num frames 6400...
[2024-10-03 09:26:44,689][00216] Num frames 6500...
[2024-10-03 09:26:44,806][00216] Num frames 6600...
[2024-10-03 09:26:44,927][00216] Num frames 6700...
[2024-10-03 09:26:45,033][00216] Avg episode rewards: #0: 25.905, true rewards: #0: 11.238
[2024-10-03 09:26:45,035][00216] Avg episode reward: 25.905, avg true_objective: 11.238
[2024-10-03 09:26:45,109][00216] Num frames 6800...
[2024-10-03 09:26:45,224][00216] Num frames 6900...
[2024-10-03 09:26:45,345][00216] Num frames 7000...
[2024-10-03 09:26:45,475][00216] Num frames 7100...
[2024-10-03 09:26:45,591][00216] Num frames 7200...
[2024-10-03 09:26:45,711][00216] Num frames 7300...
[2024-10-03 09:26:45,828][00216] Num frames 7400...
[2024-10-03 09:26:45,948][00216] Num frames 7500...
[2024-10-03 09:26:46,018][00216] Avg episode rewards: #0: 24.873, true rewards: #0: 10.730
[2024-10-03 09:26:46,019][00216] Avg episode reward: 24.873, avg true_objective: 10.730
[2024-10-03 09:26:46,142][00216] Num frames 7600...
[2024-10-03 09:26:46,276][00216] Num frames 7700...
[2024-10-03 09:26:46,396][00216] Num frames 7800...
[2024-10-03 09:26:46,519][00216] Num frames 7900...
[2024-10-03 09:26:46,634][00216] Num frames 8000...
[2024-10-03 09:26:46,758][00216] Num frames 8100...
[2024-10-03 09:26:46,835][00216] Avg episode rewards: #0: 23.024, true rewards: #0: 10.149
[2024-10-03 09:26:46,836][00216] Avg episode reward: 23.024, avg true_objective: 10.149
[2024-10-03 09:26:46,933][00216] Num frames 8200...
[2024-10-03 09:26:47,057][00216] Num frames 8300...
[2024-10-03 09:26:47,172][00216] Num frames 8400...
[2024-10-03 09:26:47,328][00216] Avg episode rewards: #0: 20.987, true rewards: #0: 9.431
[2024-10-03 09:26:47,330][00216] Avg episode reward: 20.987, avg true_objective: 9.431
[2024-10-03 09:26:47,347][00216] Num frames 8500...
[2024-10-03 09:26:47,470][00216] Num frames 8600...
[2024-10-03 09:26:47,591][00216] Num frames 8700...
[2024-10-03 09:26:47,708][00216] Num frames 8800...
[2024-10-03 09:26:47,823][00216] Num frames 8900...
[2024-10-03 09:26:47,937][00216] Num frames 9000...
[2024-10-03 09:26:48,058][00216] Num frames 9100...
[2024-10-03 09:26:48,174][00216] Num frames 9200...
[2024-10-03 09:26:48,289][00216] Num frames 9300...
[2024-10-03 09:26:48,406][00216] Num frames 9400...
[2024-10-03 09:26:48,522][00216] Avg episode rewards: #0: 20.848, true rewards: #0: 9.448
[2024-10-03 09:26:48,523][00216] Avg episode reward: 20.848, avg true_objective: 9.448
[2024-10-03 09:27:40,384][00216] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2024-10-03 09:31:28,106][00216] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-10-03 09:31:28,107][00216] Overriding arg 'num_workers' with value 1 passed from command line
[2024-10-03 09:31:28,109][00216] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-10-03 09:31:28,111][00216] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-10-03 09:31:28,113][00216] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-10-03 09:31:28,115][00216] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-10-03 09:31:28,116][00216] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-10-03 09:31:28,118][00216] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-10-03 09:31:28,119][00216] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-10-03 09:31:28,120][00216] Adding new argument 'hf_repository'='eloise54/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-10-03 09:31:28,121][00216] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-10-03 09:31:28,122][00216] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-10-03 09:31:28,126][00216] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-10-03 09:31:28,127][00216] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-10-03 09:31:28,128][00216] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-10-03 09:31:28,139][00216] RunningMeanStd input shape: (3, 72, 128)
[2024-10-03 09:31:28,147][00216] RunningMeanStd input shape: (1,)
[2024-10-03 09:31:28,165][00216] ConvEncoder: input_channels=3
[2024-10-03 09:31:28,200][00216] Conv encoder output size: 512
[2024-10-03 09:31:28,202][00216] Policy head output size: 512
[2024-10-03 09:31:28,220][00216] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
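The evaluation summary lines in this log follow a fixed format. As an illustrative helper (not part of Sample Factory itself; the function name and regex are assumptions), a few lines of Python can pull the reported averages out of a log like this one:

```python
import re

# Matches lines like:
# [2024-10-03 09:26:48,523][00216] Avg episode reward: 20.848, avg true_objective: 9.448
PATTERN = re.compile(
    r"Avg episode reward: (?P<reward>[\d.]+), avg true_objective: (?P<true>[\d.]+)"
)

def parse_eval_averages(log_text: str) -> list[tuple[float, float]]:
    """Return (avg_reward, avg_true_objective) pairs in log order."""
    return [
        (float(m.group("reward")), float(m.group("true")))
        for m in PATTERN.finditer(log_text)
    ]

sample = (
    "[2024-10-03 09:26:48,523][00216] "
    "Avg episode reward: 20.848, avg true_objective: 9.448"
)
print(parse_eval_averages(sample))  # → [(20.848, 9.448)]
```

Feeding the full log text in returns one pair per completed evaluation episode, which is convenient for plotting the running averages over time.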
[2024-10-03 09:31:28,681][00216] Num frames 100...
[2024-10-03 09:31:28,800][00216] Num frames 200...
[2024-10-03 09:31:29,000][00216] Num frames 300...
[2024-10-03 09:31:29,273][00216] Num frames 400...
[2024-10-03 09:31:29,444][00216] Num frames 500...
[2024-10-03 09:31:29,707][00216] Num frames 600...
[2024-10-03 09:31:29,895][00216] Num frames 700...
[2024-10-03 09:31:30,133][00216] Num frames 800...
[2024-10-03 09:31:30,370][00216] Num frames 900...
[2024-10-03 09:31:30,520][00216] Avg episode rewards: #0: 19.430, true rewards: #0: 9.430
[2024-10-03 09:31:30,522][00216] Avg episode reward: 19.430, avg true_objective: 9.430
[2024-10-03 09:31:30,636][00216] Num frames 1000...
[2024-10-03 09:31:30,836][00216] Num frames 1100...
[2024-10-03 09:31:31,060][00216] Num frames 1200...
[2024-10-03 09:31:31,322][00216] Num frames 1300...
[2024-10-03 09:31:31,644][00216] Num frames 1400...
[2024-10-03 09:31:31,873][00216] Avg episode rewards: #0: 14.935, true rewards: #0: 7.435
[2024-10-03 09:31:31,875][00216] Avg episode reward: 14.935, avg true_objective: 7.435
[2024-10-03 09:31:31,904][00216] Num frames 1500...
[2024-10-03 09:31:32,118][00216] Num frames 1600...
[2024-10-03 09:31:32,310][00216] Num frames 1700...
[2024-10-03 09:31:32,484][00216] Num frames 1800...
[2024-10-03 09:31:32,600][00216] Num frames 1900...
[2024-10-03 09:31:32,715][00216] Num frames 2000...
[2024-10-03 09:31:32,833][00216] Num frames 2100...
[2024-10-03 09:31:33,000][00216] Num frames 2200...
[2024-10-03 09:31:33,165][00216] Num frames 2300...
[2024-10-03 09:31:33,331][00216] Num frames 2400...
[2024-10-03 09:31:33,497][00216] Num frames 2500...
[2024-10-03 09:31:33,658][00216] Num frames 2600...
[2024-10-03 09:31:33,814][00216] Num frames 2700...
[2024-10-03 09:31:33,972][00216] Num frames 2800...
[2024-10-03 09:31:34,139][00216] Num frames 2900...
[2024-10-03 09:31:34,305][00216] Num frames 3000...
[2024-10-03 09:31:34,526][00216] Avg episode rewards: #0: 24.307, true rewards: #0: 10.307
[2024-10-03 09:31:34,529][00216] Avg episode reward: 24.307, avg true_objective: 10.307
[2024-10-03 09:31:34,544][00216] Num frames 3100...
[2024-10-03 09:31:34,711][00216] Num frames 3200...
[2024-10-03 09:31:34,879][00216] Num frames 3300...
[2024-10-03 09:31:35,047][00216] Num frames 3400...
[2024-10-03 09:31:35,211][00216] Num frames 3500...
[2024-10-03 09:31:35,348][00216] Num frames 3600...
[2024-10-03 09:31:35,474][00216] Num frames 3700...
[2024-10-03 09:31:35,598][00216] Num frames 3800...
[2024-10-03 09:31:35,734][00216] Num frames 3900...
[2024-10-03 09:31:35,853][00216] Num frames 4000...
[2024-10-03 09:31:35,970][00216] Num frames 4100...
[2024-10-03 09:31:36,097][00216] Num frames 4200...
[2024-10-03 09:31:36,213][00216] Num frames 4300...
[2024-10-03 09:31:36,329][00216] Num frames 4400...
[2024-10-03 09:31:36,452][00216] Num frames 4500...
[2024-10-03 09:31:36,582][00216] Num frames 4600...
[2024-10-03 09:31:36,706][00216] Num frames 4700...
[2024-10-03 09:31:36,826][00216] Num frames 4800...
[2024-10-03 09:31:36,949][00216] Num frames 4900...
[2024-10-03 09:31:37,073][00216] Num frames 5000...
[2024-10-03 09:31:37,192][00216] Num frames 5100...
[2024-10-03 09:31:37,355][00216] Avg episode rewards: #0: 32.730, true rewards: #0: 12.980
[2024-10-03 09:31:37,356][00216] Avg episode reward: 32.730, avg true_objective: 12.980
[2024-10-03 09:31:37,371][00216] Num frames 5200...
[2024-10-03 09:31:37,490][00216] Num frames 5300...
[2024-10-03 09:31:37,610][00216] Num frames 5400...
[2024-10-03 09:31:37,728][00216] Num frames 5500...
[2024-10-03 09:31:37,846][00216] Num frames 5600...
[2024-10-03 09:31:37,963][00216] Num frames 5700...
[2024-10-03 09:31:38,084][00216] Num frames 5800...
[2024-10-03 09:31:38,200][00216] Num frames 5900...
[2024-10-03 09:31:38,316][00216] Num frames 6000...
[2024-10-03 09:31:38,434][00216] Num frames 6100...
[2024-10-03 09:31:38,572][00216] Num frames 6200...
[2024-10-03 09:31:38,690][00216] Num frames 6300...
[2024-10-03 09:31:38,805][00216] Num frames 6400...
[2024-10-03 09:31:38,922][00216] Num frames 6500...
[2024-10-03 09:31:39,047][00216] Num frames 6600...
[2024-10-03 09:31:39,216][00216] Avg episode rewards: #0: 32.792, true rewards: #0: 13.392
[2024-10-03 09:31:39,217][00216] Avg episode reward: 32.792, avg true_objective: 13.392
[2024-10-03 09:31:39,226][00216] Num frames 6700...
[2024-10-03 09:31:39,347][00216] Num frames 6800...
[2024-10-03 09:31:39,465][00216] Num frames 6900...
[2024-10-03 09:31:39,588][00216] Num frames 7000...
[2024-10-03 09:31:39,702][00216] Num frames 7100...
[2024-10-03 09:31:39,818][00216] Num frames 7200...
[2024-10-03 09:31:39,931][00216] Num frames 7300...
[2024-10-03 09:31:40,050][00216] Num frames 7400...
[2024-10-03 09:31:40,171][00216] Num frames 7500...
[2024-10-03 09:31:40,259][00216] Avg episode rewards: #0: 29.880, true rewards: #0: 12.547
[2024-10-03 09:31:40,260][00216] Avg episode reward: 29.880, avg true_objective: 12.547
[2024-10-03 09:31:40,343][00216] Num frames 7600...
[2024-10-03 09:31:40,457][00216] Num frames 7700...
[2024-10-03 09:31:40,579][00216] Num frames 7800...
[2024-10-03 09:31:40,698][00216] Num frames 7900...
[2024-10-03 09:31:40,814][00216] Num frames 8000...
[2024-10-03 09:31:40,936][00216] Num frames 8100...
[2024-10-03 09:31:41,062][00216] Num frames 8200...
[2024-10-03 09:31:41,178][00216] Num frames 8300...
[2024-10-03 09:31:41,300][00216] Num frames 8400...
[2024-10-03 09:31:41,419][00216] Num frames 8500...
[2024-10-03 09:31:41,536][00216] Num frames 8600...
[2024-10-03 09:31:41,663][00216] Num frames 8700...
[2024-10-03 09:31:41,784][00216] Num frames 8800...
[2024-10-03 09:31:41,906][00216] Num frames 8900...
[2024-10-03 09:31:42,023][00216] Num frames 9000...
[2024-10-03 09:31:42,148][00216] Num frames 9100...
[2024-10-03 09:31:42,269][00216] Num frames 9200...
[2024-10-03 09:31:42,395][00216] Num frames 9300...
[2024-10-03 09:31:42,519][00216] Num frames 9400...
[2024-10-03 09:31:42,647][00216] Num frames 9500...
[2024-10-03 09:31:42,769][00216] Num frames 9600...
[2024-10-03 09:31:42,859][00216] Avg episode rewards: #0: 34.040, true rewards: #0: 13.754
[2024-10-03 09:31:42,861][00216] Avg episode reward: 34.040, avg true_objective: 13.754
[2024-10-03 09:31:42,944][00216] Num frames 9700...
[2024-10-03 09:31:43,072][00216] Num frames 9800...
[2024-10-03 09:31:43,192][00216] Num frames 9900...
[2024-10-03 09:31:43,312][00216] Num frames 10000...
[2024-10-03 09:31:43,431][00216] Num frames 10100...
[2024-10-03 09:31:43,550][00216] Num frames 10200...
[2024-10-03 09:31:43,679][00216] Num frames 10300...
[2024-10-03 09:31:43,799][00216] Num frames 10400...
[2024-10-03 09:31:43,919][00216] Num frames 10500...
[2024-10-03 09:31:44,048][00216] Num frames 10600...
[2024-10-03 09:31:44,209][00216] Avg episode rewards: #0: 32.980, true rewards: #0: 13.355
[2024-10-03 09:31:44,211][00216] Avg episode reward: 32.980, avg true_objective: 13.355
[2024-10-03 09:31:44,232][00216] Num frames 10700...
[2024-10-03 09:31:44,356][00216] Num frames 10800...
[2024-10-03 09:31:44,473][00216] Num frames 10900...
[2024-10-03 09:31:44,591][00216] Num frames 11000...
[2024-10-03 09:31:44,718][00216] Num frames 11100...
[2024-10-03 09:31:44,835][00216] Num frames 11200...
[2024-10-03 09:31:44,953][00216] Num frames 11300...
[2024-10-03 09:31:45,078][00216] Num frames 11400...
[2024-10-03 09:31:45,202][00216] Num frames 11500...
[2024-10-03 09:31:45,337][00216] Num frames 11600...
[2024-10-03 09:31:45,510][00216] Num frames 11700...
[2024-10-03 09:31:45,671][00216] Num frames 11800...
[2024-10-03 09:31:45,840][00216] Num frames 11900...
[2024-10-03 09:31:46,055][00216] Avg episode rewards: #0: 32.771, true rewards: #0: 13.327
[2024-10-03 09:31:46,057][00216] Avg episode reward: 32.771, avg true_objective: 13.327
[2024-10-03 09:31:46,071][00216] Num frames 12000...
[2024-10-03 09:31:46,231][00216] Num frames 12100...
[2024-10-03 09:31:46,400][00216] Num frames 12200...
[2024-10-03 09:31:46,560][00216] Num frames 12300...
[2024-10-03 09:31:46,730][00216] Num frames 12400...
[2024-10-03 09:31:46,905][00216] Num frames 12500...
[2024-10-03 09:31:47,079][00216] Avg episode rewards: #0: 30.470, true rewards: #0: 12.570
[2024-10-03 09:31:47,081][00216] Avg episode reward: 30.470, avg true_objective: 12.570
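The "Avg episode rewards" values above appear to be running means over the episodes completed so far in this evaluation run (e.g. after episode 1 the average equals that episode's value, and later averages are consistent with this). Under that assumption, the individual per-episode values can be recovered by differencing cumulative sums; a sketch using the avg true_objective values from this run:

```python
# Running means reported after each of the 10 evaluation episodes
# (the "avg true_objective" values from the run above).
running_means = [9.430, 7.435, 10.307, 12.980, 13.392,
                 12.547, 13.754, 13.355, 13.327, 12.570]

def per_episode(values):
    """Recover individual episode values from a list of running means."""
    out = []
    prev_total = 0.0
    for n, mean in enumerate(values, start=1):
        total = mean * n        # cumulative sum after n episodes
        out.append(total - prev_total)
        prev_total = total
    return out

episodes = [round(v, 2) for v in per_episode(running_means)]
print(episodes)
```

This makes the variance visible that the running mean hides: the individual episodes range from roughly 5 to 21, even though the final average is 12.57.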
[2024-10-03 09:32:56,169][00216] Replay video saved to /content/train_dir/default_experiment/replay.mp4!