[2025-02-18 12:47:55,339][00882] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-02-18 12:47:55,341][00882] Rollout worker 0 uses device cpu
[2025-02-18 12:47:55,343][00882] Rollout worker 1 uses device cpu
[2025-02-18 12:47:55,344][00882] Rollout worker 2 uses device cpu
[2025-02-18 12:47:55,346][00882] Rollout worker 3 uses device cpu
[2025-02-18 12:47:55,347][00882] Rollout worker 4 uses device cpu
[2025-02-18 12:47:55,348][00882] Rollout worker 5 uses device cpu
[2025-02-18 12:47:55,349][00882] Rollout worker 6 uses device cpu
[2025-02-18 12:47:55,351][00882] Rollout worker 7 uses device cpu
[2025-02-18 12:47:55,498][00882] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-18 12:47:55,499][00882] InferenceWorker_p0-w0: min num requests: 2
[2025-02-18 12:47:55,531][00882] Starting all processes...
[2025-02-18 12:47:55,533][00882] Starting process learner_proc0
[2025-02-18 12:47:55,595][00882] Starting all processes...
[2025-02-18 12:47:55,606][00882] Starting process inference_proc0-0
[2025-02-18 12:47:55,606][00882] Starting process rollout_proc0
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc1
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc2
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc3
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc4
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc5
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc6
[2025-02-18 12:47:55,607][00882] Starting process rollout_proc7
[2025-02-18 12:48:11,291][13920] Worker 2 uses CPU cores [0]
[2025-02-18 12:48:11,449][13925] Worker 7 uses CPU cores [1]
[2025-02-18 12:48:11,523][13904] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-18 12:48:11,524][13904] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-02-18 12:48:11,578][13921] Worker 3 uses CPU cores [1]
[2025-02-18 12:48:11,585][13917] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-18 12:48:11,590][13917] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
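The `Set environment var CUDA_VISIBLE_DEVICES` lines above show the standard pattern for pinning each subprocess to its GPU before CUDA initializes. A minimal sketch of that pattern (the helper name `pin_process_to_gpus` is illustrative, not Sample Factory's API):

```python
import os

def pin_process_to_gpus(gpu_indices):
    # Must run before the first CUDA call in this process; afterwards the
    # CUDA runtime sees only these devices, renumbered from 0 — which is why
    # the log says GPUs [0] "actually maps to" GPUs [0].
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_indices)

pin_process_to_gpus([0])
```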
[2025-02-18 12:48:11,602][13904] Num visible devices: 1
[2025-02-18 12:48:11,618][13904] Starting seed is not provided
[2025-02-18 12:48:11,619][13904] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-18 12:48:11,620][13904] Initializing actor-critic model on device cuda:0
[2025-02-18 12:48:11,621][13904] RunningMeanStd input shape: (3, 72, 128)
[2025-02-18 12:48:11,617][13923] Worker 5 uses CPU cores [1]
[2025-02-18 12:48:11,625][13904] RunningMeanStd input shape: (1,)
[2025-02-18 12:48:11,664][13917] Num visible devices: 1
[2025-02-18 12:48:11,685][13904] ConvEncoder: input_channels=3
[2025-02-18 12:48:11,738][13919] Worker 1 uses CPU cores [1]
[2025-02-18 12:48:11,856][13918] Worker 0 uses CPU cores [0]
[2025-02-18 12:48:11,931][13922] Worker 4 uses CPU cores [0]
[2025-02-18 12:48:11,949][13924] Worker 6 uses CPU cores [0]
[2025-02-18 12:48:12,040][13904] Conv encoder output size: 512
[2025-02-18 12:48:12,040][13904] Policy head output size: 512
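The two `RunningMeanStd input shape` lines refer to online normalizers for observations (3×72×128 pixels) and scalar returns. A self-contained sketch of the underlying idea, using Chan-et-al. parallel mean/variance updates (illustrative, not Sample Factory's in-place implementation):

```python
class RunningMeanStd:
    """Track a running mean and variance so inputs can be normalized online."""

    def __init__(self, epsilon=1e-4):
        # Small initial count avoids division by zero before the first update.
        self.mean, self.var, self.count = 0.0, 1.0, epsilon

    def update(self, batch):
        b_mean = sum(batch) / len(batch)
        b_var = sum((x - b_mean) ** 2 for x in batch) / len(batch)
        b_count = len(batch)
        delta = b_mean - self.mean
        total = self.count + b_count
        # Parallel-update formulas for combining two sample sets.
        self.mean += delta * b_count / total
        m_a = self.var * self.count
        m_b = b_var * b_count
        self.var = (m_a + m_b + delta ** 2 * self.count * b_count / total) / total
        self.count = total

rms = RunningMeanStd()
rms.update([1.0, 2.0, 3.0])
```

In practice the same statistics are kept per observation channel (hence the `(3, 72, 128)` and `(1,)` shapes) rather than as scalars.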
[2025-02-18 12:48:12,098][13904] Created Actor Critic model with architecture:
[2025-02-18 12:48:12,098][13904] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-02-18 12:48:12,340][13904] Using optimizer <class 'torch.optim.adam.Adam'>
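The `Conv encoder output size: 512` figure is the dimension after the conv stack's output is flattened and passed through `mlp_layers`. A sketch of the spatial-size arithmetic for the (3, 72, 128) input, assuming the common Atari-style stack of 8×8/stride-4, 4×4/stride-2, 3×3/stride-2 convolutions (the exact filter spec is a config detail not shown in this log):

```python
def conv_out(size, kernel, stride, padding=0):
    # Standard Conv2d output-size formula (dilation = 1).
    return (size + 2 * padding - kernel) // stride + 1

h, w = 72, 128
for kernel, stride in [(8, 4), (4, 2), (3, 2)]:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)

# The flattened conv output (channels * h * w) then feeds a Linear layer
# that projects to the 512-dim encoder output reported in the log.
print(h, w)
```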
[2025-02-18 12:48:15,493][00882] Heartbeat connected on Batcher_0
[2025-02-18 12:48:15,498][00882] Heartbeat connected on InferenceWorker_p0-w0
[2025-02-18 12:48:15,508][00882] Heartbeat connected on RolloutWorker_w0
[2025-02-18 12:48:15,516][00882] Heartbeat connected on RolloutWorker_w1
[2025-02-18 12:48:15,519][00882] Heartbeat connected on RolloutWorker_w2
[2025-02-18 12:48:15,520][00882] Heartbeat connected on RolloutWorker_w3
[2025-02-18 12:48:15,523][00882] Heartbeat connected on RolloutWorker_w4
[2025-02-18 12:48:15,527][00882] Heartbeat connected on RolloutWorker_w5
[2025-02-18 12:48:15,529][00882] Heartbeat connected on RolloutWorker_w6
[2025-02-18 12:48:15,532][00882] Heartbeat connected on RolloutWorker_w7
[2025-02-18 12:48:17,099][13904] No checkpoints found
[2025-02-18 12:48:17,100][13904] Did not load from checkpoint, starting from scratch!
[2025-02-18 12:48:17,100][13904] Initialized policy 0 weights for model version 0
[2025-02-18 12:48:17,104][13904] LearnerWorker_p0 finished initialization!
[2025-02-18 12:48:17,104][00882] Heartbeat connected on LearnerWorker_p0
[2025-02-18 12:48:17,115][13904] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-02-18 12:48:17,380][13917] RunningMeanStd input shape: (3, 72, 128)
[2025-02-18 12:48:17,382][13917] RunningMeanStd input shape: (1,)
[2025-02-18 12:48:17,400][13917] ConvEncoder: input_channels=3
[2025-02-18 12:48:17,556][13917] Conv encoder output size: 512
[2025-02-18 12:48:17,556][13917] Policy head output size: 512
[2025-02-18 12:48:17,608][00882] Inference worker 0-0 is ready!
[2025-02-18 12:48:17,610][00882] All inference workers are ready! Signal rollout workers to start!
[2025-02-18 12:48:18,177][13925] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,199][13921] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,212][13923] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,211][13919] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,246][13918] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,246][13920] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,256][13924] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:18,263][13922] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-02-18 12:48:19,593][13919] Decorrelating experience for 0 frames...
[2025-02-18 12:48:19,593][13924] Decorrelating experience for 0 frames...
[2025-02-18 12:48:19,593][13918] Decorrelating experience for 0 frames...
[2025-02-18 12:48:19,595][13923] Decorrelating experience for 0 frames...
[2025-02-18 12:48:19,825][00882] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
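Each `Fps is (10 sec: …, 60 sec: …, 300 sec: …)` line reports throughput over trailing time windows; the first report shows `nan` because there is no sample old enough to diff against. A sketch of that windowed calculation over (timestamp, total_frames) snapshots (illustrative, not Sample Factory's code):

```python
from collections import deque

class WindowedFps:
    def __init__(self):
        self.samples = deque()  # (time_sec, total_frames), oldest first

    def record(self, t, frames):
        self.samples.append((t, frames))
        # Keep ~300 s of history, the largest window reported in the log.
        while self.samples and t - self.samples[0][0] > 300:
            self.samples.popleft()

    def fps(self, window, now, frames):
        past = [(t, f) for t, f in self.samples if now - t >= window]
        if not past:
            return float("nan")  # no old-enough sample -> nan, as in the log
        t0, f0 = past[-1]  # newest sample at least `window` seconds old
        return (frames - f0) / (now - t0)

fps = WindowedFps()
fps.record(0.0, 0)
fps.record(10.0, 8192)
```

With these two snapshots, `fps.fps(10, 10.0, 8192)` reproduces the 819.2 figure in the 12:48:29 report below.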
[2025-02-18 12:48:20,365][13923] Decorrelating experience for 32 frames...
[2025-02-18 12:48:20,367][13919] Decorrelating experience for 32 frames...
[2025-02-18 12:48:20,381][13924] Decorrelating experience for 32 frames...
[2025-02-18 12:48:20,383][13918] Decorrelating experience for 32 frames...
[2025-02-18 12:48:21,380][13919] Decorrelating experience for 64 frames...
[2025-02-18 12:48:21,388][13923] Decorrelating experience for 64 frames...
[2025-02-18 12:48:21,414][13918] Decorrelating experience for 64 frames...
[2025-02-18 12:48:21,412][13924] Decorrelating experience for 64 frames...
[2025-02-18 12:48:21,892][13918] Decorrelating experience for 96 frames...
[2025-02-18 12:48:22,183][13923] Decorrelating experience for 96 frames...
[2025-02-18 12:48:22,195][13919] Decorrelating experience for 96 frames...
[2025-02-18 12:48:22,649][13924] Decorrelating experience for 96 frames...
[2025-02-18 12:48:24,825][00882] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 104.0. Samples: 520. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-02-18 12:48:24,828][00882] Avg episode reward: [(0, '2.422')]
[2025-02-18 12:48:25,571][13904] Signal inference workers to stop experience collection...
[2025-02-18 12:48:25,584][13917] InferenceWorker_p0-w0: stopping experience collection
[2025-02-18 12:48:27,485][13904] Signal inference workers to resume experience collection...
[2025-02-18 12:48:27,486][13917] InferenceWorker_p0-w0: resuming experience collection
[2025-02-18 12:48:29,825][00882] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 8192. Throughput: 0: 319.0. Samples: 3190. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-02-18 12:48:29,827][00882] Avg episode reward: [(0, '3.608')]
[2025-02-18 12:48:34,825][00882] Fps is (10 sec: 2867.2, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 28672. Throughput: 0: 344.9. Samples: 5174. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:48:34,826][00882] Avg episode reward: [(0, '4.173')]
[2025-02-18 12:48:37,732][13917] Updated weights for policy 0, policy_version 10 (0.0092)
[2025-02-18 12:48:39,825][00882] Fps is (10 sec: 3686.4, 60 sec: 2252.8, 300 sec: 2252.8). Total num frames: 45056. Throughput: 0: 573.3. Samples: 11466. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:48:39,828][00882] Avg episode reward: [(0, '4.582')]
[2025-02-18 12:48:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 61440. Throughput: 0: 650.6. Samples: 16264. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:48:44,830][00882] Avg episode reward: [(0, '4.529')]
[2025-02-18 12:48:49,258][13917] Updated weights for policy 0, policy_version 20 (0.0018)
[2025-02-18 12:48:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 81920. Throughput: 0: 630.3. Samples: 18908. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:48:49,826][00882] Avg episode reward: [(0, '4.552')]
[2025-02-18 12:48:54,825][00882] Fps is (10 sec: 4096.0, 60 sec: 2925.7, 300 sec: 2925.7). Total num frames: 102400. Throughput: 0: 720.6. Samples: 25222. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:48:54,826][00882] Avg episode reward: [(0, '4.576')]
[2025-02-18 12:48:54,832][13904] Saving new best policy, reward=4.576!
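The `Saving new best policy, reward=…` entries follow a simple pattern: the learner keeps the best average episode reward seen so far and snapshots the model whenever it improves. A minimal sketch (the `save_fn` callback stands in for the real checkpoint writer):

```python
class BestPolicyTracker:
    def __init__(self, save_fn):
        self.best = float("-inf")
        self.save_fn = save_fn

    def report(self, avg_reward):
        # Save only on strict improvement, mirroring the log's behavior.
        if avg_reward > self.best:
            self.best = avg_reward
            self.save_fn(avg_reward)
            return True  # corresponds to a "Saving new best policy" line
        return False

saved = []
tracker = BestPolicyTracker(saved.append)
for r in [4.576, 4.419, 4.583]:  # rewards from the surrounding log entries
    tracker.report(r)
```

Note how 4.419 triggers no save: only 4.576 and then 4.583 appear as "new best" events, just as in the log.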
[2025-02-18 12:48:59,825][00882] Fps is (10 sec: 3686.4, 60 sec: 2969.6, 300 sec: 2969.6). Total num frames: 118784. Throughput: 0: 747.7. Samples: 29906. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:48:59,830][00882] Avg episode reward: [(0, '4.419')]
[2025-02-18 12:49:00,735][13917] Updated weights for policy 0, policy_version 30 (0.0014)
[2025-02-18 12:49:04,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3094.8, 300 sec: 3094.8). Total num frames: 139264. Throughput: 0: 732.5. Samples: 32962. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:04,827][00882] Avg episode reward: [(0, '4.484')]
[2025-02-18 12:49:09,826][00882] Fps is (10 sec: 4095.6, 60 sec: 3194.8, 300 sec: 3194.8). Total num frames: 159744. Throughput: 0: 860.4. Samples: 39238. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:09,828][00882] Avg episode reward: [(0, '4.553')]
[2025-02-18 12:49:11,137][13917] Updated weights for policy 0, policy_version 40 (0.0014)
[2025-02-18 12:49:14,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3127.9, 300 sec: 3127.9). Total num frames: 172032. Throughput: 0: 904.1. Samples: 43876. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:14,827][00882] Avg episode reward: [(0, '4.583')]
[2025-02-18 12:49:14,829][13904] Saving new best policy, reward=4.583!
[2025-02-18 12:49:19,825][00882] Fps is (10 sec: 3277.1, 60 sec: 3208.5, 300 sec: 3208.5). Total num frames: 192512. Throughput: 0: 928.4. Samples: 46954. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:49:19,827][00882] Avg episode reward: [(0, '4.511')]
[2025-02-18 12:49:21,918][13917] Updated weights for policy 0, policy_version 50 (0.0016)
[2025-02-18 12:49:24,825][00882] Fps is (10 sec: 4095.9, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 212992. Throughput: 0: 928.6. Samples: 53254. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:49:24,828][00882] Avg episode reward: [(0, '4.627')]
[2025-02-18 12:49:24,839][13904] Saving new best policy, reward=4.627!
[2025-02-18 12:49:29,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3276.8). Total num frames: 229376. Throughput: 0: 924.9. Samples: 57886. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:49:29,826][00882] Avg episode reward: [(0, '4.716')]
[2025-02-18 12:49:29,837][13904] Saving new best policy, reward=4.716!
[2025-02-18 12:49:33,412][13917] Updated weights for policy 0, policy_version 60 (0.0012)
[2025-02-18 12:49:34,825][00882] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3331.4). Total num frames: 249856. Throughput: 0: 932.5. Samples: 60872. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:34,832][00882] Avg episode reward: [(0, '4.542')]
[2025-02-18 12:49:39,826][00882] Fps is (10 sec: 3685.8, 60 sec: 3686.3, 300 sec: 3327.9). Total num frames: 266240. Throughput: 0: 919.7. Samples: 66612. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:39,831][00882] Avg episode reward: [(0, '4.608')]
[2025-02-18 12:49:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3325.0). Total num frames: 282624. Throughput: 0: 928.8. Samples: 71704. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:44,833][00882] Avg episode reward: [(0, '4.690')]
[2025-02-18 12:49:44,868][13917] Updated weights for policy 0, policy_version 70 (0.0013)
[2025-02-18 12:49:49,825][00882] Fps is (10 sec: 4096.7, 60 sec: 3754.7, 300 sec: 3413.3). Total num frames: 307200. Throughput: 0: 930.4. Samples: 74828. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:49,829][00882] Avg episode reward: [(0, '4.485')]
[2025-02-18 12:49:49,838][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000075_307200.pth...
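Checkpoint files follow the pattern `checkpoint_<policy-version>_<env-frames>.pth`: the file saved above, `checkpoint_000000075_307200.pth`, is policy version 75 at 307200 environment frames. A sketch that parses that naming convention (the dict field names are my own labels):

```python
import re

CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")

def parse_checkpoint(path):
    """Extract the policy version and frame count from a checkpoint filename."""
    m = CKPT_RE.search(path)
    if m is None:
        raise ValueError(f"not a checkpoint path: {path}")
    return {"policy_version": int(m.group(1)), "env_frames": int(m.group(2))}

info = parse_checkpoint(
    "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000075_307200.pth"
)
```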
[2025-02-18 12:49:54,828][00882] Fps is (10 sec: 3685.1, 60 sec: 3617.9, 300 sec: 3362.9). Total num frames: 319488. Throughput: 0: 907.8. Samples: 80092. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:49:54,839][00882] Avg episode reward: [(0, '4.591')]
[2025-02-18 12:49:56,292][13917] Updated weights for policy 0, policy_version 80 (0.0013)
[2025-02-18 12:49:59,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3399.7). Total num frames: 339968. Throughput: 0: 927.2. Samples: 85598. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:49:59,831][00882] Avg episode reward: [(0, '4.656')]
[2025-02-18 12:50:04,825][00882] Fps is (10 sec: 4097.5, 60 sec: 3686.4, 300 sec: 3432.8). Total num frames: 360448. Throughput: 0: 927.9. Samples: 88710. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:04,827][00882] Avg episode reward: [(0, '4.539')]
[2025-02-18 12:50:06,753][13917] Updated weights for policy 0, policy_version 90 (0.0014)
[2025-02-18 12:50:09,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3425.7). Total num frames: 376832. Throughput: 0: 895.2. Samples: 93540. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:09,828][00882] Avg episode reward: [(0, '4.684')]
[2025-02-18 12:50:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3454.9). Total num frames: 397312. Throughput: 0: 924.4. Samples: 99486. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:14,827][00882] Avg episode reward: [(0, '4.763')]
[2025-02-18 12:50:14,831][13904] Saving new best policy, reward=4.763!
[2025-02-18 12:50:17,793][13917] Updated weights for policy 0, policy_version 100 (0.0013)
[2025-02-18 12:50:19,830][00882] Fps is (10 sec: 3684.5, 60 sec: 3686.1, 300 sec: 3447.3). Total num frames: 413696. Throughput: 0: 925.4. Samples: 102520. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:19,832][00882] Avg episode reward: [(0, '4.521')]
[2025-02-18 12:50:24,825][00882] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3440.6). Total num frames: 430080. Throughput: 0: 901.2. Samples: 107166. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:24,827][00882] Avg episode reward: [(0, '4.461')]
[2025-02-18 12:50:29,014][13917] Updated weights for policy 0, policy_version 110 (0.0012)
[2025-02-18 12:50:29,825][00882] Fps is (10 sec: 3688.1, 60 sec: 3686.4, 300 sec: 3465.8). Total num frames: 450560. Throughput: 0: 929.1. Samples: 113514. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:29,828][00882] Avg episode reward: [(0, '4.269')]
[2025-02-18 12:50:34,826][00882] Fps is (10 sec: 4095.8, 60 sec: 3686.3, 300 sec: 3489.2). Total num frames: 471040. Throughput: 0: 928.3. Samples: 116602. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:34,832][00882] Avg episode reward: [(0, '4.252')]
[2025-02-18 12:50:39,825][00882] Fps is (10 sec: 3686.6, 60 sec: 3686.5, 300 sec: 3481.6). Total num frames: 487424. Throughput: 0: 914.8. Samples: 121256. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:39,832][00882] Avg episode reward: [(0, '4.599')]
[2025-02-18 12:50:40,462][13917] Updated weights for policy 0, policy_version 120 (0.0012)
[2025-02-18 12:50:44,825][00882] Fps is (10 sec: 3686.8, 60 sec: 3754.7, 300 sec: 3502.8). Total num frames: 507904. Throughput: 0: 930.0. Samples: 127450. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:44,827][00882] Avg episode reward: [(0, '4.662')]
[2025-02-18 12:50:49,825][00882] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3495.2). Total num frames: 524288. Throughput: 0: 923.2. Samples: 130254. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:49,827][00882] Avg episode reward: [(0, '4.685')]
[2025-02-18 12:50:51,826][13917] Updated weights for policy 0, policy_version 130 (0.0017)
[2025-02-18 12:50:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.9, 300 sec: 3514.6). Total num frames: 544768. Throughput: 0: 929.6. Samples: 135372. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:50:54,829][00882] Avg episode reward: [(0, '4.642')]
[2025-02-18 12:50:59,825][00882] Fps is (10 sec: 4096.3, 60 sec: 3754.7, 300 sec: 3532.8). Total num frames: 565248. Throughput: 0: 935.8. Samples: 141598. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:50:59,829][00882] Avg episode reward: [(0, '4.648')]
[2025-02-18 12:51:02,079][13917] Updated weights for policy 0, policy_version 140 (0.0012)
[2025-02-18 12:51:04,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3500.2). Total num frames: 577536. Throughput: 0: 919.7. Samples: 143900. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:04,829][00882] Avg episode reward: [(0, '4.729')]
[2025-02-18 12:51:09,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3517.7). Total num frames: 598016. Throughput: 0: 942.4. Samples: 149574. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:51:09,829][00882] Avg episode reward: [(0, '4.906')]
[2025-02-18 12:51:09,893][13904] Saving new best policy, reward=4.906!
[2025-02-18 12:51:12,850][13917] Updated weights for policy 0, policy_version 150 (0.0013)
[2025-02-18 12:51:14,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3534.3). Total num frames: 618496. Throughput: 0: 931.9. Samples: 155448. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:51:14,830][00882] Avg episode reward: [(0, '4.891')]
[2025-02-18 12:51:19,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3527.1). Total num frames: 634880. Throughput: 0: 904.9. Samples: 157322. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:51:19,827][00882] Avg episode reward: [(0, '4.891')]
[2025-02-18 12:51:24,241][13917] Updated weights for policy 0, policy_version 160 (0.0018)
[2025-02-18 12:51:24,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3542.5). Total num frames: 655360. Throughput: 0: 938.8. Samples: 163502. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:51:24,827][00882] Avg episode reward: [(0, '4.968')]
[2025-02-18 12:51:24,830][13904] Saving new best policy, reward=4.968!
[2025-02-18 12:51:29,826][00882] Fps is (10 sec: 4095.6, 60 sec: 3754.6, 300 sec: 3557.0). Total num frames: 675840. Throughput: 0: 926.7. Samples: 169154. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:29,833][00882] Avg episode reward: [(0, '4.981')]
[2025-02-18 12:51:29,840][13904] Saving new best policy, reward=4.981!
[2025-02-18 12:51:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3549.9). Total num frames: 692224. Throughput: 0: 911.7. Samples: 171282. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:34,831][00882] Avg episode reward: [(0, '4.869')]
[2025-02-18 12:51:35,537][13917] Updated weights for policy 0, policy_version 170 (0.0027)
[2025-02-18 12:51:39,825][00882] Fps is (10 sec: 3686.8, 60 sec: 3754.7, 300 sec: 3563.5). Total num frames: 712704. Throughput: 0: 936.0. Samples: 177490. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:39,831][00882] Avg episode reward: [(0, '4.855')]
[2025-02-18 12:51:44,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3556.5). Total num frames: 729088. Throughput: 0: 911.2. Samples: 182604. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:44,829][00882] Avg episode reward: [(0, '5.021')]
[2025-02-18 12:51:44,836][13904] Saving new best policy, reward=5.021!
[2025-02-18 12:51:47,085][13917] Updated weights for policy 0, policy_version 180 (0.0013)
[2025-02-18 12:51:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3569.4). Total num frames: 749568. Throughput: 0: 919.0. Samples: 185256. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:51:49,829][00882] Avg episode reward: [(0, '4.978')]
[2025-02-18 12:51:49,837][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000183_749568.pth...
[2025-02-18 12:51:54,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3581.6). Total num frames: 770048. Throughput: 0: 932.4. Samples: 191532. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:54,831][00882] Avg episode reward: [(0, '4.756')]
[2025-02-18 12:51:57,266][13917] Updated weights for policy 0, policy_version 190 (0.0023)
[2025-02-18 12:51:59,825][00882] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3556.1). Total num frames: 782336. Throughput: 0: 906.6. Samples: 196246. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:51:59,833][00882] Avg episode reward: [(0, '5.195')]
[2025-02-18 12:51:59,841][13904] Saving new best policy, reward=5.195!
[2025-02-18 12:52:04,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3568.1). Total num frames: 802816. Throughput: 0: 933.1. Samples: 199312. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:52:04,831][00882] Avg episode reward: [(0, '5.496')]
[2025-02-18 12:52:04,836][13904] Saving new best policy, reward=5.496!
[2025-02-18 12:52:08,095][13917] Updated weights for policy 0, policy_version 200 (0.0018)
[2025-02-18 12:52:09,825][00882] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3579.5). Total num frames: 823296. Throughput: 0: 937.6. Samples: 205696. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:52:09,827][00882] Avg episode reward: [(0, '5.520')]
[2025-02-18 12:52:09,840][13904] Saving new best policy, reward=5.520!
[2025-02-18 12:52:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3573.1). Total num frames: 839680. Throughput: 0: 915.4. Samples: 210348. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:52:14,830][00882] Avg episode reward: [(0, '5.619')]
[2025-02-18 12:52:14,833][13904] Saving new best policy, reward=5.619!
[2025-02-18 12:52:19,337][13917] Updated weights for policy 0, policy_version 210 (0.0012)
[2025-02-18 12:52:19,825][00882] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3584.0). Total num frames: 860160. Throughput: 0: 936.8. Samples: 213436. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:52:19,828][00882] Avg episode reward: [(0, '5.689')]
[2025-02-18 12:52:19,838][13904] Saving new best policy, reward=5.689!
[2025-02-18 12:52:24,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3594.5). Total num frames: 880640. Throughput: 0: 937.2. Samples: 219666. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-18 12:52:24,830][00882] Avg episode reward: [(0, '6.052')]
[2025-02-18 12:52:24,832][13904] Saving new best policy, reward=6.052!
[2025-02-18 12:52:29,825][00882] Fps is (10 sec: 3686.5, 60 sec: 3686.5, 300 sec: 3588.1). Total num frames: 897024. Throughput: 0: 928.9. Samples: 224406. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:52:29,826][00882] Avg episode reward: [(0, '6.680')]
[2025-02-18 12:52:29,837][13904] Saving new best policy, reward=6.680!
[2025-02-18 12:52:30,811][13917] Updated weights for policy 0, policy_version 220 (0.0013)
[2025-02-18 12:52:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3598.1). Total num frames: 917504. Throughput: 0: 938.6. Samples: 227494. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:52:34,831][00882] Avg episode reward: [(0, '6.976')]
[2025-02-18 12:52:34,834][13904] Saving new best policy, reward=6.976!
[2025-02-18 12:52:39,825][00882] Fps is (10 sec: 3686.2, 60 sec: 3686.4, 300 sec: 3591.9). Total num frames: 933888. Throughput: 0: 925.2. Samples: 233166. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-02-18 12:52:39,830][00882] Avg episode reward: [(0, '6.782')]
[2025-02-18 12:52:42,333][13917] Updated weights for policy 0, policy_version 230 (0.0016)
[2025-02-18 12:52:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3585.9). Total num frames: 950272. Throughput: 0: 935.4. Samples: 238338. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:52:44,829][00882] Avg episode reward: [(0, '7.191')]
[2025-02-18 12:52:44,832][13904] Saving new best policy, reward=7.191!
[2025-02-18 12:52:49,825][00882] Fps is (10 sec: 3686.6, 60 sec: 3686.4, 300 sec: 3595.4). Total num frames: 970752. Throughput: 0: 935.8. Samples: 241422. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:52:49,831][00882] Avg episode reward: [(0, '7.801')]
[2025-02-18 12:52:49,841][13904] Saving new best policy, reward=7.801!
[2025-02-18 12:52:52,707][13917] Updated weights for policy 0, policy_version 240 (0.0012)
[2025-02-18 12:52:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3589.6). Total num frames: 987136. Throughput: 0: 910.4. Samples: 246662. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:52:54,826][00882] Avg episode reward: [(0, '8.704')]
[2025-02-18 12:52:54,832][13904] Saving new best policy, reward=8.704!
[2025-02-18 12:52:59,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3598.6). Total num frames: 1007616. Throughput: 0: 932.7. Samples: 252320. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:52:59,827][00882] Avg episode reward: [(0, '8.818')]
[2025-02-18 12:52:59,836][13904] Saving new best policy, reward=8.818!
[2025-02-18 12:53:03,533][13917] Updated weights for policy 0, policy_version 250 (0.0012)
[2025-02-18 12:53:04,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3607.4). Total num frames: 1028096. Throughput: 0: 932.2. Samples: 255384. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:53:04,827][00882] Avg episode reward: [(0, '9.386')]
[2025-02-18 12:53:04,832][13904] Saving new best policy, reward=9.386!
[2025-02-18 12:53:09,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3587.5). Total num frames: 1040384. Throughput: 0: 897.6. Samples: 260060. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:53:09,833][00882] Avg episode reward: [(0, '9.215')]
[2025-02-18 12:53:14,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 1060864. Throughput: 0: 930.0. Samples: 266256. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:53:14,835][00882] Avg episode reward: [(0, '7.937')]
[2025-02-18 12:53:14,940][13917] Updated weights for policy 0, policy_version 260 (0.0017)
[2025-02-18 12:53:19,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 1081344. Throughput: 0: 928.2. Samples: 269262. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:53:19,830][00882] Avg episode reward: [(0, '6.896')]
[2025-02-18 12:53:24,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1097728. Throughput: 0: 906.2. Samples: 273946. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:53:24,826][00882] Avg episode reward: [(0, '6.836')]
[2025-02-18 12:53:26,316][13917] Updated weights for policy 0, policy_version 270 (0.0015)
[2025-02-18 12:53:29,828][00882] Fps is (10 sec: 3685.3, 60 sec: 3686.2, 300 sec: 3693.3). Total num frames: 1118208. Throughput: 0: 929.9. Samples: 280188. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:53:29,830][00882] Avg episode reward: [(0, '7.103')]
[2025-02-18 12:53:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1134592. Throughput: 0: 930.5. Samples: 283296. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2025-02-18 12:53:34,828][00882] Avg episode reward: [(0, '7.842')]
[2025-02-18 12:53:37,572][13917] Updated weights for policy 0, policy_version 280 (0.0012)
[2025-02-18 12:53:39,825][00882] Fps is (10 sec: 3687.6, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1155072. Throughput: 0: 918.6. Samples: 287998. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:53:39,833][00882] Avg episode reward: [(0, '7.754')]
[2025-02-18 12:53:44,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1175552. Throughput: 0: 932.4. Samples: 294278. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2025-02-18 12:53:44,829][00882] Avg episode reward: [(0, '8.425')]
[2025-02-18 12:53:47,944][13917] Updated weights for policy 0, policy_version 290 (0.0017)
[2025-02-18 12:53:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1191936. Throughput: 0: 924.0. Samples: 296966. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-02-18 12:53:49,829][00882] Avg episode reward: [(0, '9.054')]
[2025-02-18 12:53:49,843][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth...
[2025-02-18 12:53:49,966][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000075_307200.pth
[2025-02-18 12:53:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1212416. Throughput: 0: 936.5. Samples: 302204. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:53:54,829][00882] Avg episode reward: [(0, '9.892')] |
|
[2025-02-18 12:53:54,832][13904] Saving new best policy, reward=9.892! |
|
[2025-02-18 12:53:58,547][13917] Updated weights for policy 0, policy_version 300 (0.0013) |
|
[2025-02-18 12:53:59,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1232896. Throughput: 0: 938.2. Samples: 308476. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-02-18 12:53:59,831][00882] Avg episode reward: [(0, '10.496')] |
|
[2025-02-18 12:53:59,839][13904] Saving new best policy, reward=10.496! |
|
[2025-02-18 12:54:04,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1245184. Throughput: 0: 917.9. Samples: 310568. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:04,829][00882] Avg episode reward: [(0, '10.509')] |
|
[2025-02-18 12:54:04,832][13904] Saving new best policy, reward=10.509! |
|
[2025-02-18 12:54:09,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1265664. Throughput: 0: 940.5. Samples: 316270. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:09,829][00882] Avg episode reward: [(0, '11.550')] |
|
[2025-02-18 12:54:09,839][13904] Saving new best policy, reward=11.550! |
|
[2025-02-18 12:54:10,045][13917] Updated weights for policy 0, policy_version 310 (0.0014) |
|
[2025-02-18 12:54:14,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1286144. Throughput: 0: 932.8. Samples: 322160. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:14,827][00882] Avg episode reward: [(0, '10.616')] |
|
[2025-02-18 12:54:19,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1302528. Throughput: 0: 905.7. Samples: 324054. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:54:19,827][00882] Avg episode reward: [(0, '10.874')] |
|
[2025-02-18 12:54:21,423][13917] Updated weights for policy 0, policy_version 320 (0.0014) |
|
[2025-02-18 12:54:24,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1323008. Throughput: 0: 941.1. Samples: 330346. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:24,830][00882] Avg episode reward: [(0, '11.045')] |
|
[2025-02-18 12:54:29,825][00882] Fps is (10 sec: 3686.2, 60 sec: 3686.6, 300 sec: 3693.3). Total num frames: 1339392. Throughput: 0: 925.5. Samples: 335926. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:54:29,832][00882] Avg episode reward: [(0, '11.203')] |
|
[2025-02-18 12:54:32,589][13917] Updated weights for policy 0, policy_version 330 (0.0013) |
|
[2025-02-18 12:54:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1359872. Throughput: 0: 915.4. Samples: 338158. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:54:34,829][00882] Avg episode reward: [(0, '10.755')] |
|
[2025-02-18 12:54:39,825][00882] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1380352. Throughput: 0: 939.0. Samples: 344460. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:39,829][00882] Avg episode reward: [(0, '10.585')] |
|
[2025-02-18 12:54:42,766][13917] Updated weights for policy 0, policy_version 340 (0.0015) |
|
[2025-02-18 12:54:44,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1396736. Throughput: 0: 912.6. Samples: 349544. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:44,829][00882] Avg episode reward: [(0, '10.651')] |
|
[2025-02-18 12:54:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.2). Total num frames: 1417216. Throughput: 0: 928.6. Samples: 352356. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:49,826][00882] Avg episode reward: [(0, '11.399')] |
|
[2025-02-18 12:54:53,658][13917] Updated weights for policy 0, policy_version 350 (0.0015) |
|
[2025-02-18 12:54:54,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1437696. Throughput: 0: 942.9. Samples: 358700. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:54:54,827][00882] Avg episode reward: [(0, '13.426')] |
|
[2025-02-18 12:54:54,830][13904] Saving new best policy, reward=13.426! |
|
[2025-02-18 12:54:59,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1449984. Throughput: 0: 913.0. Samples: 363246. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:54:59,828][00882] Avg episode reward: [(0, '13.917')] |
|
[2025-02-18 12:54:59,839][13904] Saving new best policy, reward=13.917! |
|
[2025-02-18 12:55:04,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1470464. Throughput: 0: 938.5. Samples: 366288. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:04,828][00882] Avg episode reward: [(0, '14.465')] |
|
[2025-02-18 12:55:04,833][13904] Saving new best policy, reward=14.465! |
|
[2025-02-18 12:55:05,191][13917] Updated weights for policy 0, policy_version 360 (0.0013) |
|
[2025-02-18 12:55:09,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1490944. Throughput: 0: 935.9. Samples: 372462. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:09,830][00882] Avg episode reward: [(0, '15.772')] |
|
[2025-02-18 12:55:09,840][13904] Saving new best policy, reward=15.772! |
|
[2025-02-18 12:55:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.3). Total num frames: 1507328. Throughput: 0: 913.4. Samples: 377030. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:55:14,831][00882] Avg episode reward: [(0, '15.984')] |
|
[2025-02-18 12:55:14,833][13904] Saving new best policy, reward=15.984! |
|
[2025-02-18 12:55:16,811][13917] Updated weights for policy 0, policy_version 370 (0.0013) |
|
[2025-02-18 12:55:19,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1527808. Throughput: 0: 932.0. Samples: 380100. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:19,831][00882] Avg episode reward: [(0, '15.576')] |
|
[2025-02-18 12:55:24,829][00882] Fps is (10 sec: 3684.7, 60 sec: 3686.1, 300 sec: 3707.2). Total num frames: 1544192. Throughput: 0: 928.7. Samples: 386258. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:24,833][00882] Avg episode reward: [(0, '15.030')] |
|
[2025-02-18 12:55:27,902][13917] Updated weights for policy 0, policy_version 380 (0.0020) |
|
[2025-02-18 12:55:29,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 1560576. Throughput: 0: 924.8. Samples: 391162. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:29,829][00882] Avg episode reward: [(0, '13.742')] |
|
[2025-02-18 12:55:34,825][00882] Fps is (10 sec: 4097.9, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1585152. Throughput: 0: 933.1. Samples: 394344. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:34,827][00882] Avg episode reward: [(0, '13.637')] |
|
[2025-02-18 12:55:38,314][13917] Updated weights for policy 0, policy_version 390 (0.0013) |
|
[2025-02-18 12:55:39,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1597440. Throughput: 0: 914.9. Samples: 399870. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:55:39,830][00882] Avg episode reward: [(0, '14.385')] |
|
[2025-02-18 12:55:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1617920. Throughput: 0: 930.8. Samples: 405134. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:55:44,827][00882] Avg episode reward: [(0, '15.057')] |
|
[2025-02-18 12:55:49,196][13917] Updated weights for policy 0, policy_version 400 (0.0012) |
|
[2025-02-18 12:55:49,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1638400. Throughput: 0: 933.6. Samples: 408298. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:55:49,826][00882] Avg episode reward: [(0, '15.736')] |
|
[2025-02-18 12:55:49,845][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000400_1638400.pth... |
|
[2025-02-18 12:55:49,947][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000183_749568.pth |
|
[2025-02-18 12:55:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1654784. Throughput: 0: 910.8. Samples: 413450. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:55:54,831][00882] Avg episode reward: [(0, '15.245')] |
|
[2025-02-18 12:55:59,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1675264. Throughput: 0: 939.1. Samples: 419290. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:55:59,830][00882] Avg episode reward: [(0, '13.712')] |
|
[2025-02-18 12:56:00,432][13917] Updated weights for policy 0, policy_version 410 (0.0019) |
|
[2025-02-18 12:56:04,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1695744. Throughput: 0: 940.7. Samples: 422432. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:04,827][00882] Avg episode reward: [(0, '13.349')] |
|
[2025-02-18 12:56:09,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1712128. Throughput: 0: 910.6. Samples: 427232. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:56:09,827][00882] Avg episode reward: [(0, '12.817')] |
|
[2025-02-18 12:56:11,722][13917] Updated weights for policy 0, policy_version 420 (0.0014) |
|
[2025-02-18 12:56:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1732608. Throughput: 0: 936.8. Samples: 433320. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:56:14,833][00882] Avg episode reward: [(0, '13.180')] |
|
[2025-02-18 12:56:19,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1748992. Throughput: 0: 936.6. Samples: 436492. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:56:19,827][00882] Avg episode reward: [(0, '13.526')] |
|
[2025-02-18 12:56:23,012][13917] Updated weights for policy 0, policy_version 430 (0.0015) |
|
[2025-02-18 12:56:24,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.7, 300 sec: 3693.4). Total num frames: 1765376. Throughput: 0: 918.4. Samples: 441198. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:24,833][00882] Avg episode reward: [(0, '13.945')] |
|
[2025-02-18 12:56:29,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1785856. Throughput: 0: 942.6. Samples: 447552. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:56:29,827][00882] Avg episode reward: [(0, '13.693')] |
|
[2025-02-18 12:56:33,042][13917] Updated weights for policy 0, policy_version 440 (0.0014) |
|
[2025-02-18 12:56:34,827][00882] Fps is (10 sec: 4095.2, 60 sec: 3686.3, 300 sec: 3707.2). Total num frames: 1806336. Throughput: 0: 939.6. Samples: 450580. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:56:34,831][00882] Avg episode reward: [(0, '13.175')] |
|
[2025-02-18 12:56:39,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1822720. Throughput: 0: 930.5. Samples: 455324. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:39,827][00882] Avg episode reward: [(0, '13.052')] |
|
[2025-02-18 12:56:44,323][13917] Updated weights for policy 0, policy_version 450 (0.0013) |
|
[2025-02-18 12:56:44,825][00882] Fps is (10 sec: 3687.1, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1843200. Throughput: 0: 935.6. Samples: 461394. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:44,826][00882] Avg episode reward: [(0, '13.847')] |
|
[2025-02-18 12:56:49,825][00882] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1859584. Throughput: 0: 924.1. Samples: 464016. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:49,831][00882] Avg episode reward: [(0, '14.169')] |
|
[2025-02-18 12:56:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1880064. Throughput: 0: 934.3. Samples: 469276. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:54,832][00882] Avg episode reward: [(0, '15.430')] |
|
[2025-02-18 12:56:55,607][13917] Updated weights for policy 0, policy_version 460 (0.0015) |
|
[2025-02-18 12:56:59,825][00882] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1900544. Throughput: 0: 939.2. Samples: 475584. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:56:59,829][00882] Avg episode reward: [(0, '16.001')] |
|
[2025-02-18 12:56:59,836][13904] Saving new best policy, reward=16.001! |
|
[2025-02-18 12:57:04,829][00882] Fps is (10 sec: 3275.3, 60 sec: 3617.9, 300 sec: 3693.3). Total num frames: 1912832. Throughput: 0: 914.4. Samples: 477646. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:04,832][00882] Avg episode reward: [(0, '15.119')] |
|
[2025-02-18 12:57:06,803][13917] Updated weights for policy 0, policy_version 470 (0.0013) |
|
[2025-02-18 12:57:09,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1937408. Throughput: 0: 939.1. Samples: 483458. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:09,832][00882] Avg episode reward: [(0, '15.534')] |
|
[2025-02-18 12:57:14,825][00882] Fps is (10 sec: 4097.9, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1953792. Throughput: 0: 928.4. Samples: 489330. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:14,832][00882] Avg episode reward: [(0, '14.394')] |
|
[2025-02-18 12:57:18,113][13917] Updated weights for policy 0, policy_version 480 (0.0018) |
|
[2025-02-18 12:57:19,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1970176. Throughput: 0: 903.2. Samples: 491224. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:19,829][00882] Avg episode reward: [(0, '15.502')] |
|
[2025-02-18 12:57:24,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1990656. Throughput: 0: 940.0. Samples: 497624. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:57:24,830][00882] Avg episode reward: [(0, '14.675')] |
|
[2025-02-18 12:57:27,712][13917] Updated weights for policy 0, policy_version 490 (0.0013) |
|
[2025-02-18 12:57:29,825][00882] Fps is (10 sec: 4095.8, 60 sec: 3754.6, 300 sec: 3707.2). Total num frames: 2011136. Throughput: 0: 926.5. Samples: 503088. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-02-18 12:57:29,829][00882] Avg episode reward: [(0, '16.404')] |
|
[2025-02-18 12:57:29,839][13904] Saving new best policy, reward=16.404! |
|
[2025-02-18 12:57:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 2027520. Throughput: 0: 921.0. Samples: 505460. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:57:34,826][00882] Avg episode reward: [(0, '16.568')] |
|
[2025-02-18 12:57:34,828][13904] Saving new best policy, reward=16.568! |
|
[2025-02-18 12:57:39,245][13917] Updated weights for policy 0, policy_version 500 (0.0017) |
|
[2025-02-18 12:57:39,825][00882] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2048000. Throughput: 0: 941.5. Samples: 511642. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:57:39,827][00882] Avg episode reward: [(0, '16.583')] |
|
[2025-02-18 12:57:39,837][13904] Saving new best policy, reward=16.583! |
|
[2025-02-18 12:57:44,827][00882] Fps is (10 sec: 3685.6, 60 sec: 3686.3, 300 sec: 3707.2). Total num frames: 2064384. Throughput: 0: 910.0. Samples: 516536. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:44,833][00882] Avg episode reward: [(0, '16.748')] |
|
[2025-02-18 12:57:44,842][13904] Saving new best policy, reward=16.748! |
|
[2025-02-18 12:57:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2084864. Throughput: 0: 926.0. Samples: 519314. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:57:49,831][00882] Avg episode reward: [(0, '16.659')] |
|
[2025-02-18 12:57:49,840][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000509_2084864.pth... |
|
[2025-02-18 12:57:49,960][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000291_1191936.pth |
|
[2025-02-18 12:57:50,790][13917] Updated weights for policy 0, policy_version 510 (0.0013) |
|
[2025-02-18 12:57:54,825][00882] Fps is (10 sec: 4096.9, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2105344. Throughput: 0: 934.3. Samples: 525502. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:57:54,832][00882] Avg episode reward: [(0, '17.481')] |
|
[2025-02-18 12:57:54,836][13904] Saving new best policy, reward=17.481! |
|
[2025-02-18 12:57:59,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2117632. Throughput: 0: 907.4. Samples: 530164. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:57:59,832][00882] Avg episode reward: [(0, '17.278')] |
|
[2025-02-18 12:58:02,050][13917] Updated weights for policy 0, policy_version 520 (0.0014) |
|
[2025-02-18 12:58:04,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3755.0, 300 sec: 3721.1). Total num frames: 2138112. Throughput: 0: 935.6. Samples: 533324. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:58:04,827][00882] Avg episode reward: [(0, '16.449')] |
|
[2025-02-18 12:58:09,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2158592. Throughput: 0: 934.7. Samples: 539684. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:09,830][00882] Avg episode reward: [(0, '16.167')] |
|
[2025-02-18 12:58:13,365][13917] Updated weights for policy 0, policy_version 530 (0.0018) |
|
[2025-02-18 12:58:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2174976. Throughput: 0: 914.5. Samples: 544238. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:58:14,827][00882] Avg episode reward: [(0, '15.797')] |
|
[2025-02-18 12:58:19,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2195456. Throughput: 0: 931.6. Samples: 547384. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:19,827][00882] Avg episode reward: [(0, '14.702')] |
|
[2025-02-18 12:58:23,197][13917] Updated weights for policy 0, policy_version 540 (0.0018) |
|
[2025-02-18 12:58:24,829][00882] Fps is (10 sec: 4094.1, 60 sec: 3754.4, 300 sec: 3721.1). Total num frames: 2215936. Throughput: 0: 928.8. Samples: 553442. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:24,834][00882] Avg episode reward: [(0, '15.264')] |
|
[2025-02-18 12:58:29,830][00882] Fps is (10 sec: 3684.3, 60 sec: 3686.1, 300 sec: 3721.0). Total num frames: 2232320. Throughput: 0: 929.3. Samples: 558360. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:29,833][00882] Avg episode reward: [(0, '15.475')] |
|
[2025-02-18 12:58:34,606][13917] Updated weights for policy 0, policy_version 550 (0.0019) |
|
[2025-02-18 12:58:34,825][00882] Fps is (10 sec: 3688.1, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2252800. Throughput: 0: 936.5. Samples: 561458. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:34,827][00882] Avg episode reward: [(0, '15.927')] |
|
[2025-02-18 12:58:39,825][00882] Fps is (10 sec: 3688.5, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2269184. Throughput: 0: 922.6. Samples: 567020. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:58:39,831][00882] Avg episode reward: [(0, '16.465')] |
|
[2025-02-18 12:58:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 2285568. Throughput: 0: 935.4. Samples: 572258. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:58:44,828][00882] Avg episode reward: [(0, '18.460')] |
|
[2025-02-18 12:58:44,835][13904] Saving new best policy, reward=18.460! |
|
[2025-02-18 12:58:46,090][13917] Updated weights for policy 0, policy_version 560 (0.0013) |
|
[2025-02-18 12:58:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2306048. Throughput: 0: 933.0. Samples: 575308. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:58:49,828][00882] Avg episode reward: [(0, '19.625')] |
|
[2025-02-18 12:58:49,837][13904] Saving new best policy, reward=19.625! |
|
[2025-02-18 12:58:54,825][00882] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2322432. Throughput: 0: 905.0. Samples: 580410. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:54,828][00882] Avg episode reward: [(0, '19.552')] |
|
[2025-02-18 12:58:57,391][13917] Updated weights for policy 0, policy_version 570 (0.0025) |
|
[2025-02-18 12:58:59,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2342912. Throughput: 0: 934.4. Samples: 586284. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:58:59,832][00882] Avg episode reward: [(0, '18.466')] |
|
[2025-02-18 12:59:04,825][00882] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2363392. Throughput: 0: 935.2. Samples: 589468. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:04,831][00882] Avg episode reward: [(0, '18.892')] |
|
[2025-02-18 12:59:08,509][13917] Updated weights for policy 0, policy_version 580 (0.0016) |
|
[2025-02-18 12:59:09,829][00882] Fps is (10 sec: 3685.0, 60 sec: 3686.2, 300 sec: 3707.2). Total num frames: 2379776. Throughput: 0: 907.9. Samples: 594296. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:59:09,830][00882] Avg episode reward: [(0, '19.077')] |
|
[2025-02-18 12:59:14,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2400256. Throughput: 0: 939.5. Samples: 600630. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:59:14,827][00882] Avg episode reward: [(0, '18.489')] |
|
[2025-02-18 12:59:18,207][13917] Updated weights for policy 0, policy_version 590 (0.0013) |
|
[2025-02-18 12:59:19,826][00882] Fps is (10 sec: 4097.0, 60 sec: 3754.6, 300 sec: 3721.1). Total num frames: 2420736. Throughput: 0: 941.8. Samples: 603842. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:19,830][00882] Avg episode reward: [(0, '19.208')] |
|
[2025-02-18 12:59:24,825][00882] Fps is (10 sec: 3686.3, 60 sec: 3686.7, 300 sec: 3721.1). Total num frames: 2437120. Throughput: 0: 924.7. Samples: 608632. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:24,832][00882] Avg episode reward: [(0, '19.416')] |
|
[2025-02-18 12:59:29,203][13917] Updated weights for policy 0, policy_version 600 (0.0012) |
|
[2025-02-18 12:59:29,825][00882] Fps is (10 sec: 3686.9, 60 sec: 3755.0, 300 sec: 3721.1). Total num frames: 2457600. Throughput: 0: 951.0. Samples: 615052. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:29,831][00882] Avg episode reward: [(0, '20.045')] |
|
[2025-02-18 12:59:29,839][13904] Saving new best policy, reward=20.045! |
|
[2025-02-18 12:59:34,825][00882] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 2473984. Throughput: 0: 947.5. Samples: 617944. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:34,827][00882] Avg episode reward: [(0, '19.296')] |
|
[2025-02-18 12:59:39,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2494464. Throughput: 0: 945.4. Samples: 622952. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:59:39,826][00882] Avg episode reward: [(0, '18.005')] |
|
[2025-02-18 12:59:40,410][13917] Updated weights for policy 0, policy_version 610 (0.0012) |
|
[2025-02-18 12:59:44,825][00882] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 2514944. Throughput: 0: 955.5. Samples: 629280. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:44,832][00882] Avg episode reward: [(0, '17.989')] |
|
[2025-02-18 12:59:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 2531328. Throughput: 0: 938.7. Samples: 631710. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 12:59:49,826][00882] Avg episode reward: [(0, '19.149')] |
|
[2025-02-18 12:59:49,835][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000618_2531328.pth... |
|
[2025-02-18 12:59:49,936][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000400_1638400.pth |
|
[2025-02-18 12:59:51,674][13917] Updated weights for policy 0, policy_version 620 (0.0013) |
|
[2025-02-18 12:59:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2551808. Throughput: 0: 953.9. Samples: 637216. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:54,827][00882] Avg episode reward: [(0, '18.441')] |
|
[2025-02-18 12:59:59,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2572288. Throughput: 0: 956.1. Samples: 643656. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 12:59:59,831][00882] Avg episode reward: [(0, '18.020')] |
|
[2025-02-18 13:00:02,286][13917] Updated weights for policy 0, policy_version 630 (0.0018) |
|
[2025-02-18 13:00:04,825][00882] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2588672. Throughput: 0: 926.8. Samples: 645546. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:00:04,830][00882] Avg episode reward: [(0, '18.949')] |
|
[2025-02-18 13:00:09,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3735.0). Total num frames: 2609152. Throughput: 0: 956.7. Samples: 651684. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:00:09,830][00882] Avg episode reward: [(0, '19.520')] |
|
[2025-02-18 13:00:12,264][13917] Updated weights for policy 0, policy_version 640 (0.0013) |
|
[2025-02-18 13:00:14,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2629632. Throughput: 0: 943.8. Samples: 657522. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:14,827][00882] Avg episode reward: [(0, '17.249')] |
|
[2025-02-18 13:00:19,826][00882] Fps is (10 sec: 3685.8, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 2646016. Throughput: 0: 925.3. Samples: 659582. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:19,832][00882] Avg episode reward: [(0, '18.727')] |
|
[2025-02-18 13:00:23,402][13917] Updated weights for policy 0, policy_version 650 (0.0028) |
|
[2025-02-18 13:00:24,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2666496. Throughput: 0: 957.4. Samples: 666036. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:00:24,828][00882] Avg episode reward: [(0, '19.029')] |
|
[2025-02-18 13:00:29,825][00882] Fps is (10 sec: 3686.9, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 2682880. Throughput: 0: 933.8. Samples: 671300. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:29,829][00882] Avg episode reward: [(0, '18.932')] |
|
[2025-02-18 13:00:34,769][13917] Updated weights for policy 0, policy_version 660 (0.0016) |
|
[2025-02-18 13:00:34,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2703360. Throughput: 0: 936.6. Samples: 673858. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:34,827][00882] Avg episode reward: [(0, '18.867')] |
|
[2025-02-18 13:00:39,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2723840. Throughput: 0: 954.4. Samples: 680166. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:39,831][00882] Avg episode reward: [(0, '19.747')] |
|
[2025-02-18 13:00:44,826][00882] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2736128. Throughput: 0: 921.7. Samples: 685132. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2025-02-18 13:00:44,829][00882] Avg episode reward: [(0, '19.244')] |
|
[2025-02-18 13:00:46,002][13917] Updated weights for policy 0, policy_version 670 (0.0013) |
|
[2025-02-18 13:00:49,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2760704. Throughput: 0: 946.4. Samples: 688136. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:49,827][00882] Avg episode reward: [(0, '19.077')] |
|
[2025-02-18 13:00:54,825][00882] Fps is (10 sec: 4505.8, 60 sec: 3823.0, 300 sec: 3748.9). Total num frames: 2781184. Throughput: 0: 953.2. Samples: 694578. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:54,831][00882] Avg episode reward: [(0, '19.354')] |
|
[2025-02-18 13:00:55,754][13917] Updated weights for policy 0, policy_version 680 (0.0012) |
|
[2025-02-18 13:00:59,825][00882] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 2793472. Throughput: 0: 931.2. Samples: 699428. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:00:59,833][00882] Avg episode reward: [(0, '19.030')] |
|
[2025-02-18 13:01:04,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3748.9). Total num frames: 2818048. Throughput: 0: 956.2. Samples: 702610. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:04,831][00882] Avg episode reward: [(0, '18.599')] |
|
[2025-02-18 13:01:06,594][13917] Updated weights for policy 0, policy_version 690 (0.0014) |
|
[2025-02-18 13:01:09,825][00882] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 2834432. Throughput: 0: 952.8. Samples: 708912. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:09,829][00882] Avg episode reward: [(0, '17.437')] |
|
[2025-02-18 13:01:14,826][00882] Fps is (10 sec: 3276.4, 60 sec: 3686.3, 300 sec: 3735.0). Total num frames: 2850816. Throughput: 0: 943.4. Samples: 713754. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:14,829][00882] Avg episode reward: [(0, '15.678')] |
|
[2025-02-18 13:01:17,812][13917] Updated weights for policy 0, policy_version 700 (0.0017) |
|
[2025-02-18 13:01:19,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3762.8). Total num frames: 2875392. Throughput: 0: 955.6. Samples: 716862. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:01:19,827][00882] Avg episode reward: [(0, '15.807')] |
|
[2025-02-18 13:01:24,825][00882] Fps is (10 sec: 4096.3, 60 sec: 3754.6, 300 sec: 3748.9). Total num frames: 2891776. Throughput: 0: 949.7. Samples: 722902. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:01:24,827][00882] Avg episode reward: [(0, '16.528')] |
|
[2025-02-18 13:01:28,867][13917] Updated weights for policy 0, policy_version 710 (0.0013) |
|
[2025-02-18 13:01:29,827][00882] Fps is (10 sec: 3276.0, 60 sec: 3754.5, 300 sec: 3735.0). Total num frames: 2908160. Throughput: 0: 954.9. Samples: 728104. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:29,833][00882] Avg episode reward: [(0, '18.131')] |
|
[2025-02-18 13:01:34,825][00882] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2932736. Throughput: 0: 960.2. Samples: 731346. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:34,830][00882] Avg episode reward: [(0, '17.932')] |
|
[2025-02-18 13:01:39,825][00882] Fps is (10 sec: 3687.3, 60 sec: 3686.4, 300 sec: 3735.0). Total num frames: 2945024. Throughput: 0: 932.7. Samples: 736550. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:39,831][00882] Avg episode reward: [(0, '17.747')] |
|
[2025-02-18 13:01:40,293][13917] Updated weights for policy 0, policy_version 720 (0.0016) |
|
[2025-02-18 13:01:44,825][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3748.9). Total num frames: 2965504. Throughput: 0: 953.2. Samples: 742324. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:44,827][00882] Avg episode reward: [(0, '18.118')] |
|
[2025-02-18 13:01:49,825][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2985984. Throughput: 0: 951.8. Samples: 745440. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2025-02-18 13:01:49,831][00882] Avg episode reward: [(0, '16.069')] |
|
[2025-02-18 13:01:49,839][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000729_2985984.pth... |
|
[2025-02-18 13:01:49,964][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000509_2084864.pth |
|
[2025-02-18 13:01:50,060][13917] Updated weights for policy 0, policy_version 730 (0.0013) |
|
[2025-02-18 13:01:54,825][00882] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3735.0). Total num frames: 3002368. Throughput: 0: 919.4. Samples: 750284. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:01:54,831][00882] Avg episode reward: [(0, '15.657')] |
|
[2025-02-18 13:01:55,430][13904] Stopping Batcher_0... |
|
[2025-02-18 13:01:55,431][13904] Loop batcher_evt_loop terminating... |
|
[2025-02-18 13:01:55,432][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth... |
|
[2025-02-18 13:01:55,430][00882] Component Batcher_0 stopped! |
|
[2025-02-18 13:01:55,433][00882] Component RolloutWorker_w2 process died already! Don't wait for it. |
|
[2025-02-18 13:01:55,439][00882] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2025-02-18 13:01:55,441][00882] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2025-02-18 13:01:55,446][00882] Component RolloutWorker_w7 process died already! Don't wait for it. |
|
[2025-02-18 13:01:55,498][13917] Weights refcount: 2 0 |
|
[2025-02-18 13:01:55,500][13917] Stopping InferenceWorker_p0-w0... |
|
[2025-02-18 13:01:55,501][13917] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-02-18 13:01:55,501][00882] Component InferenceWorker_p0-w0 stopped! |
|
[2025-02-18 13:01:55,541][13904] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000618_2531328.pth |
|
[2025-02-18 13:01:55,551][13904] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth... |
|
[2025-02-18 13:01:55,728][00882] Component LearnerWorker_p0 stopped! |
|
[2025-02-18 13:01:55,732][13904] Stopping LearnerWorker_p0... |
|
[2025-02-18 13:01:55,732][13904] Loop learner_proc0_evt_loop terminating... |
|
[2025-02-18 13:01:55,768][13918] Stopping RolloutWorker_w0... |
|
[2025-02-18 13:01:55,768][00882] Component RolloutWorker_w0 stopped! |
|
[2025-02-18 13:01:55,773][13924] Stopping RolloutWorker_w6... |
|
[2025-02-18 13:01:55,773][00882] Component RolloutWorker_w6 stopped! |
|
[2025-02-18 13:01:55,774][13924] Loop rollout_proc6_evt_loop terminating... |
|
[2025-02-18 13:01:55,769][13918] Loop rollout_proc0_evt_loop terminating... |
|
[2025-02-18 13:01:55,869][00882] Component RolloutWorker_w1 stopped! |
|
[2025-02-18 13:01:55,875][13919] Stopping RolloutWorker_w1... |
|
[2025-02-18 13:01:55,878][13919] Loop rollout_proc1_evt_loop terminating... |
|
[2025-02-18 13:01:55,886][00882] Component RolloutWorker_w5 stopped! |
|
[2025-02-18 13:01:55,892][00882] Waiting for process learner_proc0 to stop... |
|
[2025-02-18 13:01:55,895][13923] Stopping RolloutWorker_w5... |
|
[2025-02-18 13:01:55,896][13923] Loop rollout_proc5_evt_loop terminating... |
|
[2025-02-18 13:01:57,306][00882] Waiting for process inference_proc0-0 to join... |
|
[2025-02-18 13:01:57,314][00882] Waiting for process rollout_proc0 to join... |
|
[2025-02-18 13:01:58,192][00882] Waiting for process rollout_proc1 to join... |
|
[2025-02-18 13:01:58,222][00882] Waiting for process rollout_proc2 to join... |
|
[2025-02-18 13:01:58,223][00882] Waiting for process rollout_proc3 to join... |
|
[2025-02-18 13:01:58,225][00882] Waiting for process rollout_proc4 to join... |
|
[2025-02-18 13:01:58,226][00882] Waiting for process rollout_proc5 to join... |
|
[2025-02-18 13:01:58,228][00882] Waiting for process rollout_proc6 to join... |
|
[2025-02-18 13:01:58,229][00882] Waiting for process rollout_proc7 to join... |
|
[2025-02-18 13:01:58,230][00882] Batcher 0 profile tree view: |
|
batching: 16.8245, releasing_batches: 0.0257 |
|
[2025-02-18 13:01:58,231][00882] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0032 |
|
wait_policy_total: 324.3432 |
|
update_model: 7.0818 |
|
weight_update: 0.0017 |
|
one_step: 0.0025 |
|
handle_policy_step: 456.2621 |
|
deserialize: 11.0520, stack: 2.7888, obs_to_device_normalize: 103.0021, forward: 238.5585, send_messages: 17.1994 |
|
prepare_outputs: 63.5422 |
|
to_cpu: 39.7017 |
|
[2025-02-18 13:01:58,233][00882] Learner 0 profile tree view: |
|
misc: 0.0035, prepare_batch: 9.4729 |
|
train: 50.3736 |
|
epoch_init: 0.0034, minibatch_init: 0.0154, losses_postprocess: 0.4115, kl_divergence: 0.4006, after_optimizer: 24.2423 |
|
calculate_losses: 16.7474 |
|
losses_init: 0.0043, forward_head: 0.9914, bptt_initial: 11.4078, tail: 0.6082, advantages_returns: 0.1835, losses: 2.1894 |
|
bptt: 1.1977 |
|
bptt_forward_core: 1.1351 |
|
update: 8.1631 |
|
clip: 0.6785 |
|
[2025-02-18 13:01:58,234][00882] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3960, enqueue_policy_requests: 115.5666, env_step: 571.3800, overhead: 13.4035, complete_rollouts: 5.0337 |
|
save_policy_outputs: 19.3091 |
|
split_output_tensors: 7.4201 |
|
[2025-02-18 13:01:58,235][00882] Loop Runner_EvtLoop terminating... |
|
[2025-02-18 13:01:58,236][00882] Runner profile tree view: |
|
main_loop: 842.7059 |
|
[2025-02-18 13:01:58,238][00882] Collected {0: 3006464}, FPS: 3567.6 |
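The final summary line above is internally consistent: the reported overall FPS is the total number of collected environment frames divided by the runner's `main_loop` wall time from the profile tree. A minimal sketch checking this (the helper name is mine, not Sample Factory's):

```python
# Sanity-check the reported throughput: overall FPS equals total
# collected env frames divided by the runner's main_loop time.
def overall_fps(total_frames: int, main_loop_seconds: float) -> float:
    return total_frames / main_loop_seconds

# Values taken from the log above: Collected {0: 3006464}, main_loop: 842.7059
fps = overall_fps(3006464, 842.7059)
print(round(fps, 1))  # 3567.6, matching the reported FPS
```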
|
[2025-02-18 13:02:21,701][00882] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-18 13:02:21,705][00882] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-18 13:02:21,708][00882] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-18 13:02:21,711][00882] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-18 13:02:21,712][00882] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-18 13:02:21,714][00882] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-18 13:02:21,717][00882] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-18 13:02:21,718][00882] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-18 13:02:21,721][00882] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-02-18 13:02:21,723][00882] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-02-18 13:02:21,725][00882] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-18 13:02:21,728][00882] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-18 13:02:21,729][00882] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-18 13:02:21,730][00882] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-18 13:02:21,731][00882] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-02-18 13:02:21,766][00882] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:02:21,769][00882] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-18 13:02:21,771][00882] RunningMeanStd input shape: (1,) |
|
[2025-02-18 13:02:21,788][00882] ConvEncoder: input_channels=3 |
|
[2025-02-18 13:02:21,907][00882] Conv encoder output size: 512 |
|
[2025-02-18 13:02:21,909][00882] Policy head output size: 512 |
|
[2025-02-18 13:02:22,086][00882] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth... |
|
[2025-02-18 13:02:22,860][00882] Num frames 100... |
|
[2025-02-18 13:02:22,992][00882] Num frames 200... |
|
[2025-02-18 13:02:23,119][00882] Num frames 300... |
|
[2025-02-18 13:02:23,249][00882] Num frames 400... |
|
[2025-02-18 13:02:23,379][00882] Num frames 500... |
|
[2025-02-18 13:02:23,513][00882] Num frames 600... |
|
[2025-02-18 13:02:23,642][00882] Num frames 700... |
|
[2025-02-18 13:02:23,779][00882] Num frames 800... |
|
[2025-02-18 13:02:23,914][00882] Num frames 900... |
|
[2025-02-18 13:02:24,043][00882] Num frames 1000... |
|
[2025-02-18 13:02:24,178][00882] Num frames 1100... |
|
[2025-02-18 13:02:24,309][00882] Num frames 1200... |
|
[2025-02-18 13:02:24,440][00882] Num frames 1300... |
|
[2025-02-18 13:02:24,599][00882] Avg episode rewards: #0: 29.810, true rewards: #0: 13.810 |
|
[2025-02-18 13:02:24,600][00882] Avg episode reward: 29.810, avg true_objective: 13.810 |
|
[2025-02-18 13:02:24,627][00882] Num frames 1400... |
|
[2025-02-18 13:02:24,757][00882] Num frames 1500... |
|
[2025-02-18 13:02:24,903][00882] Num frames 1600... |
|
[2025-02-18 13:02:25,038][00882] Num frames 1700... |
|
[2025-02-18 13:02:25,164][00882] Num frames 1800... |
|
[2025-02-18 13:02:25,290][00882] Num frames 1900... |
|
[2025-02-18 13:02:25,381][00882] Avg episode rewards: #0: 18.625, true rewards: #0: 9.625 |
|
[2025-02-18 13:02:25,382][00882] Avg episode reward: 18.625, avg true_objective: 9.625 |
|
[2025-02-18 13:02:25,478][00882] Num frames 2000... |
|
[2025-02-18 13:02:25,604][00882] Num frames 2100... |
|
[2025-02-18 13:02:25,730][00882] Num frames 2200... |
|
[2025-02-18 13:02:25,873][00882] Num frames 2300... |
|
[2025-02-18 13:02:26,006][00882] Num frames 2400... |
|
[2025-02-18 13:02:26,134][00882] Num frames 2500... |
|
[2025-02-18 13:02:26,264][00882] Num frames 2600... |
|
[2025-02-18 13:02:26,396][00882] Num frames 2700... |
|
[2025-02-18 13:02:26,525][00882] Num frames 2800... |
|
[2025-02-18 13:02:26,692][00882] Num frames 2900... |
|
[2025-02-18 13:02:26,892][00882] Num frames 3000... |
|
[2025-02-18 13:02:26,974][00882] Avg episode rewards: #0: 19.710, true rewards: #0: 10.043 |
|
[2025-02-18 13:02:26,976][00882] Avg episode reward: 19.710, avg true_objective: 10.043 |
|
[2025-02-18 13:02:27,129][00882] Num frames 3100... |
|
[2025-02-18 13:02:27,299][00882] Num frames 3200... |
|
[2025-02-18 13:02:27,469][00882] Num frames 3300... |
|
[2025-02-18 13:02:27,635][00882] Num frames 3400... |
|
[2025-02-18 13:02:27,803][00882] Num frames 3500... |
|
[2025-02-18 13:02:28,005][00882] Num frames 3600... |
|
[2025-02-18 13:02:28,164][00882] Avg episode rewards: #0: 17.633, true rewards: #0: 9.132 |
|
[2025-02-18 13:02:28,166][00882] Avg episode reward: 17.633, avg true_objective: 9.132 |
|
[2025-02-18 13:02:28,253][00882] Num frames 3700... |
|
[2025-02-18 13:02:28,437][00882] Num frames 3800... |
|
[2025-02-18 13:02:28,615][00882] Num frames 3900... |
|
[2025-02-18 13:02:28,763][00882] Num frames 4000... |
|
[2025-02-18 13:02:28,899][00882] Num frames 4100... |
|
[2025-02-18 13:02:28,958][00882] Avg episode rewards: #0: 15.202, true rewards: #0: 8.202 |
|
[2025-02-18 13:02:28,960][00882] Avg episode reward: 15.202, avg true_objective: 8.202 |
|
[2025-02-18 13:02:29,091][00882] Num frames 4200... |
|
[2025-02-18 13:02:29,221][00882] Num frames 4300... |
|
[2025-02-18 13:02:29,352][00882] Num frames 4400... |
|
[2025-02-18 13:02:29,482][00882] Num frames 4500... |
|
[2025-02-18 13:02:29,612][00882] Num frames 4600... |
|
[2025-02-18 13:02:29,742][00882] Num frames 4700... |
|
[2025-02-18 13:02:29,882][00882] Num frames 4800... |
|
[2025-02-18 13:02:30,018][00882] Num frames 4900... |
|
[2025-02-18 13:02:30,150][00882] Num frames 5000... |
|
[2025-02-18 13:02:30,244][00882] Avg episode rewards: #0: 15.882, true rewards: #0: 8.382 |
|
[2025-02-18 13:02:30,246][00882] Avg episode reward: 15.882, avg true_objective: 8.382 |
|
[2025-02-18 13:02:30,339][00882] Num frames 5100... |
|
[2025-02-18 13:02:30,469][00882] Num frames 5200... |
|
[2025-02-18 13:02:30,608][00882] Num frames 5300... |
|
[2025-02-18 13:02:30,737][00882] Num frames 5400... |
|
[2025-02-18 13:02:30,875][00882] Num frames 5500... |
|
[2025-02-18 13:02:31,040][00882] Avg episode rewards: #0: 14.676, true rewards: #0: 7.961 |
|
[2025-02-18 13:02:31,042][00882] Avg episode reward: 14.676, avg true_objective: 7.961 |
|
[2025-02-18 13:02:31,079][00882] Num frames 5600... |
|
[2025-02-18 13:02:31,208][00882] Num frames 5700... |
|
[2025-02-18 13:02:31,339][00882] Num frames 5800... |
|
[2025-02-18 13:02:31,463][00882] Num frames 5900... |
|
[2025-02-18 13:02:31,632][00882] Avg episode rewards: #0: 13.611, true rewards: #0: 7.486 |
|
[2025-02-18 13:02:31,635][00882] Avg episode reward: 13.611, avg true_objective: 7.486 |
|
[2025-02-18 13:02:31,651][00882] Num frames 6000... |
|
[2025-02-18 13:02:31,775][00882] Num frames 6100... |
|
[2025-02-18 13:02:31,915][00882] Num frames 6200... |
|
[2025-02-18 13:02:32,051][00882] Num frames 6300... |
|
[2025-02-18 13:02:32,187][00882] Num frames 6400... |
|
[2025-02-18 13:02:32,310][00882] Avg episode rewards: #0: 12.837, true rewards: #0: 7.170 |
|
[2025-02-18 13:02:32,312][00882] Avg episode reward: 12.837, avg true_objective: 7.170 |
|
[2025-02-18 13:02:32,375][00882] Num frames 6500... |
|
[2025-02-18 13:02:32,502][00882] Num frames 6600... |
|
[2025-02-18 13:02:32,630][00882] Num frames 6700... |
|
[2025-02-18 13:02:32,755][00882] Num frames 6800... |
|
[2025-02-18 13:02:32,889][00882] Num frames 6900... |
|
[2025-02-18 13:02:33,023][00882] Num frames 7000... |
|
[2025-02-18 13:02:33,160][00882] Num frames 7100... |
|
[2025-02-18 13:02:33,339][00882] Avg episode rewards: #0: 12.789, true rewards: #0: 7.189 |
|
[2025-02-18 13:02:33,340][00882] Avg episode reward: 12.789, avg true_objective: 7.189 |
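The evaluation loop above prints a running average after each episode, so individual episode returns can be recovered from consecutive averages via r_n = n * avg_n - (n - 1) * avg_{n-1}. A small sketch using the ten true-reward averages from this evaluation (the function name is mine):

```python
# Recover per-episode true rewards from the running averages printed
# after each episode: r_n = n * avg_n - (n - 1) * avg_{n-1}.
def per_episode(running_avgs):
    rewards = []
    prev_sum = 0.0
    for n, avg in enumerate(running_avgs, start=1):
        total = n * avg
        rewards.append(total - prev_sum)
        prev_sum = total
    return rewards

# Running true-reward averages from the 10-episode evaluation above.
avgs = [13.810, 9.625, 10.043, 9.132, 8.202,
        8.382, 7.961, 7.486, 7.170, 7.189]
episode_rewards = per_episode(avgs)
# episode_rewards[0] is 13.810; by telescoping, the list sums to 10 * 7.189.
```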
|
[2025-02-18 13:03:16,631][00882] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-18 13:04:32,401][00882] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-18 13:04:32,403][00882] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-18 13:04:32,405][00882] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-18 13:04:32,407][00882] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-18 13:04:32,408][00882] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-18 13:04:32,410][00882] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-18 13:04:32,412][00882] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2025-02-18 13:04:32,413][00882] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-18 13:04:32,414][00882] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2025-02-18 13:04:32,415][00882] Adding new argument 'hf_repository'='GatinhoEducado/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2025-02-18 13:04:32,416][00882] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-18 13:04:32,417][00882] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-18 13:04:32,418][00882] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-18 13:04:32,418][00882] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-18 13:04:32,419][00882] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-02-18 13:04:32,451][00882] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-18 13:04:32,452][00882] RunningMeanStd input shape: (1,) |
|
[2025-02-18 13:04:32,465][00882] ConvEncoder: input_channels=3 |
|
[2025-02-18 13:04:32,502][00882] Conv encoder output size: 512 |
|
[2025-02-18 13:04:32,504][00882] Policy head output size: 512 |
|
[2025-02-18 13:04:32,523][00882] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth... |
|
[2025-02-18 13:04:32,961][00882] Num frames 100... |
|
[2025-02-18 13:04:33,097][00882] Num frames 200... |
|
[2025-02-18 13:04:33,222][00882] Num frames 300... |
|
[2025-02-18 13:04:33,350][00882] Num frames 400... |
|
[2025-02-18 13:04:33,473][00882] Num frames 500... |
|
[2025-02-18 13:04:33,600][00882] Num frames 600... |
|
[2025-02-18 13:04:33,735][00882] Num frames 700... |
|
[2025-02-18 13:04:33,797][00882] Avg episode rewards: #0: 12.040, true rewards: #0: 7.040 |
|
[2025-02-18 13:04:33,798][00882] Avg episode reward: 12.040, avg true_objective: 7.040 |
|
[2025-02-18 13:04:33,936][00882] Num frames 800... |
|
[2025-02-18 13:04:34,061][00882] Num frames 900... |
|
[2025-02-18 13:04:34,195][00882] Num frames 1000... |
|
[2025-02-18 13:04:34,325][00882] Num frames 1100... |
|
[2025-02-18 13:04:34,449][00882] Num frames 1200... |
|
[2025-02-18 13:04:34,577][00882] Num frames 1300... |
|
[2025-02-18 13:04:34,648][00882] Avg episode rewards: #0: 12.560, true rewards: #0: 6.560 |
|
[2025-02-18 13:04:34,650][00882] Avg episode reward: 12.560, avg true_objective: 6.560 |
|
[2025-02-18 13:04:34,760][00882] Num frames 1400... |
|
[2025-02-18 13:04:34,893][00882] Num frames 1500... |
|
[2025-02-18 13:04:35,027][00882] Num frames 1600... |
|
[2025-02-18 13:04:35,213][00882] Avg episode rewards: #0: 10.320, true rewards: #0: 5.653 |
|
[2025-02-18 13:04:35,215][00882] Avg episode reward: 10.320, avg true_objective: 5.653 |
|
[2025-02-18 13:04:35,225][00882] Num frames 1700... |
|
[2025-02-18 13:04:35,358][00882] Num frames 1800... |
|
[2025-02-18 13:04:35,487][00882] Num frames 1900... |
|
[2025-02-18 13:04:35,612][00882] Num frames 2000... |
|
[2025-02-18 13:04:35,742][00882] Num frames 2100... |
|
[2025-02-18 13:04:35,876][00882] Num frames 2200... |
|
[2025-02-18 13:04:36,002][00882] Num frames 2300... |
|
[2025-02-18 13:04:36,129][00882] Num frames 2400... |
|
[2025-02-18 13:04:36,301][00882] Avg episode rewards: #0: 11.468, true rewards: #0: 6.217 |
|
[2025-02-18 13:04:36,302][00882] Avg episode reward: 11.468, avg true_objective: 6.217 |
|
[2025-02-18 13:04:36,324][00882] Num frames 2500... |
|
[2025-02-18 13:04:36,446][00882] Num frames 2600... |
|
[2025-02-18 13:04:36,573][00882] Num frames 2700... |
|
[2025-02-18 13:04:36,698][00882] Num frames 2800... |
|
[2025-02-18 13:04:36,831][00882] Num frames 2900... |
|
[2025-02-18 13:04:36,966][00882] Num frames 3000... |
|
[2025-02-18 13:04:37,098][00882] Num frames 3100... |
|
[2025-02-18 13:04:37,234][00882] Num frames 3200... |
|
[2025-02-18 13:04:37,376][00882] Num frames 3300... |
|
[2025-02-18 13:04:37,571][00882] Num frames 3400... |
|
[2025-02-18 13:04:37,766][00882] Num frames 3500... |
|
[2025-02-18 13:04:37,938][00882] Num frames 3600... |
|
[2025-02-18 13:04:38,105][00882] Num frames 3700... |
|
[2025-02-18 13:04:38,279][00882] Num frames 3800... |
|
[2025-02-18 13:04:38,451][00882] Num frames 3900... |
|
[2025-02-18 13:04:38,618][00882] Num frames 4000... |
|
[2025-02-18 13:04:38,714][00882] Avg episode rewards: #0: 16.646, true rewards: #0: 8.046 |
|
[2025-02-18 13:04:38,719][00882] Avg episode reward: 16.646, avg true_objective: 8.046 |
|
[2025-02-18 13:04:38,862][00882] Num frames 4100... |
|
[2025-02-18 13:04:39,057][00882] Num frames 4200... |
|
[2025-02-18 13:04:39,231][00882] Num frames 4300... |
|
[2025-02-18 13:04:39,422][00882] Num frames 4400... |
|
[2025-02-18 13:04:39,605][00882] Num frames 4500... |
|
[2025-02-18 13:04:39,737][00882] Num frames 4600... |
|
[2025-02-18 13:04:39,873][00882] Num frames 4700... |
|
[2025-02-18 13:04:40,008][00882] Num frames 4800... |
|
[2025-02-18 13:04:40,177][00882] Avg episode rewards: #0: 16.812, true rewards: #0: 8.145 |
|
[2025-02-18 13:04:40,178][00882] Avg episode reward: 16.812, avg true_objective: 8.145 |
|
[2025-02-18 13:04:40,199][00882] Num frames 4900... |
|
[2025-02-18 13:04:40,340][00882] Num frames 5000... |
|
[2025-02-18 13:04:40,465][00882] Num frames 5100... |
|
[2025-02-18 13:04:40,595][00882] Num frames 5200... |
|
[2025-02-18 13:04:40,723][00882] Num frames 5300... |
|
[2025-02-18 13:04:40,783][00882] Avg episode rewards: #0: 15.290, true rewards: #0: 7.576 |
|
[2025-02-18 13:04:40,786][00882] Avg episode reward: 15.290, avg true_objective: 7.576 |
|
[2025-02-18 13:04:40,921][00882] Num frames 5400... |
|
[2025-02-18 13:04:41,049][00882] Num frames 5500... |
|
[2025-02-18 13:04:41,181][00882] Num frames 5600... |
|
[2025-02-18 13:04:41,310][00882] Num frames 5700... |
|
[2025-02-18 13:04:41,448][00882] Num frames 5800... |
|
[2025-02-18 13:04:41,574][00882] Num frames 5900... |
|
[2025-02-18 13:04:41,702][00882] Num frames 6000... |
|
[2025-02-18 13:04:41,831][00882] Num frames 6100... |
|
[2025-02-18 13:04:42,018][00882] Avg episode rewards: #0: 15.499, true rewards: #0: 7.749 |
|
[2025-02-18 13:04:42,021][00882] Avg episode reward: 15.499, avg true_objective: 7.749 |
|
[2025-02-18 13:04:42,024][00882] Num frames 6200... |
|
[2025-02-18 13:04:42,153][00882] Num frames 6300... |
|
[2025-02-18 13:04:42,288][00882] Num frames 6400... |
|
[2025-02-18 13:04:42,425][00882] Num frames 6500... |
|
[2025-02-18 13:04:42,556][00882] Num frames 6600... |
|
[2025-02-18 13:04:42,686][00882] Num frames 6700... |
|
[2025-02-18 13:04:42,791][00882] Avg episode rewards: #0: 14.709, true rewards: #0: 7.487 |
|
[2025-02-18 13:04:42,792][00882] Avg episode reward: 14.709, avg true_objective: 7.487 |
|
[2025-02-18 13:04:42,883][00882] Num frames 6800... |
|
[2025-02-18 13:04:43,017][00882] Num frames 6900... |
|
[2025-02-18 13:04:43,151][00882] Num frames 7000... |
|
[2025-02-18 13:04:43,301][00882] Num frames 7100... |
|
[2025-02-18 13:04:43,441][00882] Num frames 7200... |
|
[2025-02-18 13:04:43,573][00882] Num frames 7300... |
|
[2025-02-18 13:04:43,701][00882] Num frames 7400... |
|
[2025-02-18 13:04:43,834][00882] Num frames 7500... |
|
[2025-02-18 13:04:43,964][00882] Num frames 7600... |
|
[2025-02-18 13:04:44,094][00882] Num frames 7700... |
|
[2025-02-18 13:04:44,189][00882] Avg episode rewards: #0: 15.230, true rewards: #0: 7.730 |
|
[2025-02-18 13:04:44,190][00882] Avg episode reward: 15.230, avg true_objective: 7.730 |
|
[2025-02-18 13:05:31,111][00882] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2025-02-18 13:05:36,683][00882] The model has been pushed to https://huggingface.co/GatinhoEducado/rl_course_vizdoom_health_gathering_supreme |
|
[2025-02-18 13:06:47,378][00882] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-18 13:06:47,380][00882] Overriding arg 'train_for_env_steps' with value 9000000 passed from command line |
|
[2025-02-18 13:06:47,387][00882] Experiment dir /content/train_dir/default_experiment already exists! |
|
[2025-02-18 13:06:47,388][00882] Resuming existing experiment from /content/train_dir/default_experiment... |
|
[2025-02-18 13:06:47,390][00882] Weights and Biases integration disabled |
|
[2025-02-18 13:06:47,394][00882] Environment var CUDA_VISIBLE_DEVICES is 0 |
|
[2025-02-18 13:06:49,777][00882] Starting experiment with the following configuration: |
|
help=False |
|
algo=APPO |
|
env=doom_health_gathering_supreme |
|
experiment=default_experiment |
|
train_dir=/content/train_dir |
|
restart_behavior=resume |
|
device=gpu |
|
seed=None |
|
num_policies=1 |
|
async_rl=True |
|
serial_mode=False |
|
batched_sampling=False |
|
num_batches_to_accumulate=2 |
|
worker_num_splits=2 |
|
policy_workers_per_policy=1 |
|
max_policy_lag=1000 |
|
num_workers=8 |
|
num_envs_per_worker=4 |
|
batch_size=1024 |
|
num_batches_per_epoch=1 |
|
num_epochs=1 |
|
rollout=32 |
|
recurrence=32 |
|
shuffle_minibatches=False |
|
gamma=0.99 |
|
reward_scale=1.0 |
|
reward_clip=1000.0 |
|
value_bootstrap=False |
|
normalize_returns=True |
|
exploration_loss_coeff=0.001 |
|
value_loss_coeff=0.5 |
|
kl_loss_coeff=0.0 |
|
exploration_loss=symmetric_kl |
|
gae_lambda=0.95 |
|
ppo_clip_ratio=0.1 |
|
ppo_clip_value=0.2 |
|
with_vtrace=False |
|
vtrace_rho=1.0 |
|
vtrace_c=1.0 |
|
optimizer=adam |
|
adam_eps=1e-06 |
|
adam_beta1=0.9 |
|
adam_beta2=0.999 |
|
max_grad_norm=4.0 |
|
learning_rate=0.0001 |
|
lr_schedule=constant |
|
lr_schedule_kl_threshold=0.008 |
|
lr_adaptive_min=1e-06 |
|
lr_adaptive_max=0.01 |
|
obs_subtract_mean=0.0 |
|
obs_scale=255.0 |
|
normalize_input=True |
|
normalize_input_keys=None |
|
decorrelate_experience_max_seconds=0 |
|
decorrelate_envs_on_one_worker=True |
|
actor_worker_gpus=[] |
|
set_workers_cpu_affinity=True |
|
force_envs_single_thread=False |
|
default_niceness=0 |
|
log_to_file=True |
|
experiment_summaries_interval=10 |
|
flush_summaries_interval=30 |
|
stats_avg=100 |
|
summaries_use_frameskip=True |
|
heartbeat_interval=20 |
|
heartbeat_reporting_interval=600 |
|
train_for_env_steps=9000000 |
|
train_for_seconds=10000000000 |
|
save_every_sec=120 |
|
keep_checkpoints=2 |
|
load_checkpoint_kind=latest |
|
save_milestones_sec=-1 |
|
save_best_every_sec=5 |
|
save_best_metric=reward |
|
save_best_after=100000 |
|
benchmark=False |
|
encoder_mlp_layers=[512, 512] |
|
encoder_conv_architecture=convnet_simple |
|
encoder_conv_mlp_layers=[512] |
|
use_rnn=True |
|
rnn_size=512 |
|
rnn_type=gru |
|
rnn_num_layers=1 |
|
decoder_mlp_layers=[] |
|
nonlinearity=elu |
|
policy_initialization=orthogonal |
|
policy_init_gain=1.0 |
|
actor_critic_share_weights=True |
|
adaptive_stddev=True |
|
continuous_tanh_scale=0.0 |
|
initial_stddev=1.0 |
|
use_env_info_cache=False |
|
env_gpu_actions=False |
|
env_gpu_observations=True |
|
env_frameskip=4 |
|
env_framestack=1 |
|
pixel_format=CHW |
|
use_record_episode_statistics=False |
|
with_wandb=False |
|
wandb_user=None |
|
wandb_project=sample_factory |
|
wandb_group=None |
|
wandb_job_type=SF |
|
wandb_tags=[] |
|
with_pbt=False |
|
pbt_mix_policies_in_one_env=True |
|
pbt_period_env_steps=5000000 |
|
pbt_start_mutation=20000000 |
|
pbt_replace_fraction=0.3 |
|
pbt_mutation_rate=0.15 |
|
pbt_replace_reward_gap=0.1 |
|
pbt_replace_reward_gap_absolute=1e-06 |
|
pbt_optimize_gamma=False |
|
pbt_target_objective=true_objective |
|
pbt_perturb_min=1.1 |
|
pbt_perturb_max=1.5 |
|
num_agents=-1 |
|
num_humans=0 |
|
num_bots=-1 |
|
start_bot_difficulty=None |
|
timelimit=None |
|
res_w=128 |
|
res_h=72 |
|
wide_aspect_ratio=False |
|
eval_env_frameskip=1 |
|
fps=35 |
|
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=3000000 |
|
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 3000000} |
|
git_hash=unknown |
|
git_repo_name=not a git repository |
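The `command_line` and `cli_args` entries in the config dump above record the same invocation in two forms. A minimal sketch (the parser is mine, not part of Sample Factory) that turns the recorded command line back into a dict shaped like `cli_args`, assuming only `--key=value` tokens:

```python
# Parse a recorded `command_line` string back into a cli_args-style dict.
# Minimal sketch: handles only --key=value tokens and casts purely
# numeric values to int.
def parse_command_line(cmd: str) -> dict:
    args = {}
    for token in cmd.split():
        key, _, value = token.lstrip("-").partition("=")
        args[key] = int(value) if value.isdigit() else value
    return args

# The command line recorded in the config dump above.
cmd = ("--env=doom_health_gathering_supreme --num_workers=8 "
      "--num_envs_per_worker=4 --train_for_env_steps=3000000")
print(parse_command_line(cmd))
# {'env': 'doom_health_gathering_supreme', 'num_workers': 8,
#  'num_envs_per_worker': 4, 'train_for_env_steps': 3000000}
```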
|
[2025-02-18 13:06:49,779][00882] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2025-02-18 13:06:49,781][00882] Rollout worker 0 uses device cpu |
|
[2025-02-18 13:06:49,783][00882] Rollout worker 1 uses device cpu |
|
[2025-02-18 13:06:49,784][00882] Rollout worker 2 uses device cpu |
|
[2025-02-18 13:06:49,785][00882] Rollout worker 3 uses device cpu |
|
[2025-02-18 13:06:49,786][00882] Rollout worker 4 uses device cpu |
|
[2025-02-18 13:06:49,787][00882] Rollout worker 5 uses device cpu |
|
[2025-02-18 13:06:49,788][00882] Rollout worker 6 uses device cpu |
|
[2025-02-18 13:06:49,789][00882] Rollout worker 7 uses device cpu |
|
[2025-02-18 13:06:49,905][00882] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-18 13:06:49,907][00882] InferenceWorker_p0-w0: min num requests: 2 |
|
[2025-02-18 13:06:49,948][00882] Starting all processes... |
|
[2025-02-18 13:06:49,950][00882] Starting process learner_proc0 |
|
[2025-02-18 13:06:50,020][00882] Starting all processes... |
|
[2025-02-18 13:06:50,030][00882] Starting process inference_proc0-0 |
|
[2025-02-18 13:06:50,031][00882] Starting process rollout_proc0 |
|
[2025-02-18 13:06:50,032][00882] Starting process rollout_proc1 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc2 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc3 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc4 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc5 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc6 |
|
[2025-02-18 13:06:50,034][00882] Starting process rollout_proc7 |
|
[2025-02-18 13:07:05,273][19749] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-18 13:07:05,276][19749] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2025-02-18 13:07:05,401][19749] Num visible devices: 1 |
|
[2025-02-18 13:07:05,439][19749] Starting seed is not provided |
|
[2025-02-18 13:07:05,439][19749] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-18 13:07:05,440][19749] Initializing actor-critic model on device cuda:0 |
|
[2025-02-18 13:07:05,441][19749] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-18 13:07:05,443][19749] RunningMeanStd input shape: (1,) |
|
[2025-02-18 13:07:05,527][19749] ConvEncoder: input_channels=3 |
|
[2025-02-18 13:07:05,893][19763] Worker 0 uses CPU cores [0] |
|
[2025-02-18 13:07:05,898][19762] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-18 13:07:05,901][19762] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2025-02-18 13:07:05,961][19767] Worker 4 uses CPU cores [0] |
|
[2025-02-18 13:07:05,993][19762] Num visible devices: 1 |
|
[2025-02-18 13:07:06,155][19764] Worker 1 uses CPU cores [1] |
|
[2025-02-18 13:07:06,197][19774] Worker 7 uses CPU cores [1] |
|
[2025-02-18 13:07:06,205][19773] Worker 6 uses CPU cores [0] |
|
[2025-02-18 13:07:06,215][19766] Worker 3 uses CPU cores [1] |
|
[2025-02-18 13:07:06,250][19765] Worker 2 uses CPU cores [0] |
|
[2025-02-18 13:07:06,271][19749] Conv encoder output size: 512 |
|
[2025-02-18 13:07:06,272][19772] Worker 5 uses CPU cores [1] |
|
[2025-02-18 13:07:06,273][19749] Policy head output size: 512 |
|
[2025-02-18 13:07:06,292][19749] Created Actor Critic model with architecture:
[2025-02-18 13:07:06,293][19749] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
|
[2025-02-18 13:07:06,429][19749] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2025-02-18 13:07:07,376][19749] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth... |
|
[2025-02-18 13:07:07,418][19749] Loading model from checkpoint |
|
[2025-02-18 13:07:07,419][19749] Loaded experiment state at self.train_step=734, self.env_steps=3006464 |
|
[2025-02-18 13:07:07,420][19749] Initialized policy 0 weights for model version 734 |
|
[2025-02-18 13:07:07,422][19749] LearnerWorker_p0 finished initialization! |
|
[2025-02-18 13:07:07,425][19749] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2025-02-18 13:07:07,566][19762] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-18 13:07:07,567][19762] RunningMeanStd input shape: (1,) |
|
[2025-02-18 13:07:07,578][19762] ConvEncoder: input_channels=3 |
|
[2025-02-18 13:07:07,679][19762] Conv encoder output size: 512 |
|
[2025-02-18 13:07:07,679][19762] Policy head output size: 512 |
|
[2025-02-18 13:07:07,714][00882] Inference worker 0-0 is ready! |
|
[2025-02-18 13:07:07,716][00882] All inference workers are ready! Signal rollout workers to start! |
|
[2025-02-18 13:07:07,956][19772] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:07,976][19766] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,017][19765] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,053][19763] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,092][19767] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,097][19774] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,103][19764] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,109][19773] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2025-02-18 13:07:08,699][19765] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:09,102][19772] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:09,104][19766] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:09,157][19764] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:09,504][19765] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:09,844][19772] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:09,892][00882] Heartbeat connected on Batcher_0 |
|
[2025-02-18 13:07:09,899][00882] Heartbeat connected on LearnerWorker_p0 |
|
[2025-02-18 13:07:09,912][19764] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:09,935][00882] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2025-02-18 13:07:10,035][19773] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:10,429][19773] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:10,943][19773] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:11,146][19774] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:11,205][19772] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:11,283][19764] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:11,595][19773] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:11,723][00882] Heartbeat connected on RolloutWorker_w6 |
|
[2025-02-18 13:07:11,870][19765] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:12,394][00882] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 3006464. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-18 13:07:12,719][19774] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:12,881][19772] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:12,952][19764] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:13,096][00882] Heartbeat connected on RolloutWorker_w5 |
|
[2025-02-18 13:07:13,180][00882] Heartbeat connected on RolloutWorker_w1 |
|
[2025-02-18 13:07:13,456][19763] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:13,630][19765] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:13,938][00882] Heartbeat connected on RolloutWorker_w2 |
|
[2025-02-18 13:07:16,290][19763] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:16,891][19774] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:17,395][00882] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 3006464. Throughput: 0: 330.4. Samples: 1652. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2025-02-18 13:07:17,401][00882] Avg episode reward: [(0, '4.547')] |
|
[2025-02-18 13:07:19,069][19766] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:21,154][19774] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:21,453][00882] Heartbeat connected on RolloutWorker_w7 |
|
[2025-02-18 13:07:22,200][19766] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:22,394][00882] Fps is (10 sec: 1228.8, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 3018752. Throughput: 0: 238.0. Samples: 2380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:07:22,399][00882] Avg episode reward: [(0, '7.009')] |
|
[2025-02-18 13:07:22,466][19763] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:23,423][19767] Decorrelating experience for 0 frames... |
|
[2025-02-18 13:07:24,556][19766] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:25,074][00882] Heartbeat connected on RolloutWorker_w3 |
|
[2025-02-18 13:07:25,583][19763] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:25,889][00882] Heartbeat connected on RolloutWorker_w0 |
|
[2025-02-18 13:07:25,924][19767] Decorrelating experience for 32 frames... |
|
[2025-02-18 13:07:27,394][00882] Fps is (10 sec: 2867.3, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 3035136. Throughput: 0: 480.0. Samples: 7200. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:07:27,399][00882] Avg episode reward: [(0, '9.426')] |
|
[2025-02-18 13:07:28,781][19767] Decorrelating experience for 64 frames... |
|
[2025-02-18 13:07:29,302][19762] Updated weights for policy 0, policy_version 744 (0.0013) |
|
[2025-02-18 13:07:31,465][19767] Decorrelating experience for 96 frames... |
|
[2025-02-18 13:07:32,012][00882] Heartbeat connected on RolloutWorker_w4 |
|
[2025-02-18 13:07:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 3055616. Throughput: 0: 631.6. Samples: 12632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:07:32,398][00882] Avg episode reward: [(0, '12.195')] |
|
[2025-02-18 13:07:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 3072000. Throughput: 0: 586.1. Samples: 14652. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:07:37,396][00882] Avg episode reward: [(0, '13.366')] |
|
[2025-02-18 13:07:40,259][19762] Updated weights for policy 0, policy_version 754 (0.0024) |
|
[2025-02-18 13:07:42,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 3096576. Throughput: 0: 720.7. Samples: 21622. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:07:42,400][00882] Avg episode reward: [(0, '14.408')] |
|
[2025-02-18 13:07:47,396][00882] Fps is (10 sec: 4504.9, 60 sec: 3159.6, 300 sec: 3159.6). Total num frames: 3117056. Throughput: 0: 793.3. Samples: 27766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:07:47,399][00882] Avg episode reward: [(0, '17.143')] |
|
[2025-02-18 13:07:51,150][19762] Updated weights for policy 0, policy_version 764 (0.0020) |
|
[2025-02-18 13:07:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3174.4, 300 sec: 3174.4). Total num frames: 3133440. Throughput: 0: 747.5. Samples: 29900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:07:52,396][00882] Avg episode reward: [(0, '18.267')] |
|
[2025-02-18 13:07:57,394][00882] Fps is (10 sec: 4096.7, 60 sec: 3367.8, 300 sec: 3367.8). Total num frames: 3158016. Throughput: 0: 813.9. Samples: 36626. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:07:57,396][00882] Avg episode reward: [(0, '18.926')] |
|
[2025-02-18 13:08:00,197][19762] Updated weights for policy 0, policy_version 774 (0.0013) |
|
[2025-02-18 13:08:02,397][00882] Fps is (10 sec: 4094.9, 60 sec: 3358.5, 300 sec: 3358.5). Total num frames: 3174400. Throughput: 0: 911.4. Samples: 42668. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:08:02,404][00882] Avg episode reward: [(0, '19.701')] |
|
[2025-02-18 13:08:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3425.7, 300 sec: 3425.7). Total num frames: 3194880. Throughput: 0: 947.6. Samples: 45024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:08:07,397][00882] Avg episode reward: [(0, '22.523')] |
|
[2025-02-18 13:08:07,402][19749] Saving new best policy, reward=22.523! |
|
[2025-02-18 13:08:10,457][19762] Updated weights for policy 0, policy_version 784 (0.0021) |
|
[2025-02-18 13:08:12,394][00882] Fps is (10 sec: 4506.8, 60 sec: 3549.9, 300 sec: 3549.9). Total num frames: 3219456. Throughput: 0: 994.4. Samples: 51950. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:08:12,396][00882] Avg episode reward: [(0, '21.739')] |
|
[2025-02-18 13:08:17,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3528.9). Total num frames: 3235840. Throughput: 0: 999.6. Samples: 57616. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:08:17,398][00882] Avg episode reward: [(0, '22.491')] |
|
[2025-02-18 13:08:21,449][19762] Updated weights for policy 0, policy_version 794 (0.0023) |
|
[2025-02-18 13:08:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3569.4). Total num frames: 3256320. Throughput: 0: 1009.4. Samples: 60074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:08:22,397][00882] Avg episode reward: [(0, '22.467')] |
|
[2025-02-18 13:08:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3604.5). Total num frames: 3276800. Throughput: 0: 1008.4. Samples: 67000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:08:27,400][00882] Avg episode reward: [(0, '21.771')] |
|
[2025-02-18 13:08:31,219][19762] Updated weights for policy 0, policy_version 804 (0.0012) |
|
[2025-02-18 13:08:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3584.0). Total num frames: 3293184. Throughput: 0: 994.0. Samples: 72496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:08:32,397][00882] Avg episode reward: [(0, '19.324')] |
|
[2025-02-18 13:08:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3614.1). Total num frames: 3313664. Throughput: 0: 1007.3. Samples: 75228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:08:37,401][00882] Avg episode reward: [(0, '18.638')] |
|
[2025-02-18 13:08:40,885][19762] Updated weights for policy 0, policy_version 814 (0.0012) |
|
[2025-02-18 13:08:42,394][00882] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3686.4). Total num frames: 3338240. Throughput: 0: 1012.8. Samples: 82202. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:08:42,397][00882] Avg episode reward: [(0, '18.486')] |
|
[2025-02-18 13:08:47,400][00882] Fps is (10 sec: 4093.5, 60 sec: 3959.2, 300 sec: 3664.6). Total num frames: 3354624. Throughput: 0: 997.8. Samples: 87572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:08:47,403][00882] Avg episode reward: [(0, '19.587')] |
|
[2025-02-18 13:08:47,411][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000819_3354624.pth... |
|
[2025-02-18 13:08:47,597][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000729_2985984.pth |
|
[2025-02-18 13:08:51,878][19762] Updated weights for policy 0, policy_version 824 (0.0024) |
|
[2025-02-18 13:08:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3686.4). Total num frames: 3375104. Throughput: 0: 1008.0. Samples: 90384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:08:52,400][00882] Avg episode reward: [(0, '20.314')] |
|
[2025-02-18 13:08:57,394][00882] Fps is (10 sec: 4508.4, 60 sec: 4027.7, 300 sec: 3744.9). Total num frames: 3399680. Throughput: 0: 1005.0. Samples: 97176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:08:57,396][00882] Avg episode reward: [(0, '22.685')] |
|
[2025-02-18 13:08:57,401][19749] Saving new best policy, reward=22.685! |
|
[2025-02-18 13:09:02,362][19762] Updated weights for policy 0, policy_version 834 (0.0016) |
|
[2025-02-18 13:09:02,396][00882] Fps is (10 sec: 4095.2, 60 sec: 4027.8, 300 sec: 3723.6). Total num frames: 3416064. Throughput: 0: 996.9. Samples: 102480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:09:02,403][00882] Avg episode reward: [(0, '23.570')] |
|
[2025-02-18 13:09:02,408][19749] Saving new best policy, reward=23.570! |
|
[2025-02-18 13:09:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3739.8). Total num frames: 3436544. Throughput: 0: 1006.2. Samples: 105354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:09:07,404][00882] Avg episode reward: [(0, '23.080')] |
|
[2025-02-18 13:09:11,469][19762] Updated weights for policy 0, policy_version 844 (0.0013) |
|
[2025-02-18 13:09:12,394][00882] Fps is (10 sec: 4096.9, 60 sec: 3959.5, 300 sec: 3754.7). Total num frames: 3457024. Throughput: 0: 1005.5. Samples: 112246. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:09:12,398][00882] Avg episode reward: [(0, '22.810')] |
|
[2025-02-18 13:09:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3735.6). Total num frames: 3473408. Throughput: 0: 999.1. Samples: 117454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:09:17,397][00882] Avg episode reward: [(0, '22.423')] |
|
[2025-02-18 13:09:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3749.4). Total num frames: 3493888. Throughput: 0: 1004.0. Samples: 120408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:09:22,397][00882] Avg episode reward: [(0, '20.823')] |
|
[2025-02-18 13:09:22,440][19762] Updated weights for policy 0, policy_version 854 (0.0013) |
|
[2025-02-18 13:09:27,394][00882] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3792.6). Total num frames: 3518464. Throughput: 0: 1002.7. Samples: 127322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:09:27,396][00882] Avg episode reward: [(0, '22.768')] |
|
[2025-02-18 13:09:32,394][00882] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3774.2). Total num frames: 3534848. Throughput: 0: 996.0. Samples: 132386. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:09:32,397][00882] Avg episode reward: [(0, '23.410')] |
|
[2025-02-18 13:09:33,141][19762] Updated weights for policy 0, policy_version 864 (0.0021) |
|
[2025-02-18 13:09:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3785.3). Total num frames: 3555328. Throughput: 0: 1006.3. Samples: 135666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:09:37,399][00882] Avg episode reward: [(0, '23.201')] |
|
[2025-02-18 13:09:41,799][19762] Updated weights for policy 0, policy_version 874 (0.0017) |
|
[2025-02-18 13:09:42,394][00882] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3822.9). Total num frames: 3579904. Throughput: 0: 1010.9. Samples: 142668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:09:42,397][00882] Avg episode reward: [(0, '23.998')] |
|
[2025-02-18 13:09:42,400][19749] Saving new best policy, reward=23.998! |
|
[2025-02-18 13:09:47,394][00882] Fps is (10 sec: 4095.9, 60 sec: 4028.1, 300 sec: 3805.3). Total num frames: 3596288. Throughput: 0: 998.8. Samples: 147426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:09:47,396][00882] Avg episode reward: [(0, '24.576')] |
|
[2025-02-18 13:09:47,404][19749] Saving new best policy, reward=24.576! |
|
[2025-02-18 13:09:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3814.4). Total num frames: 3616768. Throughput: 0: 1008.6. Samples: 150742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:09:52,407][00882] Avg episode reward: [(0, '24.515')] |
|
[2025-02-18 13:09:52,751][19762] Updated weights for policy 0, policy_version 884 (0.0021) |
|
[2025-02-18 13:09:57,395][00882] Fps is (10 sec: 4505.1, 60 sec: 4027.6, 300 sec: 3847.7). Total num frames: 3641344. Throughput: 0: 1007.6. Samples: 157590. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:09:57,402][00882] Avg episode reward: [(0, '25.071')] |
|
[2025-02-18 13:09:57,409][19749] Saving new best policy, reward=25.071! |
|
[2025-02-18 13:10:02,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3959.6, 300 sec: 3806.9). Total num frames: 3653632. Throughput: 0: 996.0. Samples: 162276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:10:02,398][00882] Avg episode reward: [(0, '27.069')] |
|
[2025-02-18 13:10:02,400][19749] Saving new best policy, reward=27.069! |
|
[2025-02-18 13:10:03,558][19762] Updated weights for policy 0, policy_version 894 (0.0012) |
|
[2025-02-18 13:10:07,394][00882] Fps is (10 sec: 3686.9, 60 sec: 4027.7, 300 sec: 3838.5). Total num frames: 3678208. Throughput: 0: 1007.3. Samples: 165738. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:10:07,398][00882] Avg episode reward: [(0, '26.619')] |
|
[2025-02-18 13:10:12,396][00882] Fps is (10 sec: 4504.7, 60 sec: 4027.6, 300 sec: 3845.6). Total num frames: 3698688. Throughput: 0: 1005.9. Samples: 172590. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:10:12,400][00882] Avg episode reward: [(0, '26.753')] |
|
[2025-02-18 13:10:13,199][19762] Updated weights for policy 0, policy_version 904 (0.0012) |
|
[2025-02-18 13:10:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3830.3). Total num frames: 3715072. Throughput: 0: 995.6. Samples: 177188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:10:17,396][00882] Avg episode reward: [(0, '26.823')] |
|
[2025-02-18 13:10:22,394][00882] Fps is (10 sec: 3687.2, 60 sec: 4027.7, 300 sec: 3837.3). Total num frames: 3735552. Throughput: 0: 996.8. Samples: 180522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:10:22,401][00882] Avg episode reward: [(0, '26.055')] |
|
[2025-02-18 13:10:23,634][19762] Updated weights for policy 0, policy_version 914 (0.0012) |
|
[2025-02-18 13:10:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3843.9). Total num frames: 3756032. Throughput: 0: 988.4. Samples: 187148. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:10:27,396][00882] Avg episode reward: [(0, '24.898')] |
|
[2025-02-18 13:10:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3829.8). Total num frames: 3772416. Throughput: 0: 985.6. Samples: 191780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:10:32,401][00882] Avg episode reward: [(0, '24.057')] |
|
[2025-02-18 13:10:34,787][19762] Updated weights for policy 0, policy_version 924 (0.0037) |
|
[2025-02-18 13:10:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3836.3). Total num frames: 3792896. Throughput: 0: 986.2. Samples: 195122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:10:37,398][00882] Avg episode reward: [(0, '24.109')] |
|
[2025-02-18 13:10:42,396][00882] Fps is (10 sec: 4095.1, 60 sec: 3891.1, 300 sec: 3842.4). Total num frames: 3813376. Throughput: 0: 983.2. Samples: 201834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:10:42,399][00882] Avg episode reward: [(0, '23.963')] |
|
[2025-02-18 13:10:45,761][19762] Updated weights for policy 0, policy_version 934 (0.0022) |
|
[2025-02-18 13:10:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3829.3). Total num frames: 3829760. Throughput: 0: 981.8. Samples: 206458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:10:47,398][00882] Avg episode reward: [(0, '24.148')] |
|
[2025-02-18 13:10:47,408][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000935_3829760.pth... |
|
[2025-02-18 13:10:47,544][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000734_3006464.pth |
|
[2025-02-18 13:10:52,394][00882] Fps is (10 sec: 4096.9, 60 sec: 3959.5, 300 sec: 3854.0). Total num frames: 3854336. Throughput: 0: 979.0. Samples: 209792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:10:52,396][00882] Avg episode reward: [(0, '24.171')] |
|
[2025-02-18 13:10:55,079][19762] Updated weights for policy 0, policy_version 944 (0.0022) |
|
[2025-02-18 13:10:57,402][00882] Fps is (10 sec: 4501.9, 60 sec: 3890.7, 300 sec: 3859.2). Total num frames: 3874816. Throughput: 0: 972.5. Samples: 216358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:10:57,405][00882] Avg episode reward: [(0, '24.658')] |
|
[2025-02-18 13:11:02,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3846.7). Total num frames: 3891200. Throughput: 0: 972.1. Samples: 220932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:11:02,401][00882] Avg episode reward: [(0, '24.236')] |
|
[2025-02-18 13:11:06,172][19762] Updated weights for policy 0, policy_version 954 (0.0013) |
|
[2025-02-18 13:11:07,394][00882] Fps is (10 sec: 3689.5, 60 sec: 3891.2, 300 sec: 3852.0). Total num frames: 3911680. Throughput: 0: 972.6. Samples: 224290. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:11:07,397][00882] Avg episode reward: [(0, '23.812')] |
|
[2025-02-18 13:11:12,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3891.3, 300 sec: 3857.1). Total num frames: 3932160. Throughput: 0: 972.8. Samples: 230924. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:11:12,396][00882] Avg episode reward: [(0, '24.625')] |
|
[2025-02-18 13:11:17,179][19762] Updated weights for policy 0, policy_version 964 (0.0016) |
|
[2025-02-18 13:11:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3845.2). Total num frames: 3948544. Throughput: 0: 975.2. Samples: 235664. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:11:17,401][00882] Avg episode reward: [(0, '25.530')] |
|
[2025-02-18 13:11:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3850.2). Total num frames: 3969024. Throughput: 0: 975.2. Samples: 239008. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:11:22,401][00882] Avg episode reward: [(0, '25.527')] |
|
[2025-02-18 13:11:27,066][19762] Updated weights for policy 0, policy_version 974 (0.0013) |
|
[2025-02-18 13:11:27,397][00882] Fps is (10 sec: 4094.7, 60 sec: 3891.0, 300 sec: 3855.0). Total num frames: 3989504. Throughput: 0: 967.0. Samples: 245352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:11:27,403][00882] Avg episode reward: [(0, '26.967')] |
|
[2025-02-18 13:11:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3843.9). Total num frames: 4005888. Throughput: 0: 970.8. Samples: 250146. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:11:32,400][00882] Avg episode reward: [(0, '26.381')] |
|
[2025-02-18 13:11:37,394][00882] Fps is (10 sec: 3687.6, 60 sec: 3891.2, 300 sec: 3848.7). Total num frames: 4026368. Throughput: 0: 970.3. Samples: 253454. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:11:37,397][00882] Avg episode reward: [(0, '26.973')] |
|
[2025-02-18 13:11:37,640][19762] Updated weights for policy 0, policy_version 984 (0.0012) |
|
[2025-02-18 13:11:42,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.3, 300 sec: 3853.3). Total num frames: 4046848. Throughput: 0: 964.6. Samples: 259756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:11:42,399][00882] Avg episode reward: [(0, '24.927')] |
|
[2025-02-18 13:11:47,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3842.8). Total num frames: 4063232. Throughput: 0: 974.4. Samples: 264778. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:11:47,400][00882] Avg episode reward: [(0, '25.646')] |
|
[2025-02-18 13:11:48,459][19762] Updated weights for policy 0, policy_version 994 (0.0018) |
|
[2025-02-18 13:11:52,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3861.9). Total num frames: 4087808. Throughput: 0: 974.7. Samples: 268150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:11:52,397][00882] Avg episode reward: [(0, '24.381')] |
|
[2025-02-18 13:11:57,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3823.5, 300 sec: 3851.7). Total num frames: 4104192. Throughput: 0: 959.8. Samples: 274116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:11:57,396][00882] Avg episode reward: [(0, '24.342')] |
|
[2025-02-18 13:11:59,904][19762] Updated weights for policy 0, policy_version 1004 (0.0021) |
|
[2025-02-18 13:12:02,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3841.8). Total num frames: 4120576. Throughput: 0: 968.1. Samples: 279230. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:12:02,399][00882] Avg episode reward: [(0, '23.851')] |
|
[2025-02-18 13:12:07,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 4145152. Throughput: 0: 966.8. Samples: 282512. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:12:07,401][00882] Avg episode reward: [(0, '23.957')] |
|
[2025-02-18 13:12:08,764][19762] Updated weights for policy 0, policy_version 1014 (0.0013) |
|
[2025-02-18 13:12:12,398][00882] Fps is (10 sec: 4094.2, 60 sec: 3822.7, 300 sec: 3915.4). Total num frames: 4161536. Throughput: 0: 963.6. Samples: 288714. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:12:12,401][00882] Avg episode reward: [(0, '24.982')] |
|
[2025-02-18 13:12:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 4182016. Throughput: 0: 973.5. Samples: 293952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:12:17,397][00882] Avg episode reward: [(0, '24.208')] |
|
[2025-02-18 13:12:19,924][19762] Updated weights for policy 0, policy_version 1024 (0.0021) |
|
[2025-02-18 13:12:22,394][00882] Fps is (10 sec: 4097.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4202496. Throughput: 0: 974.9. Samples: 297324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:12:22,400][00882] Avg episode reward: [(0, '24.515')] |
|
[2025-02-18 13:12:27,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3823.1, 300 sec: 3943.3). Total num frames: 4218880. Throughput: 0: 967.5. Samples: 303296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:12:27,400][00882] Avg episode reward: [(0, '24.674')] |
|
[2025-02-18 13:12:30,904][19762] Updated weights for policy 0, policy_version 1034 (0.0027) |
|
[2025-02-18 13:12:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 4239360. Throughput: 0: 975.1. Samples: 308658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:12:32,398][00882] Avg episode reward: [(0, '25.261')] |
|
[2025-02-18 13:12:37,394][00882] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 4263936. Throughput: 0: 974.0. Samples: 311982. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:12:37,400][00882] Avg episode reward: [(0, '25.360')] |
|
[2025-02-18 13:12:40,817][19762] Updated weights for policy 0, policy_version 1044 (0.0013) |
|
[2025-02-18 13:12:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3929.4). Total num frames: 4276224. Throughput: 0: 974.3. Samples: 317960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:12:42,400][00882] Avg episode reward: [(0, '25.652')] |
|
[2025-02-18 13:12:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 4300800. Throughput: 0: 981.5. Samples: 323398. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:12:47,401][00882] Avg episode reward: [(0, '25.309')] |
|
[2025-02-18 13:12:47,409][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001050_4300800.pth... |
|
[2025-02-18 13:12:47,535][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000819_3354624.pth |
|
[2025-02-18 13:12:50,962][19762] Updated weights for policy 0, policy_version 1054 (0.0017) |
|
[2025-02-18 13:12:52,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 4321280. Throughput: 0: 983.6. Samples: 326772. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:12:52,405][00882] Avg episode reward: [(0, '25.352')] |
|
[2025-02-18 13:12:57,395][00882] Fps is (10 sec: 3686.0, 60 sec: 3891.1, 300 sec: 3943.3). Total num frames: 4337664. Throughput: 0: 972.5. Samples: 332472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:12:57,399][00882] Avg episode reward: [(0, '25.160')] |
|
[2025-02-18 13:13:02,071][19762] Updated weights for policy 0, policy_version 1064 (0.0012) |
|
[2025-02-18 13:13:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 4358144. Throughput: 0: 982.0. Samples: 338140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:13:02,399][00882] Avg episode reward: [(0, '25.279')] |
|
[2025-02-18 13:13:07,394][00882] Fps is (10 sec: 4096.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 4378624. Throughput: 0: 981.1. Samples: 341474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:13:07,396][00882] Avg episode reward: [(0, '23.780')] |
|
[2025-02-18 13:13:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.5, 300 sec: 3929.4). Total num frames: 4395008. Throughput: 0: 977.3. Samples: 347274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:13:12,401][00882] Avg episode reward: [(0, '22.908')] |
|
[2025-02-18 13:13:12,854][19762] Updated weights for policy 0, policy_version 1074 (0.0014) |
|
[2025-02-18 13:13:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 4415488. Throughput: 0: 982.3. Samples: 352862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:13:17,402][00882] Avg episode reward: [(0, '24.001')] |
|
[2025-02-18 13:13:22,170][19762] Updated weights for policy 0, policy_version 1084 (0.0014) |
|
[2025-02-18 13:13:22,394][00882] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 4440064. Throughput: 0: 983.0. Samples: 356216. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:13:22,401][00882] Avg episode reward: [(0, '24.520')] |
|
[2025-02-18 13:13:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 4452352. Throughput: 0: 974.9. Samples: 361830. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:13:27,399][00882] Avg episode reward: [(0, '23.926')] |
|
[2025-02-18 13:13:32,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 4476928. Throughput: 0: 983.0. Samples: 367634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:13:32,396][00882] Avg episode reward: [(0, '24.812')] |
|
[2025-02-18 13:13:33,103][19762] Updated weights for policy 0, policy_version 1094 (0.0016) |
|
[2025-02-18 13:13:37,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 4497408. Throughput: 0: 985.1. Samples: 371100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:13:37,396][00882] Avg episode reward: [(0, '25.706')] |
|
[2025-02-18 13:13:42,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3929.5). Total num frames: 4513792. Throughput: 0: 979.2. Samples: 376536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:13:42,397][00882] Avg episode reward: [(0, '27.101')] |
|
[2025-02-18 13:13:42,405][19749] Saving new best policy, reward=27.101! |
|
[2025-02-18 13:13:44,292][19762] Updated weights for policy 0, policy_version 1104 (0.0036) |
|
[2025-02-18 13:13:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 4534272. Throughput: 0: 983.6. Samples: 382404. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:13:47,397][00882] Avg episode reward: [(0, '27.270')] |
|
[2025-02-18 13:13:47,408][19749] Saving new best policy, reward=27.270! |
|
[2025-02-18 13:13:52,394][00882] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 4558848. Throughput: 0: 982.5. Samples: 385688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:13:52,399][00882] Avg episode reward: [(0, '26.851')] |
|
[2025-02-18 13:13:53,628][19762] Updated weights for policy 0, policy_version 1114 (0.0015) |
|
[2025-02-18 13:13:57,395][00882] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4571136. Throughput: 0: 972.8. Samples: 391050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:13:57,400][00882] Avg episode reward: [(0, '27.276')] |
|
[2025-02-18 13:13:57,417][19749] Saving new best policy, reward=27.276! |
|
[2025-02-18 13:14:02,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4591616. Throughput: 0: 976.5. Samples: 396804. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:14:02,400][00882] Avg episode reward: [(0, '28.766')] |
|
[2025-02-18 13:14:02,404][19749] Saving new best policy, reward=28.766! |
|
[2025-02-18 13:14:04,645][19762] Updated weights for policy 0, policy_version 1124 (0.0017) |
|
[2025-02-18 13:14:07,394][00882] Fps is (10 sec: 4096.2, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4612096. Throughput: 0: 972.7. Samples: 399988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:14:07,400][00882] Avg episode reward: [(0, '28.561')] |
|
[2025-02-18 13:14:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4628480. Throughput: 0: 970.8. Samples: 405518. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:14:12,399][00882] Avg episode reward: [(0, '27.928')] |
|
[2025-02-18 13:14:15,811][19762] Updated weights for policy 0, policy_version 1134 (0.0013) |
|
[2025-02-18 13:14:17,397][00882] Fps is (10 sec: 3685.3, 60 sec: 3891.0, 300 sec: 3915.5). Total num frames: 4648960. Throughput: 0: 974.3. Samples: 411482. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-18 13:14:17,403][00882] Avg episode reward: [(0, '27.698')] |
|
[2025-02-18 13:14:22,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4673536. Throughput: 0: 973.9. Samples: 414926. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-18 13:14:22,402][00882] Avg episode reward: [(0, '29.552')] |
|
[2025-02-18 13:14:22,409][19749] Saving new best policy, reward=29.552! |
|
[2025-02-18 13:14:26,235][19762] Updated weights for policy 0, policy_version 1144 (0.0024) |
|
[2025-02-18 13:14:27,394][00882] Fps is (10 sec: 3687.5, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4685824. Throughput: 0: 968.8. Samples: 420130. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:14:27,403][00882] Avg episode reward: [(0, '29.144')] |
|
[2025-02-18 13:14:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 4710400. Throughput: 0: 972.6. Samples: 426170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:14:32,400][00882] Avg episode reward: [(0, '28.336')] |
|
[2025-02-18 13:14:35,889][19762] Updated weights for policy 0, policy_version 1154 (0.0020) |
|
[2025-02-18 13:14:37,394][00882] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4730880. Throughput: 0: 976.1. Samples: 429614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:14:37,396][00882] Avg episode reward: [(0, '27.966')] |
|
[2025-02-18 13:14:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4747264. Throughput: 0: 974.6. Samples: 434908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:14:42,398][00882] Avg episode reward: [(0, '29.131')] |
|
[2025-02-18 13:14:46,687][19762] Updated weights for policy 0, policy_version 1164 (0.0016) |
|
[2025-02-18 13:14:47,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4767744. Throughput: 0: 983.2. Samples: 441046. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:14:47,401][00882] Avg episode reward: [(0, '28.066')] |
|
[2025-02-18 13:14:47,410][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001164_4767744.pth... |
|
[2025-02-18 13:14:47,566][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000935_3829760.pth |
|
[2025-02-18 13:14:52,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4792320. Throughput: 0: 987.0. Samples: 444404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:14:52,399][00882] Avg episode reward: [(0, '27.629')] |
|
[2025-02-18 13:14:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4804608. Throughput: 0: 979.0. Samples: 449572. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:14:57,399][00882] Avg episode reward: [(0, '27.870')] |
|
[2025-02-18 13:14:57,580][19762] Updated weights for policy 0, policy_version 1174 (0.0032) |
|
[2025-02-18 13:15:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 4829184. Throughput: 0: 983.5. Samples: 455736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:15:02,397][00882] Avg episode reward: [(0, '28.592')] |
|
[2025-02-18 13:15:06,764][19762] Updated weights for policy 0, policy_version 1184 (0.0018) |
|
[2025-02-18 13:15:07,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 4849664. Throughput: 0: 982.0. Samples: 459116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:15:07,398][00882] Avg episode reward: [(0, '27.831')] |
|
[2025-02-18 13:15:12,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 4861952. Throughput: 0: 978.6. Samples: 464166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:15:12,396][00882] Avg episode reward: [(0, '27.515')] |
|
[2025-02-18 13:15:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3901.6). Total num frames: 4886528. Throughput: 0: 981.3. Samples: 470328. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:15:17,397][00882] Avg episode reward: [(0, '27.394')] |
|
[2025-02-18 13:15:17,829][19762] Updated weights for policy 0, policy_version 1194 (0.0030) |
|
[2025-02-18 13:15:22,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4907008. Throughput: 0: 976.7. Samples: 473564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:15:22,400][00882] Avg episode reward: [(0, '26.004')] |
|
[2025-02-18 13:15:27,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 4919296. Throughput: 0: 969.7. Samples: 478546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:15:27,396][00882] Avg episode reward: [(0, '25.606')] |
|
[2025-02-18 13:15:29,289][19762] Updated weights for policy 0, policy_version 1204 (0.0019) |
|
[2025-02-18 13:15:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4943872. Throughput: 0: 972.8. Samples: 484820. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:15:32,397][00882] Avg episode reward: [(0, '25.884')] |
|
[2025-02-18 13:15:37,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4964352. Throughput: 0: 971.1. Samples: 488102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:15:37,398][00882] Avg episode reward: [(0, '25.097')] |
|
[2025-02-18 13:15:39,237][19762] Updated weights for policy 0, policy_version 1214 (0.0013) |
|
[2025-02-18 13:15:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 4980736. Throughput: 0: 965.9. Samples: 493038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:15:42,401][00882] Avg episode reward: [(0, '25.059')] |
|
[2025-02-18 13:15:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5001216. Throughput: 0: 967.3. Samples: 499264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:15:47,396][00882] Avg episode reward: [(0, '26.387')] |
|
[2025-02-18 13:15:49,787][19762] Updated weights for policy 0, policy_version 1224 (0.0015) |
|
[2025-02-18 13:15:52,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.8). Total num frames: 5021696. Throughput: 0: 962.2. Samples: 502416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:15:52,397][00882] Avg episode reward: [(0, '26.320')] |
|
[2025-02-18 13:15:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5038080. Throughput: 0: 953.8. Samples: 507088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:15:57,396][00882] Avg episode reward: [(0, '26.210')] |
|
[2025-02-18 13:16:01,131][19762] Updated weights for policy 0, policy_version 1234 (0.0016) |
|
[2025-02-18 13:16:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 5058560. Throughput: 0: 950.6. Samples: 513106. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:16:02,396][00882] Avg episode reward: [(0, '27.193')] |
|
[2025-02-18 13:16:07,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 5079040. Throughput: 0: 951.4. Samples: 516378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:16:07,396][00882] Avg episode reward: [(0, '28.749')] |
|
[2025-02-18 13:16:12,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 5091328. Throughput: 0: 947.1. Samples: 521166. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:16:12,396][00882] Avg episode reward: [(0, '28.504')] |
|
[2025-02-18 13:16:12,414][19762] Updated weights for policy 0, policy_version 1244 (0.0014) |
|
[2025-02-18 13:16:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 5115904. Throughput: 0: 947.1. Samples: 527440. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:16:17,396][00882] Avg episode reward: [(0, '27.977')] |
|
[2025-02-18 13:16:22,309][19762] Updated weights for policy 0, policy_version 1254 (0.0013) |
|
[2025-02-18 13:16:22,397][00882] Fps is (10 sec: 4504.1, 60 sec: 3822.7, 300 sec: 3887.7). Total num frames: 5136384. Throughput: 0: 945.4. Samples: 530648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:16:22,400][00882] Avg episode reward: [(0, '27.501')] |
|
[2025-02-18 13:16:27,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 5148672. Throughput: 0: 937.1. Samples: 535206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:16:27,401][00882] Avg episode reward: [(0, '27.443')] |
|
[2025-02-18 13:16:32,394][00882] Fps is (10 sec: 3277.8, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5169152. Throughput: 0: 937.1. Samples: 541434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:16:32,397][00882] Avg episode reward: [(0, '27.637')] |
|
[2025-02-18 13:16:33,439][19762] Updated weights for policy 0, policy_version 1264 (0.0022) |
|
[2025-02-18 13:16:37,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5189632. Throughput: 0: 939.5. Samples: 544692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:16:37,397][00882] Avg episode reward: [(0, '25.559')] |
|
[2025-02-18 13:16:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5206016. Throughput: 0: 939.8. Samples: 549378. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:16:42,400][00882] Avg episode reward: [(0, '26.294')] |
|
[2025-02-18 13:16:44,572][19762] Updated weights for policy 0, policy_version 1274 (0.0024) |
|
[2025-02-18 13:16:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 5226496. Throughput: 0: 947.3. Samples: 555734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:16:47,401][00882] Avg episode reward: [(0, '26.277')] |
|
[2025-02-18 13:16:47,409][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001276_5226496.pth... |
|
[2025-02-18 13:16:47,542][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001050_4300800.pth |
|
[2025-02-18 13:16:52,395][00882] Fps is (10 sec: 4095.5, 60 sec: 3754.6, 300 sec: 3873.8). Total num frames: 5246976. Throughput: 0: 944.9. Samples: 558898. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:16:52,400][00882] Avg episode reward: [(0, '27.182')] |
|
[2025-02-18 13:16:56,347][19762] Updated weights for policy 0, policy_version 1284 (0.0012) |
|
[2025-02-18 13:16:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5263360. Throughput: 0: 937.9. Samples: 563370. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:16:57,400][00882] Avg episode reward: [(0, '26.843')] |
|
[2025-02-18 13:17:02,394][00882] Fps is (10 sec: 3686.8, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 5283840. Throughput: 0: 937.8. Samples: 569642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:17:02,397][00882] Avg episode reward: [(0, '24.808')] |
|
[2025-02-18 13:17:05,843][19762] Updated weights for policy 0, policy_version 1294 (0.0015) |
|
[2025-02-18 13:17:07,398][00882] Fps is (10 sec: 4094.4, 60 sec: 3754.4, 300 sec: 3873.8). Total num frames: 5304320. Throughput: 0: 939.1. Samples: 572908. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:17:07,400][00882] Avg episode reward: [(0, '27.169')] |
|
[2025-02-18 13:17:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 5320704. Throughput: 0: 938.9. Samples: 577456. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:17:12,403][00882] Avg episode reward: [(0, '27.007')] |
|
[2025-02-18 13:17:16,824][19762] Updated weights for policy 0, policy_version 1304 (0.0014) |
|
[2025-02-18 13:17:17,394][00882] Fps is (10 sec: 3687.8, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 5341184. Throughput: 0: 951.6. Samples: 584258. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:17:17,401][00882] Avg episode reward: [(0, '26.424')] |
|
[2025-02-18 13:17:22,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3754.9, 300 sec: 3873.8). Total num frames: 5361664. Throughput: 0: 954.2. Samples: 587632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:17:22,396][00882] Avg episode reward: [(0, '26.401')] |
|
[2025-02-18 13:17:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 5378048. Throughput: 0: 951.5. Samples: 592194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:17:27,402][00882] Avg episode reward: [(0, '28.098')] |
|
[2025-02-18 13:17:28,015][19762] Updated weights for policy 0, policy_version 1314 (0.0019) |
|
[2025-02-18 13:17:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 5398528. Throughput: 0: 957.1. Samples: 598804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:17:32,404][00882] Avg episode reward: [(0, '29.215')] |
|
[2025-02-18 13:17:37,399][00882] Fps is (10 sec: 4093.9, 60 sec: 3822.6, 300 sec: 3873.8). Total num frames: 5419008. Throughput: 0: 961.3. Samples: 602160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:17:37,402][00882] Avg episode reward: [(0, '28.217')] |
|
[2025-02-18 13:17:37,793][19762] Updated weights for policy 0, policy_version 1324 (0.0018) |
|
[2025-02-18 13:17:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 5435392. Throughput: 0: 964.4. Samples: 606770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:17:42,398][00882] Avg episode reward: [(0, '27.603')] |
|
[2025-02-18 13:17:47,394][00882] Fps is (10 sec: 4098.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5459968. Throughput: 0: 976.7. Samples: 613594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:17:47,400][00882] Avg episode reward: [(0, '26.935')] |
|
[2025-02-18 13:17:48,087][19762] Updated weights for policy 0, policy_version 1334 (0.0018) |
|
[2025-02-18 13:17:52,395][00882] Fps is (10 sec: 4095.8, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 5476352. Throughput: 0: 979.1. Samples: 616966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:17:52,400][00882] Avg episode reward: [(0, '24.438')] |
|
[2025-02-18 13:17:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5496832. Throughput: 0: 980.2. Samples: 621566. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:17:57,397][00882] Avg episode reward: [(0, '22.148')] |
|
[2025-02-18 13:17:59,262][19762] Updated weights for policy 0, policy_version 1344 (0.0024) |
|
[2025-02-18 13:18:02,394][00882] Fps is (10 sec: 4096.2, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5517312. Throughput: 0: 977.1. Samples: 628228. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:18:02,400][00882] Avg episode reward: [(0, '20.986')] |
|
[2025-02-18 13:18:07,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3873.8). Total num frames: 5537792. Throughput: 0: 972.9. Samples: 631412. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:18:07,396][00882] Avg episode reward: [(0, '21.278')] |
|
[2025-02-18 13:18:10,067][19762] Updated weights for policy 0, policy_version 1354 (0.0014) |
|
[2025-02-18 13:18:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5554176. Throughput: 0: 975.2. Samples: 636080. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:18:12,400][00882] Avg episode reward: [(0, '21.902')] |
|
[2025-02-18 13:18:17,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 5578752. Throughput: 0: 978.1. Samples: 642820. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:18:17,396][00882] Avg episode reward: [(0, '22.815')] |
|
[2025-02-18 13:18:19,156][19762] Updated weights for policy 0, policy_version 1364 (0.0015) |
|
[2025-02-18 13:18:22,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5595136. Throughput: 0: 980.2. Samples: 646262. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:18:22,396][00882] Avg episode reward: [(0, '23.790')] |
|
[2025-02-18 13:18:27,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5611520. Throughput: 0: 979.4. Samples: 650844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:18:27,397][00882] Avg episode reward: [(0, '24.635')] |
|
[2025-02-18 13:18:30,201][19762] Updated weights for policy 0, policy_version 1374 (0.0014) |
|
[2025-02-18 13:18:32,395][00882] Fps is (10 sec: 4095.5, 60 sec: 3959.4, 300 sec: 3859.9). Total num frames: 5636096. Throughput: 0: 977.4. Samples: 657578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:18:32,400][00882] Avg episode reward: [(0, '25.269')] |
|
[2025-02-18 13:18:37,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3891.5, 300 sec: 3860.0). Total num frames: 5652480. Throughput: 0: 974.2. Samples: 660804. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:18:37,401][00882] Avg episode reward: [(0, '25.040')] |
|
[2025-02-18 13:18:41,324][19762] Updated weights for policy 0, policy_version 1384 (0.0028) |
|
[2025-02-18 13:18:42,394][00882] Fps is (10 sec: 3686.8, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 5672960. Throughput: 0: 977.0. Samples: 665530. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:18:42,396][00882] Avg episode reward: [(0, '26.197')] |
|
[2025-02-18 13:18:47,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5693440. Throughput: 0: 979.8. Samples: 672318. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:18:47,396][00882] Avg episode reward: [(0, '25.261')] |
|
[2025-02-18 13:18:47,462][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001391_5697536.pth... |
|
[2025-02-18 13:18:47,592][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001164_4767744.pth |
|
[2025-02-18 13:18:50,963][19762] Updated weights for policy 0, policy_version 1394 (0.0017) |
|
[2025-02-18 13:18:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5709824. Throughput: 0: 979.7. Samples: 675500. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2025-02-18 13:18:52,401][00882] Avg episode reward: [(0, '25.778')] |
|
[2025-02-18 13:18:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5730304. Throughput: 0: 983.3. Samples: 680328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:18:57,397][00882] Avg episode reward: [(0, '25.560')] |
|
[2025-02-18 13:19:01,518][19762] Updated weights for policy 0, policy_version 1404 (0.0018) |
|
[2025-02-18 13:19:02,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5750784. Throughput: 0: 978.6. Samples: 686856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:19:02,396][00882] Avg episode reward: [(0, '27.157')] |
|
[2025-02-18 13:19:07,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 5767168. Throughput: 0: 969.7. Samples: 689898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:19:07,400][00882] Avg episode reward: [(0, '25.402')] |
|
[2025-02-18 13:19:12,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5787648. Throughput: 0: 973.7. Samples: 694660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:19:12,401][00882] Avg episode reward: [(0, '25.093')] |
|
[2025-02-18 13:19:12,777][19762] Updated weights for policy 0, policy_version 1414 (0.0015) |
|
[2025-02-18 13:19:17,394][00882] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5812224. Throughput: 0: 972.5. Samples: 701340. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:19:17,398][00882] Avg episode reward: [(0, '25.760')] |
|
[2025-02-18 13:19:22,395][00882] Fps is (10 sec: 3686.2, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 5824512. Throughput: 0: 964.3. Samples: 704198. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:19:22,397][00882] Avg episode reward: [(0, '24.884')] |
|
[2025-02-18 13:19:23,919][19762] Updated weights for policy 0, policy_version 1424 (0.0014) |
|
[2025-02-18 13:19:27,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5844992. Throughput: 0: 970.8. Samples: 709214. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-18 13:19:27,397][00882] Avg episode reward: [(0, '25.231')] |
|
[2025-02-18 13:19:32,394][00882] Fps is (10 sec: 4505.9, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 5869568. Throughput: 0: 965.3. Samples: 715758. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:19:32,396][00882] Avg episode reward: [(0, '24.976')] |
|
[2025-02-18 13:19:33,210][19762] Updated weights for policy 0, policy_version 1434 (0.0018) |
|
[2025-02-18 13:19:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 5881856. Throughput: 0: 955.8. Samples: 718510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:19:37,397][00882] Avg episode reward: [(0, '26.211')] |
|
[2025-02-18 13:19:42,394][00882] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 5902336. Throughput: 0: 964.5. Samples: 723732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:19:42,397][00882] Avg episode reward: [(0, '27.039')] |
|
[2025-02-18 13:19:44,307][19762] Updated weights for policy 0, policy_version 1444 (0.0017) |
|
[2025-02-18 13:19:47,395][00882] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5926912. Throughput: 0: 969.2. Samples: 730472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:19:47,402][00882] Avg episode reward: [(0, '25.490')] |
|
[2025-02-18 13:19:52,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 5943296. Throughput: 0: 961.9. Samples: 733182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:19:52,406][00882] Avg episode reward: [(0, '26.259')] |
|
[2025-02-18 13:19:55,241][19762] Updated weights for policy 0, policy_version 1454 (0.0019) |
|
[2025-02-18 13:19:57,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5963776. Throughput: 0: 972.9. Samples: 738440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:19:57,396][00882] Avg episode reward: [(0, '25.065')] |
|
[2025-02-18 13:20:02,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 5984256. Throughput: 0: 972.4. Samples: 745098. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:20:02,396][00882] Avg episode reward: [(0, '26.388')] |
|
[2025-02-18 13:20:05,438][19762] Updated weights for policy 0, policy_version 1464 (0.0014) |
|
[2025-02-18 13:20:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6000640. Throughput: 0: 963.8. Samples: 747570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:20:07,401][00882] Avg episode reward: [(0, '26.617')] |
|
[2025-02-18 13:20:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 6021120. Throughput: 0: 976.0. Samples: 753134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:20:12,396][00882] Avg episode reward: [(0, '26.967')] |
|
[2025-02-18 13:20:15,313][19762] Updated weights for policy 0, policy_version 1474 (0.0013) |
|
[2025-02-18 13:20:17,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6045696. Throughput: 0: 979.3. Samples: 759826. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:20:17,401][00882] Avg episode reward: [(0, '28.615')] |
|
[2025-02-18 13:20:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 6057984. Throughput: 0: 969.9. Samples: 762154. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:20:22,400][00882] Avg episode reward: [(0, '27.894')] |
|
[2025-02-18 13:20:26,222][19762] Updated weights for policy 0, policy_version 1484 (0.0021) |
|
[2025-02-18 13:20:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 6082560. Throughput: 0: 980.9. Samples: 767872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:20:27,396][00882] Avg episode reward: [(0, '27.787')] |
|
[2025-02-18 13:20:32,394][00882] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6103040. Throughput: 0: 978.9. Samples: 774524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:20:32,402][00882] Avg episode reward: [(0, '25.057')] |
|
[2025-02-18 13:20:37,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 6115328. Throughput: 0: 966.0. Samples: 776650. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:20:37,396][00882] Avg episode reward: [(0, '26.671')] |
|
[2025-02-18 13:20:37,552][19762] Updated weights for policy 0, policy_version 1494 (0.0012) |
|
[2025-02-18 13:20:42,399][00882] Fps is (10 sec: 3684.5, 60 sec: 3959.1, 300 sec: 3859.9). Total num frames: 6139904. Throughput: 0: 977.6. Samples: 782436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:20:42,402][00882] Avg episode reward: [(0, '26.262')] |
|
[2025-02-18 13:20:46,658][19762] Updated weights for policy 0, policy_version 1504 (0.0017) |
|
[2025-02-18 13:20:47,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6160384. Throughput: 0: 979.6. Samples: 789180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:20:47,400][00882] Avg episode reward: [(0, '25.392')] |
|
[2025-02-18 13:20:47,414][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001504_6160384.pth... |
|
[2025-02-18 13:20:47,587][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001276_5226496.pth |
|
[2025-02-18 13:20:52,394][00882] Fps is (10 sec: 3688.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6176768. Throughput: 0: 968.5. Samples: 791154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:20:52,396][00882] Avg episode reward: [(0, '26.031')] |
|
[2025-02-18 13:20:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6197248. Throughput: 0: 978.9. Samples: 797186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:20:57,397][00882] Avg episode reward: [(0, '26.709')] |
|
[2025-02-18 13:20:57,656][19762] Updated weights for policy 0, policy_version 1514 (0.0013) |
|
[2025-02-18 13:21:02,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6217728. Throughput: 0: 973.7. Samples: 803644. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:21:02,399][00882] Avg episode reward: [(0, '27.383')] |
|
[2025-02-18 13:21:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 6234112. Throughput: 0: 967.2. Samples: 805678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:21:07,403][00882] Avg episode reward: [(0, '26.566')] |
|
[2025-02-18 13:21:08,681][19762] Updated weights for policy 0, policy_version 1524 (0.0019) |
|
[2025-02-18 13:21:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 6254592. Throughput: 0: 976.3. Samples: 811804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:21:12,401][00882] Avg episode reward: [(0, '26.620')] |
|
[2025-02-18 13:21:17,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 6275072. Throughput: 0: 973.2. Samples: 818318. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:21:17,404][00882] Avg episode reward: [(0, '27.132')] |
|
[2025-02-18 13:21:18,872][19762] Updated weights for policy 0, policy_version 1534 (0.0018) |
|
[2025-02-18 13:21:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 6291456. Throughput: 0: 969.0. Samples: 820256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:21:22,401][00882] Avg episode reward: [(0, '25.957')] |
|
[2025-02-18 13:21:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6316032. Throughput: 0: 980.8. Samples: 826566. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:21:27,400][00882] Avg episode reward: [(0, '25.274')] |
|
[2025-02-18 13:21:29,040][19762] Updated weights for policy 0, policy_version 1544 (0.0013) |
|
[2025-02-18 13:21:32,396][00882] Fps is (10 sec: 4504.8, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 6336512. Throughput: 0: 968.1. Samples: 832746. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:21:32,405][00882] Avg episode reward: [(0, '26.579')] |
|
[2025-02-18 13:21:37,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 6348800. Throughput: 0: 967.6. Samples: 834696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:21:37,396][00882] Avg episode reward: [(0, '25.707')] |
|
[2025-02-18 13:21:40,208][19762] Updated weights for policy 0, policy_version 1554 (0.0021) |
|
[2025-02-18 13:21:42,394][00882] Fps is (10 sec: 3687.0, 60 sec: 3891.6, 300 sec: 3887.7). Total num frames: 6373376. Throughput: 0: 969.2. Samples: 840798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:21:42,405][00882] Avg episode reward: [(0, '26.284')] |
|
[2025-02-18 13:21:47,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6393856. Throughput: 0: 963.1. Samples: 846984. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2025-02-18 13:21:47,398][00882] Avg episode reward: [(0, '26.983')] |
|
[2025-02-18 13:21:51,349][19762] Updated weights for policy 0, policy_version 1564 (0.0015) |
|
[2025-02-18 13:21:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6410240. Throughput: 0: 962.2. Samples: 848976. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:21:52,402][00882] Avg episode reward: [(0, '27.127')] |
|
[2025-02-18 13:21:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6430720. Throughput: 0: 968.7. Samples: 855396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:21:57,396][00882] Avg episode reward: [(0, '27.726')] |
|
[2025-02-18 13:22:00,594][19762] Updated weights for policy 0, policy_version 1574 (0.0015) |
|
[2025-02-18 13:22:02,396][00882] Fps is (10 sec: 4095.2, 60 sec: 3891.1, 300 sec: 3887.8). Total num frames: 6451200. Throughput: 0: 958.3. Samples: 861444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:22:02,401][00882] Avg episode reward: [(0, '26.956')] |
|
[2025-02-18 13:22:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6467584. Throughput: 0: 958.7. Samples: 863396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:22:07,398][00882] Avg episode reward: [(0, '28.631')] |
|
[2025-02-18 13:22:11,728][19762] Updated weights for policy 0, policy_version 1584 (0.0025) |
|
[2025-02-18 13:22:12,397][00882] Fps is (10 sec: 3686.1, 60 sec: 3891.0, 300 sec: 3887.7). Total num frames: 6488064. Throughput: 0: 964.1. Samples: 869954. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:22:12,402][00882] Avg episode reward: [(0, '28.626')] |
|
[2025-02-18 13:22:17,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6508544. Throughput: 0: 963.0. Samples: 876078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:22:17,397][00882] Avg episode reward: [(0, '28.020')] |
|
[2025-02-18 13:22:22,394][00882] Fps is (10 sec: 3687.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6524928. Throughput: 0: 964.4. Samples: 878096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:22:22,401][00882] Avg episode reward: [(0, '28.930')] |
|
[2025-02-18 13:22:22,707][19762] Updated weights for policy 0, policy_version 1594 (0.0021) |
|
[2025-02-18 13:22:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 6549504. Throughput: 0: 974.7. Samples: 884660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:22:27,401][00882] Avg episode reward: [(0, '29.021')] |
|
[2025-02-18 13:22:32,399][00882] Fps is (10 sec: 4093.8, 60 sec: 3822.7, 300 sec: 3887.7). Total num frames: 6565888. Throughput: 0: 968.6. Samples: 890576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:22:32,402][00882] Avg episode reward: [(0, '29.220')] |
|
[2025-02-18 13:22:32,999][19762] Updated weights for policy 0, policy_version 1604 (0.0014) |
|
[2025-02-18 13:22:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 6586368. Throughput: 0: 969.4. Samples: 892598. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:22:37,399][00882] Avg episode reward: [(0, '28.028')] |
|
[2025-02-18 13:22:42,394][00882] Fps is (10 sec: 4098.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6606848. Throughput: 0: 980.0. Samples: 899496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:22:42,396][00882] Avg episode reward: [(0, '27.609')] |
|
[2025-02-18 13:22:42,785][19762] Updated weights for policy 0, policy_version 1614 (0.0021) |
|
[2025-02-18 13:22:47,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 6627328. Throughput: 0: 979.6. Samples: 905526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:22:47,397][00882] Avg episode reward: [(0, '28.847')] |
|
[2025-02-18 13:22:47,407][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001618_6627328.pth... |
|
[2025-02-18 13:22:47,542][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001391_5697536.pth |
|
[2025-02-18 13:22:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6643712. Throughput: 0: 979.9. Samples: 907490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:22:52,401][00882] Avg episode reward: [(0, '29.879')] |
|
[2025-02-18 13:22:52,404][19749] Saving new best policy, reward=29.879! |
|
[2025-02-18 13:22:53,823][19762] Updated weights for policy 0, policy_version 1624 (0.0021) |
|
[2025-02-18 13:22:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6664192. Throughput: 0: 980.1. Samples: 914058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:22:57,396][00882] Avg episode reward: [(0, '29.281')] |
|
[2025-02-18 13:23:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3873.8). Total num frames: 6680576. Throughput: 0: 967.2. Samples: 919602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:23:02,398][00882] Avg episode reward: [(0, '30.049')] |
|
[2025-02-18 13:23:02,405][19749] Saving new best policy, reward=30.049! |
|
[2025-02-18 13:23:05,212][19762] Updated weights for policy 0, policy_version 1634 (0.0028) |
|
[2025-02-18 13:23:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6701056. Throughput: 0: 968.5. Samples: 921680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:23:07,397][00882] Avg episode reward: [(0, '29.151')] |
|
[2025-02-18 13:23:12,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3873.8). Total num frames: 6721536. Throughput: 0: 973.2. Samples: 928456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:23:12,396][00882] Avg episode reward: [(0, '28.744')] |
|
[2025-02-18 13:23:14,287][19762] Updated weights for policy 0, policy_version 1644 (0.0014) |
|
[2025-02-18 13:23:17,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6742016. Throughput: 0: 966.7. Samples: 934072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:23:17,396][00882] Avg episode reward: [(0, '28.605')] |
|
[2025-02-18 13:23:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6758400. Throughput: 0: 972.9. Samples: 936378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:23:22,396][00882] Avg episode reward: [(0, '27.687')] |
|
[2025-02-18 13:23:25,497][19762] Updated weights for policy 0, policy_version 1654 (0.0019) |
|
[2025-02-18 13:23:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6782976. Throughput: 0: 968.9. Samples: 943098. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:23:27,399][00882] Avg episode reward: [(0, '28.450')] |
|
[2025-02-18 13:23:32,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3891.5, 300 sec: 3887.7). Total num frames: 6799360. Throughput: 0: 958.6. Samples: 948662. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:23:32,399][00882] Avg episode reward: [(0, '28.290')] |
|
[2025-02-18 13:23:36,558][19762] Updated weights for policy 0, policy_version 1664 (0.0020) |
|
[2025-02-18 13:23:37,394][00882] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6815744. Throughput: 0: 962.5. Samples: 950804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:23:37,405][00882] Avg episode reward: [(0, '28.406')] |
|
[2025-02-18 13:23:42,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 6840320. Throughput: 0: 962.8. Samples: 957382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:23:42,396][00882] Avg episode reward: [(0, '29.352')] |
|
[2025-02-18 13:23:47,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 6852608. Throughput: 0: 957.2. Samples: 962674. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:23:47,398][00882] Avg episode reward: [(0, '29.286')] |
|
[2025-02-18 13:23:47,639][19762] Updated weights for policy 0, policy_version 1674 (0.0017) |
|
[2025-02-18 13:23:52,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6873088. Throughput: 0: 965.2. Samples: 965112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:23:52,402][00882] Avg episode reward: [(0, '29.722')] |
|
[2025-02-18 13:23:57,282][19762] Updated weights for policy 0, policy_version 1684 (0.0024) |
|
[2025-02-18 13:23:57,395][00882] Fps is (10 sec: 4505.1, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 6897664. Throughput: 0: 960.3. Samples: 971670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:23:57,398][00882] Avg episode reward: [(0, '29.453')] |
|
[2025-02-18 13:24:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6909952. Throughput: 0: 947.4. Samples: 976706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:24:02,396][00882] Avg episode reward: [(0, '29.666')] |
|
[2025-02-18 13:24:07,394][00882] Fps is (10 sec: 3277.2, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6930432. Throughput: 0: 950.4. Samples: 979148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:24:07,396][00882] Avg episode reward: [(0, '30.028')] |
|
[2025-02-18 13:24:08,846][19762] Updated weights for policy 0, policy_version 1694 (0.0017) |
|
[2025-02-18 13:24:12,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 6950912. Throughput: 0: 945.3. Samples: 985638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:24:12,398][00882] Avg episode reward: [(0, '28.809')] |
|
[2025-02-18 13:24:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 6967296. Throughput: 0: 937.5. Samples: 990848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:24:17,399][00882] Avg episode reward: [(0, '28.813')] |
|
[2025-02-18 13:24:20,098][19762] Updated weights for policy 0, policy_version 1704 (0.0031) |
|
[2025-02-18 13:24:22,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 6987776. Throughput: 0: 950.0. Samples: 993552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:24:22,397][00882] Avg episode reward: [(0, '28.664')] |
|
[2025-02-18 13:24:27,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 7008256. Throughput: 0: 950.1. Samples: 1000136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:24:27,396][00882] Avg episode reward: [(0, '27.526')] |
|
[2025-02-18 13:24:29,804][19762] Updated weights for policy 0, policy_version 1714 (0.0013) |
|
[2025-02-18 13:24:32,397][00882] Fps is (10 sec: 3685.5, 60 sec: 3754.5, 300 sec: 3873.8). Total num frames: 7024640. Throughput: 0: 946.6. Samples: 1005274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:24:32,404][00882] Avg episode reward: [(0, '25.492')] |
|
[2025-02-18 13:24:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 7045120. Throughput: 0: 949.9. Samples: 1007858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:24:37,401][00882] Avg episode reward: [(0, '25.418')] |
|
[2025-02-18 13:24:40,463][19762] Updated weights for policy 0, policy_version 1724 (0.0015) |
|
[2025-02-18 13:24:42,394][00882] Fps is (10 sec: 4506.8, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 7069696. Throughput: 0: 953.2. Samples: 1014564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:24:42,401][00882] Avg episode reward: [(0, '26.320')] |
|
[2025-02-18 13:24:47,397][00882] Fps is (10 sec: 3685.3, 60 sec: 3822.7, 300 sec: 3859.9). Total num frames: 7081984. Throughput: 0: 955.8. Samples: 1019722. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:24:47,400][00882] Avg episode reward: [(0, '26.781')] |
|
[2025-02-18 13:24:47,415][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001729_7081984.pth... |
|
[2025-02-18 13:24:47,568][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001504_6160384.pth |
|
[2025-02-18 13:24:51,729][19762] Updated weights for policy 0, policy_version 1734 (0.0032) |
|
[2025-02-18 13:24:52,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 7102464. Throughput: 0: 961.5. Samples: 1022416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:24:52,397][00882] Avg episode reward: [(0, '25.909')] |
|
[2025-02-18 13:24:57,394][00882] Fps is (10 sec: 4507.0, 60 sec: 3823.0, 300 sec: 3873.8). Total num frames: 7127040. Throughput: 0: 967.0. Samples: 1029154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:24:57,398][00882] Avg episode reward: [(0, '25.728')] |
|
[2025-02-18 13:25:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 7139328. Throughput: 0: 959.8. Samples: 1034038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:25:02,396][00882] Avg episode reward: [(0, '27.939')] |
|
[2025-02-18 13:25:02,913][19762] Updated weights for policy 0, policy_version 1744 (0.0024) |
|
[2025-02-18 13:25:07,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 7159808. Throughput: 0: 961.5. Samples: 1036820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:25:07,397][00882] Avg episode reward: [(0, '30.006')] |
|
[2025-02-18 13:25:12,039][19762] Updated weights for policy 0, policy_version 1754 (0.0022) |
|
[2025-02-18 13:25:12,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7184384. Throughput: 0: 962.6. Samples: 1043454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:25:12,397][00882] Avg episode reward: [(0, '31.155')] |
|
[2025-02-18 13:25:12,400][19749] Saving new best policy, reward=31.155! |
|
[2025-02-18 13:25:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 7196672. Throughput: 0: 957.2. Samples: 1048344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:25:17,402][00882] Avg episode reward: [(0, '30.064')] |
|
[2025-02-18 13:25:22,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7217152. Throughput: 0: 962.8. Samples: 1051186. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:25:22,406][00882] Avg episode reward: [(0, '31.593')] |
|
[2025-02-18 13:25:22,407][19749] Saving new best policy, reward=31.593! |
|
[2025-02-18 13:25:23,405][19762] Updated weights for policy 0, policy_version 1764 (0.0021) |
|
[2025-02-18 13:25:27,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7241728. Throughput: 0: 962.5. Samples: 1057878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:25:27,401][00882] Avg episode reward: [(0, '31.061')] |
|
[2025-02-18 13:25:32,395][00882] Fps is (10 sec: 3686.2, 60 sec: 3823.1, 300 sec: 3860.0). Total num frames: 7254016. Throughput: 0: 956.6. Samples: 1062766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:25:32,397][00882] Avg episode reward: [(0, '30.711')] |
|
[2025-02-18 13:25:34,548][19762] Updated weights for policy 0, policy_version 1774 (0.0022) |
|
[2025-02-18 13:25:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7278592. Throughput: 0: 962.3. Samples: 1065718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:25:37,401][00882] Avg episode reward: [(0, '29.645')] |
|
[2025-02-18 13:25:42,394][00882] Fps is (10 sec: 4505.9, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 7299072. Throughput: 0: 962.7. Samples: 1072474. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:25:42,397][00882] Avg episode reward: [(0, '28.720')] |
|
[2025-02-18 13:25:44,384][19762] Updated weights for policy 0, policy_version 1784 (0.0018) |
|
[2025-02-18 13:25:47,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3846.1). Total num frames: 7311360. Throughput: 0: 960.6. Samples: 1077266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:25:47,400][00882] Avg episode reward: [(0, '29.337')] |
|
[2025-02-18 13:25:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7335936. Throughput: 0: 964.8. Samples: 1080234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:25:52,397][00882] Avg episode reward: [(0, '28.580')] |
|
[2025-02-18 13:25:54,923][19762] Updated weights for policy 0, policy_version 1794 (0.0013) |
|
[2025-02-18 13:25:57,396][00882] Fps is (10 sec: 4504.6, 60 sec: 3822.8, 300 sec: 3859.9). Total num frames: 7356416. Throughput: 0: 965.6. Samples: 1086906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:25:57,398][00882] Avg episode reward: [(0, '29.961')] |
|
[2025-02-18 13:26:02,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7368704. Throughput: 0: 958.4. Samples: 1091470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:02,401][00882] Avg episode reward: [(0, '29.181')] |
|
[2025-02-18 13:26:06,312][19762] Updated weights for policy 0, policy_version 1804 (0.0026) |
|
[2025-02-18 13:26:07,394][00882] Fps is (10 sec: 3687.2, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7393280. Throughput: 0: 962.2. Samples: 1094486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:26:07,399][00882] Avg episode reward: [(0, '29.477')] |
|
[2025-02-18 13:26:12,395][00882] Fps is (10 sec: 4505.1, 60 sec: 3822.9, 300 sec: 3859.9). Total num frames: 7413760. Throughput: 0: 961.7. Samples: 1101154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:12,397][00882] Avg episode reward: [(0, '28.568')] |
|
[2025-02-18 13:26:17,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7426048. Throughput: 0: 956.9. Samples: 1105826. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:17,396][00882] Avg episode reward: [(0, '28.658')] |
|
[2025-02-18 13:26:17,603][19762] Updated weights for policy 0, policy_version 1814 (0.0028) |
|
[2025-02-18 13:26:22,394][00882] Fps is (10 sec: 3686.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7450624. Throughput: 0: 961.3. Samples: 1108976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:22,402][00882] Avg episode reward: [(0, '28.158')] |
|
[2025-02-18 13:26:26,704][19762] Updated weights for policy 0, policy_version 1824 (0.0018) |
|
[2025-02-18 13:26:27,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7471104. Throughput: 0: 961.3. Samples: 1115734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:27,399][00882] Avg episode reward: [(0, '25.610')] |
|
[2025-02-18 13:26:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7487488. Throughput: 0: 958.3. Samples: 1120390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:32,405][00882] Avg episode reward: [(0, '25.574')] |
|
[2025-02-18 13:26:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7507968. Throughput: 0: 960.8. Samples: 1123468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:26:37,396][00882] Avg episode reward: [(0, '26.286')] |
|
[2025-02-18 13:26:37,986][19762] Updated weights for policy 0, policy_version 1834 (0.0012) |
|
[2025-02-18 13:26:42,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7528448. Throughput: 0: 960.9. Samples: 1130144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:42,403][00882] Avg episode reward: [(0, '27.789')] |
|
[2025-02-18 13:26:47,397][00882] Fps is (10 sec: 3685.2, 60 sec: 3891.0, 300 sec: 3846.0). Total num frames: 7544832. Throughput: 0: 963.0. Samples: 1134810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:26:47,401][00882] Avg episode reward: [(0, '27.093')] |
|
[2025-02-18 13:26:47,413][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001842_7544832.pth... |
|
[2025-02-18 13:26:47,537][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001618_6627328.pth |
|
[2025-02-18 13:26:49,057][19762] Updated weights for policy 0, policy_version 1844 (0.0019) |
|
[2025-02-18 13:26:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7565312. Throughput: 0: 967.4. Samples: 1138018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:52,396][00882] Avg episode reward: [(0, '27.226')] |
|
[2025-02-18 13:26:57,394][00882] Fps is (10 sec: 4097.3, 60 sec: 3823.1, 300 sec: 3846.1). Total num frames: 7585792. Throughput: 0: 968.2. Samples: 1144722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:26:57,398][00882] Avg episode reward: [(0, '28.112')] |
|
[2025-02-18 13:26:58,966][19762] Updated weights for policy 0, policy_version 1854 (0.0017) |
|
[2025-02-18 13:27:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7602176. Throughput: 0: 964.2. Samples: 1149216. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:27:02,402][00882] Avg episode reward: [(0, '28.682')] |
|
[2025-02-18 13:27:07,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7622656. Throughput: 0: 966.1. Samples: 1152452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:27:07,400][00882] Avg episode reward: [(0, '28.113')] |
|
[2025-02-18 13:27:09,498][19762] Updated weights for policy 0, policy_version 1864 (0.0028) |
|
[2025-02-18 13:27:12,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 7647232. Throughput: 0: 964.2. Samples: 1159122. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:27:12,400][00882] Avg episode reward: [(0, '27.462')] |
|
[2025-02-18 13:27:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7659520. Throughput: 0: 962.5. Samples: 1163702. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:27:17,396][00882] Avg episode reward: [(0, '28.566')] |
|
[2025-02-18 13:27:20,642][19762] Updated weights for policy 0, policy_version 1874 (0.0030) |
|
[2025-02-18 13:27:22,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 7680000. Throughput: 0: 967.6. Samples: 1167010. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:27:22,396][00882] Avg episode reward: [(0, '28.701')] |
|
[2025-02-18 13:27:27,394][00882] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7704576. Throughput: 0: 969.5. Samples: 1173770. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2025-02-18 13:27:27,403][00882] Avg episode reward: [(0, '29.244')] |
|
[2025-02-18 13:27:31,752][19762] Updated weights for policy 0, policy_version 1884 (0.0019) |
|
[2025-02-18 13:27:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 7716864. Throughput: 0: 967.3. Samples: 1178336. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:27:32,397][00882] Avg episode reward: [(0, '29.275')] |
|
[2025-02-18 13:27:37,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7741440. Throughput: 0: 964.2. Samples: 1181406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:27:37,401][00882] Avg episode reward: [(0, '30.169')] |
|
[2025-02-18 13:27:41,081][19762] Updated weights for policy 0, policy_version 1894 (0.0015) |
|
[2025-02-18 13:27:42,399][00882] Fps is (10 sec: 4094.1, 60 sec: 3822.6, 300 sec: 3832.1). Total num frames: 7757824. Throughput: 0: 964.2. Samples: 1188116. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-18 13:27:42,402][00882] Avg episode reward: [(0, '30.462')] |
|
[2025-02-18 13:27:47,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.1, 300 sec: 3832.2). Total num frames: 7774208. Throughput: 0: 965.8. Samples: 1192678. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:27:47,402][00882] Avg episode reward: [(0, '30.092')] |
|
[2025-02-18 13:27:52,077][19762] Updated weights for policy 0, policy_version 1904 (0.0030) |
|
[2025-02-18 13:27:52,394][00882] Fps is (10 sec: 4097.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7798784. Throughput: 0: 967.9. Samples: 1196008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:27:52,402][00882] Avg episode reward: [(0, '31.081')] |
|
[2025-02-18 13:27:57,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 7819264. Throughput: 0: 969.7. Samples: 1202760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:27:57,402][00882] Avg episode reward: [(0, '32.172')] |
|
[2025-02-18 13:27:57,408][19749] Saving new best policy, reward=32.172! |
|
[2025-02-18 13:28:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7835648. Throughput: 0: 968.3. Samples: 1207276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:28:02,396][00882] Avg episode reward: [(0, '31.380')] |
|
[2025-02-18 13:28:03,321][19762] Updated weights for policy 0, policy_version 1914 (0.0014) |
|
[2025-02-18 13:28:07,400][00882] Fps is (10 sec: 3684.2, 60 sec: 3890.8, 300 sec: 3846.0). Total num frames: 7856128. Throughput: 0: 965.9. Samples: 1210482. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:28:07,402][00882] Avg episode reward: [(0, '33.628')] |
|
[2025-02-18 13:28:07,414][19749] Saving new best policy, reward=33.628! |
|
[2025-02-18 13:28:12,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7876608. Throughput: 0: 962.4. Samples: 1217078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:28:12,396][00882] Avg episode reward: [(0, '32.900')] |
|
[2025-02-18 13:28:13,793][19762] Updated weights for policy 0, policy_version 1924 (0.0020) |
|
[2025-02-18 13:28:17,394][00882] Fps is (10 sec: 3688.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 7892992. Throughput: 0: 963.0. Samples: 1221670. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2025-02-18 13:28:17,396][00882] Avg episode reward: [(0, '32.560')] |
|
[2025-02-18 13:28:22,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 7913472. Throughput: 0: 971.1. Samples: 1225104. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:28:22,396][00882] Avg episode reward: [(0, '32.014')] |
|
[2025-02-18 13:28:23,497][19762] Updated weights for policy 0, policy_version 1934 (0.0013) |
|
[2025-02-18 13:28:27,394][00882] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 7933952. Throughput: 0: 969.4. Samples: 1231736. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:28:27,399][00882] Avg episode reward: [(0, '29.635')] |
|
[2025-02-18 13:28:32,401][00882] Fps is (10 sec: 3684.0, 60 sec: 3890.8, 300 sec: 3846.0). Total num frames: 7950336. Throughput: 0: 970.3. Samples: 1236348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:28:32,406][00882] Avg episode reward: [(0, '31.443')] |
|
[2025-02-18 13:28:34,884][19762] Updated weights for policy 0, policy_version 1944 (0.0017) |
|
[2025-02-18 13:28:37,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 7970816. Throughput: 0: 966.2. Samples: 1239486. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:28:37,400][00882] Avg episode reward: [(0, '30.417')] |
|
[2025-02-18 13:28:42,394][00882] Fps is (10 sec: 4098.6, 60 sec: 3891.5, 300 sec: 3860.0). Total num frames: 7991296. Throughput: 0: 961.2. Samples: 1246016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:28:42,397][00882] Avg episode reward: [(0, '29.334')] |
|
[2025-02-18 13:28:45,979][19762] Updated weights for policy 0, policy_version 1954 (0.0014) |
|
[2025-02-18 13:28:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8007680. Throughput: 0: 966.5. Samples: 1250770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:28:47,397][00882] Avg episode reward: [(0, '31.741')] |
|
[2025-02-18 13:28:47,404][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth... |
|
[2025-02-18 13:28:47,527][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001729_7081984.pth |
|
[2025-02-18 13:28:52,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 8028160. Throughput: 0: 966.4. Samples: 1253964. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:28:52,402][00882] Avg episode reward: [(0, '32.537')] |
|
[2025-02-18 13:28:55,267][19762] Updated weights for policy 0, policy_version 1964 (0.0042) |
|
[2025-02-18 13:28:57,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8048640. Throughput: 0: 963.2. Samples: 1260422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:28:57,400][00882] Avg episode reward: [(0, '32.432')] |
|
[2025-02-18 13:29:02,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8065024. Throughput: 0: 965.3. Samples: 1265110. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:29:02,403][00882] Avg episode reward: [(0, '33.364')] |
|
[2025-02-18 13:29:06,408][19762] Updated weights for policy 0, policy_version 1974 (0.0021) |
|
[2025-02-18 13:29:07,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.6, 300 sec: 3860.0). Total num frames: 8089600. Throughput: 0: 959.6. Samples: 1268286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:29:07,397][00882] Avg episode reward: [(0, '32.560')] |
|
[2025-02-18 13:29:12,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8105984. Throughput: 0: 952.7. Samples: 1274608. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:29:12,399][00882] Avg episode reward: [(0, '32.709')] |
|
[2025-02-18 13:29:17,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3846.1). Total num frames: 8122368. Throughput: 0: 959.4. Samples: 1279516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:29:17,397][00882] Avg episode reward: [(0, '31.320')] |
|
[2025-02-18 13:29:17,618][19762] Updated weights for policy 0, policy_version 1984 (0.0021) |
|
[2025-02-18 13:29:22,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8146944. Throughput: 0: 963.4. Samples: 1282840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:29:22,402][00882] Avg episode reward: [(0, '29.178')] |
|
[2025-02-18 13:29:27,395][00882] Fps is (10 sec: 4095.5, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8163328. Throughput: 0: 956.8. Samples: 1289072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:29:27,398][00882] Avg episode reward: [(0, '29.853')] |
|
[2025-02-18 13:29:28,341][19762] Updated weights for policy 0, policy_version 1994 (0.0016) |
|
[2025-02-18 13:29:32,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3823.4, 300 sec: 3846.1). Total num frames: 8179712. Throughput: 0: 960.9. Samples: 1294012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:29:32,399][00882] Avg episode reward: [(0, '29.474')] |
|
[2025-02-18 13:29:37,394][00882] Fps is (10 sec: 3686.9, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 8200192. Throughput: 0: 957.7. Samples: 1297062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:29:37,399][00882] Avg episode reward: [(0, '29.113')] |
|
[2025-02-18 13:29:38,398][19762] Updated weights for policy 0, policy_version 2004 (0.0020) |
|
[2025-02-18 13:29:42,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8220672. Throughput: 0: 950.8. Samples: 1303206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:29:42,396][00882] Avg episode reward: [(0, '29.326')] |
|
[2025-02-18 13:29:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8237056. Throughput: 0: 957.9. Samples: 1308214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:29:47,398][00882] Avg episode reward: [(0, '28.616')] |
|
[2025-02-18 13:29:49,473][19762] Updated weights for policy 0, policy_version 2014 (0.0027) |
|
[2025-02-18 13:29:52,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8261632. Throughput: 0: 962.5. Samples: 1311598. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:29:52,402][00882] Avg episode reward: [(0, '29.217')] |
|
[2025-02-18 13:29:57,395][00882] Fps is (10 sec: 4095.5, 60 sec: 3822.9, 300 sec: 3859.9). Total num frames: 8278016. Throughput: 0: 958.8. Samples: 1317754. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:29:57,401][00882] Avg episode reward: [(0, '29.678')] |
|
[2025-02-18 13:30:00,495][19762] Updated weights for policy 0, policy_version 2024 (0.0028) |
|
[2025-02-18 13:30:02,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8298496. Throughput: 0: 964.7. Samples: 1322928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:30:02,401][00882] Avg episode reward: [(0, '29.080')] |
|
[2025-02-18 13:30:07,394][00882] Fps is (10 sec: 4096.5, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8318976. Throughput: 0: 961.6. Samples: 1326114. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:30:07,401][00882] Avg episode reward: [(0, '29.617')] |
|
[2025-02-18 13:30:10,297][19762] Updated weights for policy 0, policy_version 2034 (0.0027) |
|
[2025-02-18 13:30:12,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8335360. Throughput: 0: 955.2. Samples: 1332056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:30:12,402][00882] Avg episode reward: [(0, '29.871')] |
|
[2025-02-18 13:30:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8355840. Throughput: 0: 962.8. Samples: 1337336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:30:17,396][00882] Avg episode reward: [(0, '30.035')] |
|
[2025-02-18 13:30:20,931][19762] Updated weights for policy 0, policy_version 2044 (0.0022) |
|
[2025-02-18 13:30:22,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8376320. Throughput: 0: 971.4. Samples: 1340776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:30:22,402][00882] Avg episode reward: [(0, '29.857')] |
|
[2025-02-18 13:30:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 8392704. Throughput: 0: 965.6. Samples: 1346660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:30:27,399][00882] Avg episode reward: [(0, '30.510')] |
|
[2025-02-18 13:30:31,881][19762] Updated weights for policy 0, policy_version 2054 (0.0014) |
|
[2025-02-18 13:30:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8413184. Throughput: 0: 975.3. Samples: 1352104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:30:32,400][00882] Avg episode reward: [(0, '30.073')] |
|
[2025-02-18 13:30:37,394][00882] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8433664. Throughput: 0: 970.5. Samples: 1355270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:30:37,397][00882] Avg episode reward: [(0, '29.686')] |
|
[2025-02-18 13:30:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 8450048. Throughput: 0: 962.4. Samples: 1361060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:30:42,400][00882] Avg episode reward: [(0, '28.666')] |
|
[2025-02-18 13:30:42,832][19762] Updated weights for policy 0, policy_version 2064 (0.0025) |
|
[2025-02-18 13:30:47,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8470528. Throughput: 0: 969.4. Samples: 1366550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:30:47,401][00882] Avg episode reward: [(0, '27.442')] |
|
[2025-02-18 13:30:47,411][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002068_8470528.pth... |
|
[2025-02-18 13:30:47,553][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001842_7544832.pth |
|
[2025-02-18 13:30:52,307][19762] Updated weights for policy 0, policy_version 2074 (0.0019) |
|
[2025-02-18 13:30:52,397][00882] Fps is (10 sec: 4504.2, 60 sec: 3891.0, 300 sec: 3859.9). Total num frames: 8495104. Throughput: 0: 970.6. Samples: 1369792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:30:52,402][00882] Avg episode reward: [(0, '25.873')] |
|
[2025-02-18 13:30:57,397][00882] Fps is (10 sec: 3686.3, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 8507392. Throughput: 0: 966.0. Samples: 1375526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:30:57,402][00882] Avg episode reward: [(0, '26.267')] |
|
[2025-02-18 13:31:02,394][00882] Fps is (10 sec: 3277.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 8527872. Throughput: 0: 973.1. Samples: 1381124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:31:02,401][00882] Avg episode reward: [(0, '26.406')] |
|
[2025-02-18 13:31:03,426][19762] Updated weights for policy 0, policy_version 2084 (0.0030) |
|
[2025-02-18 13:31:07,394][00882] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8552448. Throughput: 0: 967.5. Samples: 1384312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:31:07,401][00882] Avg episode reward: [(0, '27.039')] |
|
[2025-02-18 13:31:12,396][00882] Fps is (10 sec: 3685.6, 60 sec: 3822.8, 300 sec: 3859.9). Total num frames: 8564736. Throughput: 0: 961.9. Samples: 1389946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:31:12,398][00882] Avg episode reward: [(0, '27.100')] |
|
[2025-02-18 13:31:14,662][19762] Updated weights for policy 0, policy_version 2094 (0.0015) |
|
[2025-02-18 13:31:17,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8589312. Throughput: 0: 967.4. Samples: 1395638. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:31:17,397][00882] Avg episode reward: [(0, '27.911')] |
|
[2025-02-18 13:31:22,394][00882] Fps is (10 sec: 4506.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8609792. Throughput: 0: 972.5. Samples: 1399032. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:31:22,397][00882] Avg episode reward: [(0, '29.980')] |
|
[2025-02-18 13:31:24,200][19762] Updated weights for policy 0, policy_version 2104 (0.0016) |
|
[2025-02-18 13:31:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8626176. Throughput: 0: 964.5. Samples: 1404462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:31:27,397][00882] Avg episode reward: [(0, '28.913')] |
|
[2025-02-18 13:31:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8646656. Throughput: 0: 973.6. Samples: 1410362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:31:32,397][00882] Avg episode reward: [(0, '28.261')] |
|
[2025-02-18 13:31:34,953][19762] Updated weights for policy 0, policy_version 2114 (0.0015) |
|
[2025-02-18 13:31:37,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8667136. Throughput: 0: 971.8. Samples: 1413520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:31:37,401][00882] Avg episode reward: [(0, '28.456')] |
|
[2025-02-18 13:31:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8683520. Throughput: 0: 961.5. Samples: 1418794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:31:42,400][00882] Avg episode reward: [(0, '27.447')] |
|
[2025-02-18 13:31:46,103][19762] Updated weights for policy 0, policy_version 2124 (0.0026) |
|
[2025-02-18 13:31:47,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8704000. Throughput: 0: 969.1. Samples: 1424734. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:31:47,400][00882] Avg episode reward: [(0, '27.256')] |
|
[2025-02-18 13:31:52,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3823.1, 300 sec: 3860.0). Total num frames: 8724480. Throughput: 0: 972.7. Samples: 1428084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:31:52,396][00882] Avg episode reward: [(0, '26.143')] |
|
[2025-02-18 13:31:56,984][19762] Updated weights for policy 0, policy_version 2134 (0.0020) |
|
[2025-02-18 13:31:57,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8740864. Throughput: 0: 964.4. Samples: 1433344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2025-02-18 13:31:57,396][00882] Avg episode reward: [(0, '27.268')] |
|
[2025-02-18 13:32:02,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 8765440. Throughput: 0: 975.1. Samples: 1439516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:32:02,396][00882] Avg episode reward: [(0, '27.534')] |
|
[2025-02-18 13:32:06,140][19762] Updated weights for policy 0, policy_version 2144 (0.0013) |
|
[2025-02-18 13:32:07,397][00882] Fps is (10 sec: 4504.2, 60 sec: 3891.0, 300 sec: 3859.9). Total num frames: 8785920. Throughput: 0: 971.5. Samples: 1442752. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2025-02-18 13:32:07,404][00882] Avg episode reward: [(0, '26.156')] |
|
[2025-02-18 13:32:12,394][00882] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 8798208. Throughput: 0: 964.2. Samples: 1447850. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:32:12,396][00882] Avg episode reward: [(0, '27.220')] |
|
[2025-02-18 13:32:17,324][19762] Updated weights for policy 0, policy_version 2154 (0.0034) |
|
[2025-02-18 13:32:17,394][00882] Fps is (10 sec: 3687.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 8822784. Throughput: 0: 973.2. Samples: 1454156. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2025-02-18 13:32:17,401][00882] Avg episode reward: [(0, '27.863')] |
|
[2025-02-18 13:32:22,394][00882] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8843264. Throughput: 0: 977.6. Samples: 1457514. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:32:22,400][00882] Avg episode reward: [(0, '29.836')] |
|
[2025-02-18 13:32:27,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 8859648. Throughput: 0: 971.7. Samples: 1462520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:32:27,396][00882] Avg episode reward: [(0, '28.467')] |
|
[2025-02-18 13:32:28,145][19762] Updated weights for policy 0, policy_version 2164 (0.0016) |
|
[2025-02-18 13:32:32,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8880128. Throughput: 0: 982.8. Samples: 1468962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:32:32,396][00882] Avg episode reward: [(0, '27.228')] |
|
[2025-02-18 13:32:37,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 8900608. Throughput: 0: 981.8. Samples: 1472264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:32:37,397][00882] Avg episode reward: [(0, '28.767')] |
|
[2025-02-18 13:32:37,933][19762] Updated weights for policy 0, policy_version 2174 (0.0019) |
|
[2025-02-18 13:32:42,394][00882] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 8916992. Throughput: 0: 970.8. Samples: 1477028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:32:42,397][00882] Avg episode reward: [(0, '28.606')] |
|
[2025-02-18 13:32:47,394][00882] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8937472. Throughput: 0: 976.7. Samples: 1483468. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2025-02-18 13:32:47,401][00882] Avg episode reward: [(0, '28.388')] |
|
[2025-02-18 13:32:47,412][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002182_8937472.pth... |
|
[2025-02-18 13:32:47,552][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth |
|
[2025-02-18 13:32:48,491][19762] Updated weights for policy 0, policy_version 2184 (0.0026) |
|
[2025-02-18 13:32:52,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8957952. Throughput: 0: 977.4. Samples: 1486730. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2025-02-18 13:32:52,397][00882] Avg episode reward: [(0, '28.144')] |
|
[2025-02-18 13:32:57,394][00882] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 8974336. Throughput: 0: 971.0. Samples: 1491546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2025-02-18 13:32:57,396][00882] Avg episode reward: [(0, '28.603')] |
|
[2025-02-18 13:32:59,551][19762] Updated weights for policy 0, policy_version 2194 (0.0025) |
|
[2025-02-18 13:33:02,394][00882] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 8998912. Throughput: 0: 976.6. Samples: 1498104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2025-02-18 13:33:02,400][00882] Avg episode reward: [(0, '30.579')] |
|
[2025-02-18 13:33:04,130][19749] Stopping Batcher_0... |
|
[2025-02-18 13:33:04,131][00882] Component Batcher_0 stopped! |
|
[2025-02-18 13:33:04,131][19749] Loop batcher_evt_loop terminating... |
|
[2025-02-18 13:33:04,133][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002199_9007104.pth... |
|
[2025-02-18 13:33:04,198][19762] Weights refcount: 2 0 |
|
[2025-02-18 13:33:04,202][00882] Component InferenceWorker_p0-w0 stopped! |
|
[2025-02-18 13:33:04,201][19762] Stopping InferenceWorker_p0-w0... |
|
[2025-02-18 13:33:04,206][19762] Loop inference_proc0-0_evt_loop terminating... |
|
[2025-02-18 13:33:04,268][19749] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002068_8470528.pth |
|
[2025-02-18 13:33:04,296][19749] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002199_9007104.pth... |
|
[2025-02-18 13:33:04,484][00882] Component LearnerWorker_p0 stopped! |
|
[2025-02-18 13:33:04,488][19749] Stopping LearnerWorker_p0... |
|
[2025-02-18 13:33:04,489][19749] Loop learner_proc0_evt_loop terminating... |
|
[2025-02-18 13:33:04,581][19767] Stopping RolloutWorker_w4... |
|
[2025-02-18 13:33:04,581][00882] Component RolloutWorker_w4 stopped! |
|
[2025-02-18 13:33:04,589][19765] Stopping RolloutWorker_w2... |
|
[2025-02-18 13:33:04,582][19767] Loop rollout_proc4_evt_loop terminating... |
|
[2025-02-18 13:33:04,589][00882] Component RolloutWorker_w2 stopped! |
|
[2025-02-18 13:33:04,590][19765] Loop rollout_proc2_evt_loop terminating... |
|
[2025-02-18 13:33:04,600][19773] Stopping RolloutWorker_w6... |
|
[2025-02-18 13:33:04,600][00882] Component RolloutWorker_w6 stopped! |
|
[2025-02-18 13:33:04,601][19773] Loop rollout_proc6_evt_loop terminating... |
|
[2025-02-18 13:33:04,623][19763] Stopping RolloutWorker_w0... |
|
[2025-02-18 13:33:04,623][00882] Component RolloutWorker_w0 stopped! |
|
[2025-02-18 13:33:04,636][19763] Loop rollout_proc0_evt_loop terminating... |
|
[2025-02-18 13:33:04,639][00882] Component RolloutWorker_w1 stopped! |
|
[2025-02-18 13:33:04,642][19764] Stopping RolloutWorker_w1... |
|
[2025-02-18 13:33:04,643][19764] Loop rollout_proc1_evt_loop terminating... |
|
[2025-02-18 13:33:04,646][00882] Component RolloutWorker_w5 stopped! |
|
[2025-02-18 13:33:04,649][19772] Stopping RolloutWorker_w5... |
|
[2025-02-18 13:33:04,649][19772] Loop rollout_proc5_evt_loop terminating... |
|
[2025-02-18 13:33:04,656][00882] Component RolloutWorker_w7 stopped! |
|
[2025-02-18 13:33:04,658][19774] Stopping RolloutWorker_w7... |
|
[2025-02-18 13:33:04,666][19774] Loop rollout_proc7_evt_loop terminating... |
|
[2025-02-18 13:33:04,668][00882] Component RolloutWorker_w3 stopped! |
|
[2025-02-18 13:33:04,671][00882] Waiting for process learner_proc0 to stop... |
|
[2025-02-18 13:33:04,673][19766] Stopping RolloutWorker_w3... |
|
[2025-02-18 13:33:04,674][19766] Loop rollout_proc3_evt_loop terminating... |
|
[2025-02-18 13:33:06,497][00882] Waiting for process inference_proc0-0 to join... |
|
[2025-02-18 13:33:06,503][00882] Waiting for process rollout_proc0 to join... |
|
[2025-02-18 13:33:09,754][00882] Waiting for process rollout_proc1 to join... |
|
[2025-02-18 13:33:09,757][00882] Waiting for process rollout_proc2 to join... |
|
[2025-02-18 13:33:09,759][00882] Waiting for process rollout_proc3 to join... |
|
[2025-02-18 13:33:09,762][00882] Waiting for process rollout_proc4 to join... |
|
[2025-02-18 13:33:09,764][00882] Waiting for process rollout_proc5 to join... |
|
[2025-02-18 13:33:09,765][00882] Waiting for process rollout_proc6 to join... |
|
[2025-02-18 13:33:09,767][00882] Waiting for process rollout_proc7 to join... |
|
[2025-02-18 13:33:09,768][00882] Batcher 0 profile tree view: |
|
batching: 39.2571, releasing_batches: 0.0393 |
|
[2025-02-18 13:33:09,769][00882] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 638.5486 |
|
update_model: 12.4469 |
|
weight_update: 0.0038 |
|
one_step: 0.0027 |
|
handle_policy_step: 851.8659 |
|
deserialize: 20.6511, stack: 4.6524, obs_to_device_normalize: 179.5719, forward: 440.6098, send_messages: 41.4941 |
|
prepare_outputs: 128.0023 |
|
to_cpu: 77.8064 |
|
[2025-02-18 13:33:09,771][00882] Learner 0 profile tree view: |
|
misc: 0.0062, prepare_batch: 17.7580 |
|
train: 108.6128 |
|
epoch_init: 0.0076, minibatch_init: 0.0085, losses_postprocess: 0.9177, kl_divergence: 0.9045, after_optimizer: 4.7299 |
|
calculate_losses: 36.7994 |
|
losses_init: 0.0051, forward_head: 1.8694, bptt_initial: 24.0245, tail: 1.6018, advantages_returns: 0.4757, losses: 5.5581 |
|
bptt: 2.8871 |
|
bptt_forward_core: 2.7518 |
|
update: 64.2991 |
|
clip: 1.3486 |
|
[2025-02-18 13:33:09,772][00882] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.4529, enqueue_policy_requests: 158.3186, env_step: 1224.6760, overhead: 19.8639, complete_rollouts: 10.4238 |
|
save_policy_outputs: 30.0309 |
|
split_output_tensors: 12.1983 |
|
[2025-02-18 13:33:09,773][00882] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.4356, enqueue_policy_requests: 161.1468, env_step: 1226.1006, overhead: 20.1116, complete_rollouts: 10.4193 |
|
save_policy_outputs: 28.8588 |
|
split_output_tensors: 10.9629 |
|
[2025-02-18 13:33:09,775][00882] Loop Runner_EvtLoop terminating... |
|
[2025-02-18 13:33:09,783][00882] Runner profile tree view: |
|
main_loop: 1579.8349 |
|
[2025-02-18 13:33:09,784][00882] Collected {0: 9007104}, FPS: 3798.3 |
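Summary figures like the final `FPS: 3798.3` above can be re-derived from the periodic `Fps is ...` lines earlier in the log. A short sketch that extracts the throughput fields from one such line (the regex follows the exact format printed by this log; applying it over a saved log file is left to the reader):

```python
import re

# Matches the "Fps is (...)" status lines emitted during training.
FPS_LINE = re.compile(
    r"Fps is \(10 sec: ([\d.]+), 60 sec: ([\d.]+), 300 sec: ([\d.]+)\)\. "
    r"Total num frames: (\d+)"
)

def parse_fps_line(line):
    """Return (fps_10s, fps_60s, fps_300s, total_frames), or None if
    the line is not an FPS status line."""
    m = FPS_LINE.search(line)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2)), float(m.group(3)), int(m.group(4))

sample = ("[2025-02-18 13:28:47,394][00882] Fps is (10 sec: 3686.4, "
          "60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 8007680. "
          "Throughput: 0: 966.5.")
stats = parse_fps_line(sample)
```

Running `parse_fps_line` over every line of the log and plotting the 300-second window gives the smoothed throughput curve for the whole run.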
|
[2025-02-18 13:34:27,883][00882] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2025-02-18 13:34:27,887][00882] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2025-02-18 13:34:27,890][00882] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2025-02-18 13:34:27,895][00882] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2025-02-18 13:34:27,897][00882] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-18 13:34:27,899][00882] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2025-02-18 13:34:27,900][00882] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2025-02-18 13:34:27,904][00882] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2025-02-18 13:34:27,905][00882] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2025-02-18 13:34:27,906][00882] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2025-02-18 13:34:27,908][00882] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2025-02-18 13:34:27,909][00882] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2025-02-18 13:34:27,910][00882] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2025-02-18 13:34:27,912][00882] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2025-02-18 13:34:27,913][00882] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2025-02-18 13:34:27,980][00882] RunningMeanStd input shape: (3, 72, 128) |
|
[2025-02-18 13:34:27,986][00882] RunningMeanStd input shape: (1,) |
|
[2025-02-18 13:34:28,006][00882] ConvEncoder: input_channels=3 |
|
[2025-02-18 13:34:28,060][00882] Conv encoder output size: 512 |
|
[2025-02-18 13:34:28,062][00882] Policy head output size: 512 |
|
[2025-02-18 13:34:28,095][00882] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002199_9007104.pth... |
|
[2025-02-18 13:34:28,702][00882] Num frames 100... |
|
[2025-02-18 13:34:28,905][00882] Num frames 200... |
|
[2025-02-18 13:34:29,098][00882] Num frames 300... |
|
[2025-02-18 13:34:29,277][00882] Num frames 400... |
|
[2025-02-18 13:34:29,454][00882] Num frames 500... |
|
[2025-02-18 13:34:29,604][00882] Num frames 600... |
|
[2025-02-18 13:34:29,731][00882] Num frames 700... |
|
[2025-02-18 13:34:29,865][00882] Num frames 800... |
|
[2025-02-18 13:34:30,001][00882] Num frames 900... |
|
[2025-02-18 13:34:30,135][00882] Num frames 1000... |
|
[2025-02-18 13:34:30,266][00882] Num frames 1100... |
|
[2025-02-18 13:34:30,399][00882] Num frames 1200... |
|
[2025-02-18 13:34:30,528][00882] Num frames 1300... |
|
[2025-02-18 13:34:30,662][00882] Num frames 1400... |
|
[2025-02-18 13:34:30,800][00882] Num frames 1500... |
|
[2025-02-18 13:34:30,950][00882] Num frames 1600... |
|
[2025-02-18 13:34:31,087][00882] Num frames 1700... |
|
[2025-02-18 13:34:31,232][00882] Num frames 1800... |
|
[2025-02-18 13:34:31,366][00882] Num frames 1900... |
|
[2025-02-18 13:34:31,506][00882] Num frames 2000... |
|
[2025-02-18 13:34:31,641][00882] Num frames 2100... |
|
[2025-02-18 13:34:31,693][00882] Avg episode rewards: #0: 58.999, true rewards: #0: 21.000 |
|
[2025-02-18 13:34:31,695][00882] Avg episode reward: 58.999, avg true_objective: 21.000 |
|
[2025-02-18 13:34:31,828][00882] Num frames 2200... |
|
[2025-02-18 13:34:31,972][00882] Num frames 2300... |
|
[2025-02-18 13:34:32,111][00882] Num frames 2400... |
|
[2025-02-18 13:34:32,245][00882] Num frames 2500... |
|
[2025-02-18 13:34:32,380][00882] Num frames 2600... |
|
[2025-02-18 13:34:32,513][00882] Num frames 2700... |
|
[2025-02-18 13:34:32,647][00882] Num frames 2800... |
|
[2025-02-18 13:34:32,777][00882] Num frames 2900... |
|
[2025-02-18 13:34:32,912][00882] Num frames 3000... |
|
[2025-02-18 13:34:33,056][00882] Num frames 3100... |
|
[2025-02-18 13:34:33,190][00882] Num frames 3200... |
|
[2025-02-18 13:34:33,326][00882] Num frames 3300... |
|
[2025-02-18 13:34:33,454][00882] Num frames 3400... |
|
[2025-02-18 13:34:33,587][00882] Num frames 3500... |
|
[2025-02-18 13:34:33,718][00882] Num frames 3600... |
|
[2025-02-18 13:34:33,864][00882] Num frames 3700... |
|
[2025-02-18 13:34:34,002][00882] Num frames 3800... |
|
[2025-02-18 13:34:34,138][00882] Num frames 3900... |
|
[2025-02-18 13:34:34,270][00882] Num frames 4000... |
|
[2025-02-18 13:34:34,401][00882] Num frames 4100... |
|
[2025-02-18 13:34:34,539][00882] Num frames 4200... |
|
[2025-02-18 13:34:34,591][00882] Avg episode rewards: #0: 57.999, true rewards: #0: 21.000 |
|
[2025-02-18 13:34:34,593][00882] Avg episode reward: 57.999, avg true_objective: 21.000 |
|
[2025-02-18 13:34:34,728][00882] Num frames 4300... |
|
[2025-02-18 13:34:34,864][00882] Num frames 4400... |
|
[2025-02-18 13:34:34,997][00882] Num frames 4500... |
|
[2025-02-18 13:34:35,136][00882] Num frames 4600... |
|
[2025-02-18 13:34:35,269][00882] Num frames 4700... |
|
[2025-02-18 13:34:35,378][00882] Avg episode rewards: #0: 42.449, true rewards: #0: 15.783 |
|
[2025-02-18 13:34:35,380][00882] Avg episode reward: 42.449, avg true_objective: 15.783 |
|
[2025-02-18 13:34:35,469][00882] Num frames 4800... |
|
[2025-02-18 13:34:35,598][00882] Num frames 4900... |
|
[2025-02-18 13:34:35,730][00882] Num frames 5000... |
|
[2025-02-18 13:34:35,872][00882] Num frames 5100... |
|
[2025-02-18 13:34:36,019][00882] Num frames 5200... |
|
[2025-02-18 13:34:36,071][00882] Avg episode rewards: #0: 33.749, true rewards: #0: 13.000 |
|
[2025-02-18 13:34:36,074][00882] Avg episode reward: 33.749, avg true_objective: 13.000 |
|
[2025-02-18 13:34:36,210][00882] Num frames 5300... |
|
[2025-02-18 13:34:36,347][00882] Num frames 5400... |
|
[2025-02-18 13:34:36,482][00882] Num frames 5500... |
|
[2025-02-18 13:34:36,613][00882] Num frames 5600... |
|
[2025-02-18 13:34:36,744][00882] Num frames 5700... |
|
[2025-02-18 13:34:36,886][00882] Num frames 5800... |
|
[2025-02-18 13:34:37,021][00882] Num frames 5900... |
|
[2025-02-18 13:34:37,161][00882] Num frames 6000... |
|
[2025-02-18 13:34:37,300][00882] Num frames 6100... |
|
[2025-02-18 13:34:37,430][00882] Num frames 6200... |
|
[2025-02-18 13:34:37,563][00882] Num frames 6300... |
|
[2025-02-18 13:34:37,698][00882] Num frames 6400... |
|
[2025-02-18 13:34:37,832][00882] Num frames 6500... |
|
[2025-02-18 13:34:37,999][00882] Avg episode rewards: #0: 33.552, true rewards: #0: 13.152 |
|
[2025-02-18 13:34:38,001][00882] Avg episode reward: 33.552, avg true_objective: 13.152 |
|
[2025-02-18 13:34:38,037][00882] Num frames 6600... |
|
[2025-02-18 13:34:38,175][00882] Num frames 6700... |
|
[2025-02-18 13:34:38,311][00882] Num frames 6800... |
|
[2025-02-18 13:34:38,444][00882] Num frames 6900... |
|
[2025-02-18 13:34:38,620][00882] Avg episode rewards: #0: 28.986, true rewards: #0: 11.653 |
|
[2025-02-18 13:34:38,622][00882] Avg episode reward: 28.986, avg true_objective: 11.653 |
|
[2025-02-18 13:34:38,637][00882] Num frames 7000... |
|
[2025-02-18 13:34:38,768][00882] Num frames 7100... |
|
[2025-02-18 13:34:38,907][00882] Num frames 7200... |
|
[2025-02-18 13:34:39,046][00882] Num frames 7300... |
|
[2025-02-18 13:34:39,190][00882] Num frames 7400... |
|
[2025-02-18 13:34:39,328][00882] Num frames 7500... |
|
[2025-02-18 13:34:39,464][00882] Num frames 7600... |
|
[2025-02-18 13:34:39,629][00882] Num frames 7700... |
|
[2025-02-18 13:34:39,820][00882] Num frames 7800... |
|
[2025-02-18 13:34:40,015][00882] Num frames 7900... |
|
[2025-02-18 13:34:40,188][00882] Num frames 8000... |
|
[2025-02-18 13:34:40,366][00882] Num frames 8100... |
|
[2025-02-18 13:34:40,544][00882] Num frames 8200... |
|
[2025-02-18 13:34:40,715][00882] Num frames 8300... |
|
[2025-02-18 13:34:40,901][00882] Num frames 8400... |
|
[2025-02-18 13:34:41,109][00882] Num frames 8500... |
|
[2025-02-18 13:34:41,297][00882] Num frames 8600... |
|
[2025-02-18 13:34:41,480][00882] Num frames 8700... |
|
[2025-02-18 13:34:41,673][00882] Num frames 8800... |
|
[2025-02-18 13:34:41,858][00882] Num frames 8900... |
|
[2025-02-18 13:34:41,993][00882] Num frames 9000... |
|
[2025-02-18 13:34:42,172][00882] Avg episode rewards: #0: 33.131, true rewards: #0: 12.989 |
|
[2025-02-18 13:34:42,175][00882] Avg episode reward: 33.131, avg true_objective: 12.989 |
|
[2025-02-18 13:34:42,190][00882] Num frames 9100... |
|
[2025-02-18 13:34:42,335][00882] Num frames 9200... |
|
[2025-02-18 13:34:42,465][00882] Num frames 9300... |
|
[2025-02-18 13:34:42,596][00882] Num frames 9400... |
|
[2025-02-18 13:34:42,728][00882] Num frames 9500... |
|
[2025-02-18 13:34:42,870][00882] Num frames 9600...
[2025-02-18 13:34:43,012][00882] Num frames 9700...
[2025-02-18 13:34:43,152][00882] Num frames 9800...
[2025-02-18 13:34:43,317][00882] Num frames 9900...
[2025-02-18 13:34:43,452][00882] Num frames 10000...
[2025-02-18 13:34:43,587][00882] Num frames 10100...
[2025-02-18 13:34:43,723][00882] Num frames 10200...
[2025-02-18 13:34:43,865][00882] Num frames 10300...
[2025-02-18 13:34:44,000][00882] Num frames 10400...
[2025-02-18 13:34:44,133][00882] Num frames 10500...
[2025-02-18 13:34:44,253][00882] Avg episode rewards: #0: 33.682, true rewards: #0: 13.183
[2025-02-18 13:34:44,255][00882] Avg episode reward: 33.682, avg true_objective: 13.183
[2025-02-18 13:34:44,337][00882] Num frames 10600...
[2025-02-18 13:34:44,469][00882] Num frames 10700...
[2025-02-18 13:34:44,601][00882] Num frames 10800...
[2025-02-18 13:34:44,734][00882] Num frames 10900...
[2025-02-18 13:34:44,877][00882] Num frames 11000...
[2025-02-18 13:34:45,010][00882] Num frames 11100...
[2025-02-18 13:34:45,143][00882] Num frames 11200...
[2025-02-18 13:34:45,281][00882] Num frames 11300...
[2025-02-18 13:34:45,422][00882] Num frames 11400...
[2025-02-18 13:34:45,563][00882] Num frames 11500...
[2025-02-18 13:34:45,699][00882] Num frames 11600...
[2025-02-18 13:34:45,843][00882] Num frames 11700...
[2025-02-18 13:34:45,989][00882] Num frames 11800...
[2025-02-18 13:34:46,135][00882] Num frames 11900...
[2025-02-18 13:34:46,295][00882] Num frames 12000...
[2025-02-18 13:34:46,438][00882] Num frames 12100...
[2025-02-18 13:34:46,573][00882] Num frames 12200...
[2025-02-18 13:34:46,706][00882] Num frames 12300...
[2025-02-18 13:34:46,849][00882] Num frames 12400...
[2025-02-18 13:34:46,977][00882] Avg episode rewards: #0: 36.056, true rewards: #0: 13.834
[2025-02-18 13:34:46,978][00882] Avg episode reward: 36.056, avg true_objective: 13.834
[2025-02-18 13:34:47,050][00882] Num frames 12500...
[2025-02-18 13:34:47,180][00882] Num frames 12600...
[2025-02-18 13:34:47,318][00882] Num frames 12700...
[2025-02-18 13:34:47,455][00882] Num frames 12800...
[2025-02-18 13:34:47,592][00882] Num frames 12900...
[2025-02-18 13:34:47,725][00882] Num frames 13000...
[2025-02-18 13:34:47,868][00882] Num frames 13100...
[2025-02-18 13:34:48,002][00882] Num frames 13200...
[2025-02-18 13:34:48,136][00882] Num frames 13300...
[2025-02-18 13:34:48,270][00882] Num frames 13400...
[2025-02-18 13:34:48,345][00882] Avg episode rewards: #0: 34.211, true rewards: #0: 13.411
[2025-02-18 13:34:48,348][00882] Avg episode reward: 34.211, avg true_objective: 13.411
[2025-02-18 13:36:11,553][00882] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2025-02-18 13:36:52,218][00882] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-02-18 13:36:52,219][00882] Overriding arg 'num_workers' with value 1 passed from command line
[2025-02-18 13:36:52,221][00882] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-02-18 13:36:52,223][00882] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-02-18 13:36:52,225][00882] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-02-18 13:36:52,226][00882] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-02-18 13:36:52,228][00882] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-02-18 13:36:52,230][00882] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-02-18 13:36:52,231][00882] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-02-18 13:36:52,233][00882] Adding new argument 'hf_repository'='GatinhoEducado/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-02-18 13:36:52,235][00882] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-02-18 13:36:52,236][00882] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-02-18 13:36:52,238][00882] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-02-18 13:36:52,240][00882] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-02-18 13:36:52,241][00882] Using frameskip 1 and render_action_repeat=4 for evaluation
|
[2025-02-18 13:36:52,270][00882] RunningMeanStd input shape: (3, 72, 128)
[2025-02-18 13:36:52,273][00882] RunningMeanStd input shape: (1,)
[2025-02-18 13:36:52,295][00882] ConvEncoder: input_channels=3
[2025-02-18 13:36:52,329][00882] Conv encoder output size: 512
[2025-02-18 13:36:52,331][00882] Policy head output size: 512
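The two `RunningMeanStd` lines above are normalizers: one over the (3, 72, 128) image observations and one over scalars of shape (1,). A rough sketch of how such a running normalizer is commonly implemented (this illustrates the general technique with a Welford-style parallel update; it is an assumption, not a copy of Sample Factory's class):

```python
import numpy as np


class RunningMeanStd:
    """Track mean/variance of a data stream, batch by batch."""

    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps  # small prior count avoids division by zero

    def update(self, batch):
        # Merge batch statistics into the running statistics
        # (parallel variance / Chan et al. update).
        b_mean = batch.mean(axis=0)
        b_var = batch.var(axis=0)
        b_count = batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + b_count
        self.mean = self.mean + delta * b_count / total
        m2 = (self.var * self.count + b_var * b_count
              + delta ** 2 * self.count * b_count / total)
        self.var = m2 / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)
```

Normalizing observations (and returns) this way keeps network inputs roughly zero-mean and unit-variance even as the data distribution drifts during training.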
|
[2025-02-18 13:36:52,350][00882] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000002199_9007104.pth...
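Checkpoint files here appear to follow a `checkpoint_<iteration>_<env_frames>.pth` naming scheme (the one loaded above, `checkpoint_000002199_9007104.pth`, is at roughly 9M environment frames). A hedged sketch of picking the most recent checkpoint from such a directory (`latest_checkpoint` is a hypothetical helper, not the library's API):

```python
import re
from pathlib import Path


def latest_checkpoint(ckpt_dir):
    """Return the checkpoint path with the highest env-frame count,
    assuming names like checkpoint_000002199_9007104.pth, or None."""
    best, best_frames = None, -1
    for p in Path(ckpt_dir).glob("checkpoint_*.pth"):
        m = re.match(r"checkpoint_(\d+)_(\d+)\.pth$", p.name)
        if m and int(m.group(2)) > best_frames:
            best, best_frames = p, int(m.group(2))
    return best
```

Sorting by the embedded frame count rather than file mtime makes resumption robust to copied or re-downloaded checkpoint files.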
|
[2025-02-18 13:36:52,780][00882] Num frames 100...
[2025-02-18 13:36:52,934][00882] Num frames 200...
[2025-02-18 13:36:53,061][00882] Num frames 300...
[2025-02-18 13:36:53,189][00882] Num frames 400...
[2025-02-18 13:36:53,325][00882] Num frames 500...
[2025-02-18 13:36:53,452][00882] Num frames 600...
[2025-02-18 13:36:53,577][00882] Num frames 700...
[2025-02-18 13:36:53,703][00882] Num frames 800...
[2025-02-18 13:36:53,841][00882] Num frames 900...
[2025-02-18 13:36:54,009][00882] Avg episode rewards: #0: 24.870, true rewards: #0: 9.870
[2025-02-18 13:36:54,011][00882] Avg episode reward: 24.870, avg true_objective: 9.870
[2025-02-18 13:36:54,032][00882] Num frames 1000...
[2025-02-18 13:36:54,155][00882] Num frames 1100...
[2025-02-18 13:36:54,278][00882] Num frames 1200...
[2025-02-18 13:36:54,416][00882] Num frames 1300...
[2025-02-18 13:36:54,545][00882] Num frames 1400...
[2025-02-18 13:36:54,672][00882] Num frames 1500...
[2025-02-18 13:36:54,796][00882] Num frames 1600...
[2025-02-18 13:36:54,930][00882] Num frames 1700...
[2025-02-18 13:36:55,063][00882] Num frames 1800...
[2025-02-18 13:36:55,190][00882] Num frames 1900...
[2025-02-18 13:36:55,319][00882] Num frames 2000...
[2025-02-18 13:36:55,505][00882] Num frames 2100...
[2025-02-18 13:36:55,684][00882] Num frames 2200...
[2025-02-18 13:36:55,855][00882] Num frames 2300...
[2025-02-18 13:36:56,053][00882] Num frames 2400...
[2025-02-18 13:36:56,224][00882] Num frames 2500...
[2025-02-18 13:36:56,395][00882] Num frames 2600...
[2025-02-18 13:36:56,573][00882] Avg episode rewards: #0: 34.340, true rewards: #0: 13.340
[2025-02-18 13:36:56,578][00882] Avg episode reward: 34.340, avg true_objective: 13.340
[2025-02-18 13:36:56,639][00882] Num frames 2700...
[2025-02-18 13:36:56,813][00882] Num frames 2800...
[2025-02-18 13:36:57,016][00882] Num frames 2900...
[2025-02-18 13:36:57,193][00882] Num frames 3000...
[2025-02-18 13:36:57,377][00882] Num frames 3100...
[2025-02-18 13:36:57,570][00882] Num frames 3200...
[2025-02-18 13:36:57,699][00882] Num frames 3300...
[2025-02-18 13:36:57,830][00882] Num frames 3400...
[2025-02-18 13:36:57,959][00882] Num frames 3500...
[2025-02-18 13:36:58,094][00882] Num frames 3600...
[2025-02-18 13:36:58,222][00882] Num frames 3700...
[2025-02-18 13:36:58,296][00882] Avg episode rewards: #0: 31.713, true rewards: #0: 12.380
[2025-02-18 13:36:58,299][00882] Avg episode reward: 31.713, avg true_objective: 12.380
[2025-02-18 13:36:58,407][00882] Num frames 3800...
[2025-02-18 13:36:58,542][00882] Num frames 3900...
[2025-02-18 13:36:58,671][00882] Num frames 4000...
[2025-02-18 13:36:58,802][00882] Num frames 4100...
[2025-02-18 13:36:58,937][00882] Num frames 4200...
[2025-02-18 13:36:59,069][00882] Num frames 4300...
[2025-02-18 13:36:59,135][00882] Avg episode rewards: #0: 27.020, true rewards: #0: 10.770
[2025-02-18 13:36:59,137][00882] Avg episode reward: 27.020, avg true_objective: 10.770
[2025-02-18 13:36:59,260][00882] Num frames 4400...
[2025-02-18 13:36:59,384][00882] Num frames 4500...
[2025-02-18 13:36:59,512][00882] Num frames 4600...
[2025-02-18 13:36:59,644][00882] Num frames 4700...
[2025-02-18 13:36:59,774][00882] Num frames 4800...
[2025-02-18 13:36:59,914][00882] Num frames 4900...
[2025-02-18 13:37:00,044][00882] Num frames 5000...
[2025-02-18 13:37:00,168][00882] Num frames 5100...
[2025-02-18 13:37:00,298][00882] Num frames 5200...
[2025-02-18 13:37:00,428][00882] Num frames 5300...
[2025-02-18 13:37:00,560][00882] Num frames 5400...
[2025-02-18 13:37:00,689][00882] Num frames 5500...
[2025-02-18 13:37:00,820][00882] Num frames 5600...
[2025-02-18 13:37:00,967][00882] Num frames 5700...
[2025-02-18 13:37:01,127][00882] Avg episode rewards: #0: 30.160, true rewards: #0: 11.560
[2025-02-18 13:37:01,129][00882] Avg episode reward: 30.160, avg true_objective: 11.560
[2025-02-18 13:37:01,159][00882] Num frames 5800...
[2025-02-18 13:37:01,289][00882] Num frames 5900...
[2025-02-18 13:37:01,415][00882] Num frames 6000...
[2025-02-18 13:37:01,543][00882] Num frames 6100...
[2025-02-18 13:37:01,678][00882] Num frames 6200...
[2025-02-18 13:37:01,805][00882] Num frames 6300...
[2025-02-18 13:37:01,938][00882] Num frames 6400...
[2025-02-18 13:37:02,071][00882] Num frames 6500...
[2025-02-18 13:37:02,199][00882] Num frames 6600...
[2025-02-18 13:37:02,329][00882] Num frames 6700...
[2025-02-18 13:37:02,459][00882] Num frames 6800...
[2025-02-18 13:37:02,522][00882] Avg episode rewards: #0: 28.840, true rewards: #0: 11.340
[2025-02-18 13:37:02,524][00882] Avg episode reward: 28.840, avg true_objective: 11.340
[2025-02-18 13:37:02,656][00882] Num frames 6900...
[2025-02-18 13:37:02,787][00882] Num frames 7000...
[2025-02-18 13:37:02,926][00882] Num frames 7100...
[2025-02-18 13:37:03,061][00882] Num frames 7200...
[2025-02-18 13:37:03,190][00882] Num frames 7300...
[2025-02-18 13:37:03,323][00882] Num frames 7400...
[2025-02-18 13:37:03,451][00882] Num frames 7500...
[2025-02-18 13:37:03,583][00882] Num frames 7600...
[2025-02-18 13:37:03,721][00882] Num frames 7700...
[2025-02-18 13:37:03,864][00882] Num frames 7800...
[2025-02-18 13:37:03,995][00882] Num frames 7900...
[2025-02-18 13:37:04,130][00882] Num frames 8000...
[2025-02-18 13:37:04,262][00882] Num frames 8100...
[2025-02-18 13:37:04,391][00882] Num frames 8200...
[2025-02-18 13:37:04,522][00882] Num frames 8300...
[2025-02-18 13:37:04,652][00882] Num frames 8400...
[2025-02-18 13:37:04,789][00882] Num frames 8500...
[2025-02-18 13:37:04,926][00882] Num frames 8600...
[2025-02-18 13:37:05,061][00882] Num frames 8700...
[2025-02-18 13:37:05,187][00882] Num frames 8800...
[2025-02-18 13:37:05,319][00882] Num frames 8900...
[2025-02-18 13:37:05,381][00882] Avg episode rewards: #0: 33.291, true rewards: #0: 12.720
[2025-02-18 13:37:05,382][00882] Avg episode reward: 33.291, avg true_objective: 12.720
[2025-02-18 13:37:05,509][00882] Num frames 9000...
[2025-02-18 13:37:05,637][00882] Num frames 9100...
[2025-02-18 13:37:05,772][00882] Num frames 9200...
[2025-02-18 13:37:05,905][00882] Num frames 9300...
[2025-02-18 13:37:06,065][00882] Num frames 9400...
[2025-02-18 13:37:06,196][00882] Num frames 9500...
[2025-02-18 13:37:06,336][00882] Num frames 9600...
[2025-02-18 13:37:06,464][00882] Num frames 9700...
[2025-02-18 13:37:06,592][00882] Num frames 9800...
[2025-02-18 13:37:06,730][00882] Num frames 9900...
[2025-02-18 13:37:06,870][00882] Num frames 10000...
[2025-02-18 13:37:07,000][00882] Num frames 10100...
[2025-02-18 13:37:07,138][00882] Num frames 10200...
[2025-02-18 13:37:07,267][00882] Num frames 10300...
[2025-02-18 13:37:07,399][00882] Num frames 10400...
[2025-02-18 13:37:07,528][00882] Num frames 10500...
[2025-02-18 13:37:07,712][00882] Avg episode rewards: #0: 35.217, true rewards: #0: 13.217
[2025-02-18 13:37:07,714][00882] Avg episode reward: 35.217, avg true_objective: 13.217
[2025-02-18 13:37:07,768][00882] Num frames 10600...
[2025-02-18 13:37:07,958][00882] Num frames 10700...
[2025-02-18 13:37:08,128][00882] Num frames 10800...
[2025-02-18 13:37:08,293][00882] Num frames 10900...
[2025-02-18 13:37:08,466][00882] Num frames 11000...
[2025-02-18 13:37:08,631][00882] Num frames 11100...
[2025-02-18 13:37:08,801][00882] Num frames 11200...
[2025-02-18 13:37:09,015][00882] Avg episode rewards: #0: 33.086, true rewards: #0: 12.531
[2025-02-18 13:37:09,017][00882] Avg episode reward: 33.086, avg true_objective: 12.531
[2025-02-18 13:37:09,062][00882] Num frames 11300...
[2025-02-18 13:37:09,244][00882] Num frames 11400...
[2025-02-18 13:37:09,432][00882] Num frames 11500...
[2025-02-18 13:37:09,613][00882] Num frames 11600...
[2025-02-18 13:37:09,784][00882] Num frames 11700...
[2025-02-18 13:37:09,931][00882] Num frames 11800...
[2025-02-18 13:37:10,029][00882] Avg episode rewards: #0: 30.930, true rewards: #0: 11.830
[2025-02-18 13:37:10,031][00882] Avg episode reward: 30.930, avg true_objective: 11.830
[2025-02-18 13:38:20,628][00882] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
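The `Avg episode rewards` lines above are running means over the episodes completed so far (10 episodes here, per `max_num_episodes`=10). For example, the move from 24.870 after episode 1 to 34.340 after episode 2 implies episode 2 scored about 43.81. A small sketch of that bookkeeping (`running_averages` is a hypothetical helper, not the actual evaluation script):

```python
def running_averages(episode_rewards, episode_true_objectives):
    """After each finished episode, report the mean raw reward and mean
    true objective over all episodes completed so far."""
    out = []
    sum_r = sum_t = 0.0
    for i, (r, t) in enumerate(zip(episode_rewards,
                                   episode_true_objectives), start=1):
        sum_r += r
        sum_t += t
        out.append((sum_r / i, sum_t / i))
    return out
```

Reporting running averages rather than per-episode scores smooths out the high episode-to-episode variance visible in this log (single-episode true rewards range from under 10 to nearly 14 frames-per-100 units).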
|
|