[2024-08-26 13:28:56,354][67293] Saving configuration to /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/config.json...
[2024-08-26 13:28:56,354][67293] Rollout worker 0 uses device cpu
[2024-08-26 13:28:56,354][67293] Rollout worker 1 uses device cpu
[2024-08-26 13:28:56,354][67293] Rollout worker 2 uses device cpu
[2024-08-26 13:28:56,354][67293] Rollout worker 3 uses device cpu
[2024-08-26 13:28:56,355][67293] Rollout worker 4 uses device cpu
[2024-08-26 13:28:56,355][67293] Rollout worker 5 uses device cpu
[2024-08-26 13:28:56,355][67293] Rollout worker 6 uses device cpu
[2024-08-26 13:28:56,355][67293] Rollout worker 7 uses device cpu
[2024-08-26 13:28:56,399][67293] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-26 13:28:56,399][67293] InferenceWorker_p0-w0: min num requests: 2
[2024-08-26 13:28:56,410][67293] Starting all processes...
[2024-08-26 13:28:56,411][67293] Starting process learner_proc0
[2024-08-26 13:28:57,277][67293] Starting all processes...
[2024-08-26 13:28:57,279][67417] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-26 13:28:57,279][67417] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-08-26 13:28:57,287][67293] Starting process inference_proc0-0
[2024-08-26 13:28:57,287][67293] Starting process rollout_proc0
[2024-08-26 13:28:57,287][67293] Starting process rollout_proc1
[2024-08-26 13:28:57,288][67293] Starting process rollout_proc2
[2024-08-26 13:28:57,288][67293] Starting process rollout_proc3
[2024-08-26 13:28:57,289][67293] Starting process rollout_proc4
[2024-08-26 13:28:57,289][67293] Starting process rollout_proc5
[2024-08-26 13:28:57,290][67293] Starting process rollout_proc6
[2024-08-26 13:28:57,290][67293] Starting process rollout_proc7
[2024-08-26 13:28:57,331][67417] Num visible devices: 1
[2024-08-26 13:28:57,443][67417] Starting seed is not provided
[2024-08-26 13:28:57,443][67417] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-26 13:28:57,443][67417] Initializing actor-critic model on device cuda:0
[2024-08-26 13:28:57,443][67417] RunningMeanStd input shape: (3, 72, 128)
[2024-08-26 13:28:57,446][67417] RunningMeanStd input shape: (1,)
[2024-08-26 13:28:57,455][67417] ConvEncoder: input_channels=3
[2024-08-26 13:28:57,583][67417] Conv encoder output size: 512
[2024-08-26 13:28:57,584][67417] Policy head output size: 512
[2024-08-26 13:28:57,603][67417] Created Actor Critic model with architecture:
[2024-08-26 13:28:57,603][67417] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2024-08-26 13:28:57,843][67417] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-08-26 13:28:58,206][67471] Worker 7 uses CPU cores [28, 29, 30, 31]
[2024-08-26 13:28:58,218][67467] Worker 2 uses CPU cores [8, 9, 10, 11]
[2024-08-26 13:28:58,252][67468] Worker 3 uses CPU cores [12, 13, 14, 15]
[2024-08-26 13:28:58,254][67466] Worker 1 uses CPU cores [4, 5, 6, 7]
[2024-08-26 13:28:58,255][67470] Worker 5 uses CPU cores [20, 21, 22, 23]
[2024-08-26 13:28:58,265][67465] Worker 0 uses CPU cores [0, 1, 2, 3]
[2024-08-26 13:28:58,269][67469] Worker 4 uses CPU cores [16, 17, 18, 19]
[2024-08-26 13:28:58,301][67464] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-26 13:28:58,301][67464] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-08-26 13:28:58,317][67464] Num visible devices: 1
[2024-08-26 13:28:58,342][67472] Worker 6 uses CPU cores [24, 25, 26, 27]
[2024-08-26 13:28:58,429][67417] No checkpoints found
[2024-08-26 13:28:58,430][67417] Did not load from checkpoint, starting from scratch!
[2024-08-26 13:28:58,430][67417] Initialized policy 0 weights for model version 0
[2024-08-26 13:28:58,519][67417] LearnerWorker_p0 finished initialization!
[2024-08-26 13:28:58,520][67417] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-26 13:28:58,649][67464] RunningMeanStd input shape: (3, 72, 128)
[2024-08-26 13:28:58,649][67464] RunningMeanStd input shape: (1,)
[2024-08-26 13:28:58,654][67464] ConvEncoder: input_channels=3
[2024-08-26 13:28:58,694][67464] Conv encoder output size: 512
[2024-08-26 13:28:58,694][67464] Policy head output size: 512
[2024-08-26 13:28:58,719][67293] Inference worker 0-0 is ready!
[2024-08-26 13:28:58,719][67293] All inference workers are ready! Signal rollout workers to start!
[2024-08-26 13:28:58,734][67466] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,735][67469] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,735][67467] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,736][67465] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,737][67470] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,737][67468] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,737][67472] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,739][67471] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-26 13:28:58,955][67465] Decorrelating experience for 0 frames...
[2024-08-26 13:28:59,063][67465] Decorrelating experience for 32 frames...
[2024-08-26 13:28:59,196][67465] Decorrelating experience for 64 frames...
[2024-08-26 13:28:59,318][67465] Decorrelating experience for 96 frames...
[2024-08-26 13:28:59,785][67293] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-08-26 13:29:01,293][67417] Signal inference workers to stop experience collection...
[2024-08-26 13:29:01,295][67417] Signal inference workers to resume experience collection...
[2024-08-26 13:29:01,371][67464] InferenceWorker_p0-w0: stopping experience collection
[2024-08-26 13:29:01,375][67464] InferenceWorker_p0-w0: resuming experience collection
[2024-08-26 13:29:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 24576. Throughput: 0: 1392.0. Samples: 6960. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:04,785][67293] Avg episode reward: [(0, '4.202')]
[2024-08-26 13:29:07,505][67464] Updated weights for policy 0, policy_version 10 (0.0077)
[2024-08-26 13:29:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5324.8). Total num frames: 53248. Throughput: 0: 1062.8. Samples: 10628. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:29:09,785][67293] Avg episode reward: [(0, '4.452')]
[2024-08-26 13:29:14,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5188.2, 300 sec: 5188.2). Total num frames: 77824. Throughput: 0: 1209.0. Samples: 18136. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:29:14,786][67293] Avg episode reward: [(0, '4.595')]
[2024-08-26 13:29:15,706][67464] Updated weights for policy 0, policy_version 20 (0.0005)
[2024-08-26 13:29:16,396][67293] Heartbeat connected on Batcher_0
[2024-08-26 13:29:16,397][67293] Heartbeat connected on LearnerWorker_p0
[2024-08-26 13:29:16,401][67293] Heartbeat connected on RolloutWorker_w0
[2024-08-26 13:29:16,408][67293] Heartbeat connected on InferenceWorker_p0-w0
[2024-08-26 13:29:19,785][67293] Fps is (10 sec: 4915.1, 60 sec: 5120.0, 300 sec: 5120.0). Total num frames: 102400. Throughput: 0: 1293.4. Samples: 25868. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:19,785][67293] Avg episode reward: [(0, '4.558')]
[2024-08-26 13:29:19,785][67417] Saving new best policy, reward=4.558!
[2024-08-26 13:29:23,406][67464] Updated weights for policy 0, policy_version 30 (0.0005)
[2024-08-26 13:29:24,785][67293] Fps is (10 sec: 4915.3, 60 sec: 5079.0, 300 sec: 5079.0). Total num frames: 126976. Throughput: 0: 1192.6. Samples: 29816. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:24,785][67293] Avg episode reward: [(0, '4.234')]
[2024-08-26 13:29:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5051.7). Total num frames: 151552. Throughput: 0: 1238.9. Samples: 37168. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:29:29,785][67293] Avg episode reward: [(0, '4.436')]
[2024-08-26 13:29:31,953][67464] Updated weights for policy 0, policy_version 40 (0.0005)
[2024-08-26 13:29:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5032.2, 300 sec: 5032.2). Total num frames: 176128. Throughput: 0: 1268.9. Samples: 44412. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:29:34,785][67293] Avg episode reward: [(0, '4.606')]
[2024-08-26 13:29:34,793][67417] Saving new best policy, reward=4.606!
[2024-08-26 13:29:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5017.6, 300 sec: 5017.6). Total num frames: 200704. Throughput: 0: 1204.1. Samples: 48164. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:39,785][67293] Avg episode reward: [(0, '4.245')]
[2024-08-26 13:29:39,958][67464] Updated weights for policy 0, policy_version 50 (0.0005)
[2024-08-26 13:29:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5097.2, 300 sec: 5097.2). Total num frames: 229376. Throughput: 0: 1252.0. Samples: 56340. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:44,785][67293] Avg episode reward: [(0, '4.123')]
[2024-08-26 13:29:47,577][67464] Updated weights for policy 0, policy_version 60 (0.0005)
[2024-08-26 13:29:49,785][67293] Fps is (10 sec: 5734.2, 60 sec: 5160.9, 300 sec: 5160.9). Total num frames: 258048. Throughput: 0: 1279.3. Samples: 64528. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:49,785][67293] Avg episode reward: [(0, '4.091')]
[2024-08-26 13:29:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5138.6, 300 sec: 5138.6). Total num frames: 282624. Throughput: 0: 1289.3. Samples: 68648. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:29:54,785][67293] Avg episode reward: [(0, '4.289')]
[2024-08-26 13:29:54,902][67464] Updated weights for policy 0, policy_version 70 (0.0005)
[2024-08-26 13:29:59,785][67293] Fps is (10 sec: 5325.0, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 311296. Throughput: 0: 1304.4. Samples: 76832. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:29:59,785][67293] Avg episode reward: [(0, '4.424')]
[2024-08-26 13:30:02,382][67464] Updated weights for policy 0, policy_version 80 (0.0006)
[2024-08-26 13:30:04,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5256.5, 300 sec: 5230.3). Total num frames: 339968. Throughput: 0: 1316.4. Samples: 85104. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:04,785][67293] Avg episode reward: [(0, '4.276')]
[2024-08-26 13:30:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5207.8). Total num frames: 364544. Throughput: 0: 1321.2. Samples: 89268. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:09,785][67293] Avg episode reward: [(0, '4.377')]
[2024-08-26 13:30:09,969][67464] Updated weights for policy 0, policy_version 90 (0.0005)
[2024-08-26 13:30:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.6, 300 sec: 5242.9). Total num frames: 393216. Throughput: 0: 1338.8. Samples: 97412. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:14,785][67293] Avg episode reward: [(0, '4.563')]
[2024-08-26 13:30:17,639][67464] Updated weights for policy 0, policy_version 100 (0.0005)
[2024-08-26 13:30:19,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5273.6). Total num frames: 421888. Throughput: 0: 1358.4. Samples: 105540. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:30:19,785][67293] Avg episode reward: [(0, '4.529')]
[2024-08-26 13:30:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5252.5). Total num frames: 446464. Throughput: 0: 1365.0. Samples: 109588. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:30:24,785][67293] Avg episode reward: [(0, '4.788')]
[2024-08-26 13:30:24,793][67417] Saving new best policy, reward=4.788!
[2024-08-26 13:30:24,978][67464] Updated weights for policy 0, policy_version 110 (0.0005)
[2024-08-26 13:30:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5279.3). Total num frames: 475136. Throughput: 0: 1365.8. Samples: 117800. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:30:29,785][67293] Avg episode reward: [(0, '4.672')]
[2024-08-26 13:30:32,372][67464] Updated weights for policy 0, policy_version 120 (0.0005)
[2024-08-26 13:30:34,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5303.2). Total num frames: 503808. Throughput: 0: 1366.9. Samples: 126036. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:34,785][67293] Avg episode reward: [(0, '4.595')]
[2024-08-26 13:30:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5283.8). Total num frames: 528384. Throughput: 0: 1364.8. Samples: 130064. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:39,785][67293] Avg episode reward: [(0, '4.477')]
[2024-08-26 13:30:40,036][67464] Updated weights for policy 0, policy_version 130 (0.0005)
[2024-08-26 13:30:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5305.3). Total num frames: 557056. Throughput: 0: 1365.7. Samples: 138288. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:30:44,785][67293] Avg episode reward: [(0, '4.385')]
[2024-08-26 13:30:47,745][67464] Updated weights for policy 0, policy_version 140 (0.0005)
[2024-08-26 13:30:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5287.6). Total num frames: 581632. Throughput: 0: 1354.1. Samples: 146040. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:30:49,785][67293] Avg episode reward: [(0, '4.383')]
[2024-08-26 13:30:54,787][67293] Fps is (10 sec: 5323.9, 60 sec: 5461.2, 300 sec: 5306.9). Total num frames: 610304. Throughput: 0: 1346.6. Samples: 149868. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:54,787][67293] Avg episode reward: [(0, '4.397')]
[2024-08-26 13:30:54,789][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000149_610304.pth...
[2024-08-26 13:30:55,570][67464] Updated weights for policy 0, policy_version 150 (0.0006)
[2024-08-26 13:30:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5290.7). Total num frames: 634880. Throughput: 0: 1334.4. Samples: 157460. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:30:59,785][67293] Avg episode reward: [(0, '4.389')]
[2024-08-26 13:31:03,667][67464] Updated weights for policy 0, policy_version 160 (0.0005)
[2024-08-26 13:31:04,785][67293] Fps is (10 sec: 4916.1, 60 sec: 5324.8, 300 sec: 5275.6). Total num frames: 659456. Throughput: 0: 1327.7. Samples: 165288. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:04,785][67293] Avg episode reward: [(0, '4.218')]
[2024-08-26 13:31:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5261.8). Total num frames: 684032. Throughput: 0: 1319.0. Samples: 168944. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:09,785][67293] Avg episode reward: [(0, '4.338')]
[2024-08-26 13:31:11,798][67464] Updated weights for policy 0, policy_version 170 (0.0006)
[2024-08-26 13:31:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5248.9). Total num frames: 708608. Throughput: 0: 1306.5. Samples: 176592. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:14,785][67293] Avg episode reward: [(0, '4.531')]
[2024-08-26 13:31:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5237.0). Total num frames: 733184. Throughput: 0: 1283.4. Samples: 183788. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:19,785][67293] Avg episode reward: [(0, '4.642')]
[2024-08-26 13:31:20,388][67464] Updated weights for policy 0, policy_version 180 (0.0006)
[2024-08-26 13:31:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5225.9). Total num frames: 757760. Throughput: 0: 1276.9. Samples: 187524. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:24,785][67293] Avg episode reward: [(0, '4.429')]
[2024-08-26 13:31:28,316][67464] Updated weights for policy 0, policy_version 190 (0.0006)
[2024-08-26 13:31:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5215.6). Total num frames: 782336. Throughput: 0: 1262.8. Samples: 195116. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:29,785][67293] Avg episode reward: [(0, '4.381')]
[2024-08-26 13:31:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5205.9). Total num frames: 806912. Throughput: 0: 1252.6. Samples: 202408. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:31:34,785][67293] Avg episode reward: [(0, '4.317')]
[2024-08-26 13:31:36,552][67464] Updated weights for policy 0, policy_version 200 (0.0005)
[2024-08-26 13:31:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5196.8). Total num frames: 831488. Throughput: 0: 1254.2. Samples: 206304. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:39,785][67293] Avg episode reward: [(0, '4.343')]
[2024-08-26 13:31:44,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5188.3). Total num frames: 856064. Throughput: 0: 1243.8. Samples: 213432. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:31:44,785][67293] Avg episode reward: [(0, '4.702')]
[2024-08-26 13:31:44,926][67464] Updated weights for policy 0, policy_version 210 (0.0005)
[2024-08-26 13:31:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5051.7, 300 sec: 5204.3). Total num frames: 884736. Throughput: 0: 1250.5. Samples: 221560. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:49,785][67293] Avg episode reward: [(0, '4.394')]
[2024-08-26 13:31:52,635][67464] Updated weights for policy 0, policy_version 220 (0.0006)
[2024-08-26 13:31:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.6, 300 sec: 5196.1). Total num frames: 909312. Throughput: 0: 1255.3. Samples: 225432. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:54,785][67293] Avg episode reward: [(0, '4.301')]
[2024-08-26 13:31:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5188.3). Total num frames: 933888. Throughput: 0: 1253.2. Samples: 232984. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:31:59,785][67293] Avg episode reward: [(0, '4.564')]
[2024-08-26 13:32:00,724][67464] Updated weights for policy 0, policy_version 230 (0.0005)
[2024-08-26 13:32:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5051.7, 300 sec: 5203.0). Total num frames: 962560. Throughput: 0: 1274.9. Samples: 241160. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:04,785][67293] Avg episode reward: [(0, '4.499')]
[2024-08-26 13:32:08,125][67464] Updated weights for policy 0, policy_version 240 (0.0006)
[2024-08-26 13:32:09,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5120.0, 300 sec: 5217.0). Total num frames: 991232. Throughput: 0: 1282.3. Samples: 245228. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:09,785][67293] Avg episode reward: [(0, '4.405')]
[2024-08-26 13:32:14,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5230.3). Total num frames: 1019904. Throughput: 0: 1299.9. Samples: 253612. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:14,785][67293] Avg episode reward: [(0, '4.402')]
[2024-08-26 13:32:15,416][67464] Updated weights for policy 0, policy_version 250 (0.0005)
[2024-08-26 13:32:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5222.4). Total num frames: 1044480. Throughput: 0: 1321.1. Samples: 261856. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:19,785][67293] Avg episode reward: [(0, '4.419')]
[2024-08-26 13:32:23,245][67464] Updated weights for policy 0, policy_version 260 (0.0005)
[2024-08-26 13:32:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5234.9). Total num frames: 1073152. Throughput: 0: 1316.5. Samples: 265548. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:24,785][67293] Avg episode reward: [(0, '4.355')]
[2024-08-26 13:32:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5207.8). Total num frames: 1093632. Throughput: 0: 1322.6. Samples: 272948. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:29,785][67293] Avg episode reward: [(0, '4.411')]
[2024-08-26 13:32:31,838][67464] Updated weights for policy 0, policy_version 270 (0.0005)
[2024-08-26 13:32:34,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5188.3, 300 sec: 5201.0). Total num frames: 1118208. Throughput: 0: 1310.5. Samples: 280532. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:34,785][67293] Avg episode reward: [(0, '4.462')]
[2024-08-26 13:32:39,393][67464] Updated weights for policy 0, policy_version 280 (0.0005)
[2024-08-26 13:32:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5213.1). Total num frames: 1146880. Throughput: 0: 1311.0. Samples: 284428. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:39,785][67293] Avg episode reward: [(0, '4.378')]
[2024-08-26 13:32:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5206.5). Total num frames: 1171456. Throughput: 0: 1318.8. Samples: 292328. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:44,785][67293] Avg episode reward: [(0, '4.437')]
[2024-08-26 13:32:47,111][67464] Updated weights for policy 0, policy_version 290 (0.0005)
[2024-08-26 13:32:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5217.9). Total num frames: 1200128. Throughput: 0: 1312.4. Samples: 300220. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:32:49,785][67293] Avg episode reward: [(0, '4.536')]
[2024-08-26 13:32:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5211.5). Total num frames: 1224704. Throughput: 0: 1307.0. Samples: 304044. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:54,785][67293] Avg episode reward: [(0, '4.479')]
[2024-08-26 13:32:54,794][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000299_1224704.pth...
[2024-08-26 13:32:55,289][67464] Updated weights for policy 0, policy_version 300 (0.0005)
[2024-08-26 13:32:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5205.3). Total num frames: 1249280. Throughput: 0: 1280.1. Samples: 311216. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:32:59,785][67293] Avg episode reward: [(0, '4.325')]
[2024-08-26 13:33:04,113][67464] Updated weights for policy 0, policy_version 310 (0.0006)
[2024-08-26 13:33:04,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5120.0, 300 sec: 5182.7). Total num frames: 1269760. Throughput: 0: 1253.0. Samples: 318240. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:33:04,785][67293] Avg episode reward: [(0, '4.221')]
[2024-08-26 13:33:09,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5051.7, 300 sec: 5177.3). Total num frames: 1294336. Throughput: 0: 1251.7. Samples: 321876. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:09,785][67293] Avg episode reward: [(0, '4.292')]
[2024-08-26 13:33:12,627][67464] Updated weights for policy 0, policy_version 320 (0.0007)
[2024-08-26 13:33:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5172.2). Total num frames: 1318912. Throughput: 0: 1248.7. Samples: 329140. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:14,785][67293] Avg episode reward: [(0, '4.219')]
[2024-08-26 13:33:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5167.3). Total num frames: 1343488. Throughput: 0: 1241.5. Samples: 336400. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:33:19,785][67293] Avg episode reward: [(0, '4.218')]
[2024-08-26 13:33:20,893][67464] Updated weights for policy 0, policy_version 330 (0.0006)
[2024-08-26 13:33:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5162.5). Total num frames: 1368064. Throughput: 0: 1240.5. Samples: 340252. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:24,785][67293] Avg episode reward: [(0, '4.273')]
[2024-08-26 13:33:29,288][67464] Updated weights for policy 0, policy_version 340 (0.0006)
[2024-08-26 13:33:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5157.9). Total num frames: 1392640. Throughput: 0: 1224.6. Samples: 347436. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:29,785][67293] Avg episode reward: [(0, '4.329')]
[2024-08-26 13:33:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5153.5). Total num frames: 1417216. Throughput: 0: 1211.2. Samples: 354724. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:34,785][67293] Avg episode reward: [(0, '4.455')]
[2024-08-26 13:33:37,759][67464] Updated weights for policy 0, policy_version 350 (0.0006)
[2024-08-26 13:33:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5149.3). Total num frames: 1441792. Throughput: 0: 1207.6. Samples: 358384. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:39,785][67293] Avg episode reward: [(0, '4.490')]
[2024-08-26 13:33:44,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5145.2). Total num frames: 1466368. Throughput: 0: 1219.3. Samples: 366084. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:44,785][67293] Avg episode reward: [(0, '4.251')]
[2024-08-26 13:33:45,681][67464] Updated weights for policy 0, policy_version 360 (0.0005)
[2024-08-26 13:33:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4915.2, 300 sec: 5155.3). Total num frames: 1495040. Throughput: 0: 1245.4. Samples: 374284. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:49,785][67293] Avg episode reward: [(0, '4.303')]
[2024-08-26 13:33:53,361][67464] Updated weights for policy 0, policy_version 370 (0.0005)
[2024-08-26 13:33:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4915.2, 300 sec: 5151.2). Total num frames: 1519616. Throughput: 0: 1247.2. Samples: 378000. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:33:54,785][67293] Avg episode reward: [(0, '4.251')]
[2024-08-26 13:33:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5165.1). Total num frames: 1548288. Throughput: 0: 1264.2. Samples: 386028. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:33:59,785][67293] Avg episode reward: [(0, '4.264')]
[2024-08-26 13:34:01,212][67464] Updated weights for policy 0, policy_version 380 (0.0005)
[2024-08-26 13:34:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5051.7, 300 sec: 5151.2). Total num frames: 1572864. Throughput: 0: 1280.8. Samples: 394036. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:04,785][67293] Avg episode reward: [(0, '4.392')]
[2024-08-26 13:34:08,765][67464] Updated weights for policy 0, policy_version 390 (0.0005)
[2024-08-26 13:34:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 1601536. Throughput: 0: 1281.8. Samples: 397932. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:09,785][67293] Avg episode reward: [(0, '4.434')]
[2024-08-26 13:34:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 1626112. Throughput: 0: 1298.0. Samples: 405844. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:14,785][67293] Avg episode reward: [(0, '4.492')]
[2024-08-26 13:34:16,475][67464] Updated weights for policy 0, policy_version 400 (0.0005)
[2024-08-26 13:34:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 1654784. Throughput: 0: 1315.6. Samples: 413928. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:19,785][67293] Avg episode reward: [(0, '4.347')]
[2024-08-26 13:34:24,311][67464] Updated weights for policy 0, policy_version 410 (0.0005)
[2024-08-26 13:34:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 1679360. Throughput: 0: 1321.9. Samples: 417868. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:24,785][67293] Avg episode reward: [(0, '4.222')]
[2024-08-26 13:34:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 1703936. Throughput: 0: 1320.2. Samples: 425492. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:29,785][67293] Avg episode reward: [(0, '4.278')]
[2024-08-26 13:34:32,055][67464] Updated weights for policy 0, policy_version 420 (0.0006)
[2024-08-26 13:34:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5192.9). Total num frames: 1732608. Throughput: 0: 1318.1. Samples: 433600. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:34,785][67293] Avg episode reward: [(0, '4.300')]
[2024-08-26 13:34:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 1757184. Throughput: 0: 1326.2. Samples: 437680. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:39,785][67293] Avg episode reward: [(0, '4.437')]
[2024-08-26 13:34:39,878][67464] Updated weights for policy 0, policy_version 430 (0.0005)
[2024-08-26 13:34:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5179.0). Total num frames: 1785856. Throughput: 0: 1321.8. Samples: 445508. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:34:44,785][67293] Avg episode reward: [(0, '4.387')]
[2024-08-26 13:34:47,431][67464] Updated weights for policy 0, policy_version 440 (0.0005)
[2024-08-26 13:34:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 1810432. Throughput: 0: 1324.7. Samples: 453648. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:49,785][67293] Avg episode reward: [(0, '4.337')]
[2024-08-26 13:34:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5179.0). Total num frames: 1839104. Throughput: 0: 1324.1. Samples: 457516. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:54,785][67293] Avg episode reward: [(0, '4.212')]
[2024-08-26 13:34:54,793][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000449_1839104.pth...
[2024-08-26 13:34:54,818][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000149_610304.pth
[2024-08-26 13:34:55,596][67464] Updated weights for policy 0, policy_version 450 (0.0006)
[2024-08-26 13:34:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 1863680. Throughput: 0: 1316.5. Samples: 465088. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:34:59,785][67293] Avg episode reward: [(0, '4.250')]
[2024-08-26 13:35:03,267][67464] Updated weights for policy 0, policy_version 460 (0.0006)
[2024-08-26 13:35:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 1888256. Throughput: 0: 1310.5. Samples: 472900. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:35:04,785][67293] Avg episode reward: [(0, '4.382')]
[2024-08-26 13:35:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 1916928. Throughput: 0: 1315.6. Samples: 477068. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:35:09,785][67293] Avg episode reward: [(0, '4.491')]
[2024-08-26 13:35:11,048][67464] Updated weights for policy 0, policy_version 470 (0.0005)
[2024-08-26 13:35:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 1941504. Throughput: 0: 1323.1. Samples: 485032.
|
[2024-08-26 13:35:14,785][67293] Avg episode reward: [(0, '4.455')] |
|
[2024-08-26 13:35:18,723][67464] Updated weights for policy 0, policy_version 480 (0.0005) |
|
[2024-08-26 13:35:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 1970176. Throughput: 0: 1318.1. Samples: 492916. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:35:19,785][67293] Avg episode reward: [(0, '4.407')] |
|
[2024-08-26 13:35:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 1994752. Throughput: 0: 1315.3. Samples: 496868. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:35:24,785][67293] Avg episode reward: [(0, '4.396')] |
|
[2024-08-26 13:35:26,465][67464] Updated weights for policy 0, policy_version 490 (0.0005) |
|
[2024-08-26 13:35:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5151.2). Total num frames: 2023424. Throughput: 0: 1320.7. Samples: 504940. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:35:29,785][67293] Avg episode reward: [(0, '4.500')] |
|
[2024-08-26 13:35:34,168][67464] Updated weights for policy 0, policy_version 500 (0.0005) |
|
[2024-08-26 13:35:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 2048000. Throughput: 0: 1314.5. Samples: 512800. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:35:34,785][67293] Avg episode reward: [(0, '4.234')] |
|
[2024-08-26 13:35:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5151.2). Total num frames: 2076672. Throughput: 0: 1316.4. Samples: 516756. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:35:39,785][67293] Avg episode reward: [(0, '4.395')] |
|
[2024-08-26 13:35:41,748][67464] Updated weights for policy 0, policy_version 510 (0.0005) |
|
[2024-08-26 13:35:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 2101248. Throughput: 0: 1325.0. Samples: 524712. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:35:44,785][67293] Avg episode reward: [(0, '4.510')] |
|
[2024-08-26 13:35:49,509][67464] Updated weights for policy 0, policy_version 520 (0.0005) |
|
[2024-08-26 13:35:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5151.3). Total num frames: 2129920. Throughput: 0: 1331.3. Samples: 532808. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:35:49,785][67293] Avg episode reward: [(0, '4.514')] |
|
[2024-08-26 13:35:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 2154496. Throughput: 0: 1327.6. Samples: 536808. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:35:54,785][67293] Avg episode reward: [(0, '4.504')] |
|
[2024-08-26 13:35:57,364][67464] Updated weights for policy 0, policy_version 530 (0.0006) |
|
[2024-08-26 13:35:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5165.1). Total num frames: 2183168. Throughput: 0: 1318.8. Samples: 544380. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:35:59,785][67293] Avg episode reward: [(0, '4.333')] |
|
[2024-08-26 13:36:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5165.1). Total num frames: 2207744. Throughput: 0: 1321.2. Samples: 552372. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:04,785][67293] Avg episode reward: [(0, '4.408')] |
|
[2024-08-26 13:36:05,173][67464] Updated weights for policy 0, policy_version 540 (0.0005) |
|
[2024-08-26 13:36:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 2232320. Throughput: 0: 1325.2. Samples: 556504. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:09,785][67293] Avg episode reward: [(0, '4.487')] |
|
[2024-08-26 13:36:12,820][67464] Updated weights for policy 0, policy_version 550 (0.0005) |
|
[2024-08-26 13:36:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5179.0). Total num frames: 2260992. Throughput: 0: 1319.3. Samples: 564308. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:14,785][67293] Avg episode reward: [(0, '4.382')] |
|
[2024-08-26 13:36:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 2285568. Throughput: 0: 1325.4. Samples: 572444. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:19,785][67293] Avg episode reward: [(0, '4.424')] |
|
[2024-08-26 13:36:20,641][67464] Updated weights for policy 0, policy_version 560 (0.0005) |
|
[2024-08-26 13:36:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5192.9). Total num frames: 2314240. Throughput: 0: 1324.7. Samples: 576368. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:24,785][67293] Avg episode reward: [(0, '4.496')] |
|
[2024-08-26 13:36:28,177][67464] Updated weights for policy 0, policy_version 570 (0.0006) |
|
[2024-08-26 13:36:29,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5206.8). Total num frames: 2342912. Throughput: 0: 1327.5. Samples: 584448. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:29,785][67293] Avg episode reward: [(0, '4.398')] |
|
[2024-08-26 13:36:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5206.8). Total num frames: 2367488. Throughput: 0: 1319.3. Samples: 592176. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:34,785][67293] Avg episode reward: [(0, '4.286')] |
|
[2024-08-26 13:36:36,222][67464] Updated weights for policy 0, policy_version 580 (0.0005) |
|
[2024-08-26 13:36:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5206.8). Total num frames: 2392064. Throughput: 0: 1313.6. Samples: 595920. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:39,785][67293] Avg episode reward: [(0, '4.356')] |
|
[2024-08-26 13:36:44,286][67464] Updated weights for policy 0, policy_version 590 (0.0006) |
|
[2024-08-26 13:36:44,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5192.9). Total num frames: 2416640. Throughput: 0: 1312.3. Samples: 603432. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:36:44,785][67293] Avg episode reward: [(0, '4.454')] |
|
[2024-08-26 13:36:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5206.8). Total num frames: 2445312. Throughput: 0: 1310.3. Samples: 611336. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:36:49,785][67293] Avg episode reward: [(0, '4.411')] |
|
[2024-08-26 13:36:52,316][67464] Updated weights for policy 0, policy_version 600 (0.0005) |
|
[2024-08-26 13:36:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5206.8). Total num frames: 2469888. Throughput: 0: 1300.8. Samples: 615040. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:36:54,785][67293] Avg episode reward: [(0, '4.455')] |
|
[2024-08-26 13:36:54,789][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000603_2469888.pth... |
|
[2024-08-26 13:36:54,812][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000299_1224704.pth |
|
[2024-08-26 13:36:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5192.9). Total num frames: 2494464. Throughput: 0: 1299.3. Samples: 622776. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:36:59,785][67293] Avg episode reward: [(0, '4.449')] |
|
[2024-08-26 13:37:00,140][67464] Updated weights for policy 0, policy_version 610 (0.0006) |
|
[2024-08-26 13:37:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 2519040. Throughput: 0: 1293.5. Samples: 630652. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:37:04,785][67293] Avg episode reward: [(0, '4.513')] |
|
[2024-08-26 13:37:08,089][67464] Updated weights for policy 0, policy_version 620 (0.0005) |
|
[2024-08-26 13:37:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 2547712. Throughput: 0: 1290.1. Samples: 634424. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:09,785][67293] Avg episode reward: [(0, '4.389')] |
|
[2024-08-26 13:37:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 2572288. Throughput: 0: 1284.2. Samples: 642236. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:14,785][67293] Avg episode reward: [(0, '4.462')] |
|
[2024-08-26 13:37:16,146][67464] Updated weights for policy 0, policy_version 630 (0.0005) |
|
[2024-08-26 13:37:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 2596864. Throughput: 0: 1284.0. Samples: 649956. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:19,785][67293] Avg episode reward: [(0, '4.450')] |
|
[2024-08-26 13:37:23,941][67464] Updated weights for policy 0, policy_version 640 (0.0006) |
|
[2024-08-26 13:37:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5192.9). Total num frames: 2625536. Throughput: 0: 1286.1. Samples: 653796. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:24,785][67293] Avg episode reward: [(0, '4.385')] |
|
[2024-08-26 13:37:29,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5120.0, 300 sec: 5192.9). Total num frames: 2650112. Throughput: 0: 1290.2. Samples: 661492. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:29,785][67293] Avg episode reward: [(0, '4.265')] |
|
[2024-08-26 13:37:32,027][67464] Updated weights for policy 0, policy_version 650 (0.0006) |
|
[2024-08-26 13:37:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5179.0). Total num frames: 2674688. Throughput: 0: 1285.6. Samples: 669188. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:34,785][67293] Avg episode reward: [(0, '4.355')] |
|
[2024-08-26 13:37:39,785][67293] Fps is (10 sec: 4915.3, 60 sec: 5120.0, 300 sec: 5179.0). Total num frames: 2699264. Throughput: 0: 1288.0. Samples: 673000. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:39,785][67293] Avg episode reward: [(0, '4.350')] |
|
[2024-08-26 13:37:39,883][67464] Updated weights for policy 0, policy_version 660 (0.0005) |
|
[2024-08-26 13:37:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 2727936. Throughput: 0: 1290.8. Samples: 680860. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:44,785][67293] Avg episode reward: [(0, '4.380')] |
|
[2024-08-26 13:37:47,656][67464] Updated weights for policy 0, policy_version 670 (0.0006) |
|
[2024-08-26 13:37:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5179.0). Total num frames: 2752512. Throughput: 0: 1296.4. Samples: 688988. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:37:49,785][67293] Avg episode reward: [(0, '4.326')] |
|
[2024-08-26 13:37:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5192.9). Total num frames: 2781184. Throughput: 0: 1300.4. Samples: 692944. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:37:54,785][67293] Avg episode reward: [(0, '4.394')] |
|
[2024-08-26 13:37:55,150][67464] Updated weights for policy 0, policy_version 680 (0.0006) |
|
[2024-08-26 13:37:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5206.8). Total num frames: 2805760. Throughput: 0: 1304.0. Samples: 700916. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:37:59,785][67293] Avg episode reward: [(0, '4.436')] |
|
[2024-08-26 13:38:03,272][67464] Updated weights for policy 0, policy_version 690 (0.0005) |
|
[2024-08-26 13:38:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5220.7). Total num frames: 2834432. Throughput: 0: 1304.1. Samples: 708640. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:38:04,785][67293] Avg episode reward: [(0, '4.448')] |
|
[2024-08-26 13:38:09,786][67293] Fps is (10 sec: 5324.5, 60 sec: 5188.2, 300 sec: 5220.7). Total num frames: 2859008. Throughput: 0: 1303.5. Samples: 712456. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:09,786][67293] Avg episode reward: [(0, '4.459')] |
|
[2024-08-26 13:38:10,862][67464] Updated weights for policy 0, policy_version 700 (0.0005) |
|
[2024-08-26 13:38:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5234.5). Total num frames: 2887680. Throughput: 0: 1315.3. Samples: 720680. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:14,785][67293] Avg episode reward: [(0, '4.361')] |
|
[2024-08-26 13:38:18,527][67464] Updated weights for policy 0, policy_version 710 (0.0005) |
|
[2024-08-26 13:38:19,785][67293] Fps is (10 sec: 5325.1, 60 sec: 5256.5, 300 sec: 5234.5). Total num frames: 2912256. Throughput: 0: 1326.3. Samples: 728872. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:19,785][67293] Avg episode reward: [(0, '4.360')] |
|
[2024-08-26 13:38:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 2940928. Throughput: 0: 1336.8. Samples: 733156. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:38:24,785][67293] Avg episode reward: [(0, '4.407')] |
|
[2024-08-26 13:38:26,039][67464] Updated weights for policy 0, policy_version 720 (0.0005) |
|
[2024-08-26 13:38:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.6, 300 sec: 5248.4). Total num frames: 2965504. Throughput: 0: 1335.6. Samples: 740960. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:38:29,785][67293] Avg episode reward: [(0, '4.347')] |
|
[2024-08-26 13:38:33,898][67464] Updated weights for policy 0, policy_version 730 (0.0005) |
|
[2024-08-26 13:38:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 2994176. Throughput: 0: 1327.7. Samples: 748736. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:34,785][67293] Avg episode reward: [(0, '4.297')] |
|
[2024-08-26 13:38:39,785][67293] Fps is (10 sec: 5324.6, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 3018752. Throughput: 0: 1328.3. Samples: 752720. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:38:39,785][67293] Avg episode reward: [(0, '4.390')] |
|
[2024-08-26 13:38:41,555][67464] Updated weights for policy 0, policy_version 740 (0.0005) |
|
[2024-08-26 13:38:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 3047424. Throughput: 0: 1329.9. Samples: 760760. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:44,785][67293] Avg episode reward: [(0, '4.511')] |
|
[2024-08-26 13:38:49,359][67464] Updated weights for policy 0, policy_version 750 (0.0006) |
|
[2024-08-26 13:38:49,785][67293] Fps is (10 sec: 5325.0, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 3072000. Throughput: 0: 1330.2. Samples: 768500. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:38:49,785][67293] Avg episode reward: [(0, '4.488')] |
|
[2024-08-26 13:38:54,785][67293] Fps is (10 sec: 4915.1, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 3096576. Throughput: 0: 1329.1. Samples: 772264. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:54,786][67293] Avg episode reward: [(0, '4.356')] |
|
[2024-08-26 13:38:54,795][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000756_3096576.pth... |
|
[2024-08-26 13:38:54,795][67293] Components not started: RolloutWorker_w1, RolloutWorker_w2, RolloutWorker_w3, RolloutWorker_w4, RolloutWorker_w5, RolloutWorker_w6, RolloutWorker_w7, wait_time=600.0 seconds |
|
[2024-08-26 13:38:54,820][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000449_1839104.pth |
|
[2024-08-26 13:38:57,223][67464] Updated weights for policy 0, policy_version 760 (0.0006) |
|
[2024-08-26 13:38:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 3125248. Throughput: 0: 1319.8. Samples: 780072. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:38:59,785][67293] Avg episode reward: [(0, '4.416')] |
|
[2024-08-26 13:39:04,785][67293] Fps is (10 sec: 5324.9, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 3149824. Throughput: 0: 1305.2. Samples: 787604. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:39:04,785][67293] Avg episode reward: [(0, '4.342')] |
|
[2024-08-26 13:39:05,356][67464] Updated weights for policy 0, policy_version 770 (0.0006) |
|
[2024-08-26 13:39:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 3178496. Throughput: 0: 1305.2. Samples: 791892. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:09,785][67293] Avg episode reward: [(0, '4.455')] |
|
[2024-08-26 13:39:12,754][67464] Updated weights for policy 0, policy_version 780 (0.0005) |
|
[2024-08-26 13:39:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 3203072. Throughput: 0: 1313.2. Samples: 800052. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:14,785][67293] Avg episode reward: [(0, '4.428')] |
|
[2024-08-26 13:39:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 3227648. Throughput: 0: 1305.7. Samples: 807492. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:39:19,785][67293] Avg episode reward: [(0, '4.540')] |
|
[2024-08-26 13:39:20,864][67464] Updated weights for policy 0, policy_version 790 (0.0005) |
|
[2024-08-26 13:39:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5248.4). Total num frames: 3252224. Throughput: 0: 1303.7. Samples: 811388. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:24,785][67293] Avg episode reward: [(0, '4.422')] |
|
[2024-08-26 13:39:29,298][67464] Updated weights for policy 0, policy_version 800 (0.0006) |
|
[2024-08-26 13:39:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5234.5). Total num frames: 3276800. Throughput: 0: 1284.3. Samples: 818552. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:39:29,785][67293] Avg episode reward: [(0, '4.541')] |
|
[2024-08-26 13:39:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5234.5). Total num frames: 3301376. Throughput: 0: 1271.9. Samples: 825736. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:34,785][67293] Avg episode reward: [(0, '4.338')] |
|
[2024-08-26 13:39:37,954][67464] Updated weights for policy 0, policy_version 810 (0.0007) |
|
[2024-08-26 13:39:39,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5051.8, 300 sec: 5206.8). Total num frames: 3321856. Throughput: 0: 1264.4. Samples: 829160. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:39:39,785][67293] Avg episode reward: [(0, '4.332')] |
|
[2024-08-26 13:39:44,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4983.5, 300 sec: 5206.8). Total num frames: 3346432. Throughput: 0: 1247.3. Samples: 836200. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:44,785][67293] Avg episode reward: [(0, '4.322')] |
|
[2024-08-26 13:39:46,457][67464] Updated weights for policy 0, policy_version 820 (0.0006) |
|
[2024-08-26 13:39:49,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5192.9). Total num frames: 3371008. Throughput: 0: 1249.7. Samples: 843840. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:49,785][67293] Avg episode reward: [(0, '4.445')] |
|
[2024-08-26 13:39:54,727][67464] Updated weights for policy 0, policy_version 830 (0.0005) |
|
[2024-08-26 13:39:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5051.8, 300 sec: 5206.8). Total num frames: 3399680. Throughput: 0: 1237.5. Samples: 847580. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:39:54,785][67293] Avg episode reward: [(0, '4.404')] |
|
[2024-08-26 13:39:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5206.8). Total num frames: 3424256. Throughput: 0: 1219.5. Samples: 854928. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:39:59,785][67293] Avg episode reward: [(0, '4.499')] |
|
[2024-08-26 13:40:02,818][67464] Updated weights for policy 0, policy_version 840 (0.0005) |
|
[2024-08-26 13:40:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5192.9). Total num frames: 3448832. Throughput: 0: 1227.0. Samples: 862708. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:04,785][67293] Avg episode reward: [(0, '4.608')] |
|
[2024-08-26 13:40:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5192.9). Total num frames: 3473408. Throughput: 0: 1224.2. Samples: 866476. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:09,785][67293] Avg episode reward: [(0, '4.308')] |
|
[2024-08-26 13:40:11,085][67464] Updated weights for policy 0, policy_version 850 (0.0006) |
|
[2024-08-26 13:40:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5179.0). Total num frames: 3497984. Throughput: 0: 1225.3. Samples: 873692. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:14,786][67293] Avg episode reward: [(0, '4.294')] |
|
[2024-08-26 13:40:19,478][67464] Updated weights for policy 0, policy_version 860 (0.0006) |
|
[2024-08-26 13:40:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5179.0). Total num frames: 3522560. Throughput: 0: 1228.7. Samples: 881028. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:19,785][67293] Avg episode reward: [(0, '4.371')] |
|
[2024-08-26 13:40:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5165.1). Total num frames: 3547136. Throughput: 0: 1237.7. Samples: 884856. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:24,785][67293] Avg episode reward: [(0, '4.245')] |
|
[2024-08-26 13:40:27,239][67464] Updated weights for policy 0, policy_version 870 (0.0006) |
|
[2024-08-26 13:40:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5179.0). Total num frames: 3575808. Throughput: 0: 1255.5. Samples: 892696. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:29,785][67293] Avg episode reward: [(0, '4.276')] |
|
[2024-08-26 13:40:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5165.1). Total num frames: 3600384. Throughput: 0: 1261.7. Samples: 900616. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:40:34,785][67293] Avg episode reward: [(0, '4.265')] |
|
[2024-08-26 13:40:35,093][67464] Updated weights for policy 0, policy_version 880 (0.0005) |
|
[2024-08-26 13:40:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5179.0). Total num frames: 3629056. Throughput: 0: 1269.2. Samples: 904696. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:39,785][67293] Avg episode reward: [(0, '4.295')] |
|
[2024-08-26 13:40:42,752][67464] Updated weights for policy 0, policy_version 890 (0.0005) |
|
[2024-08-26 13:40:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 3653632. Throughput: 0: 1279.7. Samples: 912516. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:44,785][67293] Avg episode reward: [(0, '4.199')] |
|
[2024-08-26 13:40:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 3682304. Throughput: 0: 1285.4. Samples: 920552. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:40:49,785][67293] Avg episode reward: [(0, '4.551')] |
|
[2024-08-26 13:40:50,500][67464] Updated weights for policy 0, policy_version 900 (0.0005) |
|
[2024-08-26 13:40:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 3706880. Throughput: 0: 1293.1. Samples: 924664. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:54,785][67293] Avg episode reward: [(0, '4.548')] |
|
[2024-08-26 13:40:54,794][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000905_3706880.pth... |
|
[2024-08-26 13:40:54,818][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000603_2469888.pth |
|
[2024-08-26 13:40:58,271][67464] Updated weights for policy 0, policy_version 910 (0.0005) |
|
[2024-08-26 13:40:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 3731456. Throughput: 0: 1306.8. Samples: 932500. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:40:59,785][67293] Avg episode reward: [(0, '4.511')] |
|
[2024-08-26 13:41:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5179.0). Total num frames: 3760128. Throughput: 0: 1314.3. Samples: 940172. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:04,785][67293] Avg episode reward: [(0, '4.472')] |
|
[2024-08-26 13:41:06,154][67464] Updated weights for policy 0, policy_version 920 (0.0005) |
|
[2024-08-26 13:41:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 3784704. Throughput: 0: 1318.7. Samples: 944196. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:09,785][67293] Avg episode reward: [(0, '4.539')] |
|
[2024-08-26 13:41:13,949][67464] Updated weights for policy 0, policy_version 930 (0.0005) |
|
[2024-08-26 13:41:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 3809280. Throughput: 0: 1318.9. Samples: 952048. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:14,785][67293] Avg episode reward: [(0, '4.652')] |
|
[2024-08-26 13:41:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5151.2). Total num frames: 3833856. Throughput: 0: 1304.0. Samples: 959296. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:19,785][67293] Avg episode reward: [(0, '4.635')] |
|
[2024-08-26 13:41:22,380][67464] Updated weights for policy 0, policy_version 940 (0.0005) |
|
[2024-08-26 13:41:24,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5256.5, 300 sec: 5151.2). Total num frames: 3862528. Throughput: 0: 1298.1. Samples: 963112. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:41:24,786][67293] Avg episode reward: [(0, '4.264')] |
|
[2024-08-26 13:41:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5151.2). Total num frames: 3887104. Throughput: 0: 1305.1. Samples: 971244. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:29,785][67293] Avg episode reward: [(0, '4.345')] |
|
[2024-08-26 13:41:30,284][67464] Updated weights for policy 0, policy_version 950 (0.0005) |
|
[2024-08-26 13:41:34,785][67293] Fps is (10 sec: 4915.3, 60 sec: 5188.3, 300 sec: 5151.2). Total num frames: 3911680. Throughput: 0: 1290.8. Samples: 978640. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:34,785][67293] Avg episode reward: [(0, '4.464')] |
|
[2024-08-26 13:41:38,470][67464] Updated weights for policy 0, policy_version 960 (0.0005) |
|
[2024-08-26 13:41:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5151.2). Total num frames: 3936256. Throughput: 0: 1276.4. Samples: 982100. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:41:39,785][67293] Avg episode reward: [(0, '4.243')] |
|
[2024-08-26 13:41:44,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5137.4). Total num frames: 3960832. Throughput: 0: 1273.3. Samples: 989800. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:44,785][67293] Avg episode reward: [(0, '4.241')] |
|
[2024-08-26 13:41:46,260][67464] Updated weights for policy 0, policy_version 970 (0.0006) |
|
[2024-08-26 13:41:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5151.2). Total num frames: 3989504. Throughput: 0: 1288.5. Samples: 998156. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:41:49,785][67293] Avg episode reward: [(0, '4.557')]
[2024-08-26 13:41:53,751][67464] Updated weights for policy 0, policy_version 980 (0.0005)
[2024-08-26 13:41:54,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 4018176. Throughput: 0: 1295.9. Samples: 1002512. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:41:54,785][67293] Avg episode reward: [(0, '4.597')]
[2024-08-26 13:41:59,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 4046848. Throughput: 0: 1299.4. Samples: 1010520. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:41:59,785][67293] Avg episode reward: [(0, '4.460')]
[2024-08-26 13:42:01,104][67464] Updated weights for policy 0, policy_version 990 (0.0005)
[2024-08-26 13:42:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 4071424. Throughput: 0: 1320.7. Samples: 1018728. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:04,785][67293] Avg episode reward: [(0, '4.466')]
[2024-08-26 13:42:08,631][67464] Updated weights for policy 0, policy_version 1000 (0.0005)
[2024-08-26 13:42:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 4100096. Throughput: 0: 1331.3. Samples: 1023020. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:09,785][67293] Avg episode reward: [(0, '4.515')]
[2024-08-26 13:42:14,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5192.9). Total num frames: 4128768. Throughput: 0: 1328.2. Samples: 1031012. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:14,785][67293] Avg episode reward: [(0, '4.367')]
[2024-08-26 13:42:16,008][67464] Updated weights for policy 0, policy_version 1010 (0.0005)
[2024-08-26 13:42:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5179.0). Total num frames: 4153344. Throughput: 0: 1345.5. Samples: 1039188. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:42:19,785][67293] Avg episode reward: [(0, '4.373')]
[2024-08-26 13:42:23,517][67464] Updated weights for policy 0, policy_version 1020 (0.0005)
[2024-08-26 13:42:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5192.9). Total num frames: 4182016. Throughput: 0: 1364.1. Samples: 1043484. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:24,785][67293] Avg episode reward: [(0, '4.279')]
[2024-08-26 13:42:29,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5206.8). Total num frames: 4210688. Throughput: 0: 1373.1. Samples: 1051588. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:42:29,785][67293] Avg episode reward: [(0, '4.511')]
[2024-08-26 13:42:31,031][67464] Updated weights for policy 0, policy_version 1030 (0.0006)
[2024-08-26 13:42:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5206.8). Total num frames: 4235264. Throughput: 0: 1358.8. Samples: 1059304. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:42:34,785][67293] Avg episode reward: [(0, '4.364')]
[2024-08-26 13:42:39,178][67464] Updated weights for policy 0, policy_version 1040 (0.0005)
[2024-08-26 13:42:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5393.1, 300 sec: 5192.9). Total num frames: 4259840. Throughput: 0: 1349.4. Samples: 1063236. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:39,785][67293] Avg episode reward: [(0, '4.358')]
[2024-08-26 13:42:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5206.8). Total num frames: 4288512. Throughput: 0: 1343.8. Samples: 1070992. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:44,785][67293] Avg episode reward: [(0, '4.358')]
[2024-08-26 13:42:47,006][67464] Updated weights for policy 0, policy_version 1050 (0.0006)
[2024-08-26 13:42:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5192.9). Total num frames: 4313088. Throughput: 0: 1329.2. Samples: 1078544. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:49,785][67293] Avg episode reward: [(0, '4.352')]
[2024-08-26 13:42:54,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5192.9). Total num frames: 4337664. Throughput: 0: 1313.7. Samples: 1082136. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:42:54,785][67293] Avg episode reward: [(0, '4.322')]
[2024-08-26 13:42:54,788][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001059_4337664.pth...
[2024-08-26 13:42:54,811][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000756_3096576.pth
[2024-08-26 13:42:55,285][67464] Updated weights for policy 0, policy_version 1060 (0.0006)
[2024-08-26 13:42:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 4362240. Throughput: 0: 1308.4. Samples: 1089892. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:42:59,785][67293] Avg episode reward: [(0, '4.402')]
[2024-08-26 13:43:03,395][67464] Updated weights for policy 0, policy_version 1070 (0.0007)
[2024-08-26 13:43:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 4386816. Throughput: 0: 1294.9. Samples: 1097460. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:04,785][67293] Avg episode reward: [(0, '4.235')]
[2024-08-26 13:43:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5165.1). Total num frames: 4411392. Throughput: 0: 1281.3. Samples: 1101144. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:43:09,785][67293] Avg episode reward: [(0, '4.417')]
[2024-08-26 13:43:11,548][67464] Updated weights for policy 0, policy_version 1080 (0.0006)
[2024-08-26 13:43:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5165.1). Total num frames: 4435968. Throughput: 0: 1267.6. Samples: 1108632. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:14,785][67293] Avg episode reward: [(0, '4.437')]
[2024-08-26 13:43:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5120.0, 300 sec: 5151.2). Total num frames: 4460544. Throughput: 0: 1251.0. Samples: 1115600. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:19,785][67293] Avg episode reward: [(0, '4.407')]
|
[2024-08-26 13:43:20,161][67464] Updated weights for policy 0, policy_version 1090 (0.0005)
[2024-08-26 13:43:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5151.2). Total num frames: 4485120. Throughput: 0: 1238.9. Samples: 1118988. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:24,785][67293] Avg episode reward: [(0, '4.563')]
[2024-08-26 13:43:28,870][67464] Updated weights for policy 0, policy_version 1100 (0.0006)
[2024-08-26 13:43:29,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4915.2, 300 sec: 5123.5). Total num frames: 4505600. Throughput: 0: 1228.1. Samples: 1126256. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:29,785][67293] Avg episode reward: [(0, '4.502')]
[2024-08-26 13:43:34,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4915.2, 300 sec: 5123.5). Total num frames: 4530176. Throughput: 0: 1210.0. Samples: 1132992. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:43:34,785][67293] Avg episode reward: [(0, '4.488')]
[2024-08-26 13:43:38,025][67464] Updated weights for policy 0, policy_version 1110 (0.0006)
[2024-08-26 13:43:39,785][67293] Fps is (10 sec: 4915.1, 60 sec: 4915.2, 300 sec: 5109.6). Total num frames: 4554752. Throughput: 0: 1206.7. Samples: 1136436. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:43:39,786][67293] Avg episode reward: [(0, '4.314')]
[2024-08-26 13:43:44,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4778.7, 300 sec: 5095.7). Total num frames: 4575232. Throughput: 0: 1185.8. Samples: 1143252. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:44,785][67293] Avg episode reward: [(0, '4.361')]
[2024-08-26 13:43:46,729][67464] Updated weights for policy 0, policy_version 1120 (0.0005)
[2024-08-26 13:43:49,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4778.6, 300 sec: 5095.7). Total num frames: 4599808. Throughput: 0: 1176.0. Samples: 1150380. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:49,785][67293] Avg episode reward: [(0, '4.508')]
[2024-08-26 13:43:54,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4778.7, 300 sec: 5081.8). Total num frames: 4624384. Throughput: 0: 1171.2. Samples: 1153848. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:43:54,785][67293] Avg episode reward: [(0, '4.674')]
[2024-08-26 13:43:55,499][67464] Updated weights for policy 0, policy_version 1130 (0.0007)
[2024-08-26 13:43:59,785][67293] Fps is (10 sec: 4505.7, 60 sec: 4710.4, 300 sec: 5067.9). Total num frames: 4644864. Throughput: 0: 1152.1. Samples: 1160476. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:43:59,785][67293] Avg episode reward: [(0, '4.581')]
[2024-08-26 13:44:04,735][67464] Updated weights for policy 0, policy_version 1140 (0.0007)
[2024-08-26 13:44:04,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4710.4, 300 sec: 5054.0). Total num frames: 4669440. Throughput: 0: 1150.8. Samples: 1167384. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:04,785][67293] Avg episode reward: [(0, '4.400')]
[2024-08-26 13:44:09,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4642.1, 300 sec: 5040.2). Total num frames: 4689920. Throughput: 0: 1151.5. Samples: 1170804. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:09,785][67293] Avg episode reward: [(0, '4.434')]
[2024-08-26 13:44:13,515][67464] Updated weights for policy 0, policy_version 1150 (0.0007)
[2024-08-26 13:44:14,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4642.1, 300 sec: 5040.2). Total num frames: 4714496. Throughput: 0: 1148.2. Samples: 1177924. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:44:14,785][67293] Avg episode reward: [(0, '4.467')]
[2024-08-26 13:44:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4642.1, 300 sec: 5040.2). Total num frames: 4739072. Throughput: 0: 1157.6. Samples: 1185084. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:19,785][67293] Avg episode reward: [(0, '4.492')]
[2024-08-26 13:44:22,034][67464] Updated weights for policy 0, policy_version 1160 (0.0005)
[2024-08-26 13:44:24,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4573.9, 300 sec: 5026.3). Total num frames: 4759552. Throughput: 0: 1160.3. Samples: 1188648. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:24,785][67293] Avg episode reward: [(0, '4.508')]
[2024-08-26 13:44:29,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4710.4, 300 sec: 5040.2). Total num frames: 4788224. Throughput: 0: 1171.6. Samples: 1195972. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:29,785][67293] Avg episode reward: [(0, '4.475')]
[2024-08-26 13:44:30,599][67464] Updated weights for policy 0, policy_version 1170 (0.0006)
[2024-08-26 13:44:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4710.4, 300 sec: 5054.0). Total num frames: 4812800. Throughput: 0: 1177.3. Samples: 1203360. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:34,785][67293] Avg episode reward: [(0, '4.384')]
[2024-08-26 13:44:38,674][67464] Updated weights for policy 0, policy_version 1180 (0.0006)
[2024-08-26 13:44:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4710.4, 300 sec: 5054.0). Total num frames: 4837376. Throughput: 0: 1188.4. Samples: 1207328. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:39,785][67293] Avg episode reward: [(0, '4.504')]
[2024-08-26 13:44:44,786][67293] Fps is (10 sec: 4915.0, 60 sec: 4778.6, 300 sec: 5054.0). Total num frames: 4861952. Throughput: 0: 1207.1. Samples: 1214796. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:44,786][67293] Avg episode reward: [(0, '4.479')]
[2024-08-26 13:44:46,585][67464] Updated weights for policy 0, policy_version 1190 (0.0005)
[2024-08-26 13:44:49,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4778.7, 300 sec: 5040.2). Total num frames: 4886528. Throughput: 0: 1224.9. Samples: 1222504. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:49,785][67293] Avg episode reward: [(0, '4.448')]
[2024-08-26 13:44:54,785][67293] Fps is (10 sec: 4915.4, 60 sec: 4778.7, 300 sec: 5040.2). Total num frames: 4911104. Throughput: 0: 1238.5. Samples: 1226536. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:54,785][67293] Avg episode reward: [(0, '4.229')]
[2024-08-26 13:44:54,797][67464] Updated weights for policy 0, policy_version 1200 (0.0005)
|
[2024-08-26 13:44:54,797][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001200_4915200.pth...
[2024-08-26 13:44:54,819][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000000905_3706880.pth
[2024-08-26 13:44:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4915.2, 300 sec: 5054.0). Total num frames: 4939776. Throughput: 0: 1240.6. Samples: 1233752. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:44:59,785][67293] Avg episode reward: [(0, '4.339')]
[2024-08-26 13:45:02,914][67464] Updated weights for policy 0, policy_version 1210 (0.0005)
[2024-08-26 13:45:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4915.2, 300 sec: 5054.0). Total num frames: 4964352. Throughput: 0: 1252.9. Samples: 1241464. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:04,785][67293] Avg episode reward: [(0, '4.664')]
[2024-08-26 13:45:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5054.0). Total num frames: 4988928. Throughput: 0: 1257.5. Samples: 1245236. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:09,785][67293] Avg episode reward: [(0, '4.635')]
[2024-08-26 13:45:10,784][67464] Updated weights for policy 0, policy_version 1220 (0.0005)
[2024-08-26 13:45:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5054.0). Total num frames: 5013504. Throughput: 0: 1259.0. Samples: 1252628. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:45:14,785][67293] Avg episode reward: [(0, '4.487')]
[2024-08-26 13:45:19,785][67293] Fps is (10 sec: 4505.6, 60 sec: 4915.2, 300 sec: 5040.2). Total num frames: 5033984. Throughput: 0: 1244.6. Samples: 1259368. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:19,785][67293] Avg episode reward: [(0, '4.390')]
[2024-08-26 13:45:19,897][67464] Updated weights for policy 0, policy_version 1230 (0.0005)
[2024-08-26 13:45:24,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5040.2). Total num frames: 5062656. Throughput: 0: 1242.0. Samples: 1263220. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:24,785][67293] Avg episode reward: [(0, '4.274')]
[2024-08-26 13:45:28,021][67464] Updated weights for policy 0, policy_version 1240 (0.0005)
[2024-08-26 13:45:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5040.2). Total num frames: 5087232. Throughput: 0: 1244.4. Samples: 1270792. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:29,785][67293] Avg episode reward: [(0, '4.294')]
[2024-08-26 13:45:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5026.3). Total num frames: 5111808. Throughput: 0: 1249.7. Samples: 1278740. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:34,785][67293] Avg episode reward: [(0, '4.236')]
[2024-08-26 13:45:35,894][67464] Updated weights for policy 0, policy_version 1250 (0.0005)
[2024-08-26 13:45:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5026.3). Total num frames: 5136384. Throughput: 0: 1243.6. Samples: 1282500. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:45:39,785][67293] Avg episode reward: [(0, '4.415')]
[2024-08-26 13:45:43,519][67464] Updated weights for policy 0, policy_version 1260 (0.0005)
[2024-08-26 13:45:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5051.8, 300 sec: 5026.3). Total num frames: 5165056. Throughput: 0: 1262.5. Samples: 1290564. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:44,785][67293] Avg episode reward: [(0, '4.366')]
[2024-08-26 13:45:49,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5120.0, 300 sec: 5040.2). Total num frames: 5193728. Throughput: 0: 1272.9. Samples: 1298744. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:49,785][67293] Avg episode reward: [(0, '4.351')]
[2024-08-26 13:45:51,130][67464] Updated weights for policy 0, policy_version 1270 (0.0005)
[2024-08-26 13:45:54,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5054.0). Total num frames: 5222400. Throughput: 0: 1281.9. Samples: 1302920. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:54,785][67293] Avg episode reward: [(0, '4.388')]
[2024-08-26 13:45:58,390][67464] Updated weights for policy 0, policy_version 1280 (0.0005)
[2024-08-26 13:45:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5120.0, 300 sec: 5040.2). Total num frames: 5246976. Throughput: 0: 1302.8. Samples: 1311256. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:45:59,785][67293] Avg episode reward: [(0, '4.216')]
[2024-08-26 13:46:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5054.0). Total num frames: 5275648. Throughput: 0: 1343.6. Samples: 1319828. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:04,785][67293] Avg episode reward: [(0, '4.496')]
[2024-08-26 13:46:05,499][67464] Updated weights for policy 0, policy_version 1290 (0.0005)
[2024-08-26 13:46:09,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5256.5, 300 sec: 5067.9). Total num frames: 5304320. Throughput: 0: 1353.2. Samples: 1324112. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:09,785][67293] Avg episode reward: [(0, '4.578')]
[2024-08-26 13:46:12,820][67464] Updated weights for policy 0, policy_version 1300 (0.0005)
[2024-08-26 13:46:14,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5081.8). Total num frames: 5332992. Throughput: 0: 1373.2. Samples: 1332584. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:14,785][67293] Avg episode reward: [(0, '4.368')]
[2024-08-26 13:46:19,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5081.8). Total num frames: 5361664. Throughput: 0: 1383.7. Samples: 1341008. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:19,785][67293] Avg episode reward: [(0, '4.490')]
[2024-08-26 13:46:20,078][67464] Updated weights for policy 0, policy_version 1310 (0.0005)
[2024-08-26 13:46:24,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5095.7). Total num frames: 5390336. Throughput: 0: 1388.2. Samples: 1344968. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:24,785][67293] Avg episode reward: [(0, '4.430')]
[2024-08-26 13:46:27,879][67464] Updated weights for policy 0, policy_version 1320 (0.0005)
[2024-08-26 13:46:29,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5461.3, 300 sec: 5095.7). Total num frames: 5414912. Throughput: 0: 1387.3. Samples: 1352992. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
|
[2024-08-26 13:46:29,786][67293] Avg episode reward: [(0, '4.312')]
[2024-08-26 13:46:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5443584. Throughput: 0: 1394.0. Samples: 1361472. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:34,785][67293] Avg episode reward: [(0, '4.343')]
[2024-08-26 13:46:35,062][67464] Updated weights for policy 0, policy_version 1330 (0.0005)
[2024-08-26 13:46:39,785][67293] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5123.5). Total num frames: 5472256. Throughput: 0: 1391.1. Samples: 1365520. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:39,785][67293] Avg episode reward: [(0, '4.477')]
[2024-08-26 13:46:42,593][67464] Updated weights for policy 0, policy_version 1340 (0.0006)
[2024-08-26 13:46:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5496832. Throughput: 0: 1384.1. Samples: 1373540. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:44,785][67293] Avg episode reward: [(0, '4.565')]
[2024-08-26 13:46:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5525504. Throughput: 0: 1371.6. Samples: 1381548. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:49,785][67293] Avg episode reward: [(0, '4.545')]
[2024-08-26 13:46:50,495][67464] Updated weights for policy 0, policy_version 1350 (0.0005)
[2024-08-26 13:46:54,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5554176. Throughput: 0: 1365.4. Samples: 1385556. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:54,785][67293] Avg episode reward: [(0, '4.574')]
[2024-08-26 13:46:54,794][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001356_5554176.pth...
[2024-08-26 13:46:54,816][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001059_4337664.pth
[2024-08-26 13:46:57,828][67464] Updated weights for policy 0, policy_version 1360 (0.0005)
[2024-08-26 13:46:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5578752. Throughput: 0: 1364.2. Samples: 1393972. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:46:59,785][67293] Avg episode reward: [(0, '4.559')]
[2024-08-26 13:47:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5109.6). Total num frames: 5607424. Throughput: 0: 1359.1. Samples: 1402168. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:04,785][67293] Avg episode reward: [(0, '4.515')]
[2024-08-26 13:47:05,229][67464] Updated weights for policy 0, policy_version 1370 (0.0005)
[2024-08-26 13:47:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5095.7). Total num frames: 5632000. Throughput: 0: 1361.3. Samples: 1406228. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:09,785][67293] Avg episode reward: [(0, '4.641')]
[2024-08-26 13:47:13,140][67464] Updated weights for policy 0, policy_version 1380 (0.0005)
[2024-08-26 13:47:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5109.6). Total num frames: 5660672. Throughput: 0: 1356.7. Samples: 1414044. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:14,785][67293] Avg episode reward: [(0, '4.568')]
[2024-08-26 13:47:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5095.7). Total num frames: 5685248. Throughput: 0: 1349.1. Samples: 1422180. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:47:19,785][67293] Avg episode reward: [(0, '4.380')]
[2024-08-26 13:47:20,586][67464] Updated weights for policy 0, policy_version 1390 (0.0006)
[2024-08-26 13:47:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5095.7). Total num frames: 5713920. Throughput: 0: 1351.0. Samples: 1426316. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:47:24,785][67293] Avg episode reward: [(0, '4.467')]
[2024-08-26 13:47:28,242][67464] Updated weights for policy 0, policy_version 1400 (0.0005)
[2024-08-26 13:47:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5095.7). Total num frames: 5738496. Throughput: 0: 1350.5. Samples: 1434312. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:29,785][67293] Avg episode reward: [(0, '4.284')]
[2024-08-26 13:47:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5109.6). Total num frames: 5767168. Throughput: 0: 1355.4. Samples: 1442540. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:34,785][67293] Avg episode reward: [(0, '4.326')]
[2024-08-26 13:47:35,792][67464] Updated weights for policy 0, policy_version 1410 (0.0005)
[2024-08-26 13:47:39,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5109.6). Total num frames: 5795840. Throughput: 0: 1356.8. Samples: 1446612. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:39,785][67293] Avg episode reward: [(0, '4.394')]
[2024-08-26 13:47:43,304][67464] Updated weights for policy 0, policy_version 1420 (0.0005)
[2024-08-26 13:47:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5109.6). Total num frames: 5820416. Throughput: 0: 1349.9. Samples: 1454716. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:44,785][67293] Avg episode reward: [(0, '4.394')]
[2024-08-26 13:47:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5123.5). Total num frames: 5849088. Throughput: 0: 1338.2. Samples: 1462388. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:47:49,785][67293] Avg episode reward: [(0, '4.273')]
[2024-08-26 13:47:51,412][67464] Updated weights for policy 0, policy_version 1430 (0.0006)
[2024-08-26 13:47:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5123.5). Total num frames: 5873664. Throughput: 0: 1327.2. Samples: 1465952. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:47:54,785][67293] Avg episode reward: [(0, '4.373')]
[2024-08-26 13:47:59,735][67464] Updated weights for policy 0, policy_version 1440 (0.0006)
[2024-08-26 13:47:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5123.5). Total num frames: 5898240. Throughput: 0: 1320.9. Samples: 1473484. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
|
[2024-08-26 13:47:59,785][67293] Avg episode reward: [(0, '4.418')]
[2024-08-26 13:48:04,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5123.5). Total num frames: 5922816. Throughput: 0: 1309.8. Samples: 1481120. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:04,785][67293] Avg episode reward: [(0, '4.428')]
[2024-08-26 13:48:07,757][67464] Updated weights for policy 0, policy_version 1450 (0.0006)
[2024-08-26 13:48:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5123.5). Total num frames: 5947392. Throughput: 0: 1299.2. Samples: 1484780. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:09,785][67293] Avg episode reward: [(0, '4.604')]
[2024-08-26 13:48:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5137.4). Total num frames: 5976064. Throughput: 0: 1307.3. Samples: 1493140. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:14,785][67293] Avg episode reward: [(0, '4.346')]
[2024-08-26 13:48:15,024][67464] Updated weights for policy 0, policy_version 1460 (0.0006)
[2024-08-26 13:48:19,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5151.2). Total num frames: 6004736. Throughput: 0: 1309.8. Samples: 1501480. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:48:19,785][67293] Avg episode reward: [(0, '4.310')]
[2024-08-26 13:48:22,779][67464] Updated weights for policy 0, policy_version 1470 (0.0006)
[2024-08-26 13:48:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5165.1). Total num frames: 6029312. Throughput: 0: 1302.8. Samples: 1505236. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:48:24,785][67293] Avg episode reward: [(0, '4.471')]
[2024-08-26 13:48:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5179.0). Total num frames: 6057984. Throughput: 0: 1303.7. Samples: 1513384. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:29,785][67293] Avg episode reward: [(0, '4.374')]
[2024-08-26 13:48:30,250][67464] Updated weights for policy 0, policy_version 1480 (0.0005)
[2024-08-26 13:48:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5179.0). Total num frames: 6082560. Throughput: 0: 1314.0. Samples: 1521516. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:34,785][67293] Avg episode reward: [(0, '4.297')]
[2024-08-26 13:48:37,889][67464] Updated weights for policy 0, policy_version 1490 (0.0005)
[2024-08-26 13:48:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5206.8). Total num frames: 6111232. Throughput: 0: 1329.6. Samples: 1525784. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:39,785][67293] Avg episode reward: [(0, '4.246')]
[2024-08-26 13:48:44,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5220.7). Total num frames: 6139904. Throughput: 0: 1343.5. Samples: 1533940. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:44,785][67293] Avg episode reward: [(0, '4.457')]
[2024-08-26 13:48:45,452][67464] Updated weights for policy 0, policy_version 1500 (0.0005)
[2024-08-26 13:48:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5220.7). Total num frames: 6164480. Throughput: 0: 1348.2. Samples: 1541788. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:49,785][67293] Avg episode reward: [(0, '4.499')]
[2024-08-26 13:48:53,405][67464] Updated weights for policy 0, policy_version 1510 (0.0006)
[2024-08-26 13:48:54,785][67293] Fps is (10 sec: 4915.0, 60 sec: 5256.5, 300 sec: 5234.5). Total num frames: 6189056. Throughput: 0: 1350.7. Samples: 1545560. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:54,785][67293] Avg episode reward: [(0, '4.621')]
[2024-08-26 13:48:54,793][67293] Components not started: RolloutWorker_w1, RolloutWorker_w2, RolloutWorker_w3, RolloutWorker_w4, RolloutWorker_w5, RolloutWorker_w6, RolloutWorker_w7, wait_time=1200.0 seconds
[2024-08-26 13:48:54,793][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001511_6189056.pth...
[2024-08-26 13:48:54,822][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001200_4915200.pth
[2024-08-26 13:48:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5234.5). Total num frames: 6213632. Throughput: 0: 1338.8. Samples: 1553388. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:48:59,785][67293] Avg episode reward: [(0, '4.425')]
[2024-08-26 13:49:01,383][67464] Updated weights for policy 0, policy_version 1520 (0.0005)
[2024-08-26 13:49:04,785][67293] Fps is (10 sec: 5325.0, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 6242304. Throughput: 0: 1318.0. Samples: 1560792. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:04,785][67293] Avg episode reward: [(0, '4.312')]
[2024-08-26 13:49:09,580][67464] Updated weights for policy 0, policy_version 1530 (0.0006)
[2024-08-26 13:49:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5262.3). Total num frames: 6266880. Throughput: 0: 1313.1. Samples: 1564324. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:09,785][67293] Avg episode reward: [(0, '4.560')]
[2024-08-26 13:49:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5262.3). Total num frames: 6291456. Throughput: 0: 1298.8. Samples: 1571828. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:14,785][67293] Avg episode reward: [(0, '4.640')]
[2024-08-26 13:49:18,062][67464] Updated weights for policy 0, policy_version 1540 (0.0006)
[2024-08-26 13:49:19,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5276.2). Total num frames: 6316032. Throughput: 0: 1282.0. Samples: 1579204. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:19,785][67293] Avg episode reward: [(0, '4.407')]
[2024-08-26 13:49:24,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5120.0, 300 sec: 5248.4). Total num frames: 6336512. Throughput: 0: 1272.0. Samples: 1583024. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:24,785][67293] Avg episode reward: [(0, '4.448')]
[2024-08-26 13:49:26,633][67464] Updated weights for policy 0, policy_version 1550 (0.0007)
[2024-08-26 13:49:29,785][67293] Fps is (10 sec: 4505.6, 60 sec: 5051.7, 300 sec: 5248.4). Total num frames: 6361088. Throughput: 0: 1244.1. Samples: 1589924. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
|
[2024-08-26 13:49:29,785][67293] Avg episode reward: [(0, '4.191')]
[2024-08-26 13:49:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5248.4). Total num frames: 6385664. Throughput: 0: 1228.5. Samples: 1597072. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:34,785][67293] Avg episode reward: [(0, '4.187')]
[2024-08-26 13:49:35,048][67464] Updated weights for policy 0, policy_version 1560 (0.0007)
[2024-08-26 13:49:39,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4983.5, 300 sec: 5248.4). Total num frames: 6410240. Throughput: 0: 1228.9. Samples: 1600860. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:49:39,785][67293] Avg episode reward: [(0, '4.330')]
[2024-08-26 13:49:43,320][67464] Updated weights for policy 0, policy_version 1570 (0.0005)
[2024-08-26 13:49:44,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5248.4). Total num frames: 6434816. Throughput: 0: 1219.9. Samples: 1608284. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:44,785][67293] Avg episode reward: [(0, '4.531')]
[2024-08-26 13:49:49,785][67293] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5248.4). Total num frames: 6459392. Throughput: 0: 1219.7. Samples: 1615680. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:49:49,785][67293] Avg episode reward: [(0, '4.513')]
[2024-08-26 13:49:51,336][67464] Updated weights for policy 0, policy_version 1580 (0.0006)
[2024-08-26 13:49:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5248.4). Total num frames: 6488064. Throughput: 0: 1233.2. Samples: 1619820. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:49:54,785][67293] Avg episode reward: [(0, '4.429')]
[2024-08-26 13:49:59,239][67464] Updated weights for policy 0, policy_version 1590 (0.0005)
[2024-08-26 13:49:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5248.4). Total num frames: 6512640. Throughput: 0: 1237.3. Samples: 1627508. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:49:59,785][67293] Avg episode reward: [(0, '4.266')]
[2024-08-26 13:50:04,786][67293] Fps is (10 sec: 4914.9, 60 sec: 4915.2, 300 sec: 5248.4). Total num frames: 6537216. Throughput: 0: 1247.4. Samples: 1635336. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:50:04,786][67293] Avg episode reward: [(0, '4.374')]
[2024-08-26 13:50:07,016][67464] Updated weights for policy 0, policy_version 1600 (0.0006)
[2024-08-26 13:50:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 4983.5, 300 sec: 5262.3). Total num frames: 6565888. Throughput: 0: 1253.3. Samples: 1639424. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-08-26 13:50:09,785][67293] Avg episode reward: [(0, '4.403')]
[2024-08-26 13:50:14,645][67464] Updated weights for policy 0, policy_version 1610 (0.0006)
[2024-08-26 13:50:14,785][67293] Fps is (10 sec: 5734.7, 60 sec: 5051.7, 300 sec: 5290.1). Total num frames: 6594560. Throughput: 0: 1276.1. Samples: 1647348. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0)
[2024-08-26 13:50:14,785][67293] Avg episode reward: [(0, '4.649')]
[2024-08-26 13:50:19,785][67293] Fps is (10 sec: 5324.5, 60 sec: 5051.7, 300 sec: 5276.2). Total num frames: 6619136. Throughput: 0: 1295.3. Samples: 1655360.
|
[2024-08-26 13:50:19,786][67293] Avg episode reward: [(0, '4.519')] |
|
[2024-08-26 13:50:22,484][67464] Updated weights for policy 0, policy_version 1620 (0.0005) |
|
[2024-08-26 13:50:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5290.1). Total num frames: 6647808. Throughput: 0: 1298.6. Samples: 1659296. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:24,785][67293] Avg episode reward: [(0, '4.581')] |
|
[2024-08-26 13:50:29,785][67293] Fps is (10 sec: 5325.0, 60 sec: 5188.3, 300 sec: 5290.1). Total num frames: 6672384. Throughput: 0: 1313.0. Samples: 1667368. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:29,785][67293] Avg episode reward: [(0, '4.525')] |
|
[2024-08-26 13:50:30,172][67464] Updated weights for policy 0, policy_version 1630 (0.0005) |
|
[2024-08-26 13:50:34,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5290.1). Total num frames: 6696960. Throughput: 0: 1318.0. Samples: 1674992. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:34,785][67293] Avg episode reward: [(0, '4.430')] |
|
[2024-08-26 13:50:37,926][67464] Updated weights for policy 0, policy_version 1640 (0.0005) |
|
[2024-08-26 13:50:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5290.1). Total num frames: 6725632. Throughput: 0: 1318.2. Samples: 1679140. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:39,785][67293] Avg episode reward: [(0, '4.339')] |
|
[2024-08-26 13:50:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5276.2). Total num frames: 6750208. Throughput: 0: 1316.0. Samples: 1686728. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:44,785][67293] Avg episode reward: [(0, '4.178')] |
|
[2024-08-26 13:50:45,769][67464] Updated weights for policy 0, policy_version 1650 (0.0006) |
|
[2024-08-26 13:50:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5276.2). Total num frames: 6778880. Throughput: 0: 1324.9. Samples: 1694956. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:49,785][67293] Avg episode reward: [(0, '4.304')] |
|
[2024-08-26 13:50:53,212][67464] Updated weights for policy 0, policy_version 1660 (0.0005) |
|
[2024-08-26 13:50:54,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5290.1). Total num frames: 6807552. Throughput: 0: 1328.5. Samples: 1699208. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:50:54,785][67293] Avg episode reward: [(0, '4.292')] |
|
[2024-08-26 13:50:54,788][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001662_6807552.pth... |
|
[2024-08-26 13:50:54,815][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001356_5554176.pth |
|
[2024-08-26 13:50:59,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5276.2). Total num frames: 6832128. Throughput: 0: 1335.8. Samples: 1707460. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:50:59,785][67293] Avg episode reward: [(0, '4.472')] |
|
[2024-08-26 13:51:00,671][67464] Updated weights for policy 0, policy_version 1670 (0.0006) |
|
[2024-08-26 13:51:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5276.2). Total num frames: 6860800. Throughput: 0: 1344.5. Samples: 1715864. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:04,785][67293] Avg episode reward: [(0, '4.374')] |
|
[2024-08-26 13:51:08,001][67464] Updated weights for policy 0, policy_version 1680 (0.0005) |
|
[2024-08-26 13:51:09,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5276.2). Total num frames: 6889472. Throughput: 0: 1347.8. Samples: 1719948. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:09,785][67293] Avg episode reward: [(0, '4.405')] |
|
[2024-08-26 13:51:14,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5276.2). Total num frames: 6918144. Throughput: 0: 1352.4. Samples: 1728224. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:14,785][67293] Avg episode reward: [(0, '4.558')] |
|
[2024-08-26 13:51:15,404][67464] Updated weights for policy 0, policy_version 1690 (0.0005) |
|
[2024-08-26 13:51:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5262.3). Total num frames: 6942720. Throughput: 0: 1365.0. Samples: 1736416. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:19,785][67293] Avg episode reward: [(0, '4.585')] |
|
[2024-08-26 13:51:23,240][67464] Updated weights for policy 0, policy_version 1700 (0.0005) |
|
[2024-08-26 13:51:24,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5393.0, 300 sec: 5276.2). Total num frames: 6971392. Throughput: 0: 1360.1. Samples: 1740344. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:24,785][67293] Avg episode reward: [(0, '4.433')] |
|
[2024-08-26 13:51:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5262.3). Total num frames: 6995968. Throughput: 0: 1352.6. Samples: 1747596. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:51:29,785][67293] Avg episode reward: [(0, '4.385')] |
|
[2024-08-26 13:51:31,383][67464] Updated weights for policy 0, policy_version 1710 (0.0006) |
|
[2024-08-26 13:51:34,785][67293] Fps is (10 sec: 4915.3, 60 sec: 5393.1, 300 sec: 5248.4). Total num frames: 7020544. Throughput: 0: 1346.7. Samples: 1755556. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:51:34,785][67293] Avg episode reward: [(0, '4.330')] |
|
[2024-08-26 13:51:38,997][67464] Updated weights for policy 0, policy_version 1720 (0.0005) |
|
[2024-08-26 13:51:39,785][67293] Fps is (10 sec: 5324.6, 60 sec: 5393.0, 300 sec: 5262.3). Total num frames: 7049216. Throughput: 0: 1341.5. Samples: 1759576. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:39,786][67293] Avg episode reward: [(0, '4.370')] |
|
[2024-08-26 13:51:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5248.4). Total num frames: 7073792. Throughput: 0: 1325.4. Samples: 1767104. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:44,785][67293] Avg episode reward: [(0, '4.497')] |
|
[2024-08-26 13:51:47,258][67464] Updated weights for policy 0, policy_version 1730 (0.0005) |
|
[2024-08-26 13:51:49,785][67293] Fps is (10 sec: 4915.4, 60 sec: 5324.8, 300 sec: 5234.5). Total num frames: 7098368. Throughput: 0: 1315.1. Samples: 1775044. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:49,785][67293] Avg episode reward: [(0, '4.355')] |
|
[2024-08-26 13:51:54,544][67464] Updated weights for policy 0, policy_version 1740 (0.0005) |
|
[2024-08-26 13:51:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5248.4). Total num frames: 7127040. Throughput: 0: 1318.4. Samples: 1779276. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:51:54,785][67293] Avg episode reward: [(0, '4.465')] |
|
[2024-08-26 13:51:59,785][67293] Fps is (10 sec: 5734.2, 60 sec: 5393.0, 300 sec: 5248.4). Total num frames: 7155712. Throughput: 0: 1314.5. Samples: 1787376. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:51:59,785][67293] Avg episode reward: [(0, '4.312')] |
|
[2024-08-26 13:52:01,880][67464] Updated weights for policy 0, policy_version 1750 (0.0005) |
|
[2024-08-26 13:52:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5248.4). Total num frames: 7180288. Throughput: 0: 1320.3. Samples: 1795828. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:04,785][67293] Avg episode reward: [(0, '4.312')] |
|
[2024-08-26 13:52:09,383][67464] Updated weights for policy 0, policy_version 1760 (0.0005) |
|
[2024-08-26 13:52:09,785][67293] Fps is (10 sec: 5325.0, 60 sec: 5324.8, 300 sec: 5248.4). Total num frames: 7208960. Throughput: 0: 1324.3. Samples: 1799936. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:09,785][67293] Avg episode reward: [(0, '4.374')] |
|
[2024-08-26 13:52:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5248.4). Total num frames: 7233536. Throughput: 0: 1339.1. Samples: 1807856. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:14,785][67293] Avg episode reward: [(0, '4.398')] |
|
[2024-08-26 13:52:17,320][67464] Updated weights for policy 0, policy_version 1770 (0.0005) |
|
[2024-08-26 13:52:19,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5324.8, 300 sec: 5248.4). Total num frames: 7262208. Throughput: 0: 1342.5. Samples: 1815968. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:52:19,785][67293] Avg episode reward: [(0, '4.571')] |
|
[2024-08-26 13:52:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.6, 300 sec: 5248.4). Total num frames: 7286784. Throughput: 0: 1336.1. Samples: 1819700. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:24,785][67293] Avg episode reward: [(0, '4.337')] |
|
[2024-08-26 13:52:24,796][67464] Updated weights for policy 0, policy_version 1780 (0.0005) |
|
[2024-08-26 13:52:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5248.4). Total num frames: 7315456. Throughput: 0: 1358.4. Samples: 1828232. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:52:29,785][67293] Avg episode reward: [(0, '4.558')] |
|
[2024-08-26 13:52:32,281][67464] Updated weights for policy 0, policy_version 1790 (0.0006) |
|
[2024-08-26 13:52:34,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5248.4). Total num frames: 7344128. Throughput: 0: 1367.2. Samples: 1836568. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:34,785][67293] Avg episode reward: [(0, '4.444')] |
|
[2024-08-26 13:52:39,680][67464] Updated weights for policy 0, policy_version 1800 (0.0005) |
|
[2024-08-26 13:52:39,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5393.1, 300 sec: 5262.3). Total num frames: 7372800. Throughput: 0: 1363.5. Samples: 1840632. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:39,785][67293] Avg episode reward: [(0, '4.315')] |
|
[2024-08-26 13:52:44,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5262.3). Total num frames: 7401472. Throughput: 0: 1370.1. Samples: 1849032. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:52:44,785][67293] Avg episode reward: [(0, '4.348')] |
|
[2024-08-26 13:52:46,927][67464] Updated weights for policy 0, policy_version 1810 (0.0005) |
|
[2024-08-26 13:52:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5262.3). Total num frames: 7426048. Throughput: 0: 1368.4. Samples: 1857404. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:49,785][67293] Avg episode reward: [(0, '4.368')] |
|
[2024-08-26 13:52:54,233][67464] Updated weights for policy 0, policy_version 1820 (0.0006) |
|
[2024-08-26 13:52:54,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 7454720. Throughput: 0: 1371.9. Samples: 1861672. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:52:54,785][67293] Avg episode reward: [(0, '4.570')] |
|
[2024-08-26 13:52:54,794][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001820_7454720.pth... |
|
[2024-08-26 13:52:54,816][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001511_6189056.pth |
|
[2024-08-26 13:52:59,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5461.4, 300 sec: 5290.1). Total num frames: 7483392. Throughput: 0: 1369.7. Samples: 1869492. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:52:59,785][67293] Avg episode reward: [(0, '4.610')] |
|
[2024-08-26 13:53:02,097][67464] Updated weights for policy 0, policy_version 1830 (0.0006) |
|
[2024-08-26 13:53:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 7507968. Throughput: 0: 1368.4. Samples: 1877548. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:04,785][67293] Avg episode reward: [(0, '4.496')] |
|
[2024-08-26 13:53:09,782][67464] Updated weights for policy 0, policy_version 1840 (0.0005) |
|
[2024-08-26 13:53:09,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 7536640. Throughput: 0: 1370.1. Samples: 1881356. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:09,789][67293] Avg episode reward: [(0, '4.716')] |
|
[2024-08-26 13:53:14,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 7561216. Throughput: 0: 1364.4. Samples: 1889632. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:14,785][67293] Avg episode reward: [(0, '4.557')] |
|
[2024-08-26 13:53:17,294][67464] Updated weights for policy 0, policy_version 1850 (0.0005) |
|
[2024-08-26 13:53:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 7589888. Throughput: 0: 1355.3. Samples: 1897556. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:53:19,785][67293] Avg episode reward: [(0, '4.494')] |
|
[2024-08-26 13:53:24,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 7614464. Throughput: 0: 1359.0. Samples: 1901788. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:24,785][67293] Avg episode reward: [(0, '4.549')] |
|
[2024-08-26 13:53:25,220][67464] Updated weights for policy 0, policy_version 1860 (0.0005) |
|
[2024-08-26 13:53:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 7643136. Throughput: 0: 1339.3. Samples: 1909300. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:53:29,785][67293] Avg episode reward: [(0, '4.482')] |
|
[2024-08-26 13:53:32,739][67464] Updated weights for policy 0, policy_version 1870 (0.0005) |
|
[2024-08-26 13:53:34,785][67293] Fps is (10 sec: 5324.7, 60 sec: 5393.1, 300 sec: 5276.2). Total num frames: 7667712. Throughput: 0: 1333.3. Samples: 1917404. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:34,785][67293] Avg episode reward: [(0, '4.455')] |
|
[2024-08-26 13:53:39,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5276.2). Total num frames: 7696384. Throughput: 0: 1332.7. Samples: 1921644. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:39,785][67293] Avg episode reward: [(0, '4.288')] |
|
[2024-08-26 13:53:40,621][67464] Updated weights for policy 0, policy_version 1880 (0.0005) |
|
[2024-08-26 13:53:44,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5324.8, 300 sec: 5276.2). Total num frames: 7720960. Throughput: 0: 1335.0. Samples: 1929568. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:44,785][67293] Avg episode reward: [(0, '4.443')] |
|
[2024-08-26 13:53:48,105][67464] Updated weights for policy 0, policy_version 1890 (0.0005) |
|
[2024-08-26 13:53:49,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5393.1, 300 sec: 5290.1). Total num frames: 7749632. Throughput: 0: 1332.8. Samples: 1937524. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:49,785][67293] Avg episode reward: [(0, '4.503')] |
|
[2024-08-26 13:53:54,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5276.2). Total num frames: 7770112. Throughput: 0: 1326.0. Samples: 1941024. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:53:54,785][67293] Avg episode reward: [(0, '4.330')] |
|
[2024-08-26 13:53:56,579][67464] Updated weights for policy 0, policy_version 1900 (0.0007) |
|
[2024-08-26 13:53:59,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5256.5, 300 sec: 5276.2). Total num frames: 7798784. Throughput: 0: 1309.0. Samples: 1948536. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:53:59,785][67293] Avg episode reward: [(0, '4.469')] |
|
[2024-08-26 13:54:04,548][67464] Updated weights for policy 0, policy_version 1910 (0.0005) |
|
[2024-08-26 13:54:04,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5276.2). Total num frames: 7823360. Throughput: 0: 1302.1. Samples: 1956152. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:04,785][67293] Avg episode reward: [(0, '4.764')] |
|
[2024-08-26 13:54:09,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5276.2). Total num frames: 7847936. Throughput: 0: 1296.5. Samples: 1960132. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:09,785][67293] Avg episode reward: [(0, '4.643')] |
|
[2024-08-26 13:54:12,485][67464] Updated weights for policy 0, policy_version 1920 (0.0006) |
|
[2024-08-26 13:54:14,785][67293] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5276.2). Total num frames: 7872512. Throughput: 0: 1299.9. Samples: 1967796. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-08-26 13:54:14,785][67293] Avg episode reward: [(0, '4.363')] |
|
[2024-08-26 13:54:19,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5304.0). Total num frames: 7901184. Throughput: 0: 1296.2. Samples: 1975732. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:19,785][67293] Avg episode reward: [(0, '4.454')] |
|
[2024-08-26 13:54:20,156][67464] Updated weights for policy 0, policy_version 1930 (0.0006) |
|
[2024-08-26 13:54:24,785][67293] Fps is (10 sec: 5734.4, 60 sec: 5256.5, 300 sec: 5317.9). Total num frames: 7929856. Throughput: 0: 1296.5. Samples: 1979988. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:24,785][67293] Avg episode reward: [(0, '4.405')] |
|
[2024-08-26 13:54:27,701][67464] Updated weights for policy 0, policy_version 1940 (0.0005) |
|
[2024-08-26 13:54:29,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5188.3, 300 sec: 5317.9). Total num frames: 7954432. Throughput: 0: 1300.1. Samples: 1988072. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:29,785][67293] Avg episode reward: [(0, '4.531')] |
|
[2024-08-26 13:54:34,785][67293] Fps is (10 sec: 5324.8, 60 sec: 5256.5, 300 sec: 5331.7). Total num frames: 7983104. Throughput: 0: 1306.2. Samples: 1996304. Policy #0 lag: (min: 0.0, avg: 0.0, max: 1.0) |
|
[2024-08-26 13:54:34,785][67293] Avg episode reward: [(0, '4.252')] |
|
[2024-08-26 13:54:35,243][67464] Updated weights for policy 0, policy_version 1950 (0.0005) |
|
[2024-08-26 13:54:38,985][67293] Component Batcher_0 stopped! |
|
[2024-08-26 13:54:38,985][67417] Stopping Batcher_0... |
|
[2024-08-26 13:54:38,985][67293] Component RolloutWorker_w1 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,985][67417] Loop batcher_evt_loop terminating... |
|
[2024-08-26 13:54:38,985][67293] Component RolloutWorker_w2 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67293] Component RolloutWorker_w3 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67293] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67293] Component RolloutWorker_w5 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67293] Component RolloutWorker_w6 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67293] Component RolloutWorker_w7 process died already! Don't wait for it. |
|
[2024-08-26 13:54:38,986][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001955_8007680.pth... |
|
[2024-08-26 13:54:39,006][67465] Stopping RolloutWorker_w0... |
|
[2024-08-26 13:54:39,006][67293] Component RolloutWorker_w0 stopped! |
|
[2024-08-26 13:54:39,006][67465] Loop rollout_proc0_evt_loop terminating... |
|
[2024-08-26 13:54:39,009][67417] Removing /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001662_6807552.pth |
|
[2024-08-26 13:54:39,011][67417] Saving /home/ai24/condaprojects/droid/d0/train_dir/default_experiment_v1/checkpoint_p0/checkpoint_000001955_8007680.pth... |
|
[2024-08-26 13:54:39,021][67464] Weights refcount: 2 0 |
|
[2024-08-26 13:54:39,022][67464] Stopping InferenceWorker_p0-w0... |
|
[2024-08-26 13:54:39,022][67293] Component InferenceWorker_p0-w0 stopped! |
|
[2024-08-26 13:54:39,022][67464] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-08-26 13:54:39,039][67417] Stopping LearnerWorker_p0... |
|
[2024-08-26 13:54:39,039][67417] Loop learner_proc0_evt_loop terminating... |
|
[2024-08-26 13:54:39,039][67293] Component LearnerWorker_p0 stopped! |
|
[2024-08-26 13:54:39,039][67293] Waiting for process learner_proc0 to stop... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process inference_proc0-0 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc0 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc1 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc2 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc3 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc4 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc5 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc6 to join... |
|
[2024-08-26 13:54:39,411][67293] Waiting for process rollout_proc7 to join... |
|
[2024-08-26 13:54:39,411][67293] Batcher 0 profile tree view: |
|
batching: 9.3169, releasing_batches: 0.0300 |
|
[2024-08-26 13:54:39,411][67293] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0051

  wait_policy_total: 740.6931

update_model: 9.2513

  weight_update: 0.0005

one_step: 0.0023

  handle_policy_step: 756.5426

    deserialize: 13.3478, stack: 2.9089, obs_to_device_normalize: 169.4280, forward: 414.5934, send_messages: 21.5340

    prepare_outputs: 113.6517

      to_cpu: 89.4876
|
[2024-08-26 13:54:39,412][67293] Learner 0 profile tree view: |
|
misc: 0.0050, prepare_batch: 44.0557

train: 80.2385

  epoch_init: 0.0050, minibatch_init: 0.0061, losses_postprocess: 0.7104, kl_divergence: 0.6316, after_optimizer: 42.5246

  calculate_losses: 29.4243

    losses_init: 0.0025, forward_head: 0.7450, bptt_initial: 23.7916, tail: 0.6036, advantages_returns: 0.1715, losses: 2.6958

    bptt: 1.2549

      bptt_forward_core: 1.1995

  update: 6.5804

    clip: 0.7061
|
[2024-08-26 13:54:39,412][67293] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.7018, enqueue_policy_requests: 45.8556, env_step: 590.7037, overhead: 28.5742, complete_rollouts: 1.4342

save_policy_outputs: 45.1309

  split_output_tensors: 15.5540
|
[2024-08-26 13:54:39,412][67293] Loop Runner_EvtLoop terminating... |
|
[2024-08-26 13:54:39,412][67293] Runner profile tree view: |
|
main_loop: 1543.0016 |
|
[2024-08-26 13:54:39,412][67293] Collected {0: 8007680}, FPS: 5189.7 |
|
|