diff --git a/train_dir/Standup/.summary/0/events.out.tfevents.1695118333.rhmmedcatt-ProLiant-ML350-Gen10 b/train_dir/Standup/.summary/0/events.out.tfevents.1695118333.rhmmedcatt-ProLiant-ML350-Gen10 deleted file mode 100644 index ce9a72a..0000000 Binary files a/train_dir/Standup/.summary/0/events.out.tfevents.1695118333.rhmmedcatt-ProLiant-ML350-Gen10 and /dev/null differ diff --git a/train_dir/Standup/.summary/0/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 b/train_dir/Standup/.summary/0/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 deleted file mode 100644 index 85ffbb3..0000000 Binary files a/train_dir/Standup/.summary/0/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 and /dev/null differ diff --git a/train_dir/Standup/.summary/0/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 b/train_dir/Standup/.summary/0/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 deleted file mode 100644 index 6b68289..0000000 Binary files a/train_dir/Standup/.summary/0/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 and /dev/null differ diff --git a/train_dir/Standup/.summary/1/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 b/train_dir/Standup/.summary/1/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 deleted file mode 100644 index 7e65434..0000000 Binary files a/train_dir/Standup/.summary/1/events.out.tfevents.1695118395.rhmmedcatt-ProLiant-ML350-Gen10 and /dev/null differ diff --git a/train_dir/Standup/.summary/1/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 b/train_dir/Standup/.summary/1/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 deleted file mode 100644 index e2184e2..0000000 Binary files a/train_dir/Standup/.summary/1/events.out.tfevents.1695118777.rhmmedcatt-ProLiant-ML350-Gen10 and /dev/null differ diff --git a/train_dir/Standup/README.md b/train_dir/Standup/README.md index 59b4eea..2dc15b6 100644 --- 
a/train_dir/Standup/README.md +++ b/train_dir/Standup/README.md @@ -5,7 +5,7 @@ tags: - reinforcement-learning - sample-factory model-index: -- name: APPO +- name: ATD3 results: - task: type: reinforcement-learning @@ -15,12 +15,12 @@ model-index: type: mujoco_standup metrics: - type: mean_reward - value: 160842.81 +/- 49335.32 + value: 157750.89 +/- 30990.47 name: mean_reward verified: false --- -A(n) **APPO** model trained on the **mujoco_standup** environment. +A(n) **ATD3** model trained on the **mujoco_standup** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ @@ -30,7 +30,7 @@ Documentation for how to use Sample-Factory can be found at https://www.samplefa After installing Sample-Factory, download the model with: ``` -python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-Standup +python -m sample_factory.huggingface.load_from_hub -r MattStammers/atd3-mujoco-standup ``` @@ -38,7 +38,7 @@ python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-S To run the model after download, use the `enjoy` script corresponding to this environment: ``` -python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_standup --train_dir=./train_dir --experiment=appo-mujoco-Standup +python -m sf_examples.mujoco.enjoy_mujoco --algo=ATD3 --env=mujoco_standup --train_dir=./train_dir --experiment=atd3-mujoco-standup ``` @@ -49,7 +49,7 @@ See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details To continue training with this model, use the `train` script corresponding to this environment: ``` -python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_standup --train_dir=./train_dir --experiment=appo-mujoco-Standup --restart_behavior=resume --train_for_env_steps=10000000000 +python -m sf_examples.mujoco.train_mujoco --algo=ATD3 --env=mujoco_standup 
--train_dir=./train_dir --experiment=atd3-mujoco-standup --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at. diff --git a/train_dir/Standup/checkpoint_p0/best_000008160_4177920_reward_162764.036.pth b/train_dir/Standup/checkpoint_p0/best_000008160_4177920_reward_162764.036.pth deleted file mode 100644 index 7b4f077..0000000 Binary files a/train_dir/Standup/checkpoint_p0/best_000008160_4177920_reward_162764.036.pth and /dev/null differ diff --git a/train_dir/Standup/checkpoint_p0/checkpoint_000014336_7340032.pth b/train_dir/Standup/checkpoint_p0/checkpoint_000014336_7340032.pth deleted file mode 100644 index d1f1336..0000000 Binary files a/train_dir/Standup/checkpoint_p0/checkpoint_000014336_7340032.pth and /dev/null differ diff --git a/train_dir/Standup/checkpoint_p0/checkpoint_000014408_7376896.pth b/train_dir/Standup/checkpoint_p0/checkpoint_000014408_7376896.pth deleted file mode 100644 index f4340e0..0000000 Binary files a/train_dir/Standup/checkpoint_p0/checkpoint_000014408_7376896.pth and /dev/null differ diff --git a/train_dir/Standup/checkpoint_p1/best_000013232_6774784_reward_164168.870.pth b/train_dir/Standup/checkpoint_p1/best_000013232_6774784_reward_164168.870.pth deleted file mode 100644 index af623df..0000000 Binary files a/train_dir/Standup/checkpoint_p1/best_000013232_6774784_reward_164168.870.pth and /dev/null differ diff --git a/train_dir/Standup/checkpoint_p1/checkpoint_000014296_7319552.pth b/train_dir/Standup/checkpoint_p1/checkpoint_000014296_7319552.pth deleted file mode 100644 index f875926..0000000 Binary files a/train_dir/Standup/checkpoint_p1/checkpoint_000014296_7319552.pth and /dev/null differ diff --git a/train_dir/Standup/checkpoint_p1/checkpoint_000014368_7356416.pth b/train_dir/Standup/checkpoint_p1/checkpoint_000014368_7356416.pth deleted file mode 100644 index 
0916341..0000000 Binary files a/train_dir/Standup/checkpoint_p1/checkpoint_000014368_7356416.pth and /dev/null differ diff --git a/train_dir/Standup/config.json b/train_dir/Standup/config.json index 638783d..22fa3bb 100644 --- a/train_dir/Standup/config.json +++ b/train_dir/Standup/config.json @@ -1,10 +1,10 @@ { "help": false, - "algo": "APPO", + "algo": "ATD3", "env": "mujoco_standup", "experiment": "Standup", "train_dir": "./train_dir", - "restart_behavior": "resume", + "restart_behavior": "restart", "device": "gpu", "seed": null, "num_policies": 2, @@ -104,8 +104,8 @@ "use_record_episode_statistics": false, "with_wandb": true, "wandb_user": "matt-stammers", - "wandb_project": "sample_factory", - "wandb_group": "mujoco_standup", + "wandb_project": "mujoco", + "wandb_group": "mujoco_standup3", "wandb_job_type": "SF", "wandb_tags": [ "mujoco" diff --git a/train_dir/Standup/replay.mp4 b/train_dir/Standup/replay.mp4 index 51d7026..f9ddacb 100644 Binary files a/train_dir/Standup/replay.mp4 and b/train_dir/Standup/replay.mp4 differ diff --git a/train_dir/Standup/sf_log.txt b/train_dir/Standup/sf_log.txt index c3ddd83..0d60e70 100644 --- a/train_dir/Standup/sf_log.txt +++ b/train_dir/Standup/sf_log.txt @@ -1,46 +1,48 @@ -[2023-09-19 11:12:17,416][35316] Saving configuration to ./train_dir/Standup/config.json... -[2023-09-19 11:12:17,417][35316] Rollout worker 0 uses device cpu -[2023-09-19 11:12:17,418][35316] Rollout worker 1 uses device cpu -[2023-09-19 11:12:17,418][35316] Rollout worker 2 uses device cpu -[2023-09-19 11:12:17,418][35316] Rollout worker 3 uses device cpu -[2023-09-19 11:12:17,418][35316] Rollout worker 4 uses device cpu -[2023-09-19 11:12:17,419][35316] Rollout worker 5 uses device cpu -[2023-09-19 11:12:17,419][35316] Rollout worker 6 uses device cpu -[2023-09-19 11:12:17,419][35316] Rollout worker 7 uses device cpu -[2023-09-19 11:12:17,419][35316] In synchronous mode, we only accumulate one batch. 
Setting num_batches_to_accumulate to 1 -[2023-09-19 11:12:17,463][35316] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:12:17,463][35316] InferenceWorker_p0-w0: min num requests: 2 -[2023-09-19 11:12:17,487][35316] Starting all processes... -[2023-09-19 11:12:17,488][35316] Starting process learner_proc0 -[2023-09-19 11:12:17,492][35316] Starting all processes... -[2023-09-19 11:12:17,504][35316] Starting process inference_proc0-0 -[2023-09-19 11:12:17,504][35316] Starting process rollout_proc0 -[2023-09-19 11:12:17,505][35316] Starting process rollout_proc1 -[2023-09-19 11:12:17,505][35316] Starting process rollout_proc2 -[2023-09-19 11:12:17,507][35316] Starting process rollout_proc3 -[2023-09-19 11:12:17,507][35316] Starting process rollout_proc4 -[2023-09-19 11:12:17,508][35316] Starting process rollout_proc5 -[2023-09-19 11:12:17,508][35316] Starting process rollout_proc6 -[2023-09-19 11:12:17,508][35316] Starting process rollout_proc7 -[2023-09-19 11:12:19,355][36026] Worker 6 uses CPU cores [24, 25, 26, 27] -[2023-09-19 11:12:19,356][36006] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:12:19,356][36006] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-09-19 11:12:19,370][36022] Worker 0 uses CPU cores [0, 1, 2, 3] -[2023-09-19 11:12:19,373][36023] Worker 5 uses CPU cores [20, 21, 22, 23] -[2023-09-19 11:12:19,378][36006] Num visible devices: 1 -[2023-09-19 11:12:19,396][36020] Worker 1 uses CPU cores [4, 5, 6, 7] -[2023-09-19 11:12:19,404][36021] Worker 2 uses CPU cores [8, 9, 10, 11] -[2023-09-19 11:12:19,410][36027] Worker 4 uses CPU cores [16, 17, 18, 19] -[2023-09-19 11:12:19,436][36025] Worker 7 uses CPU cores [28, 29, 30, 31] -[2023-09-19 11:12:19,436][36006] Starting seed is not provided -[2023-09-19 11:12:19,436][36006] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:12:19,436][36006] Initializing actor-critic model on device 
cuda:0 -[2023-09-19 11:12:19,437][36006] RunningMeanStd input shape: (376,) -[2023-09-19 11:12:19,437][36006] RunningMeanStd input shape: (1,) -[2023-09-19 11:12:19,528][36019] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:12:19,529][36019] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-09-19 11:12:19,530][36006] Created Actor Critic model with architecture: -[2023-09-19 11:12:19,530][36006] ActorCriticSharedWeights( +[2023-09-21 15:10:43,177][99566] Saving configuration to ./train_dir/Standup/config.json... +[2023-09-21 15:10:43,343][99566] Rollout worker 0 uses device cpu +[2023-09-21 15:10:43,344][99566] Rollout worker 1 uses device cpu +[2023-09-21 15:10:43,345][99566] Rollout worker 2 uses device cpu +[2023-09-21 15:10:43,345][99566] Rollout worker 3 uses device cpu +[2023-09-21 15:10:43,346][99566] Rollout worker 4 uses device cpu +[2023-09-21 15:10:43,346][99566] Rollout worker 5 uses device cpu +[2023-09-21 15:10:43,346][99566] Rollout worker 6 uses device cpu +[2023-09-21 15:10:43,347][99566] Rollout worker 7 uses device cpu +[2023-09-21 15:10:43,347][99566] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 +[2023-09-21 15:10:43,408][99566] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-21 15:10:43,408][99566] InferenceWorker_p0-w0: min num requests: 1 +[2023-09-21 15:10:43,411][99566] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-21 15:10:43,412][99566] InferenceWorker_p1-w0: min num requests: 1 +[2023-09-21 15:10:43,436][99566] Starting all processes... +[2023-09-21 15:10:43,437][99566] Starting process learner_proc0 +[2023-09-21 15:10:43,439][99566] Starting process learner_proc1 +[2023-09-21 15:10:43,486][99566] Starting all processes... 
+[2023-09-21 15:10:43,493][99566] Starting process inference_proc0-0 +[2023-09-21 15:10:43,493][99566] Starting process inference_proc1-0 +[2023-09-21 15:10:43,494][99566] Starting process rollout_proc0 +[2023-09-21 15:10:43,494][99566] Starting process rollout_proc1 +[2023-09-21 15:10:43,494][99566] Starting process rollout_proc2 +[2023-09-21 15:10:43,495][99566] Starting process rollout_proc3 +[2023-09-21 15:10:43,495][99566] Starting process rollout_proc4 +[2023-09-21 15:10:43,505][99566] Starting process rollout_proc5 +[2023-09-21 15:10:43,508][99566] Starting process rollout_proc6 +[2023-09-21 15:10:43,514][99566] Starting process rollout_proc7 +[2023-09-21 15:10:45,312][101035] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-21 15:10:45,312][101035] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 +[2023-09-21 15:10:45,328][101117] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-21 15:10:45,328][101117] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 +[2023-09-21 15:10:45,330][101035] Num visible devices: 1 +[2023-09-21 15:10:45,346][101117] Num visible devices: 1 +[2023-09-21 15:10:45,369][101035] Starting seed is not provided +[2023-09-21 15:10:45,370][101035] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-21 15:10:45,370][101035] Initializing actor-critic model on device cuda:0 +[2023-09-21 15:10:45,370][101035] RunningMeanStd input shape: (376,) +[2023-09-21 15:10:45,371][101035] RunningMeanStd input shape: (1,) +[2023-09-21 15:10:45,373][101122] Worker 4 uses CPU cores [16, 17, 18, 19] +[2023-09-21 15:10:45,392][101119] Worker 2 uses CPU cores [8, 9, 10, 11] +[2023-09-21 15:10:45,415][101120] Worker 3 uses CPU cores [12, 13, 14, 15] +[2023-09-21 15:10:45,415][101121] Worker 5 uses CPU cores [20, 21, 22, 23] +[2023-09-21 15:10:45,421][101035] Created Actor Critic model with architecture: +[2023-09-21 
15:10:45,421][101035] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( @@ -71,167 +73,21 @@ (distribution_linear): Linear(in_features=64, out_features=17, bias=True) ) ) -[2023-09-19 11:12:19,571][36019] Num visible devices: 1 -[2023-09-19 11:12:19,598][36024] Worker 3 uses CPU cores [12, 13, 14, 15] -[2023-09-19 11:12:20,100][36006] Using optimizer -[2023-09-19 11:12:20,101][36006] No checkpoints found -[2023-09-19 11:12:20,101][36006] Did not load from checkpoint, starting from scratch! -[2023-09-19 11:12:20,101][36006] Initialized policy 0 weights for model version 0 -[2023-09-19 11:12:20,103][36006] LearnerWorker_p0 finished initialization! -[2023-09-19 11:12:20,103][36006] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:12:20,715][36019] RunningMeanStd input shape: (376,) -[2023-09-19 11:12:20,716][36019] RunningMeanStd input shape: (1,) -[2023-09-19 11:12:20,748][35316] Inference worker 0-0 is ready! -[2023-09-19 11:12:20,749][35316] All inference workers are ready! Signal rollout workers to start! -[2023-09-19 11:12:20,854][36024] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,855][36024] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,857][36025] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,858][36025] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,858][36026] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,858][36021] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,859][36026] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,859][36021] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,872][36027] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,873][36027] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,881][36022] Decorrelating experience for 0 frames... 
-[2023-09-19 11:12:20,882][36022] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,899][36020] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,899][36023] Decorrelating experience for 0 frames... -[2023-09-19 11:12:20,900][36023] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,900][36020] Decorrelating experience for 64 frames... -[2023-09-19 11:12:20,908][36024] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,910][36026] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,913][36025] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,915][36021] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,926][36027] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,938][36022] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,984][36023] Decorrelating experience for 128 frames... -[2023-09-19 11:12:20,986][36020] Decorrelating experience for 128 frames... -[2023-09-19 11:12:21,016][36026] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,017][36024] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,018][36021] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,021][36025] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,033][36027] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,055][36022] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,144][36023] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,150][36020] Decorrelating experience for 192 frames... -[2023-09-19 11:12:21,191][36021] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,191][36024] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,198][36025] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,200][36026] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,204][36027] Decorrelating experience for 256 frames... 
-[2023-09-19 11:12:21,236][36022] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,312][36023] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,320][36020] Decorrelating experience for 256 frames... -[2023-09-19 11:12:21,387][36021] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,402][36024] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,408][36027] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,411][36025] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,449][36026] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,456][36022] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,516][36023] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,526][36020] Decorrelating experience for 320 frames... -[2023-09-19 11:12:21,639][36021] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,667][36024] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,672][36027] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,679][36025] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,690][36026] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,727][36022] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,766][36020] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,777][36023] Decorrelating experience for 384 frames... -[2023-09-19 11:12:21,966][36021] Decorrelating experience for 448 frames... -[2023-09-19 11:12:21,974][36027] Decorrelating experience for 448 frames... -[2023-09-19 11:12:21,978][36024] Decorrelating experience for 448 frames... -[2023-09-19 11:12:21,991][36025] Decorrelating experience for 448 frames... -[2023-09-19 11:12:22,000][36026] Decorrelating experience for 448 frames... -[2023-09-19 11:12:22,066][36022] Decorrelating experience for 448 frames... 
-[2023-09-19 11:12:22,100][36020] Decorrelating experience for 448 frames... -[2023-09-19 11:12:22,150][36023] Decorrelating experience for 448 frames... -[2023-09-19 11:12:23,541][35316] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-09-19 11:12:28,541][35316] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 16384. Throughput: 0: 2365.6. Samples: 11828. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:12:28,544][36006] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000032_16384.pth... -[2023-09-19 11:12:29,287][35316] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 35316], exiting... -[2023-09-19 11:12:29,288][35316] Runner profile tree view: -main_loop: 11.8008 -[2023-09-19 11:12:29,288][35316] Collected {0: 20480}, FPS: 1735.5 -[2023-09-19 11:12:29,288][36006] Stopping Batcher_0... -[2023-09-19 11:12:29,289][36026] Stopping RolloutWorker_w6... -[2023-09-19 11:12:29,290][36026] Loop rollout_proc6_evt_loop terminating... -[2023-09-19 11:12:29,289][36006] Loop batcher_evt_loop terminating... -[2023-09-19 11:12:29,290][36027] Stopping RolloutWorker_w4... -[2023-09-19 11:12:29,290][36027] Loop rollout_proc4_evt_loop terminating... -[2023-09-19 11:12:29,290][36020] Stopping RolloutWorker_w1... -[2023-09-19 11:12:29,290][36023] Stopping RolloutWorker_w5... -[2023-09-19 11:12:29,290][36020] Loop rollout_proc1_evt_loop terminating... -[2023-09-19 11:12:29,290][36023] Loop rollout_proc5_evt_loop terminating... -[2023-09-19 11:12:29,290][36006] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000040_20480.pth... -[2023-09-19 11:12:29,291][36022] Stopping RolloutWorker_w0... -[2023-09-19 11:12:29,291][36022] Loop rollout_proc0_evt_loop terminating... -[2023-09-19 11:12:29,291][36024] Stopping RolloutWorker_w3... 
-[2023-09-19 11:12:29,292][36024] Loop rollout_proc3_evt_loop terminating... -[2023-09-19 11:12:29,292][36021] Stopping RolloutWorker_w2... -[2023-09-19 11:12:29,292][36021] Loop rollout_proc2_evt_loop terminating... -[2023-09-19 11:12:29,294][36025] Stopping RolloutWorker_w7... -[2023-09-19 11:12:29,295][36025] Loop rollout_proc7_evt_loop terminating... -[2023-09-19 11:12:29,299][36006] Stopping LearnerWorker_p0... -[2023-09-19 11:12:29,300][36006] Loop learner_proc0_evt_loop terminating... -[2023-09-19 11:12:29,303][36019] Weights refcount: 2 0 -[2023-09-19 11:12:29,304][36019] Stopping InferenceWorker_p0-w0... -[2023-09-19 11:12:29,304][36019] Loop inference_proc0-0_evt_loop terminating... -[2023-09-19 11:13:18,923][40303] Saving configuration to ./train_dir/Standup/config.json... -[2023-09-19 11:13:18,925][40303] Rollout worker 0 uses device cpu -[2023-09-19 11:13:18,926][40303] Rollout worker 1 uses device cpu -[2023-09-19 11:13:18,926][40303] Rollout worker 2 uses device cpu -[2023-09-19 11:13:18,927][40303] Rollout worker 3 uses device cpu -[2023-09-19 11:13:18,928][40303] Rollout worker 4 uses device cpu -[2023-09-19 11:13:18,928][40303] Rollout worker 5 uses device cpu -[2023-09-19 11:13:18,929][40303] Rollout worker 6 uses device cpu -[2023-09-19 11:13:18,929][40303] Rollout worker 7 uses device cpu -[2023-09-19 11:13:18,930][40303] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 -[2023-09-19 11:13:18,986][40303] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:13:18,986][40303] InferenceWorker_p0-w0: min num requests: 1 -[2023-09-19 11:13:18,990][40303] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:13:18,990][40303] InferenceWorker_p1-w0: min num requests: 1 -[2023-09-19 11:13:19,015][40303] Starting all processes... 
-[2023-09-19 11:13:19,015][40303] Starting process learner_proc0 -[2023-09-19 11:13:19,018][40303] Starting process learner_proc1 -[2023-09-19 11:13:19,065][40303] Starting all processes... -[2023-09-19 11:13:19,071][40303] Starting process inference_proc0-0 -[2023-09-19 11:13:19,071][40303] Starting process inference_proc1-0 -[2023-09-19 11:13:19,071][40303] Starting process rollout_proc0 -[2023-09-19 11:13:19,071][40303] Starting process rollout_proc1 -[2023-09-19 11:13:19,072][40303] Starting process rollout_proc2 -[2023-09-19 11:13:19,072][40303] Starting process rollout_proc3 -[2023-09-19 11:13:19,073][40303] Starting process rollout_proc4 -[2023-09-19 11:13:19,074][40303] Starting process rollout_proc5 -[2023-09-19 11:13:19,080][40303] Starting process rollout_proc6 -[2023-09-19 11:13:19,081][40303] Starting process rollout_proc7 -[2023-09-19 11:13:21,055][41278] Worker 2 uses CPU cores [8, 9, 10, 11] -[2023-09-19 11:13:21,063][41246] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:13:21,063][41246] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-09-19 11:13:21,068][41284] Worker 3 uses CPU cores [12, 13, 14, 15] -[2023-09-19 11:13:21,080][41271] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:13:21,080][41271] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 -[2023-09-19 11:13:21,083][41246] Num visible devices: 1 -[2023-09-19 11:13:21,088][41292] Worker 5 uses CPU cores [20, 21, 22, 23] -[2023-09-19 11:13:21,100][41271] Num visible devices: 1 -[2023-09-19 11:13:21,157][41272] Worker 0 uses CPU cores [0, 1, 2, 3] -[2023-09-19 11:13:21,187][41276] Worker 1 uses CPU cores [4, 5, 6, 7] -[2023-09-19 11:13:21,291][41291] Worker 7 uses CPU cores [28, 29, 30, 31] -[2023-09-19 11:13:21,319][41290] Worker 6 uses CPU cores [24, 25, 26, 27] -[2023-09-19 11:13:21,326][41287] Worker 4 uses CPU cores [16, 17, 18, 19] -[2023-09-19 
11:13:21,373][41187] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:13:21,373][41187] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-09-19 11:13:21,391][41187] Num visible devices: 1 -[2023-09-19 11:13:21,412][41187] Starting seed is not provided -[2023-09-19 11:13:21,412][41187] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:13:21,412][41187] Initializing actor-critic model on device cuda:0 -[2023-09-19 11:13:21,413][41187] RunningMeanStd input shape: (376,) -[2023-09-19 11:13:21,413][41187] RunningMeanStd input shape: (1,) -[2023-09-19 11:13:21,450][41188] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:13:21,450][41188] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 -[2023-09-19 11:13:21,461][41187] Created Actor Critic model with architecture: -[2023-09-19 11:13:21,462][41187] ActorCriticSharedWeights( +[2023-09-21 15:10:45,459][101034] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-21 15:10:45,459][101034] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-09-21 15:10:45,470][101124] Worker 7 uses CPU cores [28, 29, 30, 31] +[2023-09-21 15:10:45,484][101123] Worker 6 uses CPU cores [24, 25, 26, 27] +[2023-09-21 15:10:45,490][101034] Num visible devices: 1 +[2023-09-21 15:10:45,528][101034] Starting seed is not provided +[2023-09-21 15:10:45,528][101034] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-21 15:10:45,528][101034] Initializing actor-critic model on device cuda:0 +[2023-09-21 15:10:45,529][101034] RunningMeanStd input shape: (376,) +[2023-09-21 15:10:45,530][101034] RunningMeanStd input shape: (1,) +[2023-09-21 15:10:45,552][101115] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-21 15:10:45,552][101115] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference 
process 0 +[2023-09-21 15:10:45,570][101115] Num visible devices: 1 +[2023-09-21 15:10:45,580][101034] Created Actor Critic model with architecture: +[2023-09-21 15:10:45,581][101034] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( @@ -262,2573 +118,537 @@ main_loop: 11.8008 (distribution_linear): Linear(in_features=64, out_features=17, bias=True) ) ) -[2023-09-19 11:13:21,478][41188] Num visible devices: 1 -[2023-09-19 11:13:21,500][41188] Starting seed is not provided -[2023-09-19 11:13:21,500][41188] Using GPUs [0] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:13:21,500][41188] Initializing actor-critic model on device cuda:0 -[2023-09-19 11:13:21,501][41188] RunningMeanStd input shape: (376,) -[2023-09-19 11:13:21,501][41188] RunningMeanStd input shape: (1,) -[2023-09-19 11:13:21,548][41188] Created Actor Critic model with architecture: -[2023-09-19 11:13:21,548][41188] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): MultiInputEncoder( - (encoders): ModuleDict( - (obs): MlpEncoder( - (mlp_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=Tanh) - (2): RecursiveScriptModule(original_name=Linear) - (3): RecursiveScriptModule(original_name=Tanh) - ) - ) - ) - ) - (core): ModelCoreIdentity() - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=64, out_features=1, bias=True) - (action_parameterization): ActionParameterizationContinuousNonAdaptiveStddev( - (distribution_linear): Linear(in_features=64, out_features=17, bias=True) - ) -) -[2023-09-19 11:13:22,080][41187] Using optimizer 
-[2023-09-19 11:13:22,081][41187] Loading state from checkpoint ./train_dir/Standup/checkpoint_p0/checkpoint_000000040_20480.pth... -[2023-09-19 11:13:22,087][41187] Loading model from checkpoint -[2023-09-19 11:13:22,089][41187] Loaded experiment state at self.train_step=40, self.env_steps=20480 -[2023-09-19 11:13:22,090][41187] Initialized policy 0 weights for model version 40 -[2023-09-19 11:13:22,091][41187] LearnerWorker_p0 finished initialization! -[2023-09-19 11:13:22,092][41187] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:13:22,122][41188] Using optimizer -[2023-09-19 11:13:22,123][41188] No checkpoints found -[2023-09-19 11:13:22,123][41188] Did not load from checkpoint, starting from scratch! -[2023-09-19 11:13:22,124][41188] Initialized policy 1 weights for model version 0 -[2023-09-19 11:13:22,142][41188] LearnerWorker_p1 finished initialization! -[2023-09-19 11:13:22,142][41188] Using GPUs [0] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:13:22,704][41246] RunningMeanStd input shape: (376,) -[2023-09-19 11:13:22,705][41246] RunningMeanStd input shape: (1,) -[2023-09-19 11:13:22,718][41271] RunningMeanStd input shape: (376,) -[2023-09-19 11:13:22,718][41271] RunningMeanStd input shape: (1,) -[2023-09-19 11:13:22,737][40303] Inference worker 0-0 is ready! -[2023-09-19 11:13:22,750][40303] Inference worker 1-0 is ready! -[2023-09-19 11:13:22,751][40303] All inference workers are ready! Signal rollout workers to start! -[2023-09-19 11:13:22,845][41278] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,846][41278] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,852][41290] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,853][41290] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,872][41287] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,873][41287] Decorrelating experience for 64 frames... 
-[2023-09-19 11:13:22,886][41276] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,885][41272] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,886][41276] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,886][41272] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,893][41292] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,894][41292] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,899][41278] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,905][41291] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,905][41284] Decorrelating experience for 0 frames... -[2023-09-19 11:13:22,907][41284] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,907][41291] Decorrelating experience for 64 frames... -[2023-09-19 11:13:22,907][41290] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,939][41276] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,944][41287] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,957][41292] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,974][41272] Decorrelating experience for 128 frames... -[2023-09-19 11:13:22,990][41291] Decorrelating experience for 128 frames... -[2023-09-19 11:13:23,000][41284] Decorrelating experience for 128 frames... -[2023-09-19 11:13:23,003][41278] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,051][41290] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,056][41287] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,057][41292] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,102][41276] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,147][41272] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,170][41291] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,177][41278] Decorrelating experience for 256 frames... 
-[2023-09-19 11:13:23,180][41284] Decorrelating experience for 192 frames... -[2023-09-19 11:13:23,229][41292] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,230][41287] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,248][41290] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,374][41276] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,378][41278] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,426][41287] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,428][41272] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,429][41292] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,433][41291] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,434][41284] Decorrelating experience for 256 frames... -[2023-09-19 11:13:23,481][41290] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,626][41291] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,675][41278] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,680][41284] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,685][41292] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,686][41287] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,711][41276] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,748][41272] Decorrelating experience for 320 frames... -[2023-09-19 11:13:23,770][41290] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,868][41291] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,948][41284] Decorrelating experience for 384 frames... -[2023-09-19 11:13:23,986][41278] Decorrelating experience for 448 frames... -[2023-09-19 11:13:23,993][41292] Decorrelating experience for 448 frames... -[2023-09-19 11:13:23,995][41287] Decorrelating experience for 448 frames... 
-[2023-09-19 11:13:24,071][41290] Decorrelating experience for 448 frames... -[2023-09-19 11:13:24,113][41276] Decorrelating experience for 384 frames... -[2023-09-19 11:13:24,140][41272] Decorrelating experience for 384 frames... -[2023-09-19 11:13:24,170][41291] Decorrelating experience for 448 frames... -[2023-09-19 11:13:24,250][41284] Decorrelating experience for 448 frames... -[2023-09-19 11:13:24,433][41276] Decorrelating experience for 448 frames... -[2023-09-19 11:13:24,471][41272] Decorrelating experience for 448 frames... -[2023-09-19 11:13:25,197][40303] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 20480. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-09-19 11:13:30,198][40303] Fps is (10 sec: 3276.7, 60 sec: 3276.7, 300 sec: 3276.7). Total num frames: 36864. Throughput: 0: 1638.4, 1: 1638.4. Samples: 16384. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:13:30,200][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000056_28672.pth... -[2023-09-19 11:13:30,201][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000016_8192.pth... -[2023-09-19 11:13:30,211][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000032_16384.pth -[2023-09-19 11:13:35,197][40303] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 69632. Throughput: 0: 2621.6, 1: 2627.4. Samples: 52490. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:13:35,198][40303] Avg episode reward: [(0, '31454.083'), (1, '27038.960')] -[2023-09-19 11:13:38,652][41271] Updated weights for policy 1, policy_version 80 (0.0016) -[2023-09-19 11:13:38,652][41246] Updated weights for policy 0, policy_version 120 (0.0015) -[2023-09-19 11:13:38,973][40303] Heartbeat connected on Batcher_0 -[2023-09-19 11:13:38,976][40303] Heartbeat connected on LearnerWorker_p0 -[2023-09-19 11:13:38,979][40303] Heartbeat connected on Batcher_1 -[2023-09-19 11:13:38,982][40303] Heartbeat connected on LearnerWorker_p1 -[2023-09-19 11:13:38,989][40303] Heartbeat connected on InferenceWorker_p0-w0 -[2023-09-19 11:13:38,992][40303] Heartbeat connected on InferenceWorker_p1-w0 -[2023-09-19 11:13:38,998][40303] Heartbeat connected on RolloutWorker_w0 -[2023-09-19 11:13:39,001][40303] Heartbeat connected on RolloutWorker_w1 -[2023-09-19 11:13:39,004][40303] Heartbeat connected on RolloutWorker_w2 -[2023-09-19 11:13:39,006][40303] Heartbeat connected on RolloutWorker_w3 -[2023-09-19 11:13:39,008][40303] Heartbeat connected on RolloutWorker_w4 -[2023-09-19 11:13:39,014][40303] Heartbeat connected on RolloutWorker_w5 -[2023-09-19 11:13:39,018][40303] Heartbeat connected on RolloutWorker_w7 -[2023-09-19 11:13:39,018][40303] Heartbeat connected on RolloutWorker_w6 -[2023-09-19 11:13:40,198][40303] Fps is (10 sec: 7372.8, 60 sec: 6007.4, 300 sec: 6007.4). Total num frames: 110592. Throughput: 0: 2508.5, 1: 2511.7. Samples: 75304. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:13:40,199][40303] Avg episode reward: [(0, '34634.079'), (1, '30037.595')] -[2023-09-19 11:13:45,197][40303] Fps is (10 sec: 7372.8, 60 sec: 6144.0, 300 sec: 6144.0). Total num frames: 143360. Throughput: 0: 2989.8, 1: 2992.6. Samples: 119648. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:13:45,198][40303] Avg episode reward: [(0, '43292.280'), (1, '42584.158')] -[2023-09-19 11:13:45,201][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000160_81920.pth... -[2023-09-19 11:13:45,201][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000120_61440.pth... -[2023-09-19 11:13:45,207][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000040_20480.pth -[2023-09-19 11:13:49,800][41271] Updated weights for policy 1, policy_version 160 (0.0015) -[2023-09-19 11:13:49,801][41246] Updated weights for policy 0, policy_version 200 (0.0014) -[2023-09-19 11:13:50,198][40303] Fps is (10 sec: 7372.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 184320. Throughput: 0: 3276.9, 1: 3276.9. Samples: 163844. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:13:50,199][40303] Avg episode reward: [(0, '43292.280'), (1, '44801.351')] -[2023-09-19 11:13:50,200][41187] Saving new best policy, reward=43292.280! -[2023-09-19 11:13:55,197][40303] Fps is (10 sec: 7372.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 217088. Throughput: 0: 3089.5, 1: 3091.3. Samples: 185422. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:13:55,198][40303] Avg episode reward: [(0, '50284.168'), (1, '51051.259')] -[2023-09-19 11:13:55,199][41187] Saving new best policy, reward=50284.168! -[2023-09-19 11:14:00,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 249856. Throughput: 0: 3196.1, 1: 3197.6. Samples: 223782. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:14:00,199][40303] Avg episode reward: [(0, '53046.991'), (1, '53883.351')] -[2023-09-19 11:14:00,208][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000264_135168.pth... -[2023-09-19 11:14:00,208][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000224_114688.pth... 
-[2023-09-19 11:14:00,217][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000016_8192.pth -[2023-09-19 11:14:00,217][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000056_28672.pth -[2023-09-19 11:14:00,218][41188] Saving new best policy, reward=53883.351! -[2023-09-19 11:14:00,218][41187] Saving new best policy, reward=53046.991! -[2023-09-19 11:14:01,592][41271] Updated weights for policy 1, policy_version 240 (0.0012) -[2023-09-19 11:14:01,592][41246] Updated weights for policy 0, policy_version 280 (0.0013) -[2023-09-19 11:14:05,198][40303] Fps is (10 sec: 7372.7, 60 sec: 6758.4, 300 sec: 6758.4). Total num frames: 290816. Throughput: 0: 3357.1, 1: 3358.6. Samples: 268630. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:14:05,199][40303] Avg episode reward: [(0, '62279.408'), (1, '63904.672')] -[2023-09-19 11:14:05,200][41187] Saving new best policy, reward=62279.408! -[2023-09-19 11:14:05,200][41188] Saving new best policy, reward=63904.672! -[2023-09-19 11:14:10,197][40303] Fps is (10 sec: 7372.9, 60 sec: 6735.6, 300 sec: 6735.6). Total num frames: 323584. Throughput: 0: 3232.0, 1: 3233.1. Samples: 290930. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:10,198][40303] Avg episode reward: [(0, '66478.184'), (1, '70755.397')] -[2023-09-19 11:14:10,199][41188] Saving new best policy, reward=70755.397! -[2023-09-19 11:14:10,199][41187] Saving new best policy, reward=66478.184! -[2023-09-19 11:14:12,751][41271] Updated weights for policy 1, policy_version 320 (0.0010) -[2023-09-19 11:14:12,752][41246] Updated weights for policy 0, policy_version 360 (0.0015) -[2023-09-19 11:14:15,197][40303] Fps is (10 sec: 7372.9, 60 sec: 6881.3, 300 sec: 6881.3). Total num frames: 364544. Throughput: 0: 3545.5, 1: 3546.1. Samples: 335508. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:15,198][40303] Avg episode reward: [(0, '69427.017'), (1, '72462.244')] -[2023-09-19 11:14:15,204][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000376_192512.pth... -[2023-09-19 11:14:15,204][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000336_172032.pth... -[2023-09-19 11:14:15,208][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000160_81920.pth -[2023-09-19 11:14:15,209][41187] Saving new best policy, reward=69427.017! -[2023-09-19 11:14:15,211][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000120_61440.pth -[2023-09-19 11:14:15,211][41188] Saving new best policy, reward=72462.244! -[2023-09-19 11:14:20,197][40303] Fps is (10 sec: 7372.9, 60 sec: 6851.5, 300 sec: 6851.5). Total num frames: 397312. Throughput: 0: 3635.0, 1: 3635.0. Samples: 379638. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:20,198][40303] Avg episode reward: [(0, '73711.169'), (1, '77171.232')] -[2023-09-19 11:14:20,199][41187] Saving new best policy, reward=73711.169! -[2023-09-19 11:14:20,199][41188] Saving new best policy, reward=77171.232! -[2023-09-19 11:14:23,933][41246] Updated weights for policy 0, policy_version 440 (0.0013) -[2023-09-19 11:14:23,934][41271] Updated weights for policy 1, policy_version 400 (0.0011) -[2023-09-19 11:14:25,197][40303] Fps is (10 sec: 6553.6, 60 sec: 6826.7, 300 sec: 6826.7). Total num frames: 430080. Throughput: 0: 3632.2, 1: 3632.4. Samples: 402210. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:25,198][40303] Avg episode reward: [(0, '73688.883'), (1, '77171.232')] -[2023-09-19 11:14:30,198][40303] Fps is (10 sec: 7372.7, 60 sec: 7236.3, 300 sec: 6931.7). Total num frames: 471040. Throughput: 0: 3606.1, 1: 3606.2. Samples: 444202. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:30,198][40303] Avg episode reward: [(0, '76128.666'), (1, '83391.647')] -[2023-09-19 11:14:30,204][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000480_245760.pth... -[2023-09-19 11:14:30,205][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000440_225280.pth... -[2023-09-19 11:14:30,211][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000264_135168.pth -[2023-09-19 11:14:30,212][41187] Saving new best policy, reward=76128.666! -[2023-09-19 11:14:30,213][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000224_114688.pth -[2023-09-19 11:14:30,214][41188] Saving new best policy, reward=83391.647! -[2023-09-19 11:14:35,005][41271] Updated weights for policy 1, policy_version 480 (0.0015) -[2023-09-19 11:14:35,006][41246] Updated weights for policy 0, policy_version 520 (0.0013) -[2023-09-19 11:14:35,198][40303] Fps is (10 sec: 8191.9, 60 sec: 7372.8, 300 sec: 7021.7). Total num frames: 512000. Throughput: 0: 3626.8, 1: 3628.0. Samples: 490312. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:14:35,199][40303] Avg episode reward: [(0, '76490.813'), (1, '86333.937')] -[2023-09-19 11:14:35,200][41187] Saving new best policy, reward=76490.813! -[2023-09-19 11:14:35,200][41188] Saving new best policy, reward=86333.937! -[2023-09-19 11:14:40,198][40303] Fps is (10 sec: 7372.6, 60 sec: 7236.2, 300 sec: 6990.5). Total num frames: 544768. Throughput: 0: 3630.3, 1: 3630.3. Samples: 512152. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:40,199][40303] Avg episode reward: [(0, '78215.963'), (1, '88825.825')] -[2023-09-19 11:14:40,201][41187] Saving new best policy, reward=78215.963! -[2023-09-19 11:14:40,201][41188] Saving new best policy, reward=88825.825! -[2023-09-19 11:14:45,197][40303] Fps is (10 sec: 7372.9, 60 sec: 7372.8, 300 sec: 7065.6). Total num frames: 585728. Throughput: 0: 3707.3, 1: 3707.2. Samples: 557434. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:14:45,198][40303] Avg episode reward: [(0, '80179.767'), (1, '93784.048')] -[2023-09-19 11:14:45,208][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000552_282624.pth... -[2023-09-19 11:14:45,207][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000592_303104.pth... -[2023-09-19 11:14:45,216][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000336_172032.pth -[2023-09-19 11:14:45,216][41188] Saving new best policy, reward=93784.048! -[2023-09-19 11:14:45,219][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000376_192512.pth -[2023-09-19 11:14:45,220][41187] Saving new best policy, reward=80179.767! -[2023-09-19 11:14:46,154][41271] Updated weights for policy 1, policy_version 560 (0.0013) -[2023-09-19 11:14:46,154][41246] Updated weights for policy 0, policy_version 600 (0.0015) -[2023-09-19 11:14:50,198][40303] Fps is (10 sec: 7372.9, 60 sec: 7236.3, 300 sec: 7035.5). Total num frames: 618496. Throughput: 0: 3695.9, 1: 3695.8. Samples: 601256. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:14:50,199][40303] Avg episode reward: [(0, '81224.789'), (1, '95410.430')] -[2023-09-19 11:14:50,200][41187] Saving new best policy, reward=81224.789! -[2023-09-19 11:14:50,200][41188] Saving new best policy, reward=95410.430! -[2023-09-19 11:14:55,215][40303] Fps is (10 sec: 7359.6, 60 sec: 7370.6, 300 sec: 7098.3). Total num frames: 659456. Throughput: 0: 3656.8, 1: 3656.9. Samples: 620178. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:14:55,218][40303] Avg episode reward: [(0, '81804.261'), (1, '100729.706')] -[2023-09-19 11:14:55,219][41187] Saving new best policy, reward=81804.261! -[2023-09-19 11:14:55,219][41188] Saving new best policy, reward=100729.706! 
-[2023-09-19 11:14:57,288][41246] Updated weights for policy 0, policy_version 680 (0.0016) -[2023-09-19 11:14:57,288][41271] Updated weights for policy 1, policy_version 640 (0.0013) -[2023-09-19 11:15:00,198][40303] Fps is (10 sec: 7372.8, 60 sec: 7372.8, 300 sec: 7071.0). Total num frames: 692224. Throughput: 0: 3672.5, 1: 3673.2. Samples: 666064. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:15:00,198][40303] Avg episode reward: [(0, '81842.550'), (1, '102461.265')] -[2023-09-19 11:15:00,204][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000656_335872.pth... -[2023-09-19 11:15:00,204][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000696_356352.pth... -[2023-09-19 11:15:00,208][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000440_225280.pth -[2023-09-19 11:15:00,208][41188] Saving new best policy, reward=102461.265! -[2023-09-19 11:15:00,211][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000480_245760.pth -[2023-09-19 11:15:00,211][41187] Saving new best policy, reward=81842.550! -[2023-09-19 11:15:05,198][40303] Fps is (10 sec: 5744.7, 60 sec: 7099.7, 300 sec: 6963.2). Total num frames: 716800. Throughput: 0: 3578.5, 1: 3578.2. Samples: 701690. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:05,199][40303] Avg episode reward: [(0, '82742.948'), (1, '108945.356')] -[2023-09-19 11:15:05,200][41187] Saving new best policy, reward=82742.948! -[2023-09-19 11:15:05,200][41188] Saving new best policy, reward=108945.356! -[2023-09-19 11:15:09,873][41271] Updated weights for policy 1, policy_version 720 (0.0015) -[2023-09-19 11:15:09,873][41246] Updated weights for policy 0, policy_version 760 (0.0015) -[2023-09-19 11:15:10,198][40303] Fps is (10 sec: 6553.6, 60 sec: 7236.3, 300 sec: 7021.7). Total num frames: 757760. Throughput: 0: 3542.3, 1: 3541.1. Samples: 720964. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:15:10,199][40303] Avg episode reward: [(0, '83013.951'), (1, '110482.262')] -[2023-09-19 11:15:10,200][41187] Saving new best policy, reward=83013.951! -[2023-09-19 11:15:10,200][41188] Saving new best policy, reward=110482.262! -[2023-09-19 11:15:15,198][40303] Fps is (10 sec: 7372.7, 60 sec: 7099.7, 300 sec: 7000.4). Total num frames: 790528. Throughput: 0: 3537.8, 1: 3537.8. Samples: 762602. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:15:15,199][40303] Avg episode reward: [(0, '84790.627'), (1, '117040.476')] -[2023-09-19 11:15:15,208][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000792_405504.pth... -[2023-09-19 11:15:15,209][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000752_385024.pth... -[2023-09-19 11:15:15,215][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000552_282624.pth -[2023-09-19 11:15:15,216][41188] Saving new best policy, reward=117040.476! -[2023-09-19 11:15:15,218][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000592_303104.pth -[2023-09-19 11:15:15,219][41187] Saving new best policy, reward=84790.627! -[2023-09-19 11:15:20,197][40303] Fps is (10 sec: 6553.7, 60 sec: 7099.7, 300 sec: 6981.0). Total num frames: 823296. Throughput: 0: 3525.8, 1: 3526.0. Samples: 807640. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:20,198][40303] Avg episode reward: [(0, '86690.838'), (1, '119342.411')] -[2023-09-19 11:15:20,199][41188] Saving new best policy, reward=119342.411! -[2023-09-19 11:15:20,199][41187] Saving new best policy, reward=86690.838! -[2023-09-19 11:15:21,652][41271] Updated weights for policy 1, policy_version 800 (0.0011) -[2023-09-19 11:15:21,653][41246] Updated weights for policy 0, policy_version 840 (0.0015) -[2023-09-19 11:15:25,198][40303] Fps is (10 sec: 7372.9, 60 sec: 7236.2, 300 sec: 7031.5). Total num frames: 864256. Throughput: 0: 3502.0, 1: 3501.0. 
Samples: 827284. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:25,199][40303] Avg episode reward: [(0, '90539.601'), (1, '125734.908')] -[2023-09-19 11:15:25,200][41187] Saving new best policy, reward=90539.601! -[2023-09-19 11:15:25,200][41188] Saving new best policy, reward=125734.908! -[2023-09-19 11:15:30,198][40303] Fps is (10 sec: 8191.9, 60 sec: 7236.3, 300 sec: 7077.9). Total num frames: 905216. Throughput: 0: 3517.5, 1: 3518.0. Samples: 874032. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:30,198][40303] Avg episode reward: [(0, '92852.762'), (1, '132358.210')] -[2023-09-19 11:15:30,207][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000904_462848.pth... -[2023-09-19 11:15:30,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000864_442368.pth... -[2023-09-19 11:15:30,213][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000656_335872.pth -[2023-09-19 11:15:30,214][41188] Saving new best policy, reward=132358.210! -[2023-09-19 11:15:30,216][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000696_356352.pth -[2023-09-19 11:15:30,217][41187] Saving new best policy, reward=92852.762! -[2023-09-19 11:15:32,850][41271] Updated weights for policy 1, policy_version 880 (0.0013) -[2023-09-19 11:15:32,851][41246] Updated weights for policy 0, policy_version 920 (0.0013) -[2023-09-19 11:15:35,197][40303] Fps is (10 sec: 7372.9, 60 sec: 7099.7, 300 sec: 7057.7). Total num frames: 937984. Throughput: 0: 3497.4, 1: 3497.6. Samples: 916030. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:35,198][40303] Avg episode reward: [(0, '96297.528'), (1, '134666.372')] -[2023-09-19 11:15:35,199][41187] Saving new best policy, reward=96297.528! -[2023-09-19 11:15:35,200][41188] Saving new best policy, reward=134666.372! -[2023-09-19 11:15:40,198][40303] Fps is (10 sec: 6553.5, 60 sec: 7099.7, 300 sec: 7039.0). Total num frames: 970752. 
Throughput: 0: 3537.6, 1: 3537.8. Samples: 938444. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:40,199][40303] Avg episode reward: [(0, '101172.898'), (1, '137965.048')] -[2023-09-19 11:15:40,200][41187] Saving new best policy, reward=101172.898! -[2023-09-19 11:15:40,200][41188] Saving new best policy, reward=137965.048! -[2023-09-19 11:15:44,038][41246] Updated weights for policy 0, policy_version 1000 (0.0014) -[2023-09-19 11:15:44,038][41271] Updated weights for policy 1, policy_version 960 (0.0014) -[2023-09-19 11:15:45,197][40303] Fps is (10 sec: 6553.7, 60 sec: 6963.2, 300 sec: 7021.7). Total num frames: 1003520. Throughput: 0: 3519.5, 1: 3518.3. Samples: 982764. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:15:45,198][40303] Avg episode reward: [(0, '103392.205'), (1, '137965.048')] -[2023-09-19 11:15:45,205][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000968_495616.pth... -[2023-09-19 11:15:45,208][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000752_385024.pth -[2023-09-19 11:15:45,213][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001008_516096.pth... -[2023-09-19 11:15:45,217][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000792_405504.pth -[2023-09-19 11:15:45,218][41187] Saving new best policy, reward=103392.205! -[2023-09-19 11:15:50,198][40303] Fps is (10 sec: 7372.8, 60 sec: 7099.7, 300 sec: 7062.1). Total num frames: 1044480. Throughput: 0: 3604.8, 1: 3604.9. Samples: 1026124. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:15:50,199][40303] Avg episode reward: [(0, '112090.239'), (1, '141479.122')] -[2023-09-19 11:15:50,200][41187] Saving new best policy, reward=112090.239! -[2023-09-19 11:15:50,200][41188] Saving new best policy, reward=141479.122! -[2023-09-19 11:15:55,198][40303] Fps is (10 sec: 7372.6, 60 sec: 6965.3, 300 sec: 7045.1). Total num frames: 1077248. Throughput: 0: 3640.0, 1: 3640.1. Samples: 1048568. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:15:55,199][40303] Avg episode reward: [(0, '115037.292'), (1, '141360.147')] -[2023-09-19 11:15:55,200][41187] Saving new best policy, reward=115037.292! -[2023-09-19 11:15:55,356][41271] Updated weights for policy 1, policy_version 1040 (0.0016) -[2023-09-19 11:15:55,356][41246] Updated weights for policy 0, policy_version 1080 (0.0011) -[2023-09-19 11:16:00,198][40303] Fps is (10 sec: 7372.8, 60 sec: 7099.7, 300 sec: 7082.1). Total num frames: 1118208. Throughput: 0: 3634.6, 1: 3634.6. Samples: 1089716. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:16:00,199][40303] Avg episode reward: [(0, '121696.345'), (1, '143516.477')] -[2023-09-19 11:16:00,207][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001112_569344.pth... -[2023-09-19 11:16:00,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001072_548864.pth... -[2023-09-19 11:16:00,213][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000904_462848.pth -[2023-09-19 11:16:00,214][41187] Saving new best policy, reward=121696.345! -[2023-09-19 11:16:00,218][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000864_442368.pth -[2023-09-19 11:16:00,218][41188] Saving new best policy, reward=143516.477! -[2023-09-19 11:16:05,198][40303] Fps is (10 sec: 8192.0, 60 sec: 7372.8, 300 sec: 7116.8). Total num frames: 1159168. Throughput: 0: 3656.8, 1: 3657.0. Samples: 1136764. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:16:05,199][40303] Avg episode reward: [(0, '126501.030'), (1, '144413.674')] -[2023-09-19 11:16:05,200][41187] Saving new best policy, reward=126501.030! -[2023-09-19 11:16:05,200][41188] Saving new best policy, reward=144413.674! 
-[2023-09-19 11:16:06,100][41271] Updated weights for policy 1, policy_version 1120 (0.0013) -[2023-09-19 11:16:06,100][41246] Updated weights for policy 0, policy_version 1160 (0.0016) -[2023-09-19 11:16:10,197][40303] Fps is (10 sec: 7372.9, 60 sec: 7236.3, 300 sec: 7099.7). Total num frames: 1191936. Throughput: 0: 3691.9, 1: 3692.5. Samples: 1159578. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:16:10,198][40303] Avg episode reward: [(0, '132111.068'), (1, '146024.408')] -[2023-09-19 11:16:10,199][41187] Saving new best policy, reward=132111.068! -[2023-09-19 11:16:10,200][41188] Saving new best policy, reward=146024.408! -[2023-09-19 11:16:15,198][40303] Fps is (10 sec: 5734.4, 60 sec: 7099.7, 300 sec: 7035.5). Total num frames: 1216512. Throughput: 0: 3566.4, 1: 3565.5. Samples: 1194970. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:16:15,198][40303] Avg episode reward: [(0, '135452.407'), (1, '145133.020')] -[2023-09-19 11:16:15,205][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001208_618496.pth... -[2023-09-19 11:16:15,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001168_598016.pth... -[2023-09-19 11:16:15,217][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000968_495616.pth -[2023-09-19 11:16:15,218][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001008_516096.pth -[2023-09-19 11:16:15,218][41187] Saving new best policy, reward=135452.407! -[2023-09-19 11:16:19,680][41271] Updated weights for policy 1, policy_version 1200 (0.0013) -[2023-09-19 11:16:19,680][41246] Updated weights for policy 0, policy_version 1240 (0.0011) -[2023-09-19 11:16:20,198][40303] Fps is (10 sec: 5734.3, 60 sec: 7099.7, 300 sec: 7021.7). Total num frames: 1249280. Throughput: 0: 3480.7, 1: 3480.4. Samples: 1229280. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:16:20,199][40303] Avg episode reward: [(0, '138970.413'), (1, '144367.334')] -[2023-09-19 11:16:20,200][41187] Saving new best policy, reward=138970.413! -[2023-09-19 11:16:25,198][40303] Fps is (10 sec: 7372.8, 60 sec: 7099.7, 300 sec: 7054.2). Total num frames: 1290240. Throughput: 0: 3490.3, 1: 3490.3. Samples: 1252572. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:16:25,199][40303] Avg episode reward: [(0, '138954.797'), (1, '143024.613')] -[2023-09-19 11:16:30,198][40303] Fps is (10 sec: 7372.7, 60 sec: 6963.2, 300 sec: 7040.7). Total num frames: 1323008. Throughput: 0: 3514.6, 1: 3515.8. Samples: 1299130. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:16:30,199][40303] Avg episode reward: [(0, '138602.655'), (1, '141826.181')] -[2023-09-19 11:16:30,206][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001312_671744.pth... -[2023-09-19 11:16:30,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001272_651264.pth... -[2023-09-19 11:16:30,216][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001112_569344.pth -[2023-09-19 11:16:30,216][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001072_548864.pth -[2023-09-19 11:16:30,719][41246] Updated weights for policy 0, policy_version 1320 (0.0010) -[2023-09-19 11:16:30,720][41271] Updated weights for policy 1, policy_version 1280 (0.0015) -[2023-09-19 11:16:35,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6963.2, 300 sec: 7027.9). Total num frames: 1355776. Throughput: 0: 3463.6, 1: 3463.8. Samples: 1337854. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:16:35,199][40303] Avg episode reward: [(0, '139704.889'), (1, '143172.712')] -[2023-09-19 11:16:35,200][41187] Saving new best policy, reward=139704.889! -[2023-09-19 11:16:40,197][40303] Fps is (10 sec: 6553.8, 60 sec: 6963.2, 300 sec: 7015.7). Total num frames: 1388544. Throughput: 0: 3470.0, 1: 3470.8. 
Samples: 1360900. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:16:40,198][40303] Avg episode reward: [(0, '140044.752'), (1, '143172.712')] -[2023-09-19 11:16:40,199][41187] Saving new best policy, reward=140044.752! -[2023-09-19 11:16:42,534][41246] Updated weights for policy 0, policy_version 1400 (0.0015) -[2023-09-19 11:16:42,534][41271] Updated weights for policy 1, policy_version 1360 (0.0013) -[2023-09-19 11:16:45,198][40303] Fps is (10 sec: 7372.7, 60 sec: 7099.7, 300 sec: 7045.1). Total num frames: 1429504. Throughput: 0: 3467.3, 1: 3466.8. Samples: 1401752. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:16:45,199][40303] Avg episode reward: [(0, '142593.411'), (1, '143339.303')] -[2023-09-19 11:16:45,209][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001416_724992.pth... -[2023-09-19 11:16:45,211][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001376_704512.pth... -[2023-09-19 11:16:45,219][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001208_618496.pth -[2023-09-19 11:16:45,220][41187] Saving new best policy, reward=142593.411! -[2023-09-19 11:16:45,221][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001168_598016.pth -[2023-09-19 11:16:50,198][40303] Fps is (10 sec: 7372.6, 60 sec: 6963.2, 300 sec: 7033.1). Total num frames: 1462272. Throughput: 0: 3406.2, 1: 3406.0. Samples: 1443310. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:16:50,199][40303] Avg episode reward: [(0, '142593.411'), (1, '143931.524')] -[2023-09-19 11:16:53,925][41271] Updated weights for policy 1, policy_version 1440 (0.0013) -[2023-09-19 11:16:53,925][41246] Updated weights for policy 0, policy_version 1480 (0.0014) -[2023-09-19 11:16:55,197][40303] Fps is (10 sec: 6553.8, 60 sec: 6963.2, 300 sec: 7021.7). Total num frames: 1495040. Throughput: 0: 3409.2, 1: 3408.4. Samples: 1466370. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:16:55,198][40303] Avg episode reward: [(0, '149470.139'), (1, '145636.804')] -[2023-09-19 11:16:55,217][41187] Saving new best policy, reward=149470.139! -[2023-09-19 11:17:00,198][40303] Fps is (10 sec: 7372.9, 60 sec: 6963.2, 300 sec: 7048.9). Total num frames: 1536000. Throughput: 0: 3492.7, 1: 3493.2. Samples: 1509334. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:00,198][40303] Avg episode reward: [(0, '153538.121'), (1, '146540.869')] -[2023-09-19 11:17:00,207][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001520_778240.pth... -[2023-09-19 11:17:00,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001480_757760.pth... -[2023-09-19 11:17:00,214][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001272_651264.pth -[2023-09-19 11:17:00,215][41188] Saving new best policy, reward=146540.869! -[2023-09-19 11:17:00,215][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001312_671744.pth -[2023-09-19 11:17:00,216][41187] Saving new best policy, reward=153538.121! -[2023-09-19 11:17:05,198][40303] Fps is (10 sec: 7372.6, 60 sec: 6826.7, 300 sec: 7037.7). Total num frames: 1568768. Throughput: 0: 3593.8, 1: 3594.0. Samples: 1552730. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:05,199][40303] Avg episode reward: [(0, '153219.886'), (1, '146009.885')] -[2023-09-19 11:17:05,296][41271] Updated weights for policy 1, policy_version 1520 (0.0013) -[2023-09-19 11:17:05,298][41246] Updated weights for policy 0, policy_version 1560 (0.0011) -[2023-09-19 11:17:10,197][40303] Fps is (10 sec: 7372.9, 60 sec: 6963.2, 300 sec: 7063.3). Total num frames: 1609728. Throughput: 0: 3560.3, 1: 3559.7. Samples: 1572972. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:17:10,198][40303] Avg episode reward: [(0, '152679.736'), (1, '148353.321')] -[2023-09-19 11:17:10,199][41188] Saving new best policy, reward=148353.321! -[2023-09-19 11:17:15,197][40303] Fps is (10 sec: 7372.9, 60 sec: 7099.7, 300 sec: 7052.2). Total num frames: 1642496. Throughput: 0: 3533.3, 1: 3532.9. Samples: 1617104. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:17:15,198][40303] Avg episode reward: [(0, '152608.632'), (1, '147785.111')] -[2023-09-19 11:17:15,204][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001624_831488.pth... -[2023-09-19 11:17:15,204][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001584_811008.pth... -[2023-09-19 11:17:15,211][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001376_704512.pth -[2023-09-19 11:17:15,212][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001416_724992.pth -[2023-09-19 11:17:17,037][41271] Updated weights for policy 1, policy_version 1600 (0.0014) -[2023-09-19 11:17:17,037][41246] Updated weights for policy 0, policy_version 1640 (0.0013) -[2023-09-19 11:17:20,198][40303] Fps is (10 sec: 6553.5, 60 sec: 7099.7, 300 sec: 7041.6). Total num frames: 1675264. Throughput: 0: 3521.5, 1: 3520.4. Samples: 1654740. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:20,199][40303] Avg episode reward: [(0, '153339.116'), (1, '146902.121')] -[2023-09-19 11:17:25,198][40303] Fps is (10 sec: 5734.3, 60 sec: 6826.7, 300 sec: 6997.3). Total num frames: 1699840. Throughput: 0: 3454.4, 1: 3454.3. Samples: 1671790. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:25,199][40303] Avg episode reward: [(0, '153197.923'), (1, '148121.368')] -[2023-09-19 11:17:29,838][41246] Updated weights for policy 0, policy_version 1720 (0.0013) -[2023-09-19 11:17:29,839][41271] Updated weights for policy 1, policy_version 1680 (0.0014) -[2023-09-19 11:17:30,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6963.2, 300 sec: 7021.7). Total num frames: 1740800. Throughput: 0: 3465.9, 1: 3466.4. Samples: 1713704. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:30,199][40303] Avg episode reward: [(0, '149499.415'), (1, '148156.690')] -[2023-09-19 11:17:30,206][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001720_880640.pth... -[2023-09-19 11:17:30,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001680_860160.pth... -[2023-09-19 11:17:30,213][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001520_778240.pth -[2023-09-19 11:17:30,215][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001480_757760.pth -[2023-09-19 11:17:35,198][40303] Fps is (10 sec: 7372.8, 60 sec: 6963.2, 300 sec: 7012.3). Total num frames: 1773568. Throughput: 0: 3480.4, 1: 3480.4. Samples: 1756542. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:17:35,199][40303] Avg episode reward: [(0, '148300.205'), (1, '148607.536')] -[2023-09-19 11:17:35,200][41188] Saving new best policy, reward=148607.536! -[2023-09-19 11:17:40,198][40303] Fps is (10 sec: 7372.8, 60 sec: 7099.7, 300 sec: 7035.5). Total num frames: 1814528. Throughput: 0: 3459.8, 1: 3460.8. Samples: 1777800. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:17:40,199][40303] Avg episode reward: [(0, '148265.013'), (1, '149557.427')] -[2023-09-19 11:17:40,201][41188] Saving new best policy, reward=149557.427! 
-[2023-09-19 11:17:41,639][41271] Updated weights for policy 1, policy_version 1760 (0.0011) -[2023-09-19 11:17:41,639][41246] Updated weights for policy 0, policy_version 1800 (0.0015) -[2023-09-19 11:17:45,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6826.7, 300 sec: 6994.7). Total num frames: 1839104. Throughput: 0: 3421.1, 1: 3421.1. Samples: 1817234. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:17:45,199][40303] Avg episode reward: [(0, '148265.013'), (1, '149557.427')] -[2023-09-19 11:17:45,233][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001784_913408.pth... -[2023-09-19 11:17:45,236][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001584_811008.pth -[2023-09-19 11:17:45,238][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001824_933888.pth... -[2023-09-19 11:17:45,242][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001624_831488.pth -[2023-09-19 11:17:50,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6963.2, 300 sec: 7017.3). Total num frames: 1880064. Throughput: 0: 3389.4, 1: 3389.2. Samples: 1857766. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:17:50,199][40303] Avg episode reward: [(0, '148074.607'), (1, '152409.695')] -[2023-09-19 11:17:50,201][41188] Saving new best policy, reward=152409.695! -[2023-09-19 11:17:53,620][41246] Updated weights for policy 0, policy_version 1880 (0.0014) -[2023-09-19 11:17:53,621][41271] Updated weights for policy 1, policy_version 1840 (0.0015) -[2023-09-19 11:17:55,198][40303] Fps is (10 sec: 7372.9, 60 sec: 6963.2, 300 sec: 7008.7). Total num frames: 1912832. Throughput: 0: 3389.5, 1: 3390.0. Samples: 1878050. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:17:55,199][40303] Avg episode reward: [(0, '148074.607'), (1, '152273.875')] -[2023-09-19 11:18:00,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6826.7, 300 sec: 7000.4). Total num frames: 1945600. Throughput: 0: 3345.4, 1: 3345.7. Samples: 1918202. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:00,198][40303] Avg episode reward: [(0, '153050.644'), (1, '152274.335')] -[2023-09-19 11:18:00,206][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001880_962560.pth... -[2023-09-19 11:18:00,206][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001920_983040.pth... -[2023-09-19 11:18:00,212][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001680_860160.pth -[2023-09-19 11:18:00,216][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001720_880640.pth -[2023-09-19 11:18:05,198][40303] Fps is (10 sec: 6553.7, 60 sec: 6826.7, 300 sec: 6992.5). Total num frames: 1978368. Throughput: 0: 3388.8, 1: 3389.7. Samples: 1959770. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:05,199][40303] Avg episode reward: [(0, '154101.029'), (1, '152166.485')] -[2023-09-19 11:18:05,200][41187] Saving new best policy, reward=154101.029! -[2023-09-19 11:18:05,611][41246] Updated weights for policy 0, policy_version 1960 (0.0016) -[2023-09-19 11:18:05,611][41271] Updated weights for policy 1, policy_version 1920 (0.0015) -[2023-09-19 11:18:10,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6984.8). Total num frames: 2011136. Throughput: 0: 3449.6, 1: 3449.8. Samples: 1982264. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:18:10,199][40303] Avg episode reward: [(0, '156034.676'), (1, '151366.137')] -[2023-09-19 11:18:10,200][41187] Saving new best policy, reward=156034.676! -[2023-09-19 11:18:15,198][40303] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6977.3). Total num frames: 2043904. Throughput: 0: 3381.2, 1: 3381.2. Samples: 2018014. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:15,199][40303] Avg episode reward: [(0, '155164.894'), (1, '150838.954')] -[2023-09-19 11:18:15,207][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002016_1032192.pth... 
-[2023-09-19 11:18:15,207][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001976_1011712.pth... -[2023-09-19 11:18:15,217][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001824_933888.pth -[2023-09-19 11:18:15,217][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001784_913408.pth -[2023-09-19 11:18:18,102][41246] Updated weights for policy 0, policy_version 2040 (0.0012) -[2023-09-19 11:18:18,102][41271] Updated weights for policy 1, policy_version 2000 (0.0012) -[2023-09-19 11:18:20,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6970.1). Total num frames: 2076672. Throughput: 0: 3371.1, 1: 3370.6. Samples: 2059918. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:18:20,199][40303] Avg episode reward: [(0, '155231.770'), (1, '150958.366')] -[2023-09-19 11:18:25,198][40303] Fps is (10 sec: 5734.5, 60 sec: 6690.1, 300 sec: 6997.9). Total num frames: 2101248. Throughput: 0: 3299.0, 1: 3299.1. Samples: 2074716. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:18:25,199][40303] Avg episode reward: [(0, '154273.430'), (1, '151002.496')] -[2023-09-19 11:18:30,198][40303] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6997.9). Total num frames: 2134016. Throughput: 0: 3270.6, 1: 3270.6. Samples: 2111590. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:30,198][40303] Avg episode reward: [(0, '155007.997'), (1, '150084.194')] -[2023-09-19 11:18:30,205][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002064_1056768.pth... -[2023-09-19 11:18:30,205][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002104_1077248.pth... 
-[2023-09-19 11:18:30,211][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001880_962560.pth -[2023-09-19 11:18:30,214][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001920_983040.pth -[2023-09-19 11:18:32,089][41271] Updated weights for policy 1, policy_version 2080 (0.0011) -[2023-09-19 11:18:32,090][41246] Updated weights for policy 0, policy_version 2120 (0.0012) -[2023-09-19 11:18:35,197][40303] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6970.1). Total num frames: 2166784. Throughput: 0: 3205.9, 1: 3205.1. Samples: 2146258. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:35,198][40303] Avg episode reward: [(0, '155007.997'), (1, '150624.279')] -[2023-09-19 11:18:40,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6970.1). Total num frames: 2199552. Throughput: 0: 3184.1, 1: 3184.0. Samples: 2164612. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:18:40,199][40303] Avg episode reward: [(0, '145580.524'), (1, '151090.148')] -[2023-09-19 11:18:44,767][41271] Updated weights for policy 1, policy_version 2160 (0.0013) -[2023-09-19 11:18:44,768][41246] Updated weights for policy 0, policy_version 2200 (0.0014) -[2023-09-19 11:18:45,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6942.4). Total num frames: 2232320. Throughput: 0: 3191.7, 1: 3191.8. Samples: 2205460. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:18:45,199][40303] Avg episode reward: [(0, '145607.879'), (1, '152391.555')] -[2023-09-19 11:18:45,208][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002200_1126400.pth... -[2023-09-19 11:18:45,208][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002160_1105920.pth... 
-[2023-09-19 11:18:45,214][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001976_1011712.pth -[2023-09-19 11:18:45,217][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002016_1032192.pth -[2023-09-19 11:18:50,197][40303] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6942.4). Total num frames: 2265088. Throughput: 0: 3173.6, 1: 3173.4. Samples: 2245384. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:18:50,198][40303] Avg episode reward: [(0, '141531.277'), (1, '153070.074')] -[2023-09-19 11:18:50,199][41188] Saving new best policy, reward=153070.074! -[2023-09-19 11:18:55,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6942.4). Total num frames: 2297856. Throughput: 0: 3144.3, 1: 3144.4. Samples: 2265256. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:18:55,199][40303] Avg episode reward: [(0, '141531.277'), (1, '153134.366')] -[2023-09-19 11:18:55,200][41188] Saving new best policy, reward=153134.366! -[2023-09-19 11:18:57,266][41271] Updated weights for policy 1, policy_version 2240 (0.0014) -[2023-09-19 11:18:57,267][41246] Updated weights for policy 0, policy_version 2280 (0.0013) -[2023-09-19 11:19:00,198][40303] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6914.6). Total num frames: 2330624. Throughput: 0: 3181.9, 1: 3181.2. Samples: 2304352. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:19:00,198][40303] Avg episode reward: [(0, '140084.973'), (1, '153499.154')] -[2023-09-19 11:19:00,205][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002296_1175552.pth... -[2023-09-19 11:19:00,206][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002256_1155072.pth... -[2023-09-19 11:19:00,213][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002064_1056768.pth -[2023-09-19 11:19:00,213][41188] Saving new best policy, reward=153499.154! 
-[2023-09-19 11:19:00,214][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002104_1077248.pth -[2023-09-19 11:19:05,197][40303] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6914.6). Total num frames: 2363392. Throughput: 0: 3174.0, 1: 3174.3. Samples: 2345590. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:19:05,198][40303] Avg episode reward: [(0, '140163.997'), (1, '154850.945')] -[2023-09-19 11:19:05,199][41188] Saving new best policy, reward=154850.945! -[2023-09-19 11:19:09,431][41271] Updated weights for policy 1, policy_version 2320 (0.0012) -[2023-09-19 11:19:09,432][41246] Updated weights for policy 0, policy_version 2360 (0.0015) -[2023-09-19 11:19:10,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6886.8). Total num frames: 2396160. Throughput: 0: 3226.4, 1: 3226.4. Samples: 2365092. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:19:10,199][40303] Avg episode reward: [(0, '146252.432'), (1, '155854.149')] -[2023-09-19 11:19:10,200][41188] Saving new best policy, reward=155854.149! -[2023-09-19 11:19:15,198][40303] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6886.8). Total num frames: 2428928. Throughput: 0: 3251.3, 1: 3251.4. Samples: 2404212. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:19:15,198][40303] Avg episode reward: [(0, '146450.850'), (1, '155450.882')] -[2023-09-19 11:19:15,205][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002392_1224704.pth... -[2023-09-19 11:19:15,206][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002352_1204224.pth... -[2023-09-19 11:19:15,216][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002200_1126400.pth -[2023-09-19 11:19:15,217][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002160_1105920.pth -[2023-09-19 11:19:16,564][40303] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 40303], exiting... 
-[2023-09-19 11:19:16,565][40303] Runner profile tree view: -main_loop: 357.5508 -[2023-09-19 11:19:16,566][41187] Stopping Batcher_0... -[2023-09-19 11:19:16,566][41187] Loop batcher_evt_loop terminating... -[2023-09-19 11:19:16,566][40303] Collected {0: 1228800, 1: 1208320}, FPS: 6758.9 -[2023-09-19 11:19:16,566][41188] Stopping Batcher_1... -[2023-09-19 11:19:16,566][41188] Loop batcher_evt_loop terminating... -[2023-09-19 11:19:16,567][41188] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002360_1208320.pth... -[2023-09-19 11:19:16,567][41287] Stopping RolloutWorker_w4... -[2023-09-19 11:19:16,567][41187] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002400_1228800.pth... -[2023-09-19 11:19:16,567][41287] Loop rollout_proc4_evt_loop terminating... -[2023-09-19 11:19:16,569][41278] Stopping RolloutWorker_w2... -[2023-09-19 11:19:16,569][41276] Stopping RolloutWorker_w1... -[2023-09-19 11:19:16,569][41278] Loop rollout_proc2_evt_loop terminating... -[2023-09-19 11:19:16,569][41276] Loop rollout_proc1_evt_loop terminating... -[2023-09-19 11:19:16,570][41291] Stopping RolloutWorker_w7... -[2023-09-19 11:19:16,570][41291] Loop rollout_proc7_evt_loop terminating... -[2023-09-19 11:19:16,570][41272] Stopping RolloutWorker_w0... -[2023-09-19 11:19:16,570][41272] Loop rollout_proc0_evt_loop terminating... -[2023-09-19 11:19:16,571][41290] Stopping RolloutWorker_w6... -[2023-09-19 11:19:16,571][41290] Loop rollout_proc6_evt_loop terminating... -[2023-09-19 11:19:16,571][41284] Stopping RolloutWorker_w3... -[2023-09-19 11:19:16,571][41188] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002256_1155072.pth -[2023-09-19 11:19:16,571][41284] Loop rollout_proc3_evt_loop terminating... -[2023-09-19 11:19:16,571][41292] Stopping RolloutWorker_w5... -[2023-09-19 11:19:16,572][41188] Stopping LearnerWorker_p1... -[2023-09-19 11:19:16,572][41292] Loop rollout_proc5_evt_loop terminating... 
-[2023-09-19 11:19:16,572][41188] Loop learner_proc1_evt_loop terminating... -[2023-09-19 11:19:16,575][41187] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002296_1175552.pth -[2023-09-19 11:19:16,576][41187] Stopping LearnerWorker_p0... -[2023-09-19 11:19:16,576][41187] Loop learner_proc0_evt_loop terminating... -[2023-09-19 11:19:16,580][41246] Weights refcount: 2 0 -[2023-09-19 11:19:16,581][41246] Stopping InferenceWorker_p0-w0... -[2023-09-19 11:19:16,581][41246] Loop inference_proc0-0_evt_loop terminating... -[2023-09-19 11:19:16,583][41271] Weights refcount: 2 0 -[2023-09-19 11:19:16,584][41271] Stopping InferenceWorker_p1-w0... -[2023-09-19 11:19:16,584][41271] Loop inference_proc1-0_evt_loop terminating... -[2023-09-19 11:19:40,691][72530] Saving configuration to ./train_dir/Standup/config.json... -[2023-09-19 11:19:40,693][72530] Rollout worker 0 uses device cpu -[2023-09-19 11:19:40,694][72530] Rollout worker 1 uses device cpu -[2023-09-19 11:19:40,694][72530] Rollout worker 2 uses device cpu -[2023-09-19 11:19:40,695][72530] Rollout worker 3 uses device cpu -[2023-09-19 11:19:40,695][72530] Rollout worker 4 uses device cpu -[2023-09-19 11:19:40,695][72530] Rollout worker 5 uses device cpu -[2023-09-19 11:19:40,696][72530] Rollout worker 6 uses device cpu -[2023-09-19 11:19:40,696][72530] Rollout worker 7 uses device cpu -[2023-09-19 11:19:40,696][72530] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 -[2023-09-19 11:19:40,753][72530] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:19:40,753][72530] InferenceWorker_p0-w0: min num requests: 1 -[2023-09-19 11:19:40,757][72530] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:19:40,757][72530] InferenceWorker_p1-w0: min num requests: 1 -[2023-09-19 11:19:40,783][72530] Starting all processes... 
-[2023-09-19 11:19:40,783][72530] Starting process learner_proc0 -[2023-09-19 11:19:40,786][72530] Starting process learner_proc1 -[2023-09-19 11:19:40,832][72530] Starting all processes... -[2023-09-19 11:19:40,838][72530] Starting process inference_proc0-0 -[2023-09-19 11:19:40,838][72530] Starting process inference_proc1-0 -[2023-09-19 11:19:40,838][72530] Starting process rollout_proc0 -[2023-09-19 11:19:40,839][72530] Starting process rollout_proc1 -[2023-09-19 11:19:40,839][72530] Starting process rollout_proc2 -[2023-09-19 11:19:40,839][72530] Starting process rollout_proc3 -[2023-09-19 11:19:40,840][72530] Starting process rollout_proc4 -[2023-09-19 11:19:40,843][72530] Starting process rollout_proc5 -[2023-09-19 11:19:40,843][72530] Starting process rollout_proc6 -[2023-09-19 11:19:40,844][72530] Starting process rollout_proc7 -[2023-09-19 11:19:42,632][73131] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:19:42,632][73131] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 -[2023-09-19 11:19:42,651][73131] Num visible devices: 1 -[2023-09-19 11:19:42,668][73131] Starting seed is not provided -[2023-09-19 11:19:42,669][73131] Using GPUs [0] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:19:42,669][73131] Initializing actor-critic model on device cuda:0 -[2023-09-19 11:19:42,670][73131] RunningMeanStd input shape: (376,) -[2023-09-19 11:19:42,670][73131] RunningMeanStd input shape: (1,) -[2023-09-19 11:19:42,680][73130] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:19:42,681][73130] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 -[2023-09-19 11:19:42,683][73226] Worker 7 uses CPU cores [28, 29, 30, 31] -[2023-09-19 11:19:42,694][73220] Worker 2 uses CPU cores [8, 9, 10, 11] -[2023-09-19 11:19:42,701][73130] Num visible devices: 1 -[2023-09-19 11:19:42,723][73222] Worker 6 uses CPU cores [24, 25, 26, 27] -[2023-09-19 
11:19:42,725][73130] Starting seed is not provided -[2023-09-19 11:19:42,725][73130] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:19:42,725][73130] Initializing actor-critic model on device cuda:0 -[2023-09-19 11:19:42,725][73130] RunningMeanStd input shape: (376,) -[2023-09-19 11:19:42,726][73130] RunningMeanStd input shape: (1,) -[2023-09-19 11:19:42,731][73224] Worker 4 uses CPU cores [16, 17, 18, 19] -[2023-09-19 11:19:42,733][73131] Created Actor Critic model with architecture: -[2023-09-19 11:19:42,734][73131] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): MultiInputEncoder( - (encoders): ModuleDict( - (obs): MlpEncoder( - (mlp_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=Tanh) - (2): RecursiveScriptModule(original_name=Linear) - (3): RecursiveScriptModule(original_name=Tanh) - ) - ) - ) - ) - (core): ModelCoreIdentity() - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=64, out_features=1, bias=True) - (action_parameterization): ActionParameterizationContinuousNonAdaptiveStddev( - (distribution_linear): Linear(in_features=64, out_features=17, bias=True) - ) -) -[2023-09-19 11:19:42,740][73219] Using GPUs [1] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:19:42,740][73219] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 -[2023-09-19 11:19:42,756][73145] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:19:42,756][73145] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 -[2023-09-19 11:19:42,787][73145] Num visible devices: 1 -[2023-09-19 
11:19:42,787][73219] Num visible devices: 1 -[2023-09-19 11:19:42,797][73130] Created Actor Critic model with architecture: -[2023-09-19 11:19:42,797][73130] ActorCriticSharedWeights( - (obs_normalizer): ObservationNormalizer( - (running_mean_std): RunningMeanStdDictInPlace( - (running_mean_std): ModuleDict( - (obs): RunningMeanStdInPlace() - ) - ) - ) - (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) - (encoder): MultiInputEncoder( - (encoders): ModuleDict( - (obs): MlpEncoder( - (mlp_head): RecursiveScriptModule( - original_name=Sequential - (0): RecursiveScriptModule(original_name=Linear) - (1): RecursiveScriptModule(original_name=Tanh) - (2): RecursiveScriptModule(original_name=Linear) - (3): RecursiveScriptModule(original_name=Tanh) - ) - ) - ) - ) - (core): ModelCoreIdentity() - (decoder): MlpDecoder( - (mlp): Identity() - ) - (critic_linear): Linear(in_features=64, out_features=1, bias=True) - (action_parameterization): ActionParameterizationContinuousNonAdaptiveStddev( - (distribution_linear): Linear(in_features=64, out_features=17, bias=True) - ) -) -[2023-09-19 11:19:42,798][73221] Worker 1 uses CPU cores [4, 5, 6, 7] -[2023-09-19 11:19:42,939][73223] Worker 3 uses CPU cores [12, 13, 14, 15] -[2023-09-19 11:19:43,116][73229] Worker 5 uses CPU cores [20, 21, 22, 23] -[2023-09-19 11:19:43,205][73218] Worker 0 uses CPU cores [0, 1, 2, 3] -[2023-09-19 11:19:43,373][73131] Using optimizer -[2023-09-19 11:19:43,373][73131] Loading state from checkpoint ./train_dir/Standup/checkpoint_p1/checkpoint_000002360_1208320.pth... -[2023-09-19 11:19:43,379][73131] Loading model from checkpoint -[2023-09-19 11:19:43,381][73131] Loaded experiment state at self.train_step=2360, self.env_steps=1208320 -[2023-09-19 11:19:43,382][73131] Initialized policy 1 weights for model version 2360 -[2023-09-19 11:19:43,383][73131] LearnerWorker_p1 finished initialization! 
-[2023-09-19 11:19:43,383][73131] Using GPUs [0] for process 1 (actually maps to GPUs [1]) -[2023-09-19 11:19:43,409][73130] Using optimizer -[2023-09-19 11:19:43,410][73130] Loading state from checkpoint ./train_dir/Standup/checkpoint_p0/checkpoint_000002400_1228800.pth... -[2023-09-19 11:19:43,416][73130] Loading model from checkpoint -[2023-09-19 11:19:43,419][73130] Loaded experiment state at self.train_step=2400, self.env_steps=1228800 -[2023-09-19 11:19:43,419][73130] Initialized policy 0 weights for model version 2400 -[2023-09-19 11:19:43,427][73130] LearnerWorker_p0 finished initialization! -[2023-09-19 11:19:43,427][73130] Using GPUs [0] for process 0 (actually maps to GPUs [0]) -[2023-09-19 11:19:43,971][73219] RunningMeanStd input shape: (376,) -[2023-09-19 11:19:43,971][73219] RunningMeanStd input shape: (1,) -[2023-09-19 11:19:43,987][73145] RunningMeanStd input shape: (376,) -[2023-09-19 11:19:43,987][73145] RunningMeanStd input shape: (1,) -[2023-09-19 11:19:44,004][72530] Inference worker 1-0 is ready! -[2023-09-19 11:19:44,021][72530] Inference worker 0-0 is ready! -[2023-09-19 11:19:44,022][72530] All inference workers are ready! Signal rollout workers to start! -[2023-09-19 11:19:44,118][73223] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,119][73223] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,120][73229] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,121][73229] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,122][73221] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,123][73221] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,128][73226] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,128][73226] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,132][73220] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,133][73220] Decorrelating experience for 64 frames... 
-[2023-09-19 11:19:44,141][73222] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,142][73222] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,157][73224] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,158][73224] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,161][73218] Decorrelating experience for 0 frames... -[2023-09-19 11:19:44,162][73218] Decorrelating experience for 64 frames... -[2023-09-19 11:19:44,168][73223] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,171][73229] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,175][73221] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,181][73226] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,199][73220] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,200][73222] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,222][73224] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,243][73218] Decorrelating experience for 128 frames... -[2023-09-19 11:19:44,273][73223] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,277][73229] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,281][73221] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,284][73226] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,299][73222] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,304][73220] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,319][73224] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,368][73218] Decorrelating experience for 192 frames... -[2023-09-19 11:19:44,444][73223] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,447][73229] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,457][73226] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,462][73221] Decorrelating experience for 256 frames... 
-[2023-09-19 11:19:44,476][73222] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,492][73220] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,499][73224] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,575][73218] Decorrelating experience for 256 frames... -[2023-09-19 11:19:44,651][73223] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,657][73229] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,666][73226] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,681][73221] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,689][73222] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,707][73220] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,714][73224] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,803][73218] Decorrelating experience for 320 frames... -[2023-09-19 11:19:44,907][73223] Decorrelating experience for 384 frames... -[2023-09-19 11:19:44,915][73229] Decorrelating experience for 384 frames... -[2023-09-19 11:19:44,916][73226] Decorrelating experience for 384 frames... -[2023-09-19 11:19:44,933][73221] Decorrelating experience for 384 frames... -[2023-09-19 11:19:44,939][73222] Decorrelating experience for 384 frames... -[2023-09-19 11:19:44,969][73224] Decorrelating experience for 384 frames... -[2023-09-19 11:19:45,006][73220] Decorrelating experience for 384 frames... -[2023-09-19 11:19:45,067][73218] Decorrelating experience for 384 frames... -[2023-09-19 11:19:45,209][73223] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,221][73226] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,227][73229] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,253][73222] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,261][73221] Decorrelating experience for 448 frames... 
-[2023-09-19 11:19:45,302][73224] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,325][73220] Decorrelating experience for 448 frames... -[2023-09-19 11:19:45,392][73218] Decorrelating experience for 448 frames... -[2023-09-19 11:19:47,043][72530] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 2437120. Throughput: 0: nan, 1: nan. Samples: 5818. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) -[2023-09-19 11:19:52,043][72530] Fps is (10 sec: 3276.7, 60 sec: 3276.7, 300 sec: 3276.7). Total num frames: 2453504. Throughput: 0: 1049.2, 1: 1064.0. Samples: 16384. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:19:52,340][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002424_1241088.pth... -[2023-09-19 11:19:52,343][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002384_1220608.pth... -[2023-09-19 11:19:52,345][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002392_1224704.pth -[2023-09-19 11:19:52,348][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002352_1204224.pth -[2023-09-19 11:19:57,043][72530] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4096.0). Total num frames: 2478080. Throughput: 0: 1910.2, 1: 1911.2. Samples: 44032. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:19:57,045][72530] Avg episode reward: [(0, '67743.941'), (1, '43579.059')] -[2023-09-19 11:20:00,740][72530] Heartbeat connected on Batcher_0 -[2023-09-19 11:20:00,743][72530] Heartbeat connected on LearnerWorker_p0 -[2023-09-19 11:20:00,746][72530] Heartbeat connected on Batcher_1 -[2023-09-19 11:20:00,749][72530] Heartbeat connected on LearnerWorker_p1 -[2023-09-19 11:20:00,755][72530] Heartbeat connected on InferenceWorker_p0-w0 -[2023-09-19 11:20:00,760][72530] Heartbeat connected on InferenceWorker_p1-w0 -[2023-09-19 11:20:00,761][72530] Heartbeat connected on RolloutWorker_w0 -[2023-09-19 11:20:00,767][72530] Heartbeat connected on RolloutWorker_w2 -[2023-09-19 11:20:00,769][72530] Heartbeat connected on RolloutWorker_w3 -[2023-09-19 11:20:00,771][72530] Heartbeat connected on RolloutWorker_w1 -[2023-09-19 11:20:00,772][72530] Heartbeat connected on RolloutWorker_w4 -[2023-09-19 11:20:00,779][72530] Heartbeat connected on RolloutWorker_w6 -[2023-09-19 11:20:00,782][72530] Heartbeat connected on RolloutWorker_w7 -[2023-09-19 11:20:00,783][72530] Heartbeat connected on RolloutWorker_w5 -[2023-09-19 11:20:02,043][72530] Fps is (10 sec: 6144.1, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 2514944. Throughput: 0: 1864.9, 1: 1865.5. Samples: 61774. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:20:02,043][72530] Avg episode reward: [(0, '99962.574'), (1, '74154.644')] -[2023-09-19 11:20:02,049][73145] Updated weights for policy 0, policy_version 2480 (0.0015) -[2023-09-19 11:20:02,049][73219] Updated weights for policy 1, policy_version 2440 (0.0013) -[2023-09-19 11:20:07,043][72530] Fps is (10 sec: 7372.7, 60 sec: 5734.3, 300 sec: 5734.3). Total num frames: 2551808. Throughput: 0: 2437.4, 1: 2437.7. Samples: 103320. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:20:07,045][72530] Avg episode reward: [(0, '117948.527'), (1, '103367.832')] -[2023-09-19 11:20:07,048][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002512_1286144.pth... -[2023-09-19 11:20:07,048][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002472_1265664.pth... -[2023-09-19 11:20:07,055][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002400_1228800.pth -[2023-09-19 11:20:07,057][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002360_1208320.pth -[2023-09-19 11:20:12,043][72530] Fps is (10 sec: 6963.0, 60 sec: 5898.2, 300 sec: 5898.2). Total num frames: 2584576. Throughput: 0: 2806.0, 1: 2806.1. Samples: 146122. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:20:12,045][72530] Avg episode reward: [(0, '124390.108'), (1, '110149.801')] -[2023-09-19 11:20:13,613][73219] Updated weights for policy 1, policy_version 2520 (0.0012) -[2023-09-19 11:20:13,613][73145] Updated weights for policy 0, policy_version 2560 (0.0014) -[2023-09-19 11:20:17,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 6007.5). Total num frames: 2617344. Throughput: 0: 3030.0, 1: 3030.3. Samples: 187626. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:20:17,047][72530] Avg episode reward: [(0, '133247.381'), (1, '124425.608')] -[2023-09-19 11:20:22,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6085.4, 300 sec: 6085.4). Total num frames: 2650112. Throughput: 0: 2890.3, 1: 2890.4. Samples: 208144. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:20:22,044][72530] Avg episode reward: [(0, '134256.432'), (1, '126597.151')] -[2023-09-19 11:20:22,047][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002568_1314816.pth... -[2023-09-19 11:20:22,049][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002608_1335296.pth... 
-[2023-09-19 11:20:22,056][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002384_1220608.pth
-[2023-09-19 11:20:22,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002424_1241088.pth
-[2023-09-19 11:20:26,328][73219] Updated weights for policy 1, policy_version 2600 (0.0014)
-[2023-09-19 11:20:26,329][73145] Updated weights for policy 0, policy_version 2640 (0.0014)
-[2023-09-19 11:20:27,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6144.0). Total num frames: 2682880. Throughput: 0: 2979.6, 1: 2979.8. Samples: 244194. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:20:27,044][72530] Avg episode reward: [(0, '152920.116'), (1, '149441.210')]
-[2023-09-19 11:20:32,043][72530] Fps is (10 sec: 6553.8, 60 sec: 6189.5, 300 sec: 6189.5). Total num frames: 2715648. Throughput: 0: 3119.2, 1: 3119.5. Samples: 286558. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:20:32,044][72530] Avg episode reward: [(0, '154007.595'), (1, '152818.612')]
-[2023-09-19 11:20:37,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6225.9, 300 sec: 6225.9). Total num frames: 2748416. Throughput: 0: 3203.5, 1: 3202.2. Samples: 304640. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:20:37,044][72530] Avg episode reward: [(0, '152855.520'), (1, '157930.094')]
-[2023-09-19 11:20:37,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002664_1363968.pth...
-[2023-09-19 11:20:37,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002704_1384448.pth...
-[2023-09-19 11:20:37,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002472_1265664.pth
-[2023-09-19 11:20:37,059][73131] Saving new best policy, reward=157930.094!
-[2023-09-19 11:20:37,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002512_1286144.pth
-[2023-09-19 11:20:39,045][73219] Updated weights for policy 1, policy_version 2680 (0.0013)
-[2023-09-19 11:20:39,046][73145] Updated weights for policy 0, policy_version 2720 (0.0015)
-[2023-09-19 11:20:42,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6255.7, 300 sec: 6255.7). Total num frames: 2781184. Throughput: 0: 3306.8, 1: 3306.8. Samples: 341646. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:20:42,044][72530] Avg episode reward: [(0, '151664.402'), (1, '158564.220')]
-[2023-09-19 11:20:42,045][73131] Saving new best policy, reward=158564.220!
-[2023-09-19 11:20:47,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6280.5). Total num frames: 2813952. Throughput: 0: 3562.3, 1: 3562.2. Samples: 382380. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:20:47,044][72530] Avg episode reward: [(0, '145105.303'), (1, '155856.908')]
-[2023-09-19 11:20:51,525][73145] Updated weights for policy 0, policy_version 2800 (0.0016)
-[2023-09-19 11:20:51,526][73219] Updated weights for policy 1, policy_version 2760 (0.0014)
-[2023-09-19 11:20:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6301.5). Total num frames: 2846720. Throughput: 0: 3312.4, 1: 3313.6. Samples: 401490. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:20:52,044][72530] Avg episode reward: [(0, '144121.200'), (1, '155856.908')]
-[2023-09-19 11:20:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002800_1433600.pth...
-[2023-09-19 11:20:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002760_1413120.pth...
-[2023-09-19 11:20:52,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002608_1335296.pth
-[2023-09-19 11:20:52,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002568_1314816.pth
-[2023-09-19 11:20:57,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6319.5). Total num frames: 2879488. Throughput: 0: 3240.4, 1: 3240.4. Samples: 437758. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:20:57,044][72530] Avg episode reward: [(0, '142988.097'), (1, '155707.131')]
-[2023-09-19 11:21:02,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6621.8, 300 sec: 6335.1). Total num frames: 2912256. Throughput: 0: 3233.5, 1: 3233.3. Samples: 478634. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:21:02,045][72530] Avg episode reward: [(0, '144052.838'), (1, '155707.131')]
-[2023-09-19 11:21:04,172][73219] Updated weights for policy 1, policy_version 2840 (0.0013)
-[2023-09-19 11:21:04,172][73145] Updated weights for policy 0, policy_version 2880 (0.0010)
-[2023-09-19 11:21:07,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6348.8). Total num frames: 2945024. Throughput: 0: 3220.4, 1: 3220.5. Samples: 497982. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:21:07,044][72530] Avg episode reward: [(0, '145667.627'), (1, '156028.379')]
-[2023-09-19 11:21:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002856_1462272.pth...
-[2023-09-19 11:21:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002896_1482752.pth...
-[2023-09-19 11:21:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002664_1363968.pth
-[2023-09-19 11:21:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002704_1384448.pth
-[2023-09-19 11:21:12,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6360.8). Total num frames: 2977792. Throughput: 0: 3292.6, 1: 3293.6. Samples: 540574. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:21:12,045][72530] Avg episode reward: [(0, '147376.857'), (1, '156100.448')]
-[2023-09-19 11:21:16,935][73219] Updated weights for policy 1, policy_version 2920 (0.0012)
-[2023-09-19 11:21:16,935][73145] Updated weights for policy 0, policy_version 2960 (0.0013)
-[2023-09-19 11:21:17,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6371.5). Total num frames: 3010560. Throughput: 0: 3192.5, 1: 3192.8. Samples: 573896. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:21:17,044][72530] Avg episode reward: [(0, '153646.249'), (1, '157825.450')]
-[2023-09-19 11:21:22,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6294.9). Total num frames: 3035136. Throughput: 0: 3218.0, 1: 3217.9. Samples: 594256. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:21:22,044][72530] Avg episode reward: [(0, '156927.936'), (1, '157825.450')]
-[2023-09-19 11:21:22,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002944_1507328.pth...
-[2023-09-19 11:21:22,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002984_1527808.pth...
-[2023-09-19 11:21:22,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002800_1433600.pth
-[2023-09-19 11:21:22,060][73130] Saving new best policy, reward=156927.936!
-[2023-09-19 11:21:22,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002760_1413120.pth
-[2023-09-19 11:21:27,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6307.8). Total num frames: 3067904. Throughput: 0: 3210.2, 1: 3210.0. Samples: 630558. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:21:27,045][72530] Avg episode reward: [(0, '158809.152'), (1, '159261.146')]
-[2023-09-19 11:21:27,046][73130] Saving new best policy, reward=158809.152!
-[2023-09-19 11:21:27,046][73131] Saving new best policy, reward=159261.146!
-[2023-09-19 11:21:29,937][73145] Updated weights for policy 0, policy_version 3040 (0.0014)
-[2023-09-19 11:21:29,937][73219] Updated weights for policy 1, policy_version 3000 (0.0015)
-[2023-09-19 11:21:32,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6319.5). Total num frames: 3100672. Throughput: 0: 3202.6, 1: 3202.7. Samples: 670618. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:21:32,044][72530] Avg episode reward: [(0, '158669.703'), (1, '159261.146')]
-[2023-09-19 11:21:37,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6330.2). Total num frames: 3133440. Throughput: 0: 3194.8, 1: 3193.6. Samples: 688972. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:21:37,044][72530] Avg episode reward: [(0, '160207.000'), (1, '158900.446')]
-[2023-09-19 11:21:37,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003040_1556480.pth...
-[2023-09-19 11:21:37,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003080_1576960.pth...
-[2023-09-19 11:21:37,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002856_1462272.pth
-[2023-09-19 11:21:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002896_1482752.pth
-[2023-09-19 11:21:37,063][73130] Saving new best policy, reward=160207.000!
-[2023-09-19 11:21:42,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6339.9). Total num frames: 3166208. Throughput: 0: 3201.6, 1: 3201.7. Samples: 725908. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:21:42,044][72530] Avg episode reward: [(0, '160167.920'), (1, '158900.446')]
-[2023-09-19 11:21:43,450][73219] Updated weights for policy 1, policy_version 3080 (0.0013)
-[2023-09-19 11:21:43,451][73145] Updated weights for policy 0, policy_version 3120 (0.0018)
-[2023-09-19 11:21:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6280.5). Total num frames: 3190784. Throughput: 0: 3112.9, 1: 3113.1. Samples: 758804. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:21:47,045][72530] Avg episode reward: [(0, '160649.680'), (1, '159123.148')]
-[2023-09-19 11:21:47,046][73130] Saving new best policy, reward=160649.680!
-[2023-09-19 11:21:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6291.4). Total num frames: 3223552. Throughput: 0: 3114.0, 1: 3115.4. Samples: 778308. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:21:52,044][72530] Avg episode reward: [(0, '159415.266'), (1, '160065.227')]
-[2023-09-19 11:21:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003168_1622016.pth...
-[2023-09-19 11:21:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003128_1601536.pth...
-[2023-09-19 11:21:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002944_1507328.pth
-[2023-09-19 11:21:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002984_1527808.pth
-[2023-09-19 11:21:52,062][73131] Saving new best policy, reward=160065.227!
-[2023-09-19 11:21:56,561][73145] Updated weights for policy 0, policy_version 3200 (0.0015)
-[2023-09-19 11:21:56,561][73219] Updated weights for policy 1, policy_version 3160 (0.0015)
-[2023-09-19 11:21:57,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6301.5). Total num frames: 3256320. Throughput: 0: 3061.8, 1: 3060.8. Samples: 816090. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:21:57,044][72530] Avg episode reward: [(0, '159883.769'), (1, '159603.128')]
-[2023-09-19 11:22:02,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6310.9). Total num frames: 3289088. Throughput: 0: 3110.9, 1: 3110.3. Samples: 853850. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:22:02,044][72530] Avg episode reward: [(0, '160168.162'), (1, '158805.251')]
-[2023-09-19 11:22:07,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6319.5). Total num frames: 3321856. Throughput: 0: 3100.6, 1: 3100.8. Samples: 873320. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:22:07,044][72530] Avg episode reward: [(0, '159368.135'), (1, '158166.073')]
-[2023-09-19 11:22:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003224_1650688.pth...
-[2023-09-19 11:22:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003264_1671168.pth...
-[2023-09-19 11:22:07,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003040_1556480.pth
-[2023-09-19 11:22:07,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003080_1576960.pth
-[2023-09-19 11:22:09,339][73145] Updated weights for policy 0, policy_version 3280 (0.0016)
-[2023-09-19 11:22:09,339][73219] Updated weights for policy 1, policy_version 3240 (0.0013)
-[2023-09-19 11:22:12,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6327.6). Total num frames: 3354624. Throughput: 0: 3134.1, 1: 3134.3. Samples: 912634. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:12,044][72530] Avg episode reward: [(0, '154750.825'), (1, '157929.657')]
-[2023-09-19 11:22:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6280.5). Total num frames: 3379200. Throughput: 0: 3098.0, 1: 3098.2. Samples: 949448. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:17,045][72530] Avg episode reward: [(0, '152601.903'), (1, '153832.123')]
-[2023-09-19 11:22:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6289.3). Total num frames: 3411968. Throughput: 0: 3112.9, 1: 3113.0. Samples: 969138. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:22:22,044][72530] Avg episode reward: [(0, '152992.401'), (1, '152373.243')]
-[2023-09-19 11:22:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003312_1695744.pth...
-[2023-09-19 11:22:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003352_1716224.pth...
-[2023-09-19 11:22:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003128_1601536.pth
-[2023-09-19 11:22:22,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003168_1622016.pth
-[2023-09-19 11:22:22,748][73219] Updated weights for policy 1, policy_version 3320 (0.0013)
-[2023-09-19 11:22:22,749][73145] Updated weights for policy 0, policy_version 3360 (0.0014)
-[2023-09-19 11:22:27,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6297.6). Total num frames: 3444736. Throughput: 0: 3104.4, 1: 3104.3. Samples: 1005296. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:27,044][72530] Avg episode reward: [(0, '152409.297'), (1, '152400.063')]
-[2023-09-19 11:22:32,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6305.4). Total num frames: 3477504. Throughput: 0: 3183.3, 1: 3183.3. Samples: 1045302. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:22:32,044][72530] Avg episode reward: [(0, '150571.798'), (1, '152330.295')]
-[2023-09-19 11:22:35,267][73145] Updated weights for policy 0, policy_version 3440 (0.0013)
-[2023-09-19 11:22:35,268][73219] Updated weights for policy 1, policy_version 3400 (0.0014)
-[2023-09-19 11:22:37,043][72530] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6312.6). Total num frames: 3510272. Throughput: 0: 3180.8, 1: 3180.2. Samples: 1064554. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:37,044][72530] Avg episode reward: [(0, '150024.495'), (1, '153833.309')]
-[2023-09-19 11:22:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003408_1744896.pth...
-[2023-09-19 11:22:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003448_1765376.pth...
-[2023-09-19 11:22:37,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003264_1671168.pth
-[2023-09-19 11:22:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003224_1650688.pth
-[2023-09-19 11:22:42,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6319.5). Total num frames: 3543040. Throughput: 0: 3161.4, 1: 3161.5. Samples: 1100622. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:42,044][72530] Avg episode reward: [(0, '153738.816'), (1, '154391.331')]
-[2023-09-19 11:22:47,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6280.5). Total num frames: 3567616. Throughput: 0: 3163.8, 1: 3165.1. Samples: 1138652. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:47,044][72530] Avg episode reward: [(0, '154735.807'), (1, '155587.330')]
-[2023-09-19 11:22:48,507][73145] Updated weights for policy 0, policy_version 3520 (0.0012)
-[2023-09-19 11:22:48,508][73219] Updated weights for policy 1, policy_version 3480 (0.0013)
-[2023-09-19 11:22:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6287.9). Total num frames: 3600384. Throughput: 0: 3151.2, 1: 3151.2. Samples: 1156926. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:22:52,044][72530] Avg episode reward: [(0, '155558.003'), (1, '158444.382')]
-[2023-09-19 11:22:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003496_1789952.pth...
-[2023-09-19 11:22:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003536_1810432.pth...
-[2023-09-19 11:22:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003312_1695744.pth
-[2023-09-19 11:22:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003352_1716224.pth
-[2023-09-19 11:22:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6251.8). Total num frames: 3624960. Throughput: 0: 3076.0, 1: 3076.3. Samples: 1189490. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:22:57,044][72530] Avg episode reward: [(0, '156213.048'), (1, '159905.417')]
-[2023-09-19 11:23:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6259.5). Total num frames: 3657728. Throughput: 0: 3051.2, 1: 3051.2. Samples: 1224052. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:02,044][72530] Avg episode reward: [(0, '154952.583'), (1, '160034.837')]
-[2023-09-19 11:23:03,139][73219] Updated weights for policy 1, policy_version 3560 (0.0011)
-[2023-09-19 11:23:03,140][73145] Updated weights for policy 0, policy_version 3600 (0.0014)
-[2023-09-19 11:23:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6225.9). Total num frames: 3682304. Throughput: 0: 3014.6, 1: 3014.8. Samples: 1240462. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:23:07,045][72530] Avg episode reward: [(0, '154952.583'), (1, '160050.141')]
-[2023-09-19 11:23:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003576_1830912.pth...
-[2023-09-19 11:23:07,057][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003616_1851392.pth...
-[2023-09-19 11:23:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003448_1765376.pth
-[2023-09-19 11:23:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003408_1744896.pth
-[2023-09-19 11:23:12,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6233.9). Total num frames: 3715072. Throughput: 0: 3031.9, 1: 3032.3. Samples: 1278182. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:12,045][72530] Avg episode reward: [(0, '151313.141'), (1, '159602.087')]
-[2023-09-19 11:23:16,373][73219] Updated weights for policy 1, policy_version 3640 (0.0012)
-[2023-09-19 11:23:16,374][73145] Updated weights for policy 0, policy_version 3680 (0.0011)
-[2023-09-19 11:23:17,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6241.5). Total num frames: 3747840. Throughput: 0: 2998.0, 1: 2998.0. Samples: 1315120. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:17,044][72530] Avg episode reward: [(0, '151313.141'), (1, '159523.019')]
-[2023-09-19 11:23:22,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.8). Total num frames: 3780608. Throughput: 0: 3006.6, 1: 3007.0. Samples: 1335164. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:23:22,044][72530] Avg episode reward: [(0, '152570.175'), (1, '158236.314')]
-[2023-09-19 11:23:22,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003712_1900544.pth...
-[2023-09-19 11:23:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003672_1880064.pth...
-[2023-09-19 11:23:22,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003536_1810432.pth
-[2023-09-19 11:23:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003496_1789952.pth
-[2023-09-19 11:23:27,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6255.7). Total num frames: 3813376. Throughput: 0: 2986.7, 1: 2986.5. Samples: 1369418. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:27,045][72530] Avg episode reward: [(0, '152570.175'), (1, '158236.314')]
-[2023-09-19 11:23:29,330][73219] Updated weights for policy 1, policy_version 3720 (0.0013)
-[2023-09-19 11:23:29,331][73145] Updated weights for policy 0, policy_version 3760 (0.0014)
-[2023-09-19 11:23:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6225.9). Total num frames: 3837952. Throughput: 0: 2803.1, 1: 2802.0. Samples: 1390884. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:23:32,044][72530] Avg episode reward: [(0, '152939.553'), (1, '157598.636')]
-[2023-09-19 11:23:37,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6007.5, 300 sec: 6233.0). Total num frames: 3870720. Throughput: 0: 3009.5, 1: 3009.5. Samples: 1427778. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:23:37,044][72530] Avg episode reward: [(0, '152939.553'), (1, '157598.636')]
-[2023-09-19 11:23:37,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003800_1945600.pth...
-[2023-09-19 11:23:37,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003760_1925120.pth...
-[2023-09-19 11:23:37,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003616_1851392.pth
-[2023-09-19 11:23:37,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003576_1830912.pth
-[2023-09-19 11:23:42,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 6239.9). Total num frames: 3903488. Throughput: 0: 3062.4, 1: 3061.9. Samples: 1465082. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:42,044][72530] Avg episode reward: [(0, '155045.334'), (1, '157877.174')]
-[2023-09-19 11:23:42,591][73219] Updated weights for policy 1, policy_version 3800 (0.0013)
-[2023-09-19 11:23:42,592][73145] Updated weights for policy 0, policy_version 3840 (0.0013)
-[2023-09-19 11:23:47,043][72530] Fps is (10 sec: 5734.2, 60 sec: 6007.4, 300 sec: 6212.3). Total num frames: 3928064. Throughput: 0: 3055.4, 1: 3056.6. Samples: 1499094. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:47,044][72530] Avg episode reward: [(0, '155518.647'), (1, '157877.174')]
-[2023-09-19 11:23:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6219.2). Total num frames: 3960832. Throughput: 0: 3090.5, 1: 3090.4. Samples: 1518604. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:23:52,044][72530] Avg episode reward: [(0, '156419.731'), (1, '159794.628')]
-[2023-09-19 11:23:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003848_1970176.pth...
-[2023-09-19 11:23:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003888_1990656.pth...
-[2023-09-19 11:23:52,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003672_1880064.pth
-[2023-09-19 11:23:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003712_1900544.pth
-[2023-09-19 11:23:56,865][73219] Updated weights for policy 1, policy_version 3880 (0.0012)
-[2023-09-19 11:23:56,866][73145] Updated weights for policy 0, policy_version 3920 (0.0014)
-[2023-09-19 11:23:57,043][72530] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6225.9). Total num frames: 3993600. Throughput: 0: 3037.8, 1: 3037.7. Samples: 1551576. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:23:57,044][72530] Avg episode reward: [(0, '155910.674'), (1, '159794.628')]
-[2023-09-19 11:24:02,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.4, 300 sec: 6200.2). Total num frames: 4018176. Throughput: 0: 2986.6, 1: 2986.6. Samples: 1583914. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:02,045][72530] Avg episode reward: [(0, '155254.236'), (1, '160650.698')]
-[2023-09-19 11:24:02,046][73131] Saving new best policy, reward=160650.698!
-[2023-09-19 11:24:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6207.0). Total num frames: 4050944. Throughput: 0: 2980.8, 1: 2979.8. Samples: 1603390. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:07,044][72530] Avg episode reward: [(0, '155932.263'), (1, '160981.958')]
-[2023-09-19 11:24:07,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000003936_2015232.pth...
-[2023-09-19 11:24:07,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000003976_2035712.pth...
-[2023-09-19 11:24:07,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003800_1945600.pth
-[2023-09-19 11:24:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003760_1925120.pth
-[2023-09-19 11:24:07,062][73131] Saving new best policy, reward=160981.958!
-[2023-09-19 11:24:10,788][73145] Updated weights for policy 0, policy_version 4000 (0.0014)
-[2023-09-19 11:24:10,788][73219] Updated weights for policy 1, policy_version 3960 (0.0013)
-[2023-09-19 11:24:12,043][72530] Fps is (10 sec: 5734.6, 60 sec: 6007.5, 300 sec: 6182.6). Total num frames: 4075520. Throughput: 0: 3006.8, 1: 3006.8. Samples: 1640028. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:12,044][72530] Avg episode reward: [(0, '156741.192'), (1, '161232.794')]
-[2023-09-19 11:24:12,051][73131] Saving new best policy, reward=161232.794!
-[2023-09-19 11:24:17,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6219.9). Total num frames: 4116480. Throughput: 0: 3213.3, 1: 3213.3. Samples: 1680080. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:17,044][72530] Avg episode reward: [(0, '158187.015'), (1, '161349.690')]
-[2023-09-19 11:24:17,045][73131] Saving new best policy, reward=161349.690!
-[2023-09-19 11:24:22,043][72530] Fps is (10 sec: 7372.6, 60 sec: 6144.0, 300 sec: 6225.9). Total num frames: 4149248. Throughput: 0: 3037.6, 1: 3037.4. Samples: 1701154. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:22,044][72530] Avg episode reward: [(0, '158725.033'), (1, '161643.895')]
-[2023-09-19 11:24:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004032_2064384.pth...
-[2023-09-19 11:24:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004072_2084864.pth...
-[2023-09-19 11:24:22,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003848_1970176.pth
-[2023-09-19 11:24:22,061][73131] Saving new best policy, reward=161643.895!
-[2023-09-19 11:24:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003888_1990656.pth
-[2023-09-19 11:24:23,177][73219] Updated weights for policy 1, policy_version 4040 (0.0014)
-[2023-09-19 11:24:23,178][73145] Updated weights for policy 0, policy_version 4080 (0.0016)
-[2023-09-19 11:24:27,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6202.5). Total num frames: 4173824. Throughput: 0: 3023.4, 1: 3023.4. Samples: 1737188. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:27,044][72530] Avg episode reward: [(0, '155903.476'), (1, '161815.854')]
-[2023-09-19 11:24:27,045][73131] Saving new best policy, reward=161815.854!
-[2023-09-19 11:24:32,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6237.4). Total num frames: 4214784. Throughput: 0: 3098.6, 1: 3097.1. Samples: 1777900. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:24:32,044][72530] Avg episode reward: [(0, '154152.668'), (1, '161862.881')]
-[2023-09-19 11:24:32,045][73131] Saving new best policy, reward=161862.881!
-[2023-09-19 11:24:36,067][73145] Updated weights for policy 0, policy_version 4160 (0.0013)
-[2023-09-19 11:24:36,069][73219] Updated weights for policy 1, policy_version 4120 (0.0014)
-[2023-09-19 11:24:37,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6214.6). Total num frames: 4239360. Throughput: 0: 3068.9, 1: 3068.7. Samples: 1794800. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:24:37,044][72530] Avg episode reward: [(0, '151755.204'), (1, '161941.149')]
-[2023-09-19 11:24:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004160_2129920.pth...
-[2023-09-19 11:24:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004120_2109440.pth...
-[2023-09-19 11:24:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000003976_2035712.pth
-[2023-09-19 11:24:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000003936_2015232.pth
-[2023-09-19 11:24:37,065][73131] Saving new best policy, reward=161941.149!
-[2023-09-19 11:24:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 4272128. Throughput: 0: 3155.4, 1: 3155.3. Samples: 1835558. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:24:42,044][72530] Avg episode reward: [(0, '150680.731'), (1, '161901.771')]
-[2023-09-19 11:24:47,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 4304896. Throughput: 0: 3207.8, 1: 3207.7. Samples: 1872614. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:47,044][72530] Avg episode reward: [(0, '148141.874'), (1, '161823.095')]
-[2023-09-19 11:24:48,828][73219] Updated weights for policy 1, policy_version 4200 (0.0010)
-[2023-09-19 11:24:48,829][73145] Updated weights for policy 0, policy_version 4240 (0.0014)
-[2023-09-19 11:24:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4337664. Throughput: 0: 3210.0, 1: 3211.3. Samples: 1892352. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:52,044][72530] Avg episode reward: [(0, '144621.889'), (1, '160527.319')]
-[2023-09-19 11:24:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004216_2158592.pth...
-[2023-09-19 11:24:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004256_2179072.pth...
-[2023-09-19 11:24:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004032_2064384.pth
-[2023-09-19 11:24:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004072_2084864.pth
-[2023-09-19 11:24:57,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 4370432. Throughput: 0: 3240.7, 1: 3240.7. Samples: 1931690. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:24:57,044][72530] Avg episode reward: [(0, '144800.686'), (1, '160295.845')]
-[2023-09-19 11:25:01,923][73219] Updated weights for policy 1, policy_version 4280 (0.0013)
-[2023-09-19 11:25:01,923][73145] Updated weights for policy 0, policy_version 4320 (0.0013)
-[2023-09-19 11:25:02,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 4403200. Throughput: 0: 3184.6, 1: 3184.7. Samples: 1966700. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:02,044][72530] Avg episode reward: [(0, '145084.319'), (1, '160219.957')]
-[2023-09-19 11:25:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4427776. Throughput: 0: 3125.7, 1: 3127.1. Samples: 1982528. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:07,045][72530] Avg episode reward: [(0, '144470.225'), (1, '159962.850')]
-[2023-09-19 11:25:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004304_2203648.pth...
-[2023-09-19 11:25:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004344_2224128.pth...
-[2023-09-19 11:25:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004120_2109440.pth
-[2023-09-19 11:25:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004160_2129920.pth
-[2023-09-19 11:25:12,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4460544. Throughput: 0: 3189.5, 1: 3189.4. Samples: 2024240. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:25:12,044][72530] Avg episode reward: [(0, '144911.603'), (1, '159940.569')]
-[2023-09-19 11:25:14,575][73145] Updated weights for policy 0, policy_version 4400 (0.0013)
-[2023-09-19 11:25:14,577][73219] Updated weights for policy 1, policy_version 4360 (0.0014)
-[2023-09-19 11:25:17,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4493312. Throughput: 0: 3180.8, 1: 3181.4. Samples: 2064198. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:17,044][72530] Avg episode reward: [(0, '140838.640'), (1, '160023.760')]
-[2023-09-19 11:25:22,043][72530] Fps is (10 sec: 7372.7, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 4534272. Throughput: 0: 3210.7, 1: 3210.9. Samples: 2083770. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:22,044][72530] Avg episode reward: [(0, '145736.624'), (1, '161193.551')]
-[2023-09-19 11:25:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004408_2256896.pth...
-[2023-09-19 11:25:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004448_2277376.pth...
-[2023-09-19 11:25:22,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004256_2179072.pth
-[2023-09-19 11:25:22,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004216_2158592.pth
-[2023-09-19 11:25:27,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4558848. Throughput: 0: 3177.1, 1: 3178.3. Samples: 2121554. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:27,044][72530] Avg episode reward: [(0, '146210.154'), (1, '161328.801')]
-[2023-09-19 11:25:27,759][73219] Updated weights for policy 1, policy_version 4440 (0.0013)
-[2023-09-19 11:25:27,759][73145] Updated weights for policy 0, policy_version 4480 (0.0013)
-[2023-09-19 11:25:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4591616. Throughput: 0: 3156.1, 1: 3156.0. Samples: 2156656. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:32,044][72530] Avg episode reward: [(0, '148003.467'), (1, '161381.807')]
-[2023-09-19 11:25:37,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4624384. Throughput: 0: 3173.7, 1: 3172.3. Samples: 2177924. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:37,044][72530] Avg episode reward: [(0, '152496.039'), (1, '161644.736')]
-[2023-09-19 11:25:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004496_2301952.pth...
-[2023-09-19 11:25:37,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004536_2322432.pth...
-[2023-09-19 11:25:37,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004304_2203648.pth
-[2023-09-19 11:25:37,071][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004344_2224128.pth
-[2023-09-19 11:25:40,045][73219] Updated weights for policy 1, policy_version 4520 (0.0014)
-[2023-09-19 11:25:40,045][73145] Updated weights for policy 0, policy_version 4560 (0.0014)
-[2023-09-19 11:25:42,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4657152. Throughput: 0: 3196.5, 1: 3196.4. Samples: 2219366. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:25:42,044][72530] Avg episode reward: [(0, '151440.031'), (1, '161613.003')]
-[2023-09-19 11:25:47,043][72530] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4689920. Throughput: 0: 3225.5, 1: 3225.4. Samples: 2256988. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:47,044][72530] Avg episode reward: [(0, '155863.155'), (1, '161674.332')]
-[2023-09-19 11:25:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4722688. Throughput: 0: 3263.8, 1: 3262.1. Samples: 2276194. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:52,044][72530] Avg episode reward: [(0, '154315.726'), (1, '161202.405')]
-[2023-09-19 11:25:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004592_2351104.pth...
-[2023-09-19 11:25:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004632_2371584.pth...
-[2023-09-19 11:25:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004408_2256896.pth
-[2023-09-19 11:25:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004448_2277376.pth
-[2023-09-19 11:25:52,782][73219] Updated weights for policy 1, policy_version 4600 (0.0014)
-[2023-09-19 11:25:52,782][73145] Updated weights for policy 0, policy_version 4640 (0.0014)
-[2023-09-19 11:25:57,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4755456. Throughput: 0: 3211.2, 1: 3211.4. Samples: 2313256. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:25:57,044][72530] Avg episode reward: [(0, '150010.060'), (1, '160917.790')]
-[2023-09-19 11:26:02,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 4788224. Throughput: 0: 3193.2, 1: 3192.7. Samples: 2351560. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:26:02,045][72530] Avg episode reward: [(0, '150263.831'), (1, '161029.917')]
-[2023-09-19 11:26:05,903][73219] Updated weights for policy 1, policy_version 4680 (0.0013)
-[2023-09-19 11:26:05,903][73145] Updated weights for policy 0, policy_version 4720 (0.0012)
-[2023-09-19 11:26:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6220.4). Total num frames: 4812800. Throughput: 0: 3179.7, 1: 3179.7. Samples: 2369942. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:26:07,044][72530] Avg episode reward: [(0, '146564.474'), (1, '161044.989')]
-[2023-09-19 11:26:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004680_2396160.pth...
-[2023-09-19 11:26:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004720_2416640.pth... -[2023-09-19 11:26:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004536_2322432.pth -[2023-09-19 11:26:07,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004496_2301952.pth -[2023-09-19 11:26:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6220.4). Total num frames: 4845568. Throughput: 0: 3166.9, 1: 3166.1. Samples: 2406540. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:26:12,045][72530] Avg episode reward: [(0, '146583.424'), (1, '161100.378')] -[2023-09-19 11:26:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4870144. Throughput: 0: 3143.0, 1: 3143.1. Samples: 2439528. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:26:17,044][72530] Avg episode reward: [(0, '141842.532'), (1, '161215.328')] -[2023-09-19 11:26:19,884][73219] Updated weights for policy 1, policy_version 4760 (0.0016) -[2023-09-19 11:26:19,884][73145] Updated weights for policy 0, policy_version 4800 (0.0015) -[2023-09-19 11:26:22,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 4902912. Throughput: 0: 3118.8, 1: 3118.8. Samples: 2458616. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:26:22,044][72530] Avg episode reward: [(0, '142452.610'), (1, '161748.126')] -[2023-09-19 11:26:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004768_2441216.pth... -[2023-09-19 11:26:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004808_2461696.pth... -[2023-09-19 11:26:22,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004592_2351104.pth -[2023-09-19 11:26:22,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004632_2371584.pth -[2023-09-19 11:26:27,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). 
Total num frames: 4935680. Throughput: 0: 3120.4, 1: 3120.6. Samples: 2500210. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:26:27,044][72530] Avg episode reward: [(0, '137895.579'), (1, '162103.130')] -[2023-09-19 11:26:27,045][73131] Saving new best policy, reward=162103.130! -[2023-09-19 11:26:32,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4968448. Throughput: 0: 3138.1, 1: 3139.4. Samples: 2539476. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:26:32,044][72530] Avg episode reward: [(0, '135551.322'), (1, '162133.015')] -[2023-09-19 11:26:32,045][73131] Saving new best policy, reward=162133.015! -[2023-09-19 11:26:32,157][73219] Updated weights for policy 1, policy_version 4840 (0.0010) -[2023-09-19 11:26:32,158][73145] Updated weights for policy 0, policy_version 4880 (0.0012) -[2023-09-19 11:26:37,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5001216. Throughput: 0: 3107.1, 1: 3108.5. Samples: 2555894. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:26:37,044][72530] Avg episode reward: [(0, '131208.771'), (1, '162213.080')] -[2023-09-19 11:26:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000004904_2510848.pth... -[2023-09-19 11:26:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004864_2490368.pth... -[2023-09-19 11:26:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004680_2396160.pth -[2023-09-19 11:26:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004720_2416640.pth -[2023-09-19 11:26:37,065][73131] Saving new best policy, reward=162213.080! -[2023-09-19 11:26:42,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5033984. Throughput: 0: 3145.9, 1: 3145.9. Samples: 2596386. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:26:42,044][72530] Avg episode reward: [(0, '130942.941'), (1, '162247.080')] -[2023-09-19 11:26:42,045][73131] Saving new best policy, reward=162247.080! -[2023-09-19 11:26:44,999][73145] Updated weights for policy 0, policy_version 4960 (0.0011) -[2023-09-19 11:26:44,999][73219] Updated weights for policy 1, policy_version 4920 (0.0015) -[2023-09-19 11:26:47,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5066752. Throughput: 0: 3150.9, 1: 3150.9. Samples: 2635140. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:26:47,044][72530] Avg episode reward: [(0, '129912.953'), (1, '162118.110')] -[2023-09-19 11:26:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5099520. Throughput: 0: 3158.5, 1: 3159.3. Samples: 2654242. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:26:52,044][72530] Avg episode reward: [(0, '128999.147'), (1, '162160.112')] -[2023-09-19 11:26:52,050][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005000_2560000.pth... -[2023-09-19 11:26:52,050][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000004960_2539520.pth... -[2023-09-19 11:26:52,057][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004808_2461696.pth -[2023-09-19 11:26:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004768_2441216.pth -[2023-09-19 11:26:57,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5132288. Throughput: 0: 3168.7, 1: 3168.5. Samples: 2691716. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:26:57,045][72530] Avg episode reward: [(0, '133141.196'), (1, '161915.344')] -[2023-09-19 11:26:57,786][73219] Updated weights for policy 1, policy_version 5000 (0.0015) -[2023-09-19 11:26:57,787][73145] Updated weights for policy 0, policy_version 5040 (0.0011) -[2023-09-19 11:27:02,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5165056. Throughput: 0: 3210.7, 1: 3210.9. Samples: 2728500. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:27:02,044][72530] Avg episode reward: [(0, '134217.012'), (1, '161927.050')] -[2023-09-19 11:27:07,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5189632. Throughput: 0: 3215.4, 1: 3215.5. Samples: 2748004. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:27:07,044][72530] Avg episode reward: [(0, '139484.494'), (1, '161928.042')] -[2023-09-19 11:27:07,058][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005056_2588672.pth... -[2023-09-19 11:27:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004864_2490368.pth -[2023-09-19 11:27:07,070][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005096_2609152.pth... -[2023-09-19 11:27:07,073][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000004904_2510848.pth -[2023-09-19 11:27:10,937][73145] Updated weights for policy 0, policy_version 5120 (0.0015) -[2023-09-19 11:27:10,937][73219] Updated weights for policy 1, policy_version 5080 (0.0015) -[2023-09-19 11:27:12,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 5222400. Throughput: 0: 3173.6, 1: 3173.6. Samples: 2785834. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:27:12,044][72530] Avg episode reward: [(0, '140906.115'), (1, '161939.285')] -[2023-09-19 11:27:17,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6248.1). 
Total num frames: 5255168. Throughput: 0: 3151.1, 1: 3149.6. Samples: 2823006. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:17,045][72530] Avg episode reward: [(0, '145817.026'), (1, '161027.902')] -[2023-09-19 11:27:22,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 5287936. Throughput: 0: 3185.7, 1: 3185.7. Samples: 2842604. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:22,044][72530] Avg episode reward: [(0, '142531.824'), (1, '161093.623')] -[2023-09-19 11:27:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005184_2654208.pth... -[2023-09-19 11:27:22,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005144_2633728.pth... -[2023-09-19 11:27:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005000_2560000.pth -[2023-09-19 11:27:22,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000004960_2539520.pth -[2023-09-19 11:27:23,721][73145] Updated weights for policy 0, policy_version 5200 (0.0010) -[2023-09-19 11:27:23,721][73219] Updated weights for policy 1, policy_version 5160 (0.0013) -[2023-09-19 11:27:27,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 5320704. Throughput: 0: 3188.8, 1: 3189.0. Samples: 2883390. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:27:27,044][72530] Avg episode reward: [(0, '142889.954'), (1, '161067.955')] -[2023-09-19 11:27:32,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6248.1). Total num frames: 5353472. Throughput: 0: 3127.8, 1: 3128.0. Samples: 2916652. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:32,044][72530] Avg episode reward: [(0, '142889.954'), (1, '161121.140')] -[2023-09-19 11:27:37,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5378048. Throughput: 0: 3133.1, 1: 3132.4. Samples: 2936188. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:37,045][72530] Avg episode reward: [(0, '142062.299'), (1, '159603.215')] -[2023-09-19 11:27:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005232_2678784.pth... -[2023-09-19 11:27:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005272_2699264.pth... -[2023-09-19 11:27:37,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005056_2588672.pth -[2023-09-19 11:27:37,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005096_2609152.pth -[2023-09-19 11:27:37,396][73145] Updated weights for policy 0, policy_version 5280 (0.0011) -[2023-09-19 11:27:37,396][73219] Updated weights for policy 1, policy_version 5240 (0.0014) -[2023-09-19 11:27:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5410816. Throughput: 0: 3131.2, 1: 3132.0. Samples: 2973558. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:42,045][72530] Avg episode reward: [(0, '142062.299'), (1, '159574.648')] -[2023-09-19 11:27:47,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5443584. Throughput: 0: 3132.0, 1: 3132.0. Samples: 3010382. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:27:47,045][72530] Avg episode reward: [(0, '143548.226'), (1, '160565.889')] -[2023-09-19 11:27:50,044][73145] Updated weights for policy 0, policy_version 5360 (0.0013) -[2023-09-19 11:27:50,045][73219] Updated weights for policy 1, policy_version 5320 (0.0012) -[2023-09-19 11:27:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5476352. Throughput: 0: 3144.9, 1: 3145.5. Samples: 3031074. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:52,044][72530] Avg episode reward: [(0, '143548.226'), (1, '160563.875')] -[2023-09-19 11:27:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005368_2748416.pth... 
-[2023-09-19 11:27:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005328_2727936.pth... -[2023-09-19 11:27:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005184_2654208.pth -[2023-09-19 11:27:52,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005144_2633728.pth -[2023-09-19 11:27:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5500928. Throughput: 0: 3089.5, 1: 3090.1. Samples: 3063918. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:27:57,045][72530] Avg episode reward: [(0, '146270.433'), (1, '160418.331')] -[2023-09-19 11:28:02,043][72530] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5533696. Throughput: 0: 3077.2, 1: 3077.4. Samples: 3099964. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:28:02,044][72530] Avg episode reward: [(0, '146167.425'), (1, '160418.331')] -[2023-09-19 11:28:04,104][73219] Updated weights for policy 1, policy_version 5400 (0.0013) -[2023-09-19 11:28:04,105][73145] Updated weights for policy 0, policy_version 5440 (0.0014) -[2023-09-19 11:28:07,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5566464. Throughput: 0: 3076.4, 1: 3075.6. Samples: 3119446. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:07,044][72530] Avg episode reward: [(0, '146743.667'), (1, '161871.065')] -[2023-09-19 11:28:07,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005456_2793472.pth... -[2023-09-19 11:28:07,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005416_2772992.pth... -[2023-09-19 11:28:07,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005272_2699264.pth -[2023-09-19 11:28:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005232_2678784.pth -[2023-09-19 11:28:12,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). 
Total num frames: 5599232. Throughput: 0: 3039.7, 1: 3039.5. Samples: 3156956. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:12,044][72530] Avg episode reward: [(0, '146503.770'), (1, '161818.415')] -[2023-09-19 11:28:17,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5623808. Throughput: 0: 2895.2, 1: 2895.1. Samples: 3177218. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:28:17,044][72530] Avg episode reward: [(0, '146720.459'), (1, '161769.023')] -[2023-09-19 11:28:17,068][73145] Updated weights for policy 0, policy_version 5520 (0.0013) -[2023-09-19 11:28:17,069][73219] Updated weights for policy 1, policy_version 5480 (0.0012) -[2023-09-19 11:28:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5656576. Throughput: 0: 3086.3, 1: 3086.2. Samples: 3213950. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:28:22,044][72530] Avg episode reward: [(0, '147474.306'), (1, '161768.790')] -[2023-09-19 11:28:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005504_2818048.pth... -[2023-09-19 11:28:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005544_2838528.pth... -[2023-09-19 11:28:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005328_2727936.pth -[2023-09-19 11:28:22,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005368_2748416.pth -[2023-09-19 11:28:27,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5689344. Throughput: 0: 3045.5, 1: 3044.6. Samples: 3247612. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:27,044][72530] Avg episode reward: [(0, '146171.268'), (1, '161578.478')] -[2023-09-19 11:28:30,639][73219] Updated weights for policy 1, policy_version 5560 (0.0007) -[2023-09-19 11:28:30,640][73145] Updated weights for policy 0, policy_version 5600 (0.0014) -[2023-09-19 11:28:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6248.1). Total num frames: 5713920. Throughput: 0: 3050.4, 1: 3051.6. Samples: 3284968. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:32,044][72530] Avg episode reward: [(0, '143678.262'), (1, '161606.329')] -[2023-09-19 11:28:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5746688. Throughput: 0: 3003.7, 1: 3003.7. Samples: 3301410. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:37,044][72530] Avg episode reward: [(0, '147339.327'), (1, '161677.535')] -[2023-09-19 11:28:37,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005632_2883584.pth... -[2023-09-19 11:28:37,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005592_2863104.pth... -[2023-09-19 11:28:37,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005456_2793472.pth -[2023-09-19 11:28:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005416_2772992.pth -[2023-09-19 11:28:42,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5779456. Throughput: 0: 3087.8, 1: 3087.2. Samples: 3341792. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:28:42,044][72530] Avg episode reward: [(0, '149917.742'), (1, '161865.342')] -[2023-09-19 11:28:43,733][73145] Updated weights for policy 0, policy_version 5680 (0.0015) -[2023-09-19 11:28:43,734][73219] Updated weights for policy 1, policy_version 5640 (0.0014) -[2023-09-19 11:28:47,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6275.9). 
Total num frames: 5812224. Throughput: 0: 3101.5, 1: 3101.5. Samples: 3379098. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:47,044][72530] Avg episode reward: [(0, '150038.622'), (1, '161982.019')] -[2023-09-19 11:28:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5844992. Throughput: 0: 3141.9, 1: 3141.4. Samples: 3402194. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:28:52,044][72530] Avg episode reward: [(0, '144197.749'), (1, '162024.019')] -[2023-09-19 11:28:52,065][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005696_2916352.pth... -[2023-09-19 11:28:52,067][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005736_2936832.pth... -[2023-09-19 11:28:52,069][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005504_2818048.pth -[2023-09-19 11:28:52,072][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005544_2838528.pth -[2023-09-19 11:28:56,545][73145] Updated weights for policy 0, policy_version 5760 (0.0009) -[2023-09-19 11:28:56,546][73219] Updated weights for policy 1, policy_version 5720 (0.0014) -[2023-09-19 11:28:57,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5877760. Throughput: 0: 3141.5, 1: 3142.4. Samples: 3439730. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:28:57,044][72530] Avg episode reward: [(0, '138885.538'), (1, '162345.734')] -[2023-09-19 11:28:57,045][73131] Saving new best policy, reward=162345.734! -[2023-09-19 11:29:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5902336. Throughput: 0: 3289.1, 1: 3289.5. Samples: 3473254. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:02,044][72530] Avg episode reward: [(0, '134517.899'), (1, '162329.045')] -[2023-09-19 11:29:07,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 5943296. 
Throughput: 0: 3101.0, 1: 3100.8. Samples: 3493030. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:07,045][72530] Avg episode reward: [(0, '128642.101'), (1, '162318.341')] -[2023-09-19 11:29:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005784_2961408.pth... -[2023-09-19 11:29:07,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005824_2981888.pth... -[2023-09-19 11:29:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005592_2863104.pth -[2023-09-19 11:29:07,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005632_2883584.pth -[2023-09-19 11:29:09,339][73145] Updated weights for policy 0, policy_version 5840 (0.0012) -[2023-09-19 11:29:09,340][73219] Updated weights for policy 1, policy_version 5800 (0.0013) -[2023-09-19 11:29:12,043][72530] Fps is (10 sec: 7372.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5976064. Throughput: 0: 3166.1, 1: 3166.1. Samples: 3532560. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:12,044][72530] Avg episode reward: [(0, '118954.396'), (1, '162291.376')] -[2023-09-19 11:29:17,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 6008832. Throughput: 0: 3187.8, 1: 3186.9. Samples: 3571832. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:29:17,044][72530] Avg episode reward: [(0, '114412.470'), (1, '162295.847')] -[2023-09-19 11:29:21,826][73145] Updated weights for policy 0, policy_version 5920 (0.0012) -[2023-09-19 11:29:21,826][73219] Updated weights for policy 1, policy_version 5880 (0.0015) -[2023-09-19 11:29:22,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 6041600. Throughput: 0: 3238.0, 1: 3237.2. Samples: 3592796. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:22,044][72530] Avg episode reward: [(0, '112785.270'), (1, '162279.161')] -[2023-09-19 11:29:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005880_3010560.pth... -[2023-09-19 11:29:22,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000005920_3031040.pth... -[2023-09-19 11:29:22,057][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005696_2916352.pth -[2023-09-19 11:29:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005736_2936832.pth -[2023-09-19 11:29:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6066176. Throughput: 0: 3176.2, 1: 3176.2. Samples: 3627652. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:27,044][72530] Avg episode reward: [(0, '112913.414'), (1, '161752.422')] -[2023-09-19 11:29:32,053][72530] Fps is (10 sec: 5728.9, 60 sec: 6416.0, 300 sec: 6303.5). Total num frames: 6098944. Throughput: 0: 3140.7, 1: 3142.2. Samples: 3661888. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:32,055][72530] Avg episode reward: [(0, '114352.426'), (1, '160799.718')] -[2023-09-19 11:29:36,186][73145] Updated weights for policy 0, policy_version 6000 (0.0013) -[2023-09-19 11:29:36,187][73219] Updated weights for policy 1, policy_version 5960 (0.0013) -[2023-09-19 11:29:37,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6123520. Throughput: 0: 3071.6, 1: 3072.0. Samples: 3678658. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:37,044][72530] Avg episode reward: [(0, '114436.532'), (1, '160786.962')] -[2023-09-19 11:29:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000005960_3051520.pth... -[2023-09-19 11:29:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006000_3072000.pth... 
-[2023-09-19 11:29:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005784_2961408.pth -[2023-09-19 11:29:37,067][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005824_2981888.pth -[2023-09-19 11:29:42,043][72530] Fps is (10 sec: 5740.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6156288. Throughput: 0: 3094.3, 1: 3094.0. Samples: 3718202. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:42,044][72530] Avg episode reward: [(0, '115826.562'), (1, '160797.026')] -[2023-09-19 11:29:47,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6189056. Throughput: 0: 3107.9, 1: 3107.4. Samples: 3752942. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:29:47,044][72530] Avg episode reward: [(0, '116442.200'), (1, '160802.531')] -[2023-09-19 11:29:49,303][73219] Updated weights for policy 1, policy_version 6040 (0.0013) -[2023-09-19 11:29:49,304][73145] Updated weights for policy 0, policy_version 6080 (0.0015) -[2023-09-19 11:29:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6213632. Throughput: 0: 3115.6, 1: 3116.0. Samples: 3773450. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:29:52,044][72530] Avg episode reward: [(0, '116782.945'), (1, '160812.704')] -[2023-09-19 11:29:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006048_3096576.pth... -[2023-09-19 11:29:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006088_3117056.pth... -[2023-09-19 11:29:52,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005880_3010560.pth -[2023-09-19 11:29:52,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000005920_3031040.pth -[2023-09-19 11:29:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6246400. Throughput: 0: 3059.6, 1: 3059.6. Samples: 3807922. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:29:57,044][72530] Avg episode reward: [(0, '116782.945'), (1, '160801.177')] -[2023-09-19 11:30:02,043][72530] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6262.0). Total num frames: 6275072. Throughput: 0: 2823.3, 1: 2823.1. Samples: 3825920. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:02,045][72530] Avg episode reward: [(0, '113033.870'), (1, '161341.845')] -[2023-09-19 11:30:03,323][73145] Updated weights for policy 0, policy_version 6160 (0.0013) -[2023-09-19 11:30:03,323][73219] Updated weights for policy 1, policy_version 6120 (0.0015) -[2023-09-19 11:30:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6248.1). Total num frames: 6303744. Throughput: 0: 2978.1, 1: 2978.2. Samples: 3860830. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:07,045][72530] Avg episode reward: [(0, '113033.870'), (1, '161341.845')] -[2023-09-19 11:30:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006136_3141632.pth... -[2023-09-19 11:30:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006176_3162112.pth... -[2023-09-19 11:30:07,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006000_3072000.pth -[2023-09-19 11:30:07,066][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000005960_3051520.pth -[2023-09-19 11:30:12,043][72530] Fps is (10 sec: 6144.1, 60 sec: 6007.5, 300 sec: 6248.1). Total num frames: 6336512. Throughput: 0: 3016.4, 1: 3017.1. Samples: 3899160. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:30:12,044][72530] Avg episode reward: [(0, '110251.349'), (1, '161327.497')] -[2023-09-19 11:30:16,823][73219] Updated weights for policy 1, policy_version 6200 (0.0013) -[2023-09-19 11:30:16,823][73145] Updated weights for policy 0, policy_version 6240 (0.0011) -[2023-09-19 11:30:17,043][72530] Fps is (10 sec: 6553.8, 60 sec: 6007.5, 300 sec: 6220.4). 
Total num frames: 6369280. Throughput: 0: 3019.8, 1: 3018.5. Samples: 3933552. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:30:17,044][72530] Avg episode reward: [(0, '110867.577'), (1, '161327.497')] -[2023-09-19 11:30:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5870.9, 300 sec: 6220.4). Total num frames: 6393856. Throughput: 0: 3036.0, 1: 3036.0. Samples: 3951900. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:22,044][72530] Avg episode reward: [(0, '112712.732'), (1, '161204.256')] -[2023-09-19 11:30:22,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006264_3207168.pth... -[2023-09-19 11:30:22,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006224_3186688.pth... -[2023-09-19 11:30:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006088_3117056.pth -[2023-09-19 11:30:22,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006048_3096576.pth -[2023-09-19 11:30:27,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5870.9, 300 sec: 6192.6). Total num frames: 6418432. Throughput: 0: 2950.9, 1: 2950.4. Samples: 3983760. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:27,044][72530] Avg episode reward: [(0, '113392.546'), (1, '161125.113')] -[2023-09-19 11:30:31,119][73145] Updated weights for policy 0, policy_version 6320 (0.0012) -[2023-09-19 11:30:31,119][73219] Updated weights for policy 1, policy_version 6280 (0.0013) -[2023-09-19 11:30:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5871.9, 300 sec: 6192.6). Total num frames: 6451200. Throughput: 0: 2960.9, 1: 2961.4. Samples: 4019448. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:32,044][72530] Avg episode reward: [(0, '113475.235'), (1, '161107.375')] -[2023-09-19 11:30:37,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5870.9, 300 sec: 6164.8). Total num frames: 6475776. Throughput: 0: 2884.0, 1: 2883.7. Samples: 4032994. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:37,045][72530] Avg episode reward: [(0, '118367.276'), (1, '160625.932')] -[2023-09-19 11:30:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006344_3248128.pth... -[2023-09-19 11:30:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006304_3227648.pth... -[2023-09-19 11:30:37,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006176_3162112.pth -[2023-09-19 11:30:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006136_3141632.pth -[2023-09-19 11:30:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6164.8). Total num frames: 6508544. Throughput: 0: 2926.3, 1: 2927.1. Samples: 4071324. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:42,044][72530] Avg episode reward: [(0, '121712.371'), (1, '160963.772')] -[2023-09-19 11:30:44,991][73145] Updated weights for policy 0, policy_version 6400 (0.0014) -[2023-09-19 11:30:44,991][73219] Updated weights for policy 1, policy_version 6360 (0.0016) -[2023-09-19 11:30:47,043][72530] Fps is (10 sec: 6553.7, 60 sec: 5870.9, 300 sec: 6164.8). Total num frames: 6541312. Throughput: 0: 3155.9, 1: 3155.8. Samples: 4109948. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:47,044][72530] Avg episode reward: [(0, '130906.514'), (1, '160996.680')] -[2023-09-19 11:30:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6007.5, 300 sec: 6164.8). Total num frames: 6574080. Throughput: 0: 2977.7, 1: 2978.5. Samples: 4128860. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:30:52,044][72530] Avg episode reward: [(0, '132721.030'), (1, '160740.250')] -[2023-09-19 11:30:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006400_3276800.pth... -[2023-09-19 11:30:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006440_3297280.pth... 
-[2023-09-19 11:30:52,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006264_3207168.pth -[2023-09-19 11:30:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006224_3186688.pth -[2023-09-19 11:30:57,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 6164.8). Total num frames: 6606848. Throughput: 0: 2999.1, 1: 2998.3. Samples: 4169040. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:30:57,044][72530] Avg episode reward: [(0, '135319.795'), (1, '160381.100')] -[2023-09-19 11:30:57,551][73145] Updated weights for policy 0, policy_version 6480 (0.0013) -[2023-09-19 11:30:57,551][73219] Updated weights for policy 1, policy_version 6440 (0.0014) -[2023-09-19 11:31:02,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6075.8, 300 sec: 6192.6). Total num frames: 6639616. Throughput: 0: 2999.7, 1: 2999.9. Samples: 4203534. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:31:02,044][72530] Avg episode reward: [(0, '138147.376'), (1, '160443.505')] -[2023-09-19 11:31:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6164.8). Total num frames: 6664192. Throughput: 0: 2995.4, 1: 2995.4. Samples: 4221484. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:31:07,044][72530] Avg episode reward: [(0, '140607.383'), (1, '160444.041')] -[2023-09-19 11:31:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006488_3321856.pth... -[2023-09-19 11:31:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006528_3342336.pth... 
-[2023-09-19 11:31:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006304_3227648.pth -[2023-09-19 11:31:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006344_3248128.pth -[2023-09-19 11:31:11,438][73219] Updated weights for policy 1, policy_version 6520 (0.0012) -[2023-09-19 11:31:11,438][73145] Updated weights for policy 0, policy_version 6560 (0.0013) -[2023-09-19 11:31:12,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6192.6). Total num frames: 6696960. Throughput: 0: 3048.7, 1: 3048.4. Samples: 4258132. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:31:12,044][72530] Avg episode reward: [(0, '140794.377'), (1, '161324.369')] -[2023-09-19 11:31:17,043][72530] Fps is (10 sec: 6553.6, 60 sec: 6007.4, 300 sec: 6192.6). Total num frames: 6729728. Throughput: 0: 3077.7, 1: 3077.3. Samples: 4296424. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:31:17,045][72530] Avg episode reward: [(0, '146158.608'), (1, '161421.999')] -[2023-09-19 11:31:22,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 6762496. Throughput: 0: 3152.2, 1: 3152.3. Samples: 4316696. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:31:22,044][72530] Avg episode reward: [(0, '147169.785'), (1, '161725.816')] -[2023-09-19 11:31:22,057][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006624_3391488.pth... -[2023-09-19 11:31:22,057][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006584_3371008.pth... 
-[2023-09-19 11:31:22,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006400_3276800.pth -[2023-09-19 11:31:22,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006440_3297280.pth -[2023-09-19 11:31:24,435][73145] Updated weights for policy 0, policy_version 6640 (0.0016) -[2023-09-19 11:31:24,436][73219] Updated weights for policy 1, policy_version 6600 (0.0016) -[2023-09-19 11:31:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 6787072. Throughput: 0: 3107.4, 1: 3106.5. Samples: 4350952. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:31:27,044][72530] Avg episode reward: [(0, '147753.216'), (1, '161687.670')] -[2023-09-19 11:31:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 6819840. Throughput: 0: 3071.0, 1: 3071.2. Samples: 4386346. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:31:32,044][72530] Avg episode reward: [(0, '147753.216'), (1, '161687.670')] -[2023-09-19 11:31:37,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6164.8). Total num frames: 6852608. Throughput: 0: 3060.3, 1: 3059.2. Samples: 4404236. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:31:37,044][72530] Avg episode reward: [(0, '155757.680'), (1, '162057.882')] -[2023-09-19 11:31:37,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006672_3416064.pth... -[2023-09-19 11:31:37,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006712_3436544.pth... 
-[2023-09-19 11:31:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006528_3342336.pth -[2023-09-19 11:31:37,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006488_3321856.pth -[2023-09-19 11:31:38,241][73145] Updated weights for policy 0, policy_version 6720 (0.0015) -[2023-09-19 11:31:38,242][73219] Updated weights for policy 1, policy_version 6680 (0.0014) -[2023-09-19 11:31:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 6877184. Throughput: 0: 3009.5, 1: 3010.6. Samples: 4439946. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:31:42,045][72530] Avg episode reward: [(0, '155757.680'), (1, '162057.882')] -[2023-09-19 11:31:47,043][72530] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 6909952. Throughput: 0: 3026.3, 1: 3026.0. Samples: 4475886. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:31:47,044][72530] Avg episode reward: [(0, '156604.047'), (1, '162030.805')] -[2023-09-19 11:31:51,596][73145] Updated weights for policy 0, policy_version 6800 (0.0012) -[2023-09-19 11:31:51,596][73219] Updated weights for policy 1, policy_version 6760 (0.0015) -[2023-09-19 11:31:52,043][72530] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 6942720. Throughput: 0: 3052.5, 1: 3052.3. Samples: 4496200. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:31:52,044][72530] Avg episode reward: [(0, '154864.725'), (1, '162040.931')] -[2023-09-19 11:31:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006800_3481600.pth... -[2023-09-19 11:31:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006760_3461120.pth... 
-[2023-09-19 11:31:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006624_3391488.pth -[2023-09-19 11:31:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006584_3371008.pth -[2023-09-19 11:31:57,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 6975488. Throughput: 0: 3054.7, 1: 3054.8. Samples: 4533058. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:31:57,044][72530] Avg episode reward: [(0, '150963.742'), (1, '162180.754')] -[2023-09-19 11:32:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6137.1). Total num frames: 7000064. Throughput: 0: 3051.5, 1: 3052.9. Samples: 4571120. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:32:02,044][72530] Avg episode reward: [(0, '150392.522'), (1, '162181.685')] -[2023-09-19 11:32:04,785][73145] Updated weights for policy 0, policy_version 6880 (0.0009) -[2023-09-19 11:32:04,786][73219] Updated weights for policy 1, policy_version 6840 (0.0010) -[2023-09-19 11:32:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 7032832. Throughput: 0: 3026.7, 1: 3026.7. Samples: 4589098. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:32:07,044][72530] Avg episode reward: [(0, '145583.491'), (1, '162297.023')] -[2023-09-19 11:32:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006848_3506176.pth... -[2023-09-19 11:32:07,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006888_3526656.pth... -[2023-09-19 11:32:07,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006672_3416064.pth -[2023-09-19 11:32:07,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006712_3436544.pth -[2023-09-19 11:32:12,043][72530] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 7065600. Throughput: 0: 3022.0, 1: 3022.0. Samples: 4622936. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:32:12,045][72530] Avg episode reward: [(0, '144862.441'), (1, '162339.817')] -[2023-09-19 11:32:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6109.3). Total num frames: 7090176. Throughput: 0: 3051.5, 1: 3052.3. Samples: 4661014. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:32:17,045][72530] Avg episode reward: [(0, '140160.091'), (1, '162747.216')] -[2023-09-19 11:32:17,046][73131] Saving new best policy, reward=162747.216! -[2023-09-19 11:32:18,733][73145] Updated weights for policy 0, policy_version 6960 (0.0013) -[2023-09-19 11:32:18,734][73219] Updated weights for policy 1, policy_version 6920 (0.0014) -[2023-09-19 11:32:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6109.3). Total num frames: 7122944. Throughput: 0: 3035.2, 1: 3036.6. Samples: 4677464. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:32:22,045][72530] Avg episode reward: [(0, '138598.620'), (1, '162798.619')] -[2023-09-19 11:32:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000006936_3551232.pth... -[2023-09-19 11:32:22,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000006976_3571712.pth... -[2023-09-19 11:32:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006760_3461120.pth -[2023-09-19 11:32:22,063][73131] Saving new best policy, reward=162798.619! -[2023-09-19 11:32:22,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006800_3481600.pth -[2023-09-19 11:32:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6081.5). Total num frames: 7147520. Throughput: 0: 3005.6, 1: 3006.0. Samples: 4710464. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:32:27,044][72530] Avg episode reward: [(0, '140889.402'), (1, '162879.076')] -[2023-09-19 11:32:27,046][73131] Saving new best policy, reward=162879.076! 
-[2023-09-19 11:32:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 6007.4, 300 sec: 6109.3). Total num frames: 7180288. Throughput: 0: 3006.3, 1: 3006.6. Samples: 4746466. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:32:32,045][72530] Avg episode reward: [(0, '146675.404'), (1, '162955.653')] -[2023-09-19 11:32:32,046][73131] Saving new best policy, reward=162955.653! -[2023-09-19 11:32:33,165][73219] Updated weights for policy 1, policy_version 7000 (0.0013) -[2023-09-19 11:32:33,166][73145] Updated weights for policy 0, policy_version 7040 (0.0014) -[2023-09-19 11:32:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6081.5). Total num frames: 7204864. Throughput: 0: 2926.2, 1: 2927.6. Samples: 4759620. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:32:37,044][72530] Avg episode reward: [(0, '146675.404'), (1, '162959.948')] -[2023-09-19 11:32:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007016_3592192.pth... -[2023-09-19 11:32:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007056_3612672.pth... -[2023-09-19 11:32:37,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006888_3526656.pth -[2023-09-19 11:32:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006848_3506176.pth -[2023-09-19 11:32:37,061][73131] Saving new best policy, reward=162959.948! -[2023-09-19 11:32:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5870.9, 300 sec: 6053.7). Total num frames: 7229440. Throughput: 0: 2894.3, 1: 2894.4. Samples: 4793548. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:32:42,045][72530] Avg episode reward: [(0, '152846.566'), (1, '163027.662')] -[2023-09-19 11:32:42,046][73131] Saving new best policy, reward=163027.662! -[2023-09-19 11:32:47,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 6053.8). Total num frames: 7262208. Throughput: 0: 2871.0, 1: 2869.7. Samples: 4829452. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:32:47,044][72530] Avg episode reward: [(0, '152846.566'), (1, '163027.662')] -[2023-09-19 11:32:47,786][73145] Updated weights for policy 0, policy_version 7120 (0.0013) -[2023-09-19 11:32:47,786][73219] Updated weights for policy 1, policy_version 7080 (0.0013) -[2023-09-19 11:32:52,043][72530] Fps is (10 sec: 6553.5, 60 sec: 5870.9, 300 sec: 6081.5). Total num frames: 7294976. Throughput: 0: 2870.4, 1: 2870.4. Samples: 4847430. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:32:52,044][72530] Avg episode reward: [(0, '158683.654'), (1, '163077.657')] -[2023-09-19 11:32:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007104_3637248.pth... -[2023-09-19 11:32:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007144_3657728.pth... -[2023-09-19 11:32:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000006936_3551232.pth -[2023-09-19 11:32:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000006976_3571712.pth -[2023-09-19 11:32:52,062][73131] Saving new best policy, reward=163077.657! -[2023-09-19 11:32:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 6053.7). Total num frames: 7319552. Throughput: 0: 2884.6, 1: 2885.0. Samples: 4882566. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:32:57,044][72530] Avg episode reward: [(0, '159312.798'), (1, '163099.001')] -[2023-09-19 11:32:57,046][73131] Saving new best policy, reward=163099.001! -[2023-09-19 11:33:01,325][73145] Updated weights for policy 0, policy_version 7200 (0.0015) -[2023-09-19 11:33:01,326][73219] Updated weights for policy 1, policy_version 7160 (0.0014) -[2023-09-19 11:33:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 6053.8). Total num frames: 7352320. Throughput: 0: 2874.4, 1: 2873.4. Samples: 4919666. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:02,044][72530] Avg episode reward: [(0, '154677.624'), (1, '162411.671')] -[2023-09-19 11:33:07,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 6053.7). Total num frames: 7385088. Throughput: 0: 2888.5, 1: 2887.3. Samples: 4937376. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:07,044][72530] Avg episode reward: [(0, '154955.709'), (1, '162052.665')] -[2023-09-19 11:33:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007232_3702784.pth... -[2023-09-19 11:33:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007192_3682304.pth... -[2023-09-19 11:33:07,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007056_3612672.pth -[2023-09-19 11:33:07,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007016_3592192.pth -[2023-09-19 11:33:12,043][72530] Fps is (10 sec: 6553.5, 60 sec: 5870.9, 300 sec: 6081.5). Total num frames: 7417856. Throughput: 0: 2946.5, 1: 2945.3. Samples: 4975594. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:12,044][72530] Avg episode reward: [(0, '152248.569'), (1, '160136.631')] -[2023-09-19 11:33:14,671][73219] Updated weights for policy 1, policy_version 7240 (0.0012) -[2023-09-19 11:33:14,672][73145] Updated weights for policy 0, policy_version 7280 (0.0010) -[2023-09-19 11:33:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6053.7). Total num frames: 7442432. Throughput: 0: 2951.6, 1: 2951.3. Samples: 5012094. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:17,044][72530] Avg episode reward: [(0, '148744.362'), (1, '160112.522')] -[2023-09-19 11:33:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6053.7). Total num frames: 7475200. Throughput: 0: 3001.9, 1: 3001.6. Samples: 5029778. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:22,044][72530] Avg episode reward: [(0, '143773.891'), (1, '160005.506')] -[2023-09-19 11:33:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007280_3727360.pth... -[2023-09-19 11:33:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007320_3747840.pth... -[2023-09-19 11:33:22,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007144_3657728.pth -[2023-09-19 11:33:22,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007104_3637248.pth -[2023-09-19 11:33:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6053.7). Total num frames: 7499776. Throughput: 0: 2989.4, 1: 2990.7. Samples: 5062652. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:33:27,045][72530] Avg episode reward: [(0, '140255.889'), (1, '159095.967')] -[2023-09-19 11:33:28,731][73145] Updated weights for policy 0, policy_version 7360 (0.0015) -[2023-09-19 11:33:28,731][73219] Updated weights for policy 1, policy_version 7320 (0.0015) -[2023-09-19 11:33:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 6053.8). Total num frames: 7532544. Throughput: 0: 2981.7, 1: 2982.1. Samples: 5097822. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:33:32,044][72530] Avg episode reward: [(0, '141984.249'), (1, '158018.359')] -[2023-09-19 11:33:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 6026.0). Total num frames: 7557120. Throughput: 0: 2948.8, 1: 2948.9. Samples: 5112824. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:33:37,044][72530] Avg episode reward: [(0, '143692.297'), (1, '152337.381')] -[2023-09-19 11:33:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007360_3768320.pth... -[2023-09-19 11:33:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007400_3788800.pth... 
-[2023-09-19 11:33:37,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007192_3682304.pth -[2023-09-19 11:33:37,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007232_3702784.pth -[2023-09-19 11:33:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6026.0). Total num frames: 7589888. Throughput: 0: 2966.8, 1: 2966.4. Samples: 5149558. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:33:42,045][72530] Avg episode reward: [(0, '143692.297'), (1, '151993.886')] -[2023-09-19 11:33:43,226][73219] Updated weights for policy 1, policy_version 7400 (0.0014) -[2023-09-19 11:33:43,228][73145] Updated weights for policy 0, policy_version 7440 (0.0015) -[2023-09-19 11:33:47,043][72530] Fps is (10 sec: 5734.6, 60 sec: 5870.9, 300 sec: 5998.2). Total num frames: 7614464. Throughput: 0: 2915.9, 1: 2916.2. Samples: 5182110. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:33:47,044][72530] Avg episode reward: [(0, '148573.588'), (1, '147997.923')] -[2023-09-19 11:33:52,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5871.0, 300 sec: 5998.2). Total num frames: 7647232. Throughput: 0: 2925.6, 1: 2925.8. Samples: 5200686. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:52,044][72530] Avg episode reward: [(0, '148799.645'), (1, '147997.923')] -[2023-09-19 11:33:52,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007448_3813376.pth... -[2023-09-19 11:33:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007488_3833856.pth... -[2023-09-19 11:33:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007280_3727360.pth -[2023-09-19 11:33:52,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007320_3747840.pth -[2023-09-19 11:33:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5870.9, 300 sec: 5998.2). Total num frames: 7671808. Throughput: 0: 2852.0, 1: 2851.8. Samples: 5232264. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:33:57,044][72530] Avg episode reward: [(0, '149325.137'), (1, '139943.474')] -[2023-09-19 11:33:57,818][73219] Updated weights for policy 1, policy_version 7480 (0.0015) -[2023-09-19 11:33:57,818][73145] Updated weights for policy 0, policy_version 7520 (0.0012) -[2023-09-19 11:34:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5970.4). Total num frames: 7704576. Throughput: 0: 2837.4, 1: 2838.8. Samples: 5267522. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:02,044][72530] Avg episode reward: [(0, '151014.673'), (1, '136184.034')] -[2023-09-19 11:34:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5942.7). Total num frames: 7729152. Throughput: 0: 2822.8, 1: 2822.9. Samples: 5283834. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:07,045][72530] Avg episode reward: [(0, '151810.022'), (1, '129794.580')] -[2023-09-19 11:34:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007528_3854336.pth... -[2023-09-19 11:34:07,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007568_3874816.pth... -[2023-09-19 11:34:07,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007400_3788800.pth -[2023-09-19 11:34:07,071][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007360_3768320.pth -[2023-09-19 11:34:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5914.9). Total num frames: 7753728. Throughput: 0: 2825.1, 1: 2824.1. Samples: 5316866. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:34:12,044][72530] Avg episode reward: [(0, '148050.631'), (1, '130670.879')] -[2023-09-19 11:34:12,371][73145] Updated weights for policy 0, policy_version 7600 (0.0013) -[2023-09-19 11:34:12,371][73219] Updated weights for policy 1, policy_version 7560 (0.0012) -[2023-09-19 11:34:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5914.9). 
Total num frames: 7786496. Throughput: 0: 2825.1, 1: 2824.7. Samples: 5352062. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:17,044][72530] Avg episode reward: [(0, '149579.084'), (1, '130319.380')] -[2023-09-19 11:34:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5914.9). Total num frames: 7811072. Throughput: 0: 2840.5, 1: 2840.4. Samples: 5368464. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:22,044][72530] Avg episode reward: [(0, '149283.565'), (1, '126902.166')] -[2023-09-19 11:34:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007608_3895296.pth... -[2023-09-19 11:34:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007648_3915776.pth... -[2023-09-19 11:34:22,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007448_3813376.pth -[2023-09-19 11:34:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007488_3833856.pth -[2023-09-19 11:34:27,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5887.3). Total num frames: 7835648. Throughput: 0: 2775.7, 1: 2776.0. Samples: 5399384. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:27,045][72530] Avg episode reward: [(0, '149283.565'), (1, '126902.166')] -[2023-09-19 11:34:27,264][73219] Updated weights for policy 1, policy_version 7640 (0.0012) -[2023-09-19 11:34:27,264][73145] Updated weights for policy 0, policy_version 7680 (0.0015) -[2023-09-19 11:34:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5914.9). Total num frames: 7868416. Throughput: 0: 2824.0, 1: 2824.0. Samples: 5436266. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:32,044][72530] Avg episode reward: [(0, '144232.887'), (1, '135739.316')] -[2023-09-19 11:34:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5887.1). Total num frames: 7892992. Throughput: 0: 2799.2, 1: 2799.5. Samples: 5452628. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:37,044][72530] Avg episode reward: [(0, '144291.890'), (1, '135739.316')] -[2023-09-19 11:34:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007688_3936256.pth... -[2023-09-19 11:34:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007728_3956736.pth... -[2023-09-19 11:34:37,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007528_3854336.pth -[2023-09-19 11:34:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007568_3874816.pth -[2023-09-19 11:34:42,043][72530] Fps is (10 sec: 5324.8, 60 sec: 5529.6, 300 sec: 5873.3). Total num frames: 7921664. Throughput: 0: 2800.1, 1: 2800.4. Samples: 5484288. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:42,044][72530] Avg episode reward: [(0, '144843.661'), (1, '143710.582')] -[2023-09-19 11:34:42,048][73219] Updated weights for policy 1, policy_version 7720 (0.0013) -[2023-09-19 11:34:42,048][73145] Updated weights for policy 0, policy_version 7760 (0.0015) -[2023-09-19 11:34:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.8, 300 sec: 5887.1). Total num frames: 7950336. Throughput: 0: 2757.9, 1: 2756.6. Samples: 5515674. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:34:47,044][72530] Avg episode reward: [(0, '143205.243'), (1, '147140.750')] -[2023-09-19 11:34:52,043][72530] Fps is (10 sec: 5324.6, 60 sec: 5461.3, 300 sec: 5859.4). Total num frames: 7974912. Throughput: 0: 2780.3, 1: 2779.1. Samples: 5534010. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:34:52,044][72530] Avg episode reward: [(0, '141821.454'), (1, '149043.623')] -[2023-09-19 11:34:52,061][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007776_3981312.pth... -[2023-09-19 11:34:52,063][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007816_4001792.pth... 
-[2023-09-19 11:34:52,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007608_3895296.pth -[2023-09-19 11:34:52,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007648_3915776.pth -[2023-09-19 11:34:56,255][73145] Updated weights for policy 0, policy_version 7840 (0.0012) -[2023-09-19 11:34:56,255][73219] Updated weights for policy 1, policy_version 7800 (0.0014) -[2023-09-19 11:34:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5873.2). Total num frames: 8007680. Throughput: 0: 2815.3, 1: 2815.3. Samples: 5570242. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:34:57,044][72530] Avg episode reward: [(0, '138372.363'), (1, '156468.221')] -[2023-09-19 11:35:02,043][72530] Fps is (10 sec: 5734.6, 60 sec: 5461.3, 300 sec: 5859.4). Total num frames: 8032256. Throughput: 0: 2781.8, 1: 2782.4. Samples: 5602448. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:35:02,044][72530] Avg episode reward: [(0, '138372.363'), (1, '157445.674')] -[2023-09-19 11:35:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5859.4). Total num frames: 8065024. Throughput: 0: 2774.6, 1: 2774.7. Samples: 5618186. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:35:07,044][72530] Avg episode reward: [(0, '142866.072'), (1, '153809.911')] -[2023-09-19 11:35:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007896_4042752.pth... -[2023-09-19 11:35:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007856_4022272.pth... 
-[2023-09-19 11:35:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007728_3956736.pth -[2023-09-19 11:35:07,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007688_3936256.pth -[2023-09-19 11:35:11,540][73219] Updated weights for policy 1, policy_version 7880 (0.0011) -[2023-09-19 11:35:11,541][73145] Updated weights for policy 0, policy_version 7920 (0.0013) -[2023-09-19 11:35:12,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5831.6). Total num frames: 8089600. Throughput: 0: 2803.7, 1: 2803.8. Samples: 5651722. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:35:12,044][72530] Avg episode reward: [(0, '143261.806'), (1, '153809.911')] -[2023-09-19 11:35:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5859.4). Total num frames: 8122368. Throughput: 0: 2571.9, 1: 2571.7. Samples: 5667726. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:35:17,044][72530] Avg episode reward: [(0, '145757.121'), (1, '154277.285')] -[2023-09-19 11:35:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5859.4). Total num frames: 8146944. Throughput: 0: 2785.8, 1: 2785.5. Samples: 5703340. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:35:22,045][72530] Avg episode reward: [(0, '147899.769'), (1, '154650.090')] -[2023-09-19 11:35:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000007936_4063232.pth... -[2023-09-19 11:35:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000007976_4083712.pth... 
-[2023-09-19 11:35:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007816_4001792.pth -[2023-09-19 11:35:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007776_3981312.pth -[2023-09-19 11:35:25,473][73145] Updated weights for policy 0, policy_version 8000 (0.0013) -[2023-09-19 11:35:25,474][73219] Updated weights for policy 1, policy_version 7960 (0.0013) -[2023-09-19 11:35:27,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5859.4). Total num frames: 8179712. Throughput: 0: 2814.4, 1: 2814.4. Samples: 5737588. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:35:27,044][72530] Avg episode reward: [(0, '152749.656'), (1, '154559.667')] -[2023-09-19 11:35:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5859.4). Total num frames: 8204288. Throughput: 0: 2811.2, 1: 2811.5. Samples: 5768692. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:35:32,044][72530] Avg episode reward: [(0, '154504.272'), (1, '154707.152')] -[2023-09-19 11:35:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5831.6). Total num frames: 8228864. Throughput: 0: 2771.9, 1: 2773.0. Samples: 5783532. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:35:37,044][72530] Avg episode reward: [(0, '154504.272'), (1, '154289.118')] -[2023-09-19 11:35:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008056_4124672.pth... -[2023-09-19 11:35:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008016_4104192.pth... 
-[2023-09-19 11:35:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007896_4042752.pth
-[2023-09-19 11:35:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007856_4022272.pth
-[2023-09-19 11:35:41,038][73145] Updated weights for policy 0, policy_version 8080 (0.0011)
-[2023-09-19 11:35:41,039][73219] Updated weights for policy 1, policy_version 8040 (0.0014)
-[2023-09-19 11:35:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5529.6, 300 sec: 5803.8). Total num frames: 8253440. Throughput: 0: 2734.4, 1: 2734.8. Samples: 5816354. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:35:42,044][72530] Avg episode reward: [(0, '161395.630'), (1, '158048.524')]
-[2023-09-19 11:35:42,045][73130] Saving new best policy, reward=161395.630!
-[2023-09-19 11:35:47,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5803.8). Total num frames: 8286208. Throughput: 0: 2761.4, 1: 2761.2. Samples: 5850962. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:35:47,044][72530] Avg episode reward: [(0, '159754.689'), (1, '158048.524')]
-[2023-09-19 11:35:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5776.1). Total num frames: 8310784. Throughput: 0: 2780.5, 1: 2780.5. Samples: 5868434. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:35:52,045][72530] Avg episode reward: [(0, '160738.664'), (1, '159082.738')]
-[2023-09-19 11:35:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008096_4145152.pth...
-[2023-09-19 11:35:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008136_4165632.pth...
-[2023-09-19 11:35:52,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000007936_4063232.pth
-[2023-09-19 11:35:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000007976_4083712.pth
-[2023-09-19 11:35:55,537][73145] Updated weights for policy 0, policy_version 8160 (0.0015)
-[2023-09-19 11:35:55,537][73219] Updated weights for policy 1, policy_version 8120 (0.0014)
-[2023-09-19 11:35:57,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5748.3). Total num frames: 8335360. Throughput: 0: 2770.3, 1: 2770.4. Samples: 5901056. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:35:57,044][72530] Avg episode reward: [(0, '162764.036'), (1, '158805.704')]
-[2023-09-19 11:35:57,046][73130] Saving new best policy, reward=162764.036!
-[2023-09-19 11:36:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5776.1). Total num frames: 8368128. Throughput: 0: 2965.8, 1: 2965.9. Samples: 5934650. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:36:02,044][72530] Avg episode reward: [(0, '162349.625'), (1, '156768.108')]
-[2023-09-19 11:36:07,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.4, 300 sec: 5748.3). Total num frames: 8392704. Throughput: 0: 2770.8, 1: 2770.8. Samples: 5952710. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:36:07,043][72530] Avg episode reward: [(0, '161456.743'), (1, '158066.203')]
-[2023-09-19 11:36:07,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008176_4186112.pth...
-[2023-09-19 11:36:07,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008216_4206592.pth...
-[2023-09-19 11:36:07,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008016_4104192.pth
-[2023-09-19 11:36:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008056_4124672.pth
-[2023-09-19 11:36:10,148][73145] Updated weights for policy 0, policy_version 8240 (0.0015)
-[2023-09-19 11:36:10,148][73219] Updated weights for policy 1, policy_version 8200 (0.0015)
-[2023-09-19 11:36:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.8, 300 sec: 5748.3). Total num frames: 8425472. Throughput: 0: 2751.9, 1: 2751.9. Samples: 5985260. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:36:12,044][72530] Avg episode reward: [(0, '161456.743'), (1, '157423.763')]
-[2023-09-19 11:36:17,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5720.5). Total num frames: 8450048. Throughput: 0: 2744.1, 1: 2743.7. Samples: 6015644. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:36:17,044][72530] Avg episode reward: [(0, '161135.132'), (1, '158376.754')]
-[2023-09-19 11:36:22,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5720.5). Total num frames: 8474624. Throughput: 0: 2786.1, 1: 2785.0. Samples: 6034230. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:36:22,044][72530] Avg episode reward: [(0, '160598.162'), (1, '158376.754')]
-[2023-09-19 11:36:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008256_4227072.pth...
-[2023-09-19 11:36:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008296_4247552.pth...
-[2023-09-19 11:36:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008136_4165632.pth
-[2023-09-19 11:36:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008096_4145152.pth
-[2023-09-19 11:36:25,677][73219] Updated weights for policy 1, policy_version 8280 (0.0011)
-[2023-09-19 11:36:25,678][73145] Updated weights for policy 0, policy_version 8320 (0.0011)
-[2023-09-19 11:36:27,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5692.7). Total num frames: 8499200. Throughput: 0: 2751.8, 1: 2751.2. Samples: 6063990. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:36:27,044][72530] Avg episode reward: [(0, '159747.417'), (1, '153840.543')]
-[2023-09-19 11:36:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5692.8). Total num frames: 8531968. Throughput: 0: 2749.7, 1: 2749.3. Samples: 6098416. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:36:32,044][72530] Avg episode reward: [(0, '154229.110'), (1, '153123.695')]
-[2023-09-19 11:36:37,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5597.9, 300 sec: 5720.5). Total num frames: 8564736. Throughput: 0: 2758.1, 1: 2758.1. Samples: 6116658. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:36:37,044][72530] Avg episode reward: [(0, '149061.433'), (1, '154038.880')]
-[2023-09-19 11:36:37,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008344_4272128.pth...
-[2023-09-19 11:36:37,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008384_4292608.pth...
-[2023-09-19 11:36:37,057][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008176_4186112.pth
-[2023-09-19 11:36:37,057][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008216_4206592.pth
-[2023-09-19 11:36:39,594][73219] Updated weights for policy 1, policy_version 8360 (0.0015)
-[2023-09-19 11:36:39,594][73145] Updated weights for policy 0, policy_version 8400 (0.0016)
-[2023-09-19 11:36:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5692.7). Total num frames: 8589312. Throughput: 0: 2789.9, 1: 2790.9. Samples: 6152192. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:36:42,044][72530] Avg episode reward: [(0, '144058.161'), (1, '154059.173')]
-[2023-09-19 11:36:47,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5692.7). Total num frames: 8622080. Throughput: 0: 2794.4, 1: 2794.6. Samples: 6186154. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:36:47,044][72530] Avg episode reward: [(0, '140921.567'), (1, '153742.312')]
-[2023-09-19 11:36:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5665.0). Total num frames: 8646656. Throughput: 0: 2793.8, 1: 2793.7. Samples: 6204150. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:36:52,044][72530] Avg episode reward: [(0, '137894.258'), (1, '154893.749')]
-[2023-09-19 11:36:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008424_4313088.pth...
-[2023-09-19 11:36:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008464_4333568.pth...
-[2023-09-19 11:36:52,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008256_4227072.pth
-[2023-09-19 11:36:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008296_4247552.pth
-[2023-09-19 11:36:53,707][73145] Updated weights for policy 0, policy_version 8480 (0.0014)
-[2023-09-19 11:36:53,707][73219] Updated weights for policy 1, policy_version 8440 (0.0015)
-[2023-09-19 11:36:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5692.7). Total num frames: 8679424. Throughput: 0: 2832.6, 1: 2832.4. Samples: 6240182. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:36:57,044][72530] Avg episode reward: [(0, '137894.258'), (1, '155112.103')]
-[2023-09-19 11:37:02,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5692.7). Total num frames: 8712192. Throughput: 0: 2894.2, 1: 2894.2. Samples: 6276120. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:37:02,045][72530] Avg episode reward: [(0, '135885.742'), (1, '160768.230')]
-[2023-09-19 11:37:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 8736768. Throughput: 0: 2876.4, 1: 2876.3. Samples: 6293104. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:37:07,045][72530] Avg episode reward: [(0, '135885.742'), (1, '160768.230')]
-[2023-09-19 11:37:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008512_4358144.pth...
-[2023-09-19 11:37:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008552_4378624.pth...
-[2023-09-19 11:37:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008344_4272128.pth
-[2023-09-19 11:37:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008384_4292608.pth
-[2023-09-19 11:37:07,529][73145] Updated weights for policy 0, policy_version 8560 (0.0012)
-[2023-09-19 11:37:07,529][73219] Updated weights for policy 1, policy_version 8520 (0.0013)
-[2023-09-19 11:37:12,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5692.7). Total num frames: 8769536. Throughput: 0: 2951.7, 1: 2951.7. Samples: 6329646. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:37:12,044][72530] Avg episode reward: [(0, '142564.431'), (1, '161744.693')]
-[2023-09-19 11:37:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 8794112. Throughput: 0: 2942.3, 1: 2942.5. Samples: 6363234. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:37:17,044][72530] Avg episode reward: [(0, '142915.479'), (1, '161744.693')]
-[2023-09-19 11:37:21,639][73219] Updated weights for policy 1, policy_version 8600 (0.0011)
-[2023-09-19 11:37:21,640][73145] Updated weights for policy 0, policy_version 8640 (0.0013)
-[2023-09-19 11:37:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5870.9, 300 sec: 5692.7). Total num frames: 8826880. Throughput: 0: 2933.3, 1: 2933.3. Samples: 6380658. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:37:22,045][72530] Avg episode reward: [(0, '143397.695'), (1, '160966.343')]
-[2023-09-19 11:37:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008600_4403200.pth...
-[2023-09-19 11:37:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008640_4423680.pth...
-[2023-09-19 11:37:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008424_4313088.pth
-[2023-09-19 11:37:22,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008464_4333568.pth
-[2023-09-19 11:37:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5665.0). Total num frames: 8851456. Throughput: 0: 2899.7, 1: 2898.4. Samples: 6413104. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:37:27,044][72530] Avg episode reward: [(0, '144691.029'), (1, '161003.911')]
-[2023-09-19 11:37:32,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 8876032. Throughput: 0: 2857.8, 1: 2857.5. Samples: 6443340. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:37:32,044][72530] Avg episode reward: [(0, '145505.352'), (1, '159886.354')]
-[2023-09-19 11:37:37,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5597.8, 300 sec: 5665.0). Total num frames: 8900608. Throughput: 0: 2838.1, 1: 2838.1. Samples: 6459580. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:37:37,045][72530] Avg episode reward: [(0, '148461.800'), (1, '159898.604')]
-[2023-09-19 11:37:37,054][73145] Updated weights for policy 0, policy_version 8720 (0.0016)
-[2023-09-19 11:37:37,054][73219] Updated weights for policy 1, policy_version 8680 (0.0016)
-[2023-09-19 11:37:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008720_4464640.pth...
-[2023-09-19 11:37:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008680_4444160.pth...
-[2023-09-19 11:37:37,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008512_4358144.pth
-[2023-09-19 11:37:37,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008552_4378624.pth
-[2023-09-19 11:37:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 8933376. Throughput: 0: 2844.0, 1: 2844.8. Samples: 6496178. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:37:42,045][72530] Avg episode reward: [(0, '148461.800'), (1, '159908.679')]
-[2023-09-19 11:37:47,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5637.2). Total num frames: 8957952. Throughput: 0: 2809.0, 1: 2810.2. Samples: 6528986. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:37:47,044][72530] Avg episode reward: [(0, '139457.708'), (1, '160880.552')]
-[2023-09-19 11:37:52,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5637.2). Total num frames: 8982528. Throughput: 0: 2798.8, 1: 2799.8. Samples: 6545042. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:37:52,045][72530] Avg episode reward: [(0, '139457.708'), (1, '160880.552')]
-[2023-09-19 11:37:52,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008752_4481024.pth...
-[2023-09-19 11:37:52,057][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008792_4501504.pth...
-[2023-09-19 11:37:52,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008600_4403200.pth
-[2023-09-19 11:37:52,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008640_4423680.pth
-[2023-09-19 11:37:52,458][73219] Updated weights for policy 1, policy_version 8760 (0.0012)
-[2023-09-19 11:37:52,459][73145] Updated weights for policy 0, policy_version 8800 (0.0011)
-[2023-09-19 11:37:57,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 9007104. Throughput: 0: 2704.4, 1: 2704.2. Samples: 6573032. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:37:57,044][72530] Avg episode reward: [(0, '140620.851'), (1, '160021.637')]
-[2023-09-19 11:38:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 9039872. Throughput: 0: 2697.8, 1: 2697.6. Samples: 6606026. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:02,045][72530] Avg episode reward: [(0, '142260.932'), (1, '161163.491')]
-[2023-09-19 11:38:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5581.7). Total num frames: 9064448. Throughput: 0: 2677.0, 1: 2676.9. Samples: 6621582. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:38:07,045][72530] Avg episode reward: [(0, '140483.928'), (1, '159989.158')]
-[2023-09-19 11:38:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008832_4521984.pth...
-[2023-09-19 11:38:07,057][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008872_4542464.pth...
-[2023-09-19 11:38:07,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008680_4444160.pth
-[2023-09-19 11:38:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008720_4464640.pth
-[2023-09-19 11:38:07,390][73145] Updated weights for policy 0, policy_version 8880 (0.0012)
-[2023-09-19 11:38:07,390][73219] Updated weights for policy 1, policy_version 8840 (0.0013)
-[2023-09-19 11:38:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 9097216. Throughput: 0: 2721.5, 1: 2721.4. Samples: 6658036. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:12,045][72530] Avg episode reward: [(0, '138787.732'), (1, '159982.414')]
-[2023-09-19 11:38:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5581.7). Total num frames: 9121792. Throughput: 0: 2759.0, 1: 2758.8. Samples: 6691642. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:17,044][72530] Avg episode reward: [(0, '138787.732'), (1, '159979.750')]
-[2023-09-19 11:38:22,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 9146368. Throughput: 0: 2732.9, 1: 2733.0. Samples: 6705544. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:22,045][72530] Avg episode reward: [(0, '137723.465'), (1, '159412.068')]
-[2023-09-19 11:38:22,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008912_4562944.pth...
-[2023-09-19 11:38:22,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000008952_4583424.pth...
-[2023-09-19 11:38:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008752_4481024.pth
-[2023-09-19 11:38:22,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008792_4501504.pth
-[2023-09-19 11:38:22,663][73219] Updated weights for policy 1, policy_version 8920 (0.0014)
-[2023-09-19 11:38:22,663][73145] Updated weights for policy 0, policy_version 8960 (0.0016)
-[2023-09-19 11:38:27,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5553.9). Total num frames: 9170944. Throughput: 0: 2668.7, 1: 2667.9. Samples: 6736324. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:27,044][72530] Avg episode reward: [(0, '137729.460'), (1, '159412.184')]
-[2023-09-19 11:38:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5581.7). Total num frames: 9203712. Throughput: 0: 2658.7, 1: 2657.8. Samples: 6768230. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:32,044][72530] Avg episode reward: [(0, '137016.334'), (1, '160295.806')]
-[2023-09-19 11:38:37,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5553.9). Total num frames: 9228288. Throughput: 0: 2671.2, 1: 2670.4. Samples: 6785414. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:38:37,044][72530] Avg episode reward: [(0, '135131.016'), (1, '159223.120')]
-[2023-09-19 11:38:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009032_4624384.pth...
-[2023-09-19 11:38:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000008992_4603904.pth...
-[2023-09-19 11:38:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008832_4521984.pth
-[2023-09-19 11:38:37,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008872_4542464.pth
-[2023-09-19 11:38:37,583][73145] Updated weights for policy 0, policy_version 9040 (0.0013)
-[2023-09-19 11:38:37,583][73219] Updated weights for policy 1, policy_version 9000 (0.0014)
-[2023-09-19 11:38:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5553.9). Total num frames: 9252864. Throughput: 0: 2718.7, 1: 2718.9. Samples: 6817726. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:38:42,044][72530] Avg episode reward: [(0, '136734.966'), (1, '159206.896')]
-[2023-09-19 11:38:47,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9277440. Throughput: 0: 2507.6, 1: 2694.9. Samples: 6840140. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:47,044][72530] Avg episode reward: [(0, '137794.300'), (1, '161324.133')]
-[2023-09-19 11:38:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5553.9). Total num frames: 9310208. Throughput: 0: 2701.8, 1: 2702.8. Samples: 6864788. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:52,044][72530] Avg episode reward: [(0, '137794.300'), (1, '161324.133')]
-[2023-09-19 11:38:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009112_4665344.pth...
-[2023-09-19 11:38:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009072_4644864.pth...
-[2023-09-19 11:38:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008912_4562944.pth
-[2023-09-19 11:38:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000008952_4583424.pth
-[2023-09-19 11:38:53,194][73219] Updated weights for policy 1, policy_version 9080 (0.0013)
-[2023-09-19 11:38:53,194][73145] Updated weights for policy 0, policy_version 9120 (0.0011)
-[2023-09-19 11:38:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9334784. Throughput: 0: 2646.8, 1: 2646.8. Samples: 6896248. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:38:57,044][72530] Avg episode reward: [(0, '137042.173'), (1, '160811.317')]
-[2023-09-19 11:39:02,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9359360. Throughput: 0: 2603.1, 1: 2603.4. Samples: 6925932. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:39:02,044][72530] Avg episode reward: [(0, '137522.213'), (1, '160783.376')]
-[2023-09-19 11:39:07,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9383936. Throughput: 0: 2590.8, 1: 2591.2. Samples: 6938732. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:39:07,044][72530] Avg episode reward: [(0, '138050.348'), (1, '161331.845')]
-[2023-09-19 11:39:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009184_4702208.pth...
-[2023-09-19 11:39:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009144_4681728.pth...
-[2023-09-19 11:39:07,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009032_4624384.pth
-[2023-09-19 11:39:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000008992_4603904.pth
-[2023-09-19 11:39:09,053][73219] Updated weights for policy 1, policy_version 9160 (0.0013)
-[2023-09-19 11:39:09,054][73145] Updated weights for policy 0, policy_version 9200 (0.0011)
-[2023-09-19 11:39:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9416704. Throughput: 0: 2658.6, 1: 2658.4. Samples: 6975588. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:39:12,044][72530] Avg episode reward: [(0, '140041.122'), (1, '160346.458')]
-[2023-09-19 11:39:17,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9441280. Throughput: 0: 2631.2, 1: 2630.8. Samples: 7005018. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:17,044][72530] Avg episode reward: [(0, '140041.122'), (1, '160346.458')]
-[2023-09-19 11:39:22,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9465856. Throughput: 0: 2645.9, 1: 2646.0. Samples: 7023548. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:22,045][72530] Avg episode reward: [(0, '141321.149'), (1, '160469.876')]
-[2023-09-19 11:39:22,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009264_4743168.pth...
-[2023-09-19 11:39:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009224_4722688.pth...
-[2023-09-19 11:39:22,057][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009112_4665344.pth
-[2023-09-19 11:39:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009072_4644864.pth
-[2023-09-19 11:39:23,736][73145] Updated weights for policy 0, policy_version 9280 (0.0016)
-[2023-09-19 11:39:23,736][73219] Updated weights for policy 1, policy_version 9240 (0.0012)
-[2023-09-19 11:39:27,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9498624. Throughput: 0: 2682.5, 1: 2682.2. Samples: 7059138. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:27,044][72530] Avg episode reward: [(0, '143117.745'), (1, '160469.876')]
-[2023-09-19 11:39:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5324.8, 300 sec: 5526.1). Total num frames: 9523200. Throughput: 0: 2906.0, 1: 2718.8. Samples: 7093254. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:32,044][72530] Avg episode reward: [(0, '139677.367'), (1, '160492.361')]
-[2023-09-19 11:39:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.4, 300 sec: 5540.0). Total num frames: 9555968. Throughput: 0: 2725.9, 1: 2724.9. Samples: 7110074. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:39:37,044][72530] Avg episode reward: [(0, '137931.041'), (1, '160516.950')]
-[2023-09-19 11:39:37,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009352_4788224.pth...
-[2023-09-19 11:39:37,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009312_4767744.pth...
-[2023-09-19 11:39:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009184_4702208.pth
-[2023-09-19 11:39:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009144_4681728.pth
-[2023-09-19 11:39:38,025][73219] Updated weights for policy 1, policy_version 9320 (0.0014)
-[2023-09-19 11:39:38,025][73145] Updated weights for policy 0, policy_version 9360 (0.0014)
-[2023-09-19 11:39:42,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9580544. Throughput: 0: 2739.1, 1: 2740.1. Samples: 7142812. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:42,045][72530] Avg episode reward: [(0, '135675.016'), (1, '160545.833')]
-[2023-09-19 11:39:47,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9605120. Throughput: 0: 2730.0, 1: 2729.9. Samples: 7171628. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:47,044][72530] Avg episode reward: [(0, '134266.068'), (1, '160294.298')]
-[2023-09-19 11:39:52,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 9629696. Throughput: 0: 2788.4, 1: 2787.8. Samples: 7189664. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:52,045][72530] Avg episode reward: [(0, '134266.068'), (1, '160294.298')]
-[2023-09-19 11:39:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009384_4804608.pth...
-[2023-09-19 11:39:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009424_4825088.pth...
-[2023-09-19 11:39:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009264_4743168.pth
-[2023-09-19 11:39:52,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009224_4722688.pth
-[2023-09-19 11:39:53,740][73145] Updated weights for policy 0, policy_version 9440 (0.0014)
-[2023-09-19 11:39:53,740][73219] Updated weights for policy 1, policy_version 9400 (0.0011)
-[2023-09-19 11:39:57,043][72530] Fps is (10 sec: 5734.2, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9662464. Throughput: 0: 2746.2, 1: 2746.2. Samples: 7222744. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:39:57,045][72530] Avg episode reward: [(0, '127890.525'), (1, '162301.441')]
-[2023-09-19 11:40:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 9687040. Throughput: 0: 2803.2, 1: 2803.9. Samples: 7257338. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:40:02,044][72530] Avg episode reward: [(0, '126377.631'), (1, '162308.865')]
-[2023-09-19 11:40:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.8, 300 sec: 5526.1). Total num frames: 9719808. Throughput: 0: 2767.5, 1: 2767.4. Samples: 7272620. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:40:07,045][72530] Avg episode reward: [(0, '126851.345'), (1, '162306.418')]
-[2023-09-19 11:40:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009512_4870144.pth...
-[2023-09-19 11:40:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009472_4849664.pth...
-[2023-09-19 11:40:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009352_4788224.pth
-[2023-09-19 11:40:07,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009312_4767744.pth
-[2023-09-19 11:40:08,284][73145] Updated weights for policy 0, policy_version 9520 (0.0013)
-[2023-09-19 11:40:08,284][73219] Updated weights for policy 1, policy_version 9480 (0.0012)
-[2023-09-19 11:40:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 9744384. Throughput: 0: 2756.3, 1: 2757.5. Samples: 7307258. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:40:12,044][72530] Avg episode reward: [(0, '124049.731'), (1, '162755.710')]
-[2023-09-19 11:40:17,043][72530] Fps is (10 sec: 5734.6, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 9777152. Throughput: 0: 2743.8, 1: 2743.8. Samples: 7340196. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:40:17,044][72530] Avg episode reward: [(0, '125342.829'), (1, '162760.657')]
-[2023-09-19 11:40:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 9801728. Throughput: 0: 2757.9, 1: 2758.1. Samples: 7358298. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:40:22,044][72530] Avg episode reward: [(0, '126869.413'), (1, '162774.053')]
-[2023-09-19 11:40:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009552_4890624.pth...
-[2023-09-19 11:40:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009592_4911104.pth...
-[2023-09-19 11:40:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009384_4804608.pth
-[2023-09-19 11:40:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009424_4825088.pth
-[2023-09-19 11:40:22,807][73219] Updated weights for policy 1, policy_version 9560 (0.0014)
-[2023-09-19 11:40:22,808][73145] Updated weights for policy 0, policy_version 9600 (0.0015)
-[2023-09-19 11:40:27,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 9826304. Throughput: 0: 2762.2, 1: 2761.2. Samples: 7391364. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:40:27,044][72530] Avg episode reward: [(0, '126869.413'), (1, '163199.726')]
-[2023-09-19 11:40:27,045][73131] Saving new best policy, reward=163199.726!
-[2023-09-19 11:40:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 9859072. Throughput: 0: 2813.1, 1: 2813.0. Samples: 7424800. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:40:32,044][72530] Avg episode reward: [(0, '127876.969'), (1, '163246.788')]
-[2023-09-19 11:40:32,045][73131] Saving new best policy, reward=163246.788!
-[2023-09-19 11:40:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 9883648. Throughput: 0: 2764.8, 1: 2765.1. Samples: 7438508. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:40:37,044][72530] Avg episode reward: [(0, '128261.706'), (1, '163246.788')]
-[2023-09-19 11:40:37,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009632_4931584.pth...
-[2023-09-19 11:40:37,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009672_4952064.pth...
-[2023-09-19 11:40:37,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009472_4849664.pth
-[2023-09-19 11:40:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009512_4870144.pth
-[2023-09-19 11:40:37,743][73145] Updated weights for policy 0, policy_version 9680 (0.0015)
-[2023-09-19 11:40:37,744][73219] Updated weights for policy 1, policy_version 9640 (0.0013)
-[2023-09-19 11:40:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.4, 300 sec: 5498.4). Total num frames: 9908224. Throughput: 0: 2790.9, 1: 2791.1. Samples: 7473936. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:40:42,044][72530] Avg episode reward: [(0, '127401.924'), (1, '163340.260')]
-[2023-09-19 11:40:42,046][73131] Saving new best policy, reward=163340.260!
-[2023-09-19 11:40:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 9940992. Throughput: 0: 2779.8, 1: 2779.3. Samples: 7507494. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:40:47,044][72530] Avg episode reward: [(0, '126125.877'), (1, '163360.879')]
-[2023-09-19 11:40:47,045][73131] Saving new best policy, reward=163360.879!
-[2023-09-19 11:40:51,985][73219] Updated weights for policy 1, policy_version 9720 (0.0013)
-[2023-09-19 11:40:51,986][73145] Updated weights for policy 0, policy_version 9760 (0.0013)
-[2023-09-19 11:40:52,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 9973760. Throughput: 0: 2804.9, 1: 2805.1. Samples: 7525070. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:40:52,044][72530] Avg episode reward: [(0, '124501.715'), (1, '163360.926')]
-[2023-09-19 11:40:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009720_4976640.pth...
-[2023-09-19 11:40:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009760_4997120.pth...
-[2023-09-19 11:40:52,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009552_4890624.pth
-[2023-09-19 11:40:52,060][73131] Saving new best policy, reward=163360.926!
-[2023-09-19 11:40:52,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009592_4911104.pth
-[2023-09-19 11:40:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 9998336. Throughput: 0: 2820.8, 1: 2820.6. Samples: 7561120. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:40:57,044][72530] Avg episode reward: [(0, '121716.208'), (1, '162980.361')]
-[2023-09-19 11:41:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10031104. Throughput: 0: 2839.9, 1: 2840.1. Samples: 7595796. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:41:02,044][72530] Avg episode reward: [(0, '123047.712'), (1, '162118.736')]
-[2023-09-19 11:41:05,816][73145] Updated weights for policy 0, policy_version 9840 (0.0013)
-[2023-09-19 11:41:05,816][73219] Updated weights for policy 1, policy_version 9800 (0.0011)
-[2023-09-19 11:41:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 10055680. Throughput: 0: 2842.4, 1: 2842.3. Samples: 7614106. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:07,044][72530] Avg episode reward: [(0, '121775.268'), (1, '162040.085')]
-[2023-09-19 11:41:07,093][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009808_5021696.pth...
-[2023-09-19 11:41:07,097][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009632_4931584.pth
-[2023-09-19 11:41:07,108][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009848_5042176.pth...
-[2023-09-19 11:41:07,112][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009672_4952064.pth
-[2023-09-19 11:41:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10088448. Throughput: 0: 2884.6, 1: 2885.2. Samples: 7651008. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:12,044][72530] Avg episode reward: [(0, '121120.688'), (1, '162013.153')]
-[2023-09-19 11:41:17,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5581.7). Total num frames: 10121216. Throughput: 0: 2908.3, 1: 2908.4. Samples: 7686552. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:17,044][72530] Avg episode reward: [(0, '124879.108'), (1, '161974.000')]
-[2023-09-19 11:41:19,279][73145] Updated weights for policy 0, policy_version 9920 (0.0012)
-[2023-09-19 11:41:19,279][73219] Updated weights for policy 1, policy_version 9880 (0.0014)
-[2023-09-19 11:41:22,043][72530] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5609.4). Total num frames: 10153984. Throughput: 0: 2966.0, 1: 2965.8. Samples: 7705438. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:22,044][72530] Avg episode reward: [(0, '124879.108'), (1, '161966.557')]
-[2023-09-19 11:41:22,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000009936_5087232.pth...
-[2023-09-19 11:41:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009896_5066752.pth...
-[2023-09-19 11:41:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009720_4976640.pth
-[2023-09-19 11:41:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009760_4997120.pth
-[2023-09-19 11:41:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5581.7). Total num frames: 10178560. Throughput: 0: 2936.0, 1: 2936.1. Samples: 7738182. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:27,045][72530] Avg episode reward: [(0, '127104.430'), (1, '161743.941')]
-[2023-09-19 11:41:32,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10203136. Throughput: 0: 2947.0, 1: 2947.1. Samples: 7772726. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:32,044][72530] Avg episode reward: [(0, '128493.741'), (1, '161743.941')]
-[2023-09-19 11:41:33,872][73219] Updated weights for policy 1, policy_version 9960 (0.0008)
-[2023-09-19 11:41:33,873][73145] Updated weights for policy 0, policy_version 10000 (0.0013)
-[2023-09-19 11:41:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5581.7). Total num frames: 10235904. Throughput: 0: 2945.2, 1: 2945.2. Samples: 7790142. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:37,044][72530] Avg episode reward: [(0, '127668.095'), (1, '162203.421')]
-[2023-09-19 11:41:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000009976_5107712.pth...
-[2023-09-19 11:41:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010016_5128192.pth...
-[2023-09-19 11:41:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009808_5021696.pth
-[2023-09-19 11:41:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009848_5042176.pth
-[2023-09-19 11:41:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5553.9). Total num frames: 10260480. Throughput: 0: 2933.3, 1: 2932.4. Samples: 7825076. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:42,044][72530] Avg episode reward: [(0, '129918.117'), (1, '162258.975')]
-[2023-09-19 11:41:47,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 5581.7). Total num frames: 10293248. Throughput: 0: 2916.2, 1: 2916.0. Samples: 7858242. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:47,044][72530] Avg episode reward: [(0, '127362.808'), (1, '162303.333')]
-[2023-09-19 11:41:47,993][73219] Updated weights for policy 1, policy_version 10040 (0.0015)
-[2023-09-19 11:41:47,993][73145] Updated weights for policy 0, policy_version 10080 (0.0013)
-[2023-09-19 11:41:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10317824. Throughput: 0: 2920.8, 1: 2920.8. Samples: 7876978. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:52,044][72530] Avg episode reward: [(0, '126406.406'), (1, '162333.274')]
-[2023-09-19 11:41:52,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010096_5169152.pth...
-[2023-09-19 11:41:52,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010056_5148672.pth...
-[2023-09-19 11:41:52,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000009936_5087232.pth
-[2023-09-19 11:41:52,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009896_5066752.pth
-[2023-09-19 11:41:57,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 5553.9). Total num frames: 10350592. Throughput: 0: 2853.3, 1: 2852.7. Samples: 7907776. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:41:57,044][72530] Avg episode reward: [(0, '123833.586'), (1, '162357.732')]
-[2023-09-19 11:42:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10375168. Throughput: 0: 2849.7, 1: 2849.6. Samples: 7943016. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:02,044][72530] Avg episode reward: [(0, '119983.882'), (1, '162619.479')]
-[2023-09-19 11:42:02,675][73219] Updated weights for policy 1, policy_version 10120 (0.0014)
-[2023-09-19 11:42:02,676][73145] Updated weights for policy 0, policy_version 10160 (0.0015)
-[2023-09-19 11:42:07,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5734.4, 300 sec: 5526.1). Total num frames: 10399744. Throughput: 0: 2834.9, 1: 2834.8. Samples: 7960578. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:07,045][72530] Avg episode reward: [(0, '119983.882'), (1, '162619.479')]
-[2023-09-19 11:42:07,089][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010144_5193728.pth...
-[2023-09-19 11:42:07,090][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010184_5214208.pth...
-[2023-09-19 11:42:07,093][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000009976_5107712.pth
-[2023-09-19 11:42:07,096][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010016_5128192.pth
-[2023-09-19 11:42:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 10432512. Throughput: 0: 2856.8, 1: 2857.6. Samples: 7995330. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:12,044][72530] Avg episode reward: [(0, '117278.968'), (1, '162975.817')]
-[2023-09-19 11:42:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 10457088. Throughput: 0: 2837.4, 1: 2838.5. Samples: 8028138. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:17,044][72530] Avg episode reward: [(0, '117278.968'), (1, '162975.817')]
-[2023-09-19 11:42:17,136][73219] Updated weights for policy 1, policy_version 10200 (0.0013)
-[2023-09-19 11:42:17,136][73145] Updated weights for policy 0, policy_version 10240 (0.0014)
-[2023-09-19 11:42:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5553.9). Total num frames: 10489856. Throughput: 0: 2827.2, 1: 2828.1. Samples: 8044630. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:22,044][72530] Avg episode reward: [(0, '121515.315'), (1, '162988.813')]
-[2023-09-19 11:42:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010224_5234688.pth...
-[2023-09-19 11:42:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010264_5255168.pth...
-[2023-09-19 11:42:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010056_5148672.pth
-[2023-09-19 11:42:22,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010096_5169152.pth
-[2023-09-19 11:42:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5553.9). Total num frames: 10514432. Throughput: 0: 2792.9, 1: 2792.8. Samples: 8076430. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:42:27,044][72530] Avg episode reward: [(0, '118834.382'), (1, '162994.952')]
-[2023-09-19 11:42:31,795][73145] Updated weights for policy 0, policy_version 10320 (0.0014)
-[2023-09-19 11:42:31,795][73219] Updated weights for policy 1, policy_version 10280 (0.0014)
-[2023-09-19 11:42:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5581.7). Total num frames: 10547200. Throughput: 0: 2815.4, 1: 2815.5. Samples: 8111630. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:42:32,044][72530] Avg episode reward: [(0, '118369.672'), (1, '161883.949')]
-[2023-09-19 11:42:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5553.9). Total num frames: 10571776. Throughput: 0: 2810.9, 1: 2810.8. Samples: 8129954. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:42:37,044][72530] Avg episode reward: [(0, '121189.235'), (1, '161513.570')]
-[2023-09-19 11:42:37,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010304_5275648.pth...
-[2023-09-19 11:42:37,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010344_5296128.pth...
-[2023-09-19 11:42:37,057][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010144_5193728.pth
-[2023-09-19 11:42:37,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010184_5214208.pth
-[2023-09-19 11:42:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5581.7). Total num frames: 10604544. Throughput: 0: 2861.8, 1: 2861.8. Samples: 8165336. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:42,044][72530] Avg episode reward: [(0, '121261.492'), (1, '161513.431')]
-[2023-09-19 11:42:45,967][73219] Updated weights for policy 1, policy_version 10360 (0.0013)
-[2023-09-19 11:42:45,968][73145] Updated weights for policy 0, policy_version 10400 (0.0014)
-[2023-09-19 11:42:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5581.7). Total num frames: 10629120. Throughput: 0: 2844.1, 1: 2844.1. Samples: 8198984. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:47,044][72530] Avg episode reward: [(0, '118032.842'), (1, '157614.308')]
-[2023-09-19 11:42:52,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5609.4). Total num frames: 10661888. Throughput: 0: 2843.3, 1: 2844.3. Samples: 8216516. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
-[2023-09-19 11:42:52,044][72530] Avg episode reward: [(0, '118032.842'), (1, '156763.229')]
-[2023-09-19 11:42:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010392_5320704.pth...
-[2023-09-19 11:42:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010432_5341184.pth...
-[2023-09-19 11:42:52,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010224_5234688.pth
-[2023-09-19 11:42:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010264_5255168.pth
-[2023-09-19 11:42:57,043][72530] Fps is (10 sec: 6553.7, 60 sec: 5734.4, 300 sec: 5609.4). Total num frames: 10694656. Throughput: 0: 2862.2, 1: 2861.2. Samples: 8252880. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:42:57,044][72530] Avg episode reward: [(0, '113836.289'), (1, '152570.666')]
-[2023-09-19 11:42:59,347][73219] Updated weights for policy 1, policy_version 10440 (0.0013)
-[2023-09-19 11:42:59,348][73145] Updated weights for policy 0, policy_version 10480 (0.0014)
-[2023-09-19 11:43:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5609.4). Total num frames: 10719232. Throughput: 0: 2911.6, 1: 2911.6. Samples: 8290182. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:02,044][72530] Avg episode reward: [(0, '113836.289'), (1, '152570.666')]
-[2023-09-19 11:43:07,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5734.4, 300 sec: 5581.7). Total num frames: 10743808. Throughput: 0: 2894.9, 1: 2894.2. Samples: 8305140. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:43:07,045][72530] Avg episode reward: [(0, '115267.182'), (1, '149740.570')]
-[2023-09-19 11:43:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010512_5382144.pth...
-[2023-09-19 11:43:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010472_5361664.pth...
-[2023-09-19 11:43:07,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010304_5275648.pth
-[2023-09-19 11:43:07,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010344_5296128.pth
-[2023-09-19 11:43:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5609.4). Total num frames: 10776576. Throughput: 0: 2920.7, 1: 2921.1. Samples: 8339310. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:43:12,044][72530] Avg episode reward: [(0, '112738.453'), (1, '147363.933')]
-[2023-09-19 11:43:13,928][73145] Updated weights for policy 0, policy_version 10560 (0.0015)
-[2023-09-19 11:43:13,928][73219] Updated weights for policy 1, policy_version 10520 (0.0013)
-[2023-09-19 11:43:17,043][72530] Fps is (10 sec: 6553.7, 60 sec: 5870.9, 300 sec: 5637.2). Total num frames: 10809344. Throughput: 0: 2925.9, 1: 2925.9. Samples: 8374958. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:17,044][72530] Avg episode reward: [(0, '112692.445'), (1, '139212.880')]
-[2023-09-19 11:43:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5637.2). Total num frames: 10833920. Throughput: 0: 2885.6, 1: 2885.7. Samples: 8389664. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:22,044][72530] Avg episode reward: [(0, '111524.427'), (1, '138648.058')]
-[2023-09-19 11:43:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010560_5406720.pth...
-[2023-09-19 11:43:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010600_5427200.pth...
-[2023-09-19 11:43:22,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010392_5320704.pth
-[2023-09-19 11:43:22,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010432_5341184.pth
-[2023-09-19 11:43:27,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5734.4, 300 sec: 5609.4). Total num frames: 10858496. Throughput: 0: 2801.5, 1: 2801.5. Samples: 8417468. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:27,044][72530] Avg episode reward: [(0, '111524.427'), (1, '136941.524')]
-[2023-09-19 11:43:29,536][73145] Updated weights for policy 0, policy_version 10640 (0.0016)
-[2023-09-19 11:43:29,536][73219] Updated weights for policy 1, policy_version 10600 (0.0013)
-[2023-09-19 11:43:32,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5597.9, 300 sec: 5609.4). Total num frames: 10883072. Throughput: 0: 2830.2, 1: 2830.5. Samples: 8453716. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:32,044][72530] Avg episode reward: [(0, '107385.986'), (1, '135321.775')]
-[2023-09-19 11:43:37,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5734.4, 300 sec: 5637.2). Total num frames: 10915840. Throughput: 0: 2819.7, 1: 2819.8. Samples: 8470292. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
-[2023-09-19 11:43:37,044][72530] Avg episode reward: [(0, '107385.986'), (1, '135321.775')]
-[2023-09-19 11:43:37,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010640_5447680.pth...
-[2023-09-19 11:43:37,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010680_5468160.pth...
-[2023-09-19 11:43:37,056][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010512_5382144.pth
-[2023-09-19 11:43:37,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010472_5361664.pth
-[2023-09-19 11:43:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5637.2). Total num frames: 10940416. Throughput: 0: 2781.7, 1: 2783.1. Samples: 8503296. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:42,044][72530] Avg episode reward: [(0, '100071.875'), (1, '136457.439')]
-[2023-09-19 11:43:44,019][73219] Updated weights for policy 1, policy_version 10680 (0.0013)
-[2023-09-19 11:43:44,019][73145] Updated weights for policy 0, policy_version 10720 (0.0016)
-[2023-09-19 11:43:47,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5597.9, 300 sec: 5609.4). Total num frames: 10964992. Throughput: 0: 2720.2, 1: 2719.9. Samples: 8534988. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:43:47,044][72530] Avg episode reward: [(0, '99838.586'), (1, '136457.439')]
-[2023-09-19 11:43:52,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 10989568. Throughput: 0: 2733.8, 1: 2733.7. Samples: 8551178. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:43:52,044][72530] Avg episode reward: [(0, '96058.014'), (1, '144304.212')]
-[2023-09-19 11:43:52,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010752_5505024.pth...
-[2023-09-19 11:43:52,051][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010712_5484544.pth...
-[2023-09-19 11:43:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010600_5427200.pth
-[2023-09-19 11:43:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010560_5406720.pth
-[2023-09-19 11:43:57,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5609.4). Total num frames: 11014144. Throughput: 0: 2672.1, 1: 2671.8. Samples: 8579786. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:43:57,044][72530] Avg episode reward: [(0, '94439.101'), (1, '146370.302')]
-[2023-09-19 11:44:00,053][73145] Updated weights for policy 0, policy_version 10800 (0.0011)
-[2023-09-19 11:44:00,053][73219] Updated weights for policy 1, policy_version 10760 (0.0014)
-[2023-09-19 11:44:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11046912. Throughput: 0: 2656.6, 1: 2656.7. Samples: 8614056. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:44:02,044][72530] Avg episode reward: [(0, '94439.101'), (1, '149966.737')]
-[2023-09-19 11:44:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 11071488. Throughput: 0: 2659.8, 1: 2659.7. Samples: 8629044. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:44:07,044][72530] Avg episode reward: [(0, '89650.521'), (1, '155598.656')]
-[2023-09-19 11:44:07,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010832_5545984.pth...
-[2023-09-19 11:44:07,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010792_5525504.pth...
-[2023-09-19 11:44:07,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010680_5468160.pth
-[2023-09-19 11:44:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010640_5447680.pth
-[2023-09-19 11:44:12,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11104256. Throughput: 0: 2748.8, 1: 2748.7. Samples: 8664854. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:44:12,044][72530] Avg episode reward: [(0, '89650.521'), (1, '155598.656')]
-[2023-09-19 11:44:14,562][73219] Updated weights for policy 1, policy_version 10840 (0.0014)
-[2023-09-19 11:44:14,562][73145] Updated weights for policy 0, policy_version 10880 (0.0013)
-[2023-09-19 11:44:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5637.2). Total num frames: 11128832. Throughput: 0: 2724.0, 1: 2723.6. Samples: 8698854. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:44:17,044][72530] Avg episode reward: [(0, '87254.510'), (1, '160076.645')]
-[2023-09-19 11:44:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11161600. Throughput: 0: 2733.2, 1: 2733.3. Samples: 8716288. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:44:22,044][72530] Avg episode reward: [(0, '85923.521'), (1, '161617.846')]
-[2023-09-19 11:44:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010880_5570560.pth...
-[2023-09-19 11:44:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000010920_5591040.pth...
-[2023-09-19 11:44:22,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010712_5484544.pth
-[2023-09-19 11:44:22,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010752_5505024.pth
-[2023-09-19 11:44:27,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11186176. Throughput: 0: 2701.6, 1: 2700.4. Samples: 8746388. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:44:27,044][72530] Avg episode reward: [(0, '86413.887'), (1, '161983.883')]
-[2023-09-19 11:44:29,421][73219] Updated weights for policy 1, policy_version 10920 (0.0014)
-[2023-09-19 11:44:29,422][73145] Updated weights for policy 0, policy_version 10960 (0.0016)
-[2023-09-19 11:44:32,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 11210752. Throughput: 0: 2730.4, 1: 2729.5. Samples: 8780682. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:44:32,044][72530] Avg episode reward: [(0, '88162.418'), (1, '162739.322')]
-[2023-09-19 11:44:37,044][72530] Fps is (10 sec: 5734.2, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11243520. Throughput: 0: 2743.6, 1: 2743.8. Samples: 8798112. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:44:37,045][72530] Avg episode reward: [(0, '89655.979'), (1, '162764.902')]
-[2023-09-19 11:44:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000010960_5611520.pth...
-[2023-09-19 11:44:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011000_5632000.pth...
-[2023-09-19 11:44:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010792_5525504.pth
-[2023-09-19 11:44:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010832_5545984.pth
-[2023-09-19 11:44:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5637.2). Total num frames: 11268096. Throughput: 0: 2814.7, 1: 2814.5. Samples: 8833096. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:44:42,044][72530] Avg episode reward: [(0, '90951.227'), (1, '163562.679')]
-[2023-09-19 11:44:42,045][73131] Saving new best policy, reward=163562.679!
-[2023-09-19 11:44:43,549][73145] Updated weights for policy 0, policy_version 11040 (0.0014)
-[2023-09-19 11:44:43,549][73219] Updated weights for policy 1, policy_version 11000 (0.0012)
-[2023-09-19 11:44:47,043][72530] Fps is (10 sec: 5734.6, 60 sec: 5597.9, 300 sec: 5665.0). Total num frames: 11300864. Throughput: 0: 2823.4, 1: 2823.2. Samples: 8868152. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:44:47,044][72530] Avg episode reward: [(0, '90951.227'), (1, '163569.234')]
-[2023-09-19 11:44:47,045][73131] Saving new best policy, reward=163569.234!
-[2023-09-19 11:44:52,043][72530] Fps is (10 sec: 6553.4, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 11333632. Throughput: 0: 2853.3, 1: 2853.4. Samples: 8885846. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:44:52,044][72530] Avg episode reward: [(0, '93196.735'), (1, '163602.622')]
-[2023-09-19 11:44:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011048_5656576.pth...
-[2023-09-19 11:44:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011088_5677056.pth...
-[2023-09-19 11:44:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000010920_5591040.pth
-[2023-09-19 11:44:52,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010880_5570560.pth
-[2023-09-19 11:44:52,064][73131] Saving new best policy, reward=163602.622!
-[2023-09-19 11:44:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5665.0). Total num frames: 11358208. Throughput: 0: 2817.7, 1: 2817.8. Samples: 8918454. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:44:57,045][72530] Avg episode reward: [(0, '93196.735'), (1, '163602.622')]
-[2023-09-19 11:44:58,430][73145] Updated weights for policy 0, policy_version 11120 (0.0014)
-[2023-09-19 11:44:58,430][73219] Updated weights for policy 1, policy_version 11080 (0.0010)
-[2023-09-19 11:45:02,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5597.9, 300 sec: 5637.2). Total num frames: 11382784. Throughput: 0: 2747.1, 1: 2747.7. Samples: 8946120. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:02,044][72530] Avg episode reward: [(0, '96123.404'), (1, '163617.376')]
-[2023-09-19 11:45:02,046][73131] Saving new best policy, reward=163617.376!
-[2023-09-19 11:45:07,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5597.9, 300 sec: 5637.2). Total num frames: 11407360. Throughput: 0: 2726.9, 1: 2726.6. Samples: 8961696. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:45:07,044][72530] Avg episode reward: [(0, '97205.408'), (1, '163596.670')]
-[2023-09-19 11:45:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011160_5713920.pth...
-[2023-09-19 11:45:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011120_5693440.pth...
-[2023-09-19 11:45:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011000_5632000.pth
-[2023-09-19 11:45:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000010960_5611520.pth
-[2023-09-19 11:45:12,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5461.4, 300 sec: 5609.4). Total num frames: 11431936. Throughput: 0: 2703.2, 1: 2703.2. Samples: 8989676. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
-[2023-09-19 11:45:12,043][72530] Avg episode reward: [(0, '97205.408'), (1, '163596.670')]
-[2023-09-19 11:45:15,308][73145] Updated weights for policy 0, policy_version 11200 (0.0014)
-[2023-09-19 11:45:15,308][73219] Updated weights for policy 1, policy_version 11160 (0.0012)
-[2023-09-19 11:45:17,043][72530] Fps is (10 sec: 4096.1, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 11448320. Throughput: 0: 2651.6, 1: 2652.6. Samples: 9019372. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:45:17,044][72530] Avg episode reward: [(0, '102260.462'), (1, '163634.922')]
-[2023-09-19 11:45:17,045][73131] Saving new best policy, reward=163634.922!
-[2023-09-19 11:45:22,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5324.8, 300 sec: 5609.4). Total num frames: 11481088. Throughput: 0: 2640.3, 1: 2641.3. Samples: 9035780. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:45:22,044][72530] Avg episode reward: [(0, '105256.610'), (1, '163636.135')]
-[2023-09-19 11:45:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011192_5730304.pth...
-[2023-09-19 11:45:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011232_5750784.pth...
-[2023-09-19 11:45:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011088_5677056.pth
-[2023-09-19 11:45:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011048_5656576.pth
-[2023-09-19 11:45:22,064][73131] Saving new best policy, reward=163636.135!
-[2023-09-19 11:45:27,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 11505664. Throughput: 0: 2614.9, 1: 2616.2. Samples: 9068494. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:27,044][72530] Avg episode reward: [(0, '106969.066'), (1, '163623.728')]
-[2023-09-19 11:45:30,281][73145] Updated weights for policy 0, policy_version 11280 (0.0010)
-[2023-09-19 11:45:30,281][73219] Updated weights for policy 1, policy_version 11240 (0.0014)
-[2023-09-19 11:45:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 11538432. Throughput: 0: 2590.7, 1: 2592.2. Samples: 9101380. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:32,044][72530] Avg episode reward: [(0, '107549.962'), (1, '163617.201')]
-[2023-09-19 11:45:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5609.4). Total num frames: 11563008. Throughput: 0: 2590.5, 1: 2590.6. Samples: 9118994. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:37,044][72530] Avg episode reward: [(0, '105811.432'), (1, '163639.722')]
-[2023-09-19 11:45:37,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011272_5771264.pth...
-[2023-09-19 11:45:37,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011312_5791744.pth...
-[2023-09-19 11:45:37,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011120_5693440.pth
-[2023-09-19 11:45:37,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011160_5713920.pth
-[2023-09-19 11:45:37,061][73131] Saving new best policy, reward=163639.722!
-[2023-09-19 11:45:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5609.4). Total num frames: 11595776. Throughput: 0: 2606.7, 1: 2606.6. Samples: 9153050. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:42,044][72530] Avg episode reward: [(0, '107882.450'), (1, '163839.533')]
-[2023-09-19 11:45:42,044][73131] Saving new best policy, reward=163839.533!
-[2023-09-19 11:45:44,762][73219] Updated weights for policy 1, policy_version 11320 (0.0013)
-[2023-09-19 11:45:44,763][73145] Updated weights for policy 0, policy_version 11360 (0.0014)
-[2023-09-19 11:45:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 11620352. Throughput: 0: 2698.0, 1: 2697.3. Samples: 9188910. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:47,044][72530] Avg episode reward: [(0, '107882.450'), (1, '163858.005')]
-[2023-09-19 11:45:47,046][73131] Saving new best policy, reward=163858.005!
-[2023-09-19 11:45:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5609.4). Total num frames: 11653120. Throughput: 0: 2722.0, 1: 2721.0. Samples: 9206630. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:52,044][72530] Avg episode reward: [(0, '105400.770'), (1, '163575.436')]
-[2023-09-19 11:45:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011360_5816320.pth...
-[2023-09-19 11:45:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011400_5836800.pth...
-[2023-09-19 11:45:52,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011192_5730304.pth
-[2023-09-19 11:45:52,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011232_5750784.pth
-[2023-09-19 11:45:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 11677696. Throughput: 0: 2764.1, 1: 2764.3. Samples: 9238454. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
-[2023-09-19 11:45:57,044][72530] Avg episode reward: [(0, '105400.770'), (1, '163575.436')]
-[2023-09-19 11:45:59,327][73219] Updated weights for policy 1, policy_version 11400 (0.0016)
-[2023-09-19 11:45:59,327][73145] Updated weights for policy 0, policy_version 11440 (0.0016)
-[2023-09-19 11:46:02,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5581.7). Total num frames: 11702272. Throughput: 0: 2625.6, 1: 2624.6. Samples: 9255630. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:46:02,044][72530] Avg episode reward: [(0, '104718.135'), (1, '163329.699')]
-[2023-09-19 11:46:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5581.7). Total num frames: 11735040. Throughput: 0: 2830.1, 1: 2829.0. Samples: 9290438. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
-[2023-09-19 11:46:07,044][72530] Avg episode reward: [(0, '104541.048'), (1, '163378.382')]
-[2023-09-19 11:46:07,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011440_5857280.pth...
-[2023-09-19 11:46:07,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011480_5877760.pth...
-[2023-09-19 11:46:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011272_5771264.pth
-[2023-09-19 11:46:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011312_5791744.pth
-[2023-09-19 11:46:12,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5553.9). Total num frames: 11759616. Throughput: 0: 2839.9, 1: 2838.8. Samples: 9324034. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:46:12,044][72530] Avg episode reward: [(0, '106063.260'), (1, '163399.235')]
-[2023-09-19 11:46:13,614][73219] Updated weights for policy 1, policy_version 11480 (0.0013)
-[2023-09-19 11:46:13,614][73145] Updated weights for policy 0, policy_version 11520 (0.0014)
-[2023-09-19 11:46:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 11792384. Throughput: 0: 2832.0, 1: 2830.7. Samples: 9356202. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
-[2023-09-19 11:46:17,044][72530] Avg episode reward: [(0, '105598.661'), (1, '162236.464')]
-[2023-09-19 11:46:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5553.9). Total num frames: 11816960. Throughput: 0: 2812.6, 1: 2812.9. Samples: 9372142. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:46:22,044][72530] Avg episode reward: [(0, '105598.661'), (1, '158593.843')]
-[2023-09-19 11:46:22,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011520_5898240.pth...
-[2023-09-19 11:46:22,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011560_5918720.pth...
-[2023-09-19 11:46:22,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011360_5816320.pth
-[2023-09-19 11:46:22,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011400_5836800.pth
-[2023-09-19 11:46:27,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5553.9). Total num frames: 11841536. Throughput: 0: 2754.3, 1: 2754.3. Samples: 9400936. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
-[2023-09-19 11:46:27,044][72530] Avg episode reward: [(0, '104505.657'), (1, '155073.280')]
-[2023-09-19 11:46:30,228][73145] Updated weights for policy 0, policy_version 11600 (0.0013)
-[2023-09-19 11:46:30,228][73219] Updated weights for policy 1, policy_version 11560 (0.0013)
-[2023-09-19 11:46:32,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 11866112. Throughput: 0: 2687.7, 1: 2687.8. Samples: 9430808. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
-[2023-09-19 11:46:32,044][72530] Avg episode reward: [(0, '105201.453'), (1, '155073.280')]
-[2023-09-19 11:46:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 11890688. Throughput: 0: 2652.0, 1: 2653.2. Samples: 9445364.
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:46:37,044][72530] Avg episode reward: [(0, '106467.037'), (1, '155392.306')] -[2023-09-19 11:46:37,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011592_5935104.pth... -[2023-09-19 11:46:37,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011632_5955584.pth... -[2023-09-19 11:46:37,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011440_5857280.pth -[2023-09-19 11:46:37,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011480_5877760.pth -[2023-09-19 11:46:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 11915264. Throughput: 0: 2662.8, 1: 2663.7. Samples: 9478146. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:46:42,044][72530] Avg episode reward: [(0, '108179.215'), (1, '155385.005')] -[2023-09-19 11:46:45,489][73145] Updated weights for policy 0, policy_version 11680 (0.0014) -[2023-09-19 11:46:45,490][73219] Updated weights for policy 1, policy_version 11640 (0.0014) -[2023-09-19 11:46:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 11948032. Throughput: 0: 2837.7, 1: 2837.9. Samples: 9511032. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:46:47,044][72530] Avg episode reward: [(0, '108179.215'), (1, '155360.629')] -[2023-09-19 11:46:52,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 11972608. Throughput: 0: 2646.1, 1: 2645.9. Samples: 9528578. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:46:52,044][72530] Avg episode reward: [(0, '110744.067'), (1, '155267.645')] -[2023-09-19 11:46:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011712_5996544.pth... -[2023-09-19 11:46:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011672_5976064.pth... 
-[2023-09-19 11:46:52,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011560_5918720.pth -[2023-09-19 11:46:52,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011520_5898240.pth -[2023-09-19 11:46:57,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 11997184. Throughput: 0: 2648.3, 1: 2648.4. Samples: 9562384. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:46:57,044][72530] Avg episode reward: [(0, '110744.067'), (1, '155267.645')] -[2023-09-19 11:46:59,831][73145] Updated weights for policy 0, policy_version 11760 (0.0013) -[2023-09-19 11:46:59,832][73219] Updated weights for policy 1, policy_version 11720 (0.0013) -[2023-09-19 11:47:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 12029952. Throughput: 0: 2681.6, 1: 2681.6. Samples: 9597548. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:47:02,044][72530] Avg episode reward: [(0, '113492.232'), (1, '163938.746')] -[2023-09-19 11:47:02,046][73131] Saving new best policy, reward=163938.746! -[2023-09-19 11:47:07,043][72530] Fps is (10 sec: 6553.5, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 12062720. Throughput: 0: 2695.1, 1: 2694.9. Samples: 9614692. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:47:07,044][72530] Avg episode reward: [(0, '114645.759'), (1, '163938.746')] -[2023-09-19 11:47:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011760_6021120.pth... -[2023-09-19 11:47:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011800_6041600.pth... -[2023-09-19 11:47:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011592_5935104.pth -[2023-09-19 11:47:07,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011632_5955584.pth -[2023-09-19 11:47:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 12087296. 
Throughput: 0: 2755.8, 1: 2755.8. Samples: 9648956. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:47:12,044][72530] Avg episode reward: [(0, '117620.147'), (1, '164058.186')] -[2023-09-19 11:47:12,046][73131] Saving new best policy, reward=164058.186! -[2023-09-19 11:47:15,016][73145] Updated weights for policy 0, policy_version 11840 (0.0014) -[2023-09-19 11:47:15,017][73219] Updated weights for policy 1, policy_version 11800 (0.0013) -[2023-09-19 11:47:17,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 12111872. Throughput: 0: 2736.9, 1: 2737.0. Samples: 9677132. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:47:17,044][72530] Avg episode reward: [(0, '120342.444'), (1, '164061.858')] -[2023-09-19 11:47:17,045][73131] Saving new best policy, reward=164061.858! -[2023-09-19 11:47:22,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5498.4). Total num frames: 12136448. Throughput: 0: 2756.4, 1: 2755.3. Samples: 9693392. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:47:22,045][72530] Avg episode reward: [(0, '121613.231'), (1, '164054.108')] -[2023-09-19 11:47:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011832_6057984.pth... -[2023-09-19 11:47:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011872_6078464.pth... -[2023-09-19 11:47:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011672_5976064.pth -[2023-09-19 11:47:22,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011712_5996544.pth -[2023-09-19 11:47:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 12169216. Throughput: 0: 2794.7, 1: 2793.4. Samples: 9729608. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:47:27,044][72530] Avg episode reward: [(0, '120341.617'), (1, '164136.092')] -[2023-09-19 11:47:27,045][73131] Saving new best policy, reward=164136.092! 
-[2023-09-19 11:47:29,294][73145] Updated weights for policy 0, policy_version 11920 (0.0016) -[2023-09-19 11:47:29,294][73219] Updated weights for policy 1, policy_version 11880 (0.0016) -[2023-09-19 11:47:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 12193792. Throughput: 0: 2811.6, 1: 2812.1. Samples: 9764102. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:47:32,044][72530] Avg episode reward: [(0, '120341.617'), (1, '163669.701')] -[2023-09-19 11:47:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12226560. Throughput: 0: 2781.4, 1: 2781.4. Samples: 9778906. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:47:37,044][72530] Avg episode reward: [(0, '118487.302'), (1, '163451.635')] -[2023-09-19 11:47:37,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000011920_6103040.pth... -[2023-09-19 11:47:37,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000011960_6123520.pth... -[2023-09-19 11:47:37,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011800_6041600.pth -[2023-09-19 11:47:37,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011760_6021120.pth -[2023-09-19 11:47:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12251136. Throughput: 0: 2801.6, 1: 2801.3. Samples: 9814516. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:47:42,044][72530] Avg episode reward: [(0, '119270.006'), (1, '163451.635')] -[2023-09-19 11:47:43,762][73145] Updated weights for policy 0, policy_version 12000 (0.0016) -[2023-09-19 11:47:43,762][73219] Updated weights for policy 1, policy_version 11960 (0.0015) -[2023-09-19 11:47:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12283904. Throughput: 0: 2806.5, 1: 2806.4. Samples: 9850126. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:47:47,044][72530] Avg episode reward: [(0, '118016.428'), (1, '163248.371')] -[2023-09-19 11:47:52,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 12308480. Throughput: 0: 2818.1, 1: 2818.1. Samples: 9868322. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:47:52,044][72530] Avg episode reward: [(0, '115308.107'), (1, '163245.962')] -[2023-09-19 11:47:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012040_6164480.pth... -[2023-09-19 11:47:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012000_6144000.pth... -[2023-09-19 11:47:52,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011872_6078464.pth -[2023-09-19 11:47:52,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011832_6057984.pth -[2023-09-19 11:47:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5498.4). Total num frames: 12341248. Throughput: 0: 2811.7, 1: 2811.7. Samples: 9902008. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:47:57,044][72530] Avg episode reward: [(0, '115899.171'), (1, '163286.602')] -[2023-09-19 11:47:57,983][73219] Updated weights for policy 1, policy_version 12040 (0.0014) -[2023-09-19 11:47:57,985][73145] Updated weights for policy 0, policy_version 12080 (0.0016) -[2023-09-19 11:48:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12365824. Throughput: 0: 2849.0, 1: 2849.1. Samples: 9933548. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:48:02,044][72530] Avg episode reward: [(0, '117424.425'), (1, '163330.945')] -[2023-09-19 11:48:07,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12390400. Throughput: 0: 2815.6, 1: 2815.7. Samples: 9946798. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:48:07,044][72530] Avg episode reward: [(0, '117424.425'), (1, '163807.050')] -[2023-09-19 11:48:07,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012080_6184960.pth... -[2023-09-19 11:48:07,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012120_6205440.pth... -[2023-09-19 11:48:07,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000011920_6103040.pth -[2023-09-19 11:48:07,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000011960_6123520.pth -[2023-09-19 11:48:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 12414976. Throughput: 0: 2772.0, 1: 2772.1. Samples: 9979096. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:48:12,044][72530] Avg episode reward: [(0, '116643.698'), (1, '164043.715')] -[2023-09-19 11:48:13,637][73145] Updated weights for policy 0, policy_version 12160 (0.0014) -[2023-09-19 11:48:13,637][73219] Updated weights for policy 1, policy_version 12120 (0.0013) -[2023-09-19 11:48:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 12447744. Throughput: 0: 2761.6, 1: 2761.6. Samples: 10012644. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:48:17,044][72530] Avg episode reward: [(0, '116643.698'), (1, '164043.715')] -[2023-09-19 11:48:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 12472320. Throughput: 0: 2792.3, 1: 2792.3. Samples: 10030210. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:22,044][72530] Avg episode reward: [(0, '114977.442'), (1, '164100.748')] -[2023-09-19 11:48:22,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012200_6246400.pth... -[2023-09-19 11:48:22,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012160_6225920.pth... 
-[2023-09-19 11:48:22,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012040_6164480.pth -[2023-09-19 11:48:22,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012000_6144000.pth -[2023-09-19 11:48:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12505088. Throughput: 0: 2773.0, 1: 2773.1. Samples: 10064090. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:27,044][72530] Avg episode reward: [(0, '115199.567'), (1, '164068.846')] -[2023-09-19 11:48:28,286][73219] Updated weights for policy 1, policy_version 12200 (0.0013) -[2023-09-19 11:48:28,286][73145] Updated weights for policy 0, policy_version 12240 (0.0012) -[2023-09-19 11:48:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 12529664. Throughput: 0: 2711.3, 1: 2711.5. Samples: 10094152. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:32,044][72530] Avg episode reward: [(0, '111865.477'), (1, '163548.142')] -[2023-09-19 11:48:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12554240. Throughput: 0: 2682.5, 1: 2682.3. Samples: 10109738. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:37,044][72530] Avg episode reward: [(0, '107535.334'), (1, '163429.327')] -[2023-09-19 11:48:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012240_6266880.pth... -[2023-09-19 11:48:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012280_6287360.pth... -[2023-09-19 11:48:37,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012080_6184960.pth -[2023-09-19 11:48:37,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012120_6205440.pth -[2023-09-19 11:48:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 12587008. Throughput: 0: 2703.6, 1: 2703.6. Samples: 10145332. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:42,044][72530] Avg episode reward: [(0, '107535.334'), (1, '162298.894')] -[2023-09-19 11:48:43,731][73145] Updated weights for policy 0, policy_version 12320 (0.0014) -[2023-09-19 11:48:43,732][73219] Updated weights for policy 1, policy_version 12280 (0.0012) -[2023-09-19 11:48:47,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 12611584. Throughput: 0: 2702.4, 1: 2702.2. Samples: 10176756. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:47,044][72530] Avg episode reward: [(0, '106316.522'), (1, '161408.497')] -[2023-09-19 11:48:52,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 12636160. Throughput: 0: 2750.1, 1: 2749.8. Samples: 10194294. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:52,044][72530] Avg episode reward: [(0, '106316.522'), (1, '161408.497')] -[2023-09-19 11:48:52,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012320_6307840.pth... -[2023-09-19 11:48:52,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012360_6328320.pth... -[2023-09-19 11:48:52,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012160_6225920.pth -[2023-09-19 11:48:52,059][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012200_6246400.pth -[2023-09-19 11:48:57,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 12660736. Throughput: 0: 2734.8, 1: 2734.8. Samples: 10225232. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:48:57,045][72530] Avg episode reward: [(0, '104226.634'), (1, '160377.599')] -[2023-09-19 11:48:58,673][73145] Updated weights for policy 0, policy_version 12400 (0.0012) -[2023-09-19 11:48:58,673][73219] Updated weights for policy 1, policy_version 12360 (0.0015) -[2023-09-19 11:49:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). 
Total num frames: 12693504. Throughput: 0: 2747.6, 1: 2746.8. Samples: 10259896. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:02,045][72530] Avg episode reward: [(0, '104136.332'), (1, '159195.252')] -[2023-09-19 11:49:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12718080. Throughput: 0: 2694.6, 1: 2695.7. Samples: 10272774. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:07,045][72530] Avg episode reward: [(0, '106355.574'), (1, '159230.111')] -[2023-09-19 11:49:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012440_6369280.pth... -[2023-09-19 11:49:07,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012400_6348800.pth... -[2023-09-19 11:49:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012280_6287360.pth -[2023-09-19 11:49:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012240_6266880.pth -[2023-09-19 11:49:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12742656. Throughput: 0: 2680.0, 1: 2680.4. Samples: 10305312. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:12,045][72530] Avg episode reward: [(0, '110372.318'), (1, '159771.618')] -[2023-09-19 11:49:14,377][73145] Updated weights for policy 0, policy_version 12480 (0.0012) -[2023-09-19 11:49:14,378][73219] Updated weights for policy 1, policy_version 12440 (0.0012) -[2023-09-19 11:49:17,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5442.8). Total num frames: 12767232. Throughput: 0: 2711.7, 1: 2712.5. Samples: 10338240. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:17,044][72530] Avg episode reward: [(0, '110372.318'), (1, '159771.618')] -[2023-09-19 11:49:22,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12800000. Throughput: 0: 2729.1, 1: 2729.3. Samples: 10355370. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:22,045][72530] Avg episode reward: [(0, '110181.871'), (1, '160985.288')] -[2023-09-19 11:49:22,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012520_6410240.pth... -[2023-09-19 11:49:22,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012480_6389760.pth... -[2023-09-19 11:49:22,066][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012320_6307840.pth -[2023-09-19 11:49:22,066][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012360_6328320.pth -[2023-09-19 11:49:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 12824576. Throughput: 0: 2671.7, 1: 2671.6. Samples: 10385784. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:27,044][72530] Avg episode reward: [(0, '110198.661'), (1, '160985.288')] -[2023-09-19 11:49:29,030][73145] Updated weights for policy 0, policy_version 12560 (0.0012) -[2023-09-19 11:49:29,031][73219] Updated weights for policy 1, policy_version 12520 (0.0011) -[2023-09-19 11:49:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12857344. Throughput: 0: 2719.7, 1: 2720.0. Samples: 10421540. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:32,044][72530] Avg episode reward: [(0, '111544.611'), (1, '162823.390')] -[2023-09-19 11:49:37,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 12881920. Throughput: 0: 2719.0, 1: 2719.2. Samples: 10439012. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:49:37,044][72530] Avg episode reward: [(0, '112652.375'), (1, '162884.861')] -[2023-09-19 11:49:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012600_6451200.pth... -[2023-09-19 11:49:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012560_6430720.pth... 
-[2023-09-19 11:49:37,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012440_6369280.pth -[2023-09-19 11:49:37,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012400_6348800.pth -[2023-09-19 11:49:42,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5442.8). Total num frames: 12906496. Throughput: 0: 2710.0, 1: 2711.1. Samples: 10469180. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:49:42,045][72530] Avg episode reward: [(0, '112673.657'), (1, '163872.242')] -[2023-09-19 11:49:44,447][73219] Updated weights for policy 1, policy_version 12600 (0.0014) -[2023-09-19 11:49:44,448][73145] Updated weights for policy 0, policy_version 12640 (0.0013) -[2023-09-19 11:49:47,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5415.1). Total num frames: 12931072. Throughput: 0: 2683.4, 1: 2683.4. Samples: 10501404. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:49:47,044][72530] Avg episode reward: [(0, '113377.242'), (1, '163154.750')] -[2023-09-19 11:49:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 12963840. Throughput: 0: 2727.1, 1: 2727.0. Samples: 10518208. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:49:52,045][72530] Avg episode reward: [(0, '113377.242'), (1, '163154.750')] -[2023-09-19 11:49:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012680_6492160.pth... -[2023-09-19 11:49:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012640_6471680.pth... -[2023-09-19 11:49:52,058][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012520_6410240.pth -[2023-09-19 11:49:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012480_6389760.pth -[2023-09-19 11:49:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 12988416. Throughput: 0: 2708.9, 1: 2708.4. Samples: 10549088. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:49:57,044][72530] Avg episode reward: [(0, '113159.962'), (1, '161958.914')] -[2023-09-19 11:49:59,407][73219] Updated weights for policy 1, policy_version 12680 (0.0015) -[2023-09-19 11:49:59,407][73145] Updated weights for policy 0, policy_version 12720 (0.0014) -[2023-09-19 11:50:02,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5442.8). Total num frames: 13012992. Throughput: 0: 2709.9, 1: 2709.0. Samples: 10582090. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:50:02,044][72530] Avg episode reward: [(0, '113659.562'), (1, '161957.798')] -[2023-09-19 11:50:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13045760. Throughput: 0: 2706.5, 1: 2706.4. Samples: 10598952. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:50:07,044][72530] Avg episode reward: [(0, '114392.887'), (1, '161274.669')] -[2023-09-19 11:50:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012720_6512640.pth... -[2023-09-19 11:50:07,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012760_6533120.pth... -[2023-09-19 11:50:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012560_6430720.pth -[2023-09-19 11:50:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012600_6451200.pth -[2023-09-19 11:50:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 13062144. Throughput: 0: 2697.0, 1: 2697.2. Samples: 10628524. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:50:12,044][72530] Avg episode reward: [(0, '113293.487'), (1, '160635.441')] -[2023-09-19 11:50:15,160][73145] Updated weights for policy 0, policy_version 12800 (0.0014) -[2023-09-19 11:50:15,160][73219] Updated weights for policy 1, policy_version 12760 (0.0016) -[2023-09-19 11:50:17,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5461.3, 300 sec: 5470.6). 
Total num frames: 13094912. Throughput: 0: 2646.0, 1: 2645.9. Samples: 10659676. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:50:17,044][72530] Avg episode reward: [(0, '113293.487'), (1, '160786.247')] -[2023-09-19 11:50:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 13119488. Throughput: 0: 2645.4, 1: 2645.3. Samples: 10677092. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:50:22,044][72530] Avg episode reward: [(0, '113047.949'), (1, '161538.199')] -[2023-09-19 11:50:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012792_6549504.pth... -[2023-09-19 11:50:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012832_6569984.pth... -[2023-09-19 11:50:22,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012640_6471680.pth -[2023-09-19 11:50:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012680_6492160.pth -[2023-09-19 11:50:27,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5442.8). Total num frames: 13144064. Throughput: 0: 2674.3, 1: 2673.4. Samples: 10709824. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:50:27,044][72530] Avg episode reward: [(0, '113325.734'), (1, '161538.199')] -[2023-09-19 11:50:30,173][73219] Updated weights for policy 1, policy_version 12840 (0.0011) -[2023-09-19 11:50:30,174][73145] Updated weights for policy 0, policy_version 12880 (0.0019) -[2023-09-19 11:50:32,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 13176832. Throughput: 0: 2678.6, 1: 2678.5. Samples: 10742474. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:50:32,045][72530] Avg episode reward: [(0, '110953.125'), (1, '162797.965')] -[2023-09-19 11:50:37,043][72530] Fps is (10 sec: 5734.2, 60 sec: 5324.8, 300 sec: 5442.8). Total num frames: 13201408. Throughput: 0: 2676.9, 1: 2675.9. Samples: 10759082. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:50:37,044][72530] Avg episode reward: [(0, '111513.061'), (1, '162852.658')] -[2023-09-19 11:50:37,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012872_6590464.pth... -[2023-09-19 11:50:37,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000012912_6610944.pth... -[2023-09-19 11:50:37,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012760_6533120.pth -[2023-09-19 11:50:37,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012720_6512640.pth -[2023-09-19 11:50:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13234176. Throughput: 0: 2723.3, 1: 2723.3. Samples: 10794184. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:50:42,044][72530] Avg episode reward: [(0, '113571.621'), (1, '162934.778')] -[2023-09-19 11:50:44,663][73145] Updated weights for policy 0, policy_version 12960 (0.0011) -[2023-09-19 11:50:44,664][73219] Updated weights for policy 1, policy_version 12920 (0.0014) -[2023-09-19 11:50:47,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 13258752. Throughput: 0: 2728.1, 1: 2728.3. Samples: 10827630. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:50:47,044][72530] Avg episode reward: [(0, '113123.608'), (1, '163033.885')] -[2023-09-19 11:50:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13291520. Throughput: 0: 2735.7, 1: 2735.9. Samples: 10845174. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) -[2023-09-19 11:50:52,044][72530] Avg episode reward: [(0, '114036.709'), (1, '163233.390')] -[2023-09-19 11:50:52,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000012960_6635520.pth... -[2023-09-19 11:50:52,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013000_6656000.pth... 
-[2023-09-19 11:50:52,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012792_6549504.pth -[2023-09-19 11:50:52,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012832_6569984.pth -[2023-09-19 11:50:57,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13316096. Throughput: 0: 2784.1, 1: 2784.4. Samples: 10879108. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:50:57,044][72530] Avg episode reward: [(0, '114403.782'), (1, '162804.648')] -[2023-09-19 11:50:58,834][73219] Updated weights for policy 1, policy_version 13000 (0.0013) -[2023-09-19 11:50:58,834][73145] Updated weights for policy 0, policy_version 13040 (0.0013) -[2023-09-19 11:51:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 13348864. Throughput: 0: 2823.3, 1: 2823.4. Samples: 10913776. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:02,045][72530] Avg episode reward: [(0, '114403.782'), (1, '162804.648')] -[2023-09-19 11:51:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13373440. Throughput: 0: 2802.2, 1: 2802.3. Samples: 10929298. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:07,044][72530] Avg episode reward: [(0, '115552.472'), (1, '161727.982')] -[2023-09-19 11:51:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013040_6676480.pth... -[2023-09-19 11:51:07,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013080_6696960.pth... -[2023-09-19 11:51:07,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012872_6590464.pth -[2023-09-19 11:51:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000012912_6610944.pth -[2023-09-19 11:51:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5442.8). Total num frames: 13398016. Throughput: 0: 2794.4, 1: 2794.4. Samples: 10961322. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:12,045][72530] Avg episode reward: [(0, '114818.267'), (1, '161727.982')] -[2023-09-19 11:51:14,001][73219] Updated weights for policy 1, policy_version 13080 (0.0013) -[2023-09-19 11:51:14,001][73145] Updated weights for policy 0, policy_version 13120 (0.0014) -[2023-09-19 11:51:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 13430784. Throughput: 0: 2802.5, 1: 2802.7. Samples: 10994708. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:17,044][72530] Avg episode reward: [(0, '112267.470'), (1, '161652.380')] -[2023-09-19 11:51:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5470.6). Total num frames: 13455360. Throughput: 0: 2816.0, 1: 2816.1. Samples: 11012526. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:22,044][72530] Avg episode reward: [(0, '113122.257'), (1, '162464.319')] -[2023-09-19 11:51:22,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013120_6717440.pth... -[2023-09-19 11:51:22,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013160_6737920.pth... -[2023-09-19 11:51:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013000_6656000.pth -[2023-09-19 11:51:22,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000012960_6635520.pth -[2023-09-19 11:51:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5498.4). Total num frames: 13488128. Throughput: 0: 2804.3, 1: 2804.5. Samples: 11046580. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:51:27,044][72530] Avg episode reward: [(0, '114512.117'), (1, '162679.576')] -[2023-09-19 11:51:28,153][73145] Updated weights for policy 0, policy_version 13200 (0.0012) -[2023-09-19 11:51:28,154][73219] Updated weights for policy 1, policy_version 13160 (0.0017) -[2023-09-19 11:51:32,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). 
Total num frames: 13512704. Throughput: 0: 2817.9, 1: 2818.0. Samples: 11081242. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) -[2023-09-19 11:51:32,044][72530] Avg episode reward: [(0, '115152.477'), (1, '162979.727')] -[2023-09-19 11:51:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13537280. Throughput: 0: 2793.9, 1: 2793.8. Samples: 11096620. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:51:37,044][72530] Avg episode reward: [(0, '115152.477'), (1, '163119.801')] -[2023-09-19 11:51:37,050][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013200_6758400.pth... -[2023-09-19 11:51:37,051][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013240_6778880.pth... -[2023-09-19 11:51:37,055][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013040_6676480.pth -[2023-09-19 11:51:37,060][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013080_6696960.pth -[2023-09-19 11:51:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13570048. Throughput: 0: 2789.2, 1: 2788.7. Samples: 11130112. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:51:42,045][72530] Avg episode reward: [(0, '113913.820'), (1, '164168.870')] -[2023-09-19 11:51:42,046][73131] Saving new best policy, reward=164168.870! -[2023-09-19 11:51:43,105][73219] Updated weights for policy 1, policy_version 13240 (0.0012) -[2023-09-19 11:51:43,105][73145] Updated weights for policy 0, policy_version 13280 (0.0013) -[2023-09-19 11:51:47,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13594624. Throughput: 0: 2792.0, 1: 2791.7. Samples: 11165042. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:47,045][72530] Avg episode reward: [(0, '113329.232'), (1, '164168.870')] -[2023-09-19 11:51:52,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13627392. 
Throughput: 0: 2810.2, 1: 2810.3. Samples: 11182220. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:52,044][72530] Avg episode reward: [(0, '115366.353'), (1, '164073.991')] -[2023-09-19 11:51:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013288_6803456.pth... -[2023-09-19 11:51:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013328_6823936.pth... -[2023-09-19 11:51:52,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013160_6737920.pth -[2023-09-19 11:51:52,062][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013120_6717440.pth -[2023-09-19 11:51:57,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13651968. Throughput: 0: 2834.8, 1: 2834.5. Samples: 11216442. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:51:57,044][72530] Avg episode reward: [(0, '112745.770'), (1, '164094.197')] -[2023-09-19 11:51:57,114][73145] Updated weights for policy 0, policy_version 13360 (0.0013) -[2023-09-19 11:51:57,114][73219] Updated weights for policy 1, policy_version 13320 (0.0014) -[2023-09-19 11:52:02,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13684736. Throughput: 0: 2842.9, 1: 2843.0. Samples: 11250572. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:52:02,044][72530] Avg episode reward: [(0, '112085.582'), (1, '164047.200')] -[2023-09-19 11:52:07,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13709312. Throughput: 0: 2825.9, 1: 2825.9. Samples: 11266858. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:52:07,044][72530] Avg episode reward: [(0, '113004.087'), (1, '163717.311')] -[2023-09-19 11:52:07,056][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013368_6844416.pth... -[2023-09-19 11:52:07,056][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013408_6864896.pth... 
-[2023-09-19 11:52:07,062][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013240_6778880.pth -[2023-09-19 11:52:07,063][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013200_6758400.pth -[2023-09-19 11:52:11,691][73145] Updated weights for policy 0, policy_version 13440 (0.0009) -[2023-09-19 11:52:11,692][73219] Updated weights for policy 1, policy_version 13400 (0.0012) -[2023-09-19 11:52:12,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5526.1). Total num frames: 13742080. Throughput: 0: 2834.2, 1: 2834.1. Samples: 11301652. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:52:12,043][72530] Avg episode reward: [(0, '113246.114'), (1, '163818.756')] -[2023-09-19 11:52:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13766656. Throughput: 0: 2831.7, 1: 2831.6. Samples: 11336090. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:52:17,044][72530] Avg episode reward: [(0, '112684.823'), (1, '163893.278')] -[2023-09-19 11:52:22,043][72530] Fps is (10 sec: 5734.2, 60 sec: 5734.4, 300 sec: 5526.1). Total num frames: 13799424. Throughput: 0: 2853.1, 1: 2853.3. Samples: 11353408. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:52:22,044][72530] Avg episode reward: [(0, '112684.823'), (1, '163936.233')] -[2023-09-19 11:52:22,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013456_6889472.pth... -[2023-09-19 11:52:22,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013496_6909952.pth... 
-[2023-09-19 11:52:22,059][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013288_6803456.pth -[2023-09-19 11:52:22,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013328_6823936.pth -[2023-09-19 11:52:26,737][73219] Updated weights for policy 1, policy_version 13480 (0.0011) -[2023-09-19 11:52:26,738][73145] Updated weights for policy 0, policy_version 13520 (0.0013) -[2023-09-19 11:52:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13824000. Throughput: 0: 2835.0, 1: 2835.0. Samples: 11385262. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:52:27,044][72530] Avg episode reward: [(0, '110980.059'), (1, '163809.491')] -[2023-09-19 11:52:32,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13848576. Throughput: 0: 2781.7, 1: 2781.7. Samples: 11415394. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:52:32,044][72530] Avg episode reward: [(0, '110632.259'), (1, '163809.491')] -[2023-09-19 11:52:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13873152. Throughput: 0: 2779.5, 1: 2779.4. Samples: 11432370. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) -[2023-09-19 11:52:37,044][72530] Avg episode reward: [(0, '110937.608'), (1, '162945.052')] -[2023-09-19 11:52:37,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013528_6926336.pth... -[2023-09-19 11:52:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013568_6946816.pth... 
-[2023-09-19 11:52:37,065][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013368_6844416.pth -[2023-09-19 11:52:37,070][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013408_6864896.pth -[2023-09-19 11:52:41,868][73145] Updated weights for policy 0, policy_version 13600 (0.0011) -[2023-09-19 11:52:41,868][73219] Updated weights for policy 1, policy_version 13560 (0.0013) -[2023-09-19 11:52:42,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13905920. Throughput: 0: 2761.7, 1: 2761.8. Samples: 11464998. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:52:42,044][72530] Avg episode reward: [(0, '108749.531'), (1, '162877.908')] -[2023-09-19 11:52:47,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13930496. Throughput: 0: 2736.3, 1: 2736.0. Samples: 11496824. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:52:47,044][72530] Avg episode reward: [(0, '107670.741'), (1, '161974.202')] -[2023-09-19 11:52:52,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5461.3, 300 sec: 5470.6). Total num frames: 13955072. Throughput: 0: 2755.3, 1: 2755.3. Samples: 11514836. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:52:52,044][72530] Avg episode reward: [(0, '109926.959'), (1, '161647.451')] -[2023-09-19 11:52:52,084][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013656_6991872.pth... -[2023-09-19 11:52:52,087][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013616_6971392.pth... 
-[2023-09-19 11:52:52,090][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013496_6909952.pth -[2023-09-19 11:52:52,092][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013456_6889472.pth -[2023-09-19 11:52:56,529][73219] Updated weights for policy 1, policy_version 13640 (0.0015) -[2023-09-19 11:52:56,529][73145] Updated weights for policy 0, policy_version 13680 (0.0013) -[2023-09-19 11:52:57,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13987840. Throughput: 0: 2750.2, 1: 2750.3. Samples: 11549176. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:52:57,044][72530] Avg episode reward: [(0, '109926.959'), (1, '161647.451')] -[2023-09-19 11:53:02,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14012416. Throughput: 0: 2745.7, 1: 2746.6. Samples: 11583242. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:02,044][72530] Avg episode reward: [(0, '108812.138'), (1, '160242.020')] -[2023-09-19 11:53:07,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 14045184. Throughput: 0: 2738.7, 1: 2739.8. Samples: 11599938. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:07,044][72530] Avg episode reward: [(0, '109280.885'), (1, '160242.020')] -[2023-09-19 11:53:07,053][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013696_7012352.pth... -[2023-09-19 11:53:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013736_7032832.pth... 
-[2023-09-19 11:53:07,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013528_6926336.pth -[2023-09-19 11:53:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013568_6946816.pth -[2023-09-19 11:53:10,573][73145] Updated weights for policy 0, policy_version 13760 (0.0015) -[2023-09-19 11:53:10,573][73219] Updated weights for policy 1, policy_version 13720 (0.0015) -[2023-09-19 11:53:12,043][72530] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14069760. Throughput: 0: 2777.5, 1: 2777.6. Samples: 11635244. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:53:12,044][72530] Avg episode reward: [(0, '107640.671'), (1, '160924.342')] -[2023-09-19 11:53:17,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 14102528. Throughput: 0: 2813.6, 1: 2813.8. Samples: 11668628. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) -[2023-09-19 11:53:17,045][72530] Avg episode reward: [(0, '106086.364'), (1, '160888.486')] -[2023-09-19 11:53:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14127104. Throughput: 0: 2820.8, 1: 2820.8. Samples: 11686240. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:22,044][72530] Avg episode reward: [(0, '105829.988'), (1, '160815.863')] -[2023-09-19 11:53:22,079][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013824_7077888.pth... -[2023-09-19 11:53:22,082][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013656_6991872.pth -[2023-09-19 11:53:22,084][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013784_7057408.pth... 
-[2023-09-19 11:53:22,091][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013616_6971392.pth -[2023-09-19 11:53:25,330][73145] Updated weights for policy 0, policy_version 13840 (0.0015) -[2023-09-19 11:53:25,331][73219] Updated weights for policy 1, policy_version 13800 (0.0013) -[2023-09-19 11:53:27,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 14159872. Throughput: 0: 2819.4, 1: 2819.5. Samples: 11718750. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:27,044][72530] Avg episode reward: [(0, '105059.149'), (1, '160896.515')] -[2023-09-19 11:53:32,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 14184448. Throughput: 0: 2847.1, 1: 2847.4. Samples: 11753076. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:32,044][72530] Avg episode reward: [(0, '104051.203'), (1, '160838.692')] -[2023-09-19 11:53:37,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 14209024. Throughput: 0: 2817.1, 1: 2817.1. Samples: 11768374. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:37,044][72530] Avg episode reward: [(0, '108523.798'), (1, '161743.570')] -[2023-09-19 11:53:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013856_7094272.pth... -[2023-09-19 11:53:37,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013896_7114752.pth... -[2023-09-19 11:53:37,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013736_7032832.pth -[2023-09-19 11:53:37,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013696_7012352.pth -[2023-09-19 11:53:40,930][73145] Updated weights for policy 0, policy_version 13920 (0.0017) -[2023-09-19 11:53:40,930][73219] Updated weights for policy 1, policy_version 13880 (0.0016) -[2023-09-19 11:53:42,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14233600. 
Throughput: 0: 2753.8, 1: 2753.5. Samples: 11797004. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:42,044][72530] Avg episode reward: [(0, '108523.798'), (1, '161743.570')] -[2023-09-19 11:53:47,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14258176. Throughput: 0: 2724.0, 1: 2723.2. Samples: 11828370. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:47,044][72530] Avg episode reward: [(0, '108688.999'), (1, '160237.416')] -[2023-09-19 11:53:52,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14282752. Throughput: 0: 2709.0, 1: 2708.0. Samples: 11843704. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:52,045][72530] Avg episode reward: [(0, '112242.915'), (1, '159716.483')] -[2023-09-19 11:53:52,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000013928_7131136.pth... -[2023-09-19 11:53:52,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000013968_7151616.pth... -[2023-09-19 11:53:52,064][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013784_7057408.pth -[2023-09-19 11:53:52,064][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013824_7077888.pth -[2023-09-19 11:53:57,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 14307328. Throughput: 0: 2611.5, 1: 2612.1. Samples: 11870304. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:53:57,044][72530] Avg episode reward: [(0, '112242.915'), (1, '159730.340')] -[2023-09-19 11:53:57,810][73145] Updated weights for policy 0, policy_version 14000 (0.0013) -[2023-09-19 11:53:57,810][73219] Updated weights for policy 1, policy_version 13960 (0.0012) -[2023-09-19 11:54:02,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5470.6). Total num frames: 14331904. Throughput: 0: 2594.1, 1: 2593.9. Samples: 11902086. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:54:02,044][72530] Avg episode reward: [(0, '116633.173'), (1, '156732.668')] -[2023-09-19 11:54:07,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5188.3, 300 sec: 5470.6). Total num frames: 14356480. Throughput: 0: 2554.6, 1: 2554.5. Samples: 11916152. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:54:07,044][72530] Avg episode reward: [(0, '119968.454'), (1, '156732.668')] -[2023-09-19 11:54:07,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014000_7168000.pth... -[2023-09-19 11:54:07,054][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014040_7188480.pth... -[2023-09-19 11:54:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013856_7094272.pth -[2023-09-19 11:54:07,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013896_7114752.pth -[2023-09-19 11:54:12,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5470.6). Total num frames: 14381056. Throughput: 0: 2531.6, 1: 2531.5. Samples: 11946590. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:54:12,044][72530] Avg episode reward: [(0, '126021.713'), (1, '155674.672')] -[2023-09-19 11:54:13,622][73219] Updated weights for policy 1, policy_version 14040 (0.0013) -[2023-09-19 11:54:13,623][73145] Updated weights for policy 0, policy_version 14080 (0.0014) -[2023-09-19 11:54:17,043][72530] Fps is (10 sec: 5734.5, 60 sec: 5188.3, 300 sec: 5470.6). Total num frames: 14413824. Throughput: 0: 2509.2, 1: 2509.1. Samples: 11978896. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:54:17,044][72530] Avg episode reward: [(0, '128184.575'), (1, '154083.119')] -[2023-09-19 11:54:22,043][72530] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5470.6). Total num frames: 14438400. Throughput: 0: 2497.2, 1: 2498.2. Samples: 11993164. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:54:22,044][72530] Avg episode reward: [(0, '128184.575'), (1, '154040.425')] -[2023-09-19 11:54:22,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014080_7208960.pth... -[2023-09-19 11:54:22,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014120_7229440.pth... -[2023-09-19 11:54:22,058][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000013928_7131136.pth -[2023-09-19 11:54:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000013968_7151616.pth -[2023-09-19 11:54:27,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5051.7, 300 sec: 5442.8). Total num frames: 14462976. Throughput: 0: 2535.1, 1: 2535.4. Samples: 12025176. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:54:27,044][72530] Avg episode reward: [(0, '140472.932'), (1, '154651.060')] -[2023-09-19 11:54:29,535][73145] Updated weights for policy 0, policy_version 14160 (0.0011) -[2023-09-19 11:54:29,536][73219] Updated weights for policy 1, policy_version 14120 (0.0014) -[2023-09-19 11:54:32,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5051.7, 300 sec: 5442.8). Total num frames: 14487552. Throughput: 0: 2522.9, 1: 2522.9. Samples: 12055430. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:54:32,044][72530] Avg episode reward: [(0, '141718.503'), (1, '154651.060')] -[2023-09-19 11:54:37,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5051.7, 300 sec: 5442.8). Total num frames: 14512128. Throughput: 0: 2513.4, 1: 2513.2. Samples: 12069904. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:54:37,044][72530] Avg episode reward: [(0, '150806.538'), (1, '155291.907')] -[2023-09-19 11:54:37,054][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014152_7245824.pth... -[2023-09-19 11:54:37,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014192_7266304.pth... 
-[2023-09-19 11:54:37,060][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000014000_7168000.pth -[2023-09-19 11:54:37,063][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000014040_7188480.pth -[2023-09-19 11:54:42,043][72530] Fps is (10 sec: 4915.1, 60 sec: 5051.7, 300 sec: 5442.8). Total num frames: 14536704. Throughput: 0: 2545.0, 1: 2545.5. Samples: 12099376. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:54:42,045][72530] Avg episode reward: [(0, '151204.698'), (1, '156645.148')] -[2023-09-19 11:54:46,277][73145] Updated weights for policy 0, policy_version 14240 (0.0011) -[2023-09-19 11:54:46,277][73219] Updated weights for policy 1, policy_version 14200 (0.0011) -[2023-09-19 11:54:47,043][72530] Fps is (10 sec: 4915.3, 60 sec: 5051.7, 300 sec: 5415.1). Total num frames: 14561280. Throughput: 0: 2519.7, 1: 2520.0. Samples: 12128874. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) -[2023-09-19 11:54:47,044][72530] Avg episode reward: [(0, '151204.698'), (1, '156645.148')] -[2023-09-19 11:54:52,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5415.0). Total num frames: 14585856. Throughput: 0: 2545.9, 1: 2545.9. Samples: 12145282. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:54:52,045][72530] Avg episode reward: [(0, '150668.353'), (1, '159741.496')] -[2023-09-19 11:54:52,055][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014224_7282688.pth... -[2023-09-19 11:54:52,055][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014264_7303168.pth... -[2023-09-19 11:54:52,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000014080_7208960.pth -[2023-09-19 11:54:52,065][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000014120_7229440.pth -[2023-09-19 11:54:57,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5415.1). Total num frames: 14610432. Throughput: 0: 2537.3, 1: 2537.6. Samples: 12174958. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) -[2023-09-19 11:54:57,045][72530] Avg episode reward: [(0, '151344.969'), (1, '159741.496')] -[2023-09-19 11:55:02,044][72530] Fps is (10 sec: 4914.7, 60 sec: 5051.6, 300 sec: 5387.3). Total num frames: 14635008. Throughput: 0: 2521.2, 1: 2521.7. Samples: 12205832. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:55:02,045][72530] Avg episode reward: [(0, '151628.001'), (1, '160794.096')] -[2023-09-19 11:55:02,322][73219] Updated weights for policy 1, policy_version 14280 (0.0013) -[2023-09-19 11:55:02,323][73145] Updated weights for policy 0, policy_version 14320 (0.0013) -[2023-09-19 11:55:07,043][72530] Fps is (10 sec: 4915.2, 60 sec: 5051.7, 300 sec: 5415.1). Total num frames: 14659584. Throughput: 0: 2513.9, 1: 2512.9. Samples: 12219370. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) -[2023-09-19 11:55:07,044][72530] Avg episode reward: [(0, '148176.544'), (1, '161684.229')] -[2023-09-19 11:55:07,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014296_7319552.pth... -[2023-09-19 11:55:07,053][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014336_7340032.pth... -[2023-09-19 11:55:07,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000014152_7245824.pth -[2023-09-19 11:55:07,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000014192_7266304.pth -[2023-09-19 11:55:12,043][72530] Fps is (10 sec: 4915.8, 60 sec: 5051.7, 300 sec: 5387.3). Total num frames: 14684160. Throughput: 0: 2444.2, 1: 2444.4. Samples: 12245164. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:55:12,045][72530] Avg episode reward: [(0, '148176.544'), (1, '161684.229')] -[2023-09-19 11:55:17,043][72530] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5387.3). Total num frames: 14708736. Throughput: 0: 2444.5, 1: 2444.5. Samples: 12275438. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:55:17,045][72530] Avg episode reward: [(0, '148556.520'), (1, '161543.907')] -[2023-09-19 11:55:19,505][73219] Updated weights for policy 1, policy_version 14360 (0.0011) -[2023-09-19 11:55:19,506][73145] Updated weights for policy 0, policy_version 14400 (0.0014) -[2023-09-19 11:55:22,043][72530] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5387.3). Total num frames: 14733312. Throughput: 0: 2457.9, 1: 2457.9. Samples: 12291112. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) -[2023-09-19 11:55:22,044][72530] Avg episode reward: [(0, '146305.866'), (1, '161543.272')] -[2023-09-19 11:55:22,052][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014368_7356416.pth... -[2023-09-19 11:55:22,052][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014408_7376896.pth... -[2023-09-19 11:55:22,061][73131] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000014224_7282688.pth -[2023-09-19 11:55:22,061][73130] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000014264_7303168.pth -[2023-09-19 11:55:22,280][72530] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 72530], exiting... -[2023-09-19 11:55:22,281][72530] Runner profile tree view: -main_loop: 2141.4988 -[2023-09-19 11:55:22,282][72530] Collected {1: 7356416, 0: 7376896}, FPS: 5741.9 -[2023-09-19 11:55:22,281][73130] Stopping Batcher_0... -[2023-09-19 11:55:22,281][73131] Stopping Batcher_1... -[2023-09-19 11:55:22,282][73130] Loop batcher_evt_loop terminating... -[2023-09-19 11:55:22,282][73131] Loop batcher_evt_loop terminating... -[2023-09-19 11:55:22,282][73130] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000014408_7376896.pth... -[2023-09-19 11:55:22,283][73131] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000014368_7356416.pth... -[2023-09-19 11:55:22,285][73221] Stopping RolloutWorker_w1... 
-[2023-09-19 11:55:22,285][73221] Loop rollout_proc1_evt_loop terminating... -[2023-09-19 11:55:22,288][73220] Stopping RolloutWorker_w2... -[2023-09-19 11:55:22,288][73130] Stopping LearnerWorker_p0... -[2023-09-19 11:55:22,288][73220] Loop rollout_proc2_evt_loop terminating... -[2023-09-19 11:55:22,288][73130] Loop learner_proc0_evt_loop terminating... -[2023-09-19 11:55:22,288][73222] Stopping RolloutWorker_w6... -[2023-09-19 11:55:22,288][73131] Stopping LearnerWorker_p1... -[2023-09-19 11:55:22,289][73222] Loop rollout_proc6_evt_loop terminating... -[2023-09-19 11:55:22,289][73131] Loop learner_proc1_evt_loop terminating... -[2023-09-19 11:55:22,290][73218] Stopping RolloutWorker_w0... -[2023-09-19 11:55:22,290][73224] Stopping RolloutWorker_w4... -[2023-09-19 11:55:22,291][73218] Loop rollout_proc0_evt_loop terminating... -[2023-09-19 11:55:22,291][73223] Stopping RolloutWorker_w3... -[2023-09-19 11:55:22,291][73224] Loop rollout_proc4_evt_loop terminating... -[2023-09-19 11:55:22,291][73223] Loop rollout_proc3_evt_loop terminating... -[2023-09-19 11:55:22,292][73226] Stopping RolloutWorker_w7... -[2023-09-19 11:55:22,292][73226] Loop rollout_proc7_evt_loop terminating... -[2023-09-19 11:55:22,295][73229] Stopping RolloutWorker_w5... -[2023-09-19 11:55:22,295][73229] Loop rollout_proc5_evt_loop terminating... -[2023-09-19 11:55:22,296][73145] Weights refcount: 2 0 -[2023-09-19 11:55:22,297][73145] Stopping InferenceWorker_p0-w0... -[2023-09-19 11:55:22,297][73145] Loop inference_proc0-0_evt_loop terminating... -[2023-09-19 11:55:22,301][73219] Weights refcount: 2 0 -[2023-09-19 11:55:22,303][73219] Stopping InferenceWorker_p1-w0... -[2023-09-19 11:55:22,303][73219] Loop inference_proc1-0_evt_loop terminating... 
+[2023-09-21 15:10:45,648][101116] Worker 0 uses CPU cores [0, 1, 2, 3] +[2023-09-21 15:10:45,774][101118] Worker 1 uses CPU cores [4, 5, 6, 7] +[2023-09-21 15:10:46,022][101035] Using optimizer +[2023-09-21 15:10:46,023][101035] No checkpoints found +[2023-09-21 15:10:46,023][101035] Did not load from checkpoint, starting from scratch! +[2023-09-21 15:10:46,023][101035] Initialized policy 1 weights for model version 0 +[2023-09-21 15:10:46,025][101035] LearnerWorker_p1 finished initialization! +[2023-09-21 15:10:46,025][101035] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-21 15:10:46,158][101034] Using optimizer +[2023-09-21 15:10:46,159][101034] No checkpoints found +[2023-09-21 15:10:46,159][101034] Did not load from checkpoint, starting from scratch! +[2023-09-21 15:10:46,159][101034] Initialized policy 0 weights for model version 0 +[2023-09-21 15:10:46,161][101034] LearnerWorker_p0 finished initialization! +[2023-09-21 15:10:46,161][101034] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-21 15:10:46,569][101117] RunningMeanStd input shape: (376,) +[2023-09-21 15:10:46,570][101117] RunningMeanStd input shape: (1,) +[2023-09-21 15:10:46,602][99566] Inference worker 1-0 is ready! +[2023-09-21 15:10:46,707][101115] RunningMeanStd input shape: (376,) +[2023-09-21 15:10:46,707][101115] RunningMeanStd input shape: (1,) +[2023-09-21 15:10:46,739][99566] Inference worker 0-0 is ready! +[2023-09-21 15:10:46,740][99566] All inference workers are ready! Signal rollout workers to start! +[2023-09-21 15:10:46,835][101122] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,836][101122] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,836][101120] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,837][101120] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,838][101118] Decorrelating experience for 0 frames... 
+[2023-09-21 15:10:46,838][101116] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,839][101118] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,839][101119] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,839][101116] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,840][101119] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,840][101121] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,841][101121] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,843][101124] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,844][101124] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,851][101123] Decorrelating experience for 0 frames... +[2023-09-21 15:10:46,852][101123] Decorrelating experience for 64 frames... +[2023-09-21 15:10:46,887][101120] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,887][101122] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,888][101116] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,892][101119] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,893][101118] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,893][101121] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,897][101124] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,903][101123] Decorrelating experience for 128 frames... +[2023-09-21 15:10:46,985][101116] Decorrelating experience for 192 frames... +[2023-09-21 15:10:46,985][101122] Decorrelating experience for 192 frames... +[2023-09-21 15:10:46,985][101120] Decorrelating experience for 192 frames... +[2023-09-21 15:10:46,989][101119] Decorrelating experience for 192 frames... +[2023-09-21 15:10:46,990][101118] Decorrelating experience for 192 frames... +[2023-09-21 15:10:46,995][101121] Decorrelating experience for 192 frames... 
+[2023-09-21 15:10:46,998][101124] Decorrelating experience for 192 frames...
+[2023-09-21 15:10:47,003][101123] Decorrelating experience for 192 frames...
+[2023-09-21 15:10:47,149][101119] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,153][101120] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,154][101116] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,156][101122] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,157][101118] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,163][101121] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,164][101123] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,170][101124] Decorrelating experience for 256 frames...
+[2023-09-21 15:10:47,345][101119] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,348][101120] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,349][101118] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,359][101123] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,363][101122] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,370][101121] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,372][101124] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,377][101116] Decorrelating experience for 320 frames...
+[2023-09-21 15:10:47,595][101123] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,596][101120] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,600][101118] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,603][101119] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,609][101122] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,611][101121] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,632][101116] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,633][101124] Decorrelating experience for 384 frames...
+[2023-09-21 15:10:47,892][101118] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,903][101119] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,910][101122] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,915][101121] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,915][101123] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,917][101120] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,930][101124] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:47,947][101116] Decorrelating experience for 448 frames...
+[2023-09-21 15:10:49,496][99566] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-09-21 15:10:54,497][99566] Fps is (10 sec: 3276.5, 60 sec: 3276.5, 300 sec: 3276.5). Total num frames: 16384. Throughput: 0: 1638.3, 1: 1638.3. Samples: 16384. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:10:54,861][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000024_12288.pth...
+[2023-09-21 15:10:54,871][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000024_12288.pth...
+[2023-09-21 15:10:59,497][99566] Fps is (10 sec: 5734.3, 60 sec: 5734.3, 300 sec: 5734.3). Total num frames: 57344. Throughput: 0: 2677.6, 1: 2619.8. Samples: 52974. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-21 15:10:59,498][99566] Avg episode reward: [(0, '29111.123'), (1, '26500.965')]
+[2023-09-21 15:11:03,368][101115] Updated weights for policy 0, policy_version 80 (0.0014)
+[2023-09-21 15:11:03,369][101117] Updated weights for policy 1, policy_version 80 (0.0013)
+[2023-09-21 15:11:03,395][99566] Heartbeat connected on Batcher_0
+[2023-09-21 15:11:03,398][99566] Heartbeat connected on LearnerWorker_p0
+[2023-09-21 15:11:03,401][99566] Heartbeat connected on Batcher_1
+[2023-09-21 15:11:03,404][99566] Heartbeat connected on LearnerWorker_p1
+[2023-09-21 15:11:03,410][99566] Heartbeat connected on InferenceWorker_p0-w0
+[2023-09-21 15:11:03,415][99566] Heartbeat connected on InferenceWorker_p1-w0
+[2023-09-21 15:11:03,419][99566] Heartbeat connected on RolloutWorker_w0
+[2023-09-21 15:11:03,420][99566] Heartbeat connected on RolloutWorker_w1
+[2023-09-21 15:11:03,425][99566] Heartbeat connected on RolloutWorker_w2
+[2023-09-21 15:11:03,430][99566] Heartbeat connected on RolloutWorker_w5
+[2023-09-21 15:11:03,432][99566] Heartbeat connected on RolloutWorker_w3
+[2023-09-21 15:11:03,435][99566] Heartbeat connected on RolloutWorker_w4
+[2023-09-21 15:11:03,435][99566] Heartbeat connected on RolloutWorker_w6
+[2023-09-21 15:11:03,441][99566] Heartbeat connected on RolloutWorker_w7
+[2023-09-21 15:11:04,497][99566] Fps is (10 sec: 6553.8, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 81920. Throughput: 0: 2445.3, 1: 2433.3. Samples: 73180. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:04,498][99566] Avg episode reward: [(0, '32653.580'), (1, '29531.158')]
+[2023-09-21 15:11:09,497][99566] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 114688. Throughput: 0: 2741.4, 1: 2711.0. Samples: 109048. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:11:09,497][99566] Avg episode reward: [(0, '42202.736'), (1, '37345.618')]
+[2023-09-21 15:11:09,500][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000112_57344.pth...
+[2023-09-21 15:11:09,500][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000112_57344.pth...
+[2023-09-21 15:11:14,496][99566] Fps is (10 sec: 7372.8, 60 sec: 6225.9, 300 sec: 6225.9). Total num frames: 155648. Throughput: 0: 3059.7, 1: 3035.7. Samples: 152388. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:11:14,498][99566] Avg episode reward: [(0, '44269.193'), (1, '39095.771')]
+[2023-09-21 15:11:15,281][101117] Updated weights for policy 1, policy_version 160 (0.0014)
+[2023-09-21 15:11:15,282][101115] Updated weights for policy 0, policy_version 160 (0.0014)
+[2023-09-21 15:11:19,496][99566] Fps is (10 sec: 7372.8, 60 sec: 6280.5, 300 sec: 6280.5). Total num frames: 188416. Throughput: 0: 2891.8, 1: 2871.1. Samples: 172888. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-21 15:11:19,498][99566] Avg episode reward: [(0, '52608.069'), (1, '46035.948')]
+[2023-09-21 15:11:24,497][99566] Fps is (10 sec: 7372.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 229376. Throughput: 0: 3103.4, 1: 3086.0. Samples: 216632. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:24,498][99566] Avg episode reward: [(0, '52608.069'), (1, '49544.367')]
+[2023-09-21 15:11:24,502][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000224_114688.pth...
+[2023-09-21 15:11:24,502][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000224_114688.pth...
+[2023-09-21 15:11:24,508][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000024_12288.pth
+[2023-09-21 15:11:24,509][101035] Saving new best policy, reward=49544.367!
+[2023-09-21 15:11:24,510][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000024_12288.pth
+[2023-09-21 15:11:24,511][101034] Saving new best policy, reward=52608.069!
+[2023-09-21 15:11:26,547][101117] Updated weights for policy 1, policy_version 240 (0.0011)
+[2023-09-21 15:11:26,548][101115] Updated weights for policy 0, policy_version 240 (0.0012)
+[2023-09-21 15:11:29,496][99566] Fps is (10 sec: 7372.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 262144. Throughput: 0: 3277.6, 1: 3275.7. Samples: 262134. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-21 15:11:29,497][99566] Avg episode reward: [(0, '66016.832'), (1, '59449.959')]
+[2023-09-21 15:11:29,498][101034] Saving new best policy, reward=66016.832!
+[2023-09-21 15:11:29,498][101035] Saving new best policy, reward=59449.959!
+[2023-09-21 15:11:34,496][99566] Fps is (10 sec: 6963.4, 60 sec: 6644.6, 300 sec: 6644.6). Total num frames: 299008. Throughput: 0: 3152.0, 1: 3139.7. Samples: 283126. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:34,497][99566] Avg episode reward: [(0, '68820.170'), (1, '64166.988')]
+[2023-09-21 15:11:34,497][101034] Saving new best policy, reward=68820.170!
+[2023-09-21 15:11:34,499][101035] Saving new best policy, reward=64166.988!
+[2023-09-21 15:11:37,925][101117] Updated weights for policy 1, policy_version 320 (0.0015)
+[2023-09-21 15:11:37,925][101115] Updated weights for policy 0, policy_version 320 (0.0013)
+[2023-09-21 15:11:39,497][99566] Fps is (10 sec: 7372.6, 60 sec: 6717.4, 300 sec: 6717.4). Total num frames: 335872. Throughput: 0: 3456.7, 1: 3448.0. Samples: 327094. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:39,498][99566] Avg episode reward: [(0, '75159.121'), (1, '68598.547')]
+[2023-09-21 15:11:39,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000328_167936.pth...
+[2023-09-21 15:11:39,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000328_167936.pth...
+[2023-09-21 15:11:39,513][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000112_57344.pth
+[2023-09-21 15:11:39,513][101035] Saving new best policy, reward=68598.547!
+[2023-09-21 15:11:39,515][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000112_57344.pth
+[2023-09-21 15:11:39,516][101034] Saving new best policy, reward=75159.121!
+[2023-09-21 15:11:44,497][99566] Fps is (10 sec: 6963.0, 60 sec: 6702.5, 300 sec: 6702.5). Total num frames: 368640. Throughput: 0: 3484.0, 1: 3483.4. Samples: 366510. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:44,498][99566] Avg episode reward: [(0, '77287.138'), (1, '73006.291')]
+[2023-09-21 15:11:44,499][101034] Saving new best policy, reward=77287.138!
+[2023-09-21 15:11:44,499][101035] Saving new best policy, reward=73006.291!
+[2023-09-21 15:11:49,496][99566] Fps is (10 sec: 6553.8, 60 sec: 6690.1, 300 sec: 6690.1). Total num frames: 401408. Throughput: 0: 3500.4, 1: 3490.9. Samples: 387786. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:49,497][99566] Avg episode reward: [(0, '79728.904'), (1, '73989.310')]
+[2023-09-21 15:11:49,498][101034] Saving new best policy, reward=79728.904!
+[2023-09-21 15:11:49,498][101035] Saving new best policy, reward=73989.310!
+[2023-09-21 15:11:49,630][101117] Updated weights for policy 1, policy_version 400 (0.0014)
+[2023-09-21 15:11:49,630][101115] Updated weights for policy 0, policy_version 400 (0.0015)
+[2023-09-21 15:11:54,496][99566] Fps is (10 sec: 7372.8, 60 sec: 7099.8, 300 sec: 6805.7). Total num frames: 442368. Throughput: 0: 3564.5, 1: 3565.1. Samples: 429880. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:54,497][99566] Avg episode reward: [(0, '83640.081'), (1, '77200.239')]
+[2023-09-21 15:11:54,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000432_221184.pth...
+[2023-09-21 15:11:54,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000432_221184.pth...
+[2023-09-21 15:11:54,512][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000224_114688.pth
+[2023-09-21 15:11:54,512][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000224_114688.pth
+[2023-09-21 15:11:54,513][101034] Saving new best policy, reward=83640.081!
+[2023-09-21 15:11:54,513][101035] Saving new best policy, reward=77200.239!
+[2023-09-21 15:11:59,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6826.7, 300 sec: 6670.6). Total num frames: 466944. Throughput: 0: 3511.4, 1: 3516.1. Samples: 468628. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:11:59,498][99566] Avg episode reward: [(0, '84922.994'), (1, '77768.201')]
+[2023-09-21 15:11:59,499][101034] Saving new best policy, reward=84922.994!
+[2023-09-21 15:11:59,499][101035] Saving new best policy, reward=77768.201!
+[2023-09-21 15:12:02,658][101117] Updated weights for policy 1, policy_version 480 (0.0016)
+[2023-09-21 15:12:02,658][101115] Updated weights for policy 0, policy_version 480 (0.0011)
+[2023-09-21 15:12:04,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6963.2, 300 sec: 6662.8). Total num frames: 499712. Throughput: 0: 3466.4, 1: 3468.6. Samples: 484962. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:12:04,497][99566] Avg episode reward: [(0, '88681.435'), (1, '80463.062')]
+[2023-09-21 15:12:04,498][101034] Saving new best policy, reward=88681.435!
+[2023-09-21 15:12:04,498][101035] Saving new best policy, reward=80463.062!
+[2023-09-21 15:12:09,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6963.2, 300 sec: 6656.0). Total num frames: 532480. Throughput: 0: 3427.1, 1: 3427.7. Samples: 525100. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:12:09,498][99566] Avg episode reward: [(0, '89644.782'), (1, '81336.452')]
+[2023-09-21 15:12:09,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000520_266240.pth...
+[2023-09-21 15:12:09,509][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000520_266240.pth...
+[2023-09-21 15:12:09,515][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000328_167936.pth
+[2023-09-21 15:12:09,516][101035] Saving new best policy, reward=81336.452!
+[2023-09-21 15:12:09,517][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000328_167936.pth
+[2023-09-21 15:12:09,517][101034] Saving new best policy, reward=89644.782!
+[2023-09-21 15:12:14,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6826.7, 300 sec: 6650.0). Total num frames: 565248. Throughput: 0: 3353.4, 1: 3345.7. Samples: 563594. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-21 15:12:14,498][99566] Avg episode reward: [(0, '93904.727'), (1, '84817.215')]
+[2023-09-21 15:12:14,499][101034] Saving new best policy, reward=93904.727!
+[2023-09-21 15:12:14,499][101035] Saving new best policy, reward=84817.215!
+[2023-09-21 15:12:15,959][101117] Updated weights for policy 1, policy_version 560 (0.0014)
+[2023-09-21 15:12:15,960][101115] Updated weights for policy 0, policy_version 560 (0.0012)
+[2023-09-21 15:12:19,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6826.7, 300 sec: 6644.6). Total num frames: 598016. Throughput: 0: 3291.8, 1: 3291.1. Samples: 579360. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:12:19,498][99566] Avg episode reward: [(0, '95010.164'), (1, '85718.694')]
+[2023-09-21 15:12:19,499][101034] Saving new best policy, reward=95010.164!
+[2023-09-21 15:12:19,499][101035] Saving new best policy, reward=85718.694!
+[2023-09-21 15:12:24,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6639.8). Total num frames: 630784. Throughput: 0: 3273.7, 1: 3269.4. Samples: 621534. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:12:24,498][99566] Avg episode reward: [(0, '98412.703'), (1, '89341.648')]
+[2023-09-21 15:12:24,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000616_315392.pth...
+[2023-09-21 15:12:24,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000616_315392.pth...
+[2023-09-21 15:12:24,515][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000432_221184.pth
+[2023-09-21 15:12:24,516][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000432_221184.pth
+[2023-09-21 15:12:24,516][101034] Saving new best policy, reward=98412.703!
+[2023-09-21 15:12:24,517][101035] Saving new best policy, reward=89341.648!
+[2023-09-21 15:12:27,494][101115] Updated weights for policy 0, policy_version 640 (0.0013)
+[2023-09-21 15:12:27,495][101117] Updated weights for policy 1, policy_version 640 (0.0015)
+[2023-09-21 15:12:29,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6635.5). Total num frames: 663552. Throughput: 0: 3307.6, 1: 3308.7. Samples: 664240. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:12:29,497][99566] Avg episode reward: [(0, '100510.374'), (1, '91250.829')]
+[2023-09-21 15:12:29,498][101034] Saving new best policy, reward=100510.374!
+[2023-09-21 15:12:29,498][101035] Saving new best policy, reward=91250.829!
+[2023-09-21 15:12:34,496][99566] Fps is (10 sec: 7372.9, 60 sec: 6758.4, 300 sec: 6709.6). Total num frames: 704512. Throughput: 0: 3325.3, 1: 3326.3. Samples: 687108. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-21 15:12:34,498][99566] Avg episode reward: [(0, '106969.058'), (1, '95836.961')]
+[2023-09-21 15:12:34,499][101034] Saving new best policy, reward=106969.058!
+[2023-09-21 15:12:34,499][101035] Saving new best policy, reward=95836.961!
+[2023-09-21 15:12:38,780][101115] Updated weights for policy 0, policy_version 720 (0.0012)
+[2023-09-21 15:12:38,780][101117] Updated weights for policy 1, policy_version 720 (0.0015)
+[2023-09-21 15:12:39,497][99566] Fps is (10 sec: 7372.7, 60 sec: 6690.1, 300 sec: 6702.5). Total num frames: 737280. Throughput: 0: 3329.2, 1: 3331.7. Samples: 729624. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-21 15:12:39,498][99566] Avg episode reward: [(0, '106969.058'), (1, '97182.862')]
+[2023-09-21 15:12:39,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000720_368640.pth...
+[2023-09-21 15:12:39,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000720_368640.pth...
+[2023-09-21 15:12:39,517][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000520_266240.pth
+[2023-09-21 15:12:39,517][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000520_266240.pth
+[2023-09-21 15:12:39,517][101035] Saving new best policy, reward=97182.862!
+[2023-09-21 15:12:44,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6696.1). Total num frames: 770048. Throughput: 0: 3327.6, 1: 3329.0. Samples: 768176. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-21 15:12:44,497][99566] Avg episode reward: [(0, '113117.062'), (1, '101505.450')]
+[2023-09-21 15:12:44,498][101034] Saving new best policy, reward=113117.062!
+[2023-09-21 15:12:44,498][101035] Saving new best policy, reward=101505.450!
+[2023-09-21 15:12:49,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6690.1). Total num frames: 802816. Throughput: 0: 3349.2, 1: 3355.5. Samples: 786672. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-21 15:12:49,497][99566] Avg episode reward: [(0, '113117.062'), (1, '101932.079')]
+[2023-09-21 15:12:49,499][101035] Saving new best policy, reward=101932.079!
+[2023-09-21 15:12:51,683][101115] Updated weights for policy 0, policy_version 800 (0.0014)
+[2023-09-21 15:12:51,683][101117] Updated weights for policy 1, policy_version 800 (0.0013)
+[2023-09-21 15:12:54,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6684.7). Total num frames: 835584. Throughput: 0: 3348.4, 1: 3351.9. Samples: 826612. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:12:54,497][99566] Avg episode reward: [(0, '116085.972'), (1, '103659.764')]
+[2023-09-21 15:12:54,505][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000816_417792.pth...
+[2023-09-21 15:12:54,505][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000816_417792.pth...
+[2023-09-21 15:12:54,509][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000616_315392.pth
+[2023-09-21 15:12:54,509][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000616_315392.pth
+[2023-09-21 15:12:54,509][101034] Saving new best policy, reward=116085.972!
+[2023-09-21 15:12:54,509][101035] Saving new best policy, reward=103659.764!
+[2023-09-21 15:12:59,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6679.6). Total num frames: 868352. Throughput: 0: 3349.9, 1: 3347.9. Samples: 864992. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-21 15:12:59,497][99566] Avg episode reward: [(0, '116085.972'), (1, '104473.905')]
+[2023-09-21 15:12:59,498][101035] Saving new best policy, reward=104473.905!
+[2023-09-21 15:13:04,444][101115] Updated weights for policy 0, policy_version 880 (0.0016)
+[2023-09-21 15:13:04,444][101117] Updated weights for policy 1, policy_version 880 (0.0015)
+[2023-09-21 15:13:04,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6675.0). Total num frames: 901120. Throughput: 0: 3359.1, 1: 3359.9. Samples: 881714. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-21 15:13:04,497][99566] Avg episode reward: [(0, '120958.463'), (1, '109086.335')]
+[2023-09-21 15:13:04,498][101034] Saving new best policy, reward=120958.463!
+[2023-09-21 15:13:04,498][101035] Saving new best policy, reward=109086.335!
+[2023-09-21 15:13:09,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6670.6). Total num frames: 933888. Throughput: 0: 3373.1, 1: 3378.0. Samples: 925336. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:09,497][99566] Avg episode reward: [(0, '121585.003'), (1, '109071.902')]
+[2023-09-21 15:13:09,507][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000000912_466944.pth...
+[2023-09-21 15:13:09,507][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000000912_466944.pth...
+[2023-09-21 15:13:09,516][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000720_368640.pth
+[2023-09-21 15:13:09,516][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000720_368640.pth
+[2023-09-21 15:13:09,517][101034] Saving new best policy, reward=121585.003!
+[2023-09-21 15:13:14,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6666.6). Total num frames: 966656. Throughput: 0: 3353.0, 1: 3356.2. Samples: 966154. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-21 15:13:14,497][99566] Avg episode reward: [(0, '122500.165'), (1, '112494.272')]
+[2023-09-21 15:13:14,498][101034] Saving new best policy, reward=122500.165!
+[2023-09-21 15:13:14,498][101035] Saving new best policy, reward=112494.272!
+[2023-09-21 15:13:16,904][101117] Updated weights for policy 1, policy_version 960 (0.0014)
+[2023-09-21 15:13:16,904][101115] Updated weights for policy 0, policy_version 960 (0.0012)
+[2023-09-21 15:13:19,497][99566] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6608.2). Total num frames: 991232. Throughput: 0: 3280.2, 1: 3286.4. Samples: 982602. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:19,497][99566] Avg episode reward: [(0, '124457.495'), (1, '113067.463')]
+[2023-09-21 15:13:19,499][101034] Saving new best policy, reward=124457.495!
+[2023-09-21 15:13:19,499][101035] Saving new best policy, reward=113067.463!
+[2023-09-21 15:13:24,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6553.6, 300 sec: 6606.4). Total num frames: 1024000. Throughput: 0: 3215.8, 1: 3213.7. Samples: 1018952. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:24,498][99566] Avg episode reward: [(0, '128172.103'), (1, '116190.090')]
+[2023-09-21 15:13:24,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001000_512000.pth...
+[2023-09-21 15:13:24,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001000_512000.pth...
+[2023-09-21 15:13:24,513][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000816_417792.pth
+[2023-09-21 15:13:24,513][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000816_417792.pth
+[2023-09-21 15:13:24,513][101035] Saving new best policy, reward=116190.090!
+[2023-09-21 15:13:24,514][101034] Saving new best policy, reward=128172.103!
+[2023-09-21 15:13:29,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6604.8). Total num frames: 1056768. Throughput: 0: 3251.9, 1: 3247.1. Samples: 1060632. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:29,498][99566] Avg episode reward: [(0, '129655.387'), (1, '116680.591')]
+[2023-09-21 15:13:29,499][101034] Saving new best policy, reward=129655.387!
+[2023-09-21 15:13:29,499][101035] Saving new best policy, reward=116680.591!
+[2023-09-21 15:13:29,751][101117] Updated weights for policy 1, policy_version 1040 (0.0013)
+[2023-09-21 15:13:29,751][101115] Updated weights for policy 0, policy_version 1040 (0.0013)
+[2023-09-21 15:13:34,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6603.2). Total num frames: 1089536. Throughput: 0: 3241.6, 1: 3235.9. Samples: 1078160. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:34,498][99566] Avg episode reward: [(0, '131544.446'), (1, '116603.723')]
+[2023-09-21 15:13:34,499][101034] Saving new best policy, reward=131544.446!
+[2023-09-21 15:13:39,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6601.8). Total num frames: 1122304. Throughput: 0: 3209.9, 1: 3206.6. Samples: 1115360. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-21 15:13:39,498][99566] Avg episode reward: [(0, '132903.044'), (1, '116397.835')]
+[2023-09-21 15:13:39,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001096_561152.pth...
+[2023-09-21 15:13:39,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001096_561152.pth...
+[2023-09-21 15:13:39,514][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000000912_466944.pth
+[2023-09-21 15:13:39,516][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000000912_466944.pth
+[2023-09-21 15:13:39,517][101034] Saving new best policy, reward=132903.044!
+[2023-09-21 15:13:42,911][101115] Updated weights for policy 0, policy_version 1120 (0.0014)
+[2023-09-21 15:13:42,911][101117] Updated weights for policy 1, policy_version 1120 (0.0013)
+[2023-09-21 15:13:44,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6600.4). Total num frames: 1155072. Throughput: 0: 3195.6, 1: 3195.7. Samples: 1152604. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:44,498][99566] Avg episode reward: [(0, '137104.414'), (1, '119496.691')]
+[2023-09-21 15:13:44,499][101034] Saving new best policy, reward=137104.414!
+[2023-09-21 15:13:44,499][101035] Saving new best policy, reward=119496.691!
+[2023-09-21 15:13:49,496][99566] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6553.6). Total num frames: 1179648. Throughput: 0: 3200.2, 1: 3200.0. Samples: 1169724. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:49,497][99566] Avg episode reward: [(0, '137104.414'), (1, '121701.677')]
+[2023-09-21 15:13:49,526][101035] Saving new best policy, reward=121701.677!
+[2023-09-21 15:13:54,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6597.9). Total num frames: 1220608. Throughput: 0: 3163.7, 1: 3159.2. Samples: 1209864. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:54,498][99566] Avg episode reward: [(0, '142552.888'), (1, '126028.832')]
+[2023-09-21 15:13:54,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001192_610304.pth...
+[2023-09-21 15:13:54,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001192_610304.pth...
+[2023-09-21 15:13:54,515][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001000_512000.pth
+[2023-09-21 15:13:54,515][101035] Saving new best policy, reward=126028.832!
+[2023-09-21 15:13:54,516][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001000_512000.pth
+[2023-09-21 15:13:54,516][101034] Saving new best policy, reward=142552.888!
+[2023-09-21 15:13:55,395][101117] Updated weights for policy 1, policy_version 1200 (0.0013)
+[2023-09-21 15:13:55,395][101115] Updated weights for policy 0, policy_version 1200 (0.0014)
+[2023-09-21 15:13:59,496][99566] Fps is (10 sec: 7372.9, 60 sec: 6417.1, 300 sec: 6596.7). Total num frames: 1253376. Throughput: 0: 3178.4, 1: 3175.2. Samples: 1252066. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:13:59,497][99566] Avg episode reward: [(0, '143172.574'), (1, '126257.348')]
+[2023-09-21 15:13:59,498][101034] Saving new best policy, reward=143172.574!
+[2023-09-21 15:13:59,498][101035] Saving new best policy, reward=126257.348!
+[2023-09-21 15:14:04,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6595.6). Total num frames: 1286144. Throughput: 0: 3190.2, 1: 3193.8. Samples: 1269880. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:04,497][99566] Avg episode reward: [(0, '143312.733'), (1, '133847.080')]
+[2023-09-21 15:14:04,498][101035] Saving new best policy, reward=133847.080!
+[2023-09-21 15:14:04,498][101034] Saving new best policy, reward=143312.733!
+[2023-09-21 15:14:07,603][101117] Updated weights for policy 1, policy_version 1280 (0.0015)
+[2023-09-21 15:14:07,604][101115] Updated weights for policy 0, policy_version 1280 (0.0013)
+[2023-09-21 15:14:09,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6594.6). Total num frames: 1318912. Throughput: 0: 3247.3, 1: 3248.9. Samples: 1311276. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:14:09,497][99566] Avg episode reward: [(0, '142024.998'), (1, '135803.021')]
+[2023-09-21 15:14:09,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001288_659456.pth...
+[2023-09-21 15:14:09,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001288_659456.pth...
+[2023-09-21 15:14:09,514][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001096_561152.pth
+[2023-09-21 15:14:09,514][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001096_561152.pth
+[2023-09-21 15:14:09,514][101035] Saving new best policy, reward=135803.021!
+[2023-09-21 15:14:14,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6593.6). Total num frames: 1351680. Throughput: 0: 3252.1, 1: 3252.0. Samples: 1353318. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-21 15:14:14,497][99566] Avg episode reward: [(0, '137567.192'), (1, '137580.324')]
+[2023-09-21 15:14:14,498][101035] Saving new best policy, reward=137580.324!
+[2023-09-21 15:14:19,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6592.6). Total num frames: 1384448. Throughput: 0: 3300.0, 1: 3298.9. Samples: 1375110. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:14:19,497][99566] Avg episode reward: [(0, '135167.354'), (1, '138853.897')]
+[2023-09-21 15:14:19,498][101035] Saving new best policy, reward=138853.897!
+[2023-09-21 15:14:19,579][101115] Updated weights for policy 0, policy_version 1360 (0.0013)
+[2023-09-21 15:14:19,580][101117] Updated weights for policy 1, policy_version 1360 (0.0011)
+[2023-09-21 15:14:24,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 1409024. Throughput: 0: 3251.1, 1: 3259.6. Samples: 1408344. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-21 15:14:24,497][99566] Avg episode reward: [(0, '133135.888'), (1, '140760.635')]
+[2023-09-21 15:14:24,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001376_704512.pth...
+[2023-09-21 15:14:24,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001376_704512.pth...
+[2023-09-21 15:14:24,519][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001192_610304.pth
+[2023-09-21 15:14:24,519][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001192_610304.pth
+[2023-09-21 15:14:24,520][101035] Saving new best policy, reward=140760.635!
+[2023-09-21 15:14:29,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 1441792. Throughput: 0: 3224.6, 1: 3224.8. Samples: 1442826. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-21 15:14:29,498][99566] Avg episode reward: [(0, '133811.659'), (1, '141653.024')]
+[2023-09-21 15:14:29,499][101035] Saving new best policy, reward=141653.024!
+[2023-09-21 15:14:33,167][101115] Updated weights for policy 0, policy_version 1440 (0.0014)
+[2023-09-21 15:14:33,167][101117] Updated weights for policy 1, policy_version 1440 (0.0011)
+[2023-09-21 15:14:34,496][99566] Fps is (10 sec: 7372.8, 60 sec: 6553.6, 300 sec: 6590.0). Total num frames: 1482752. Throughput: 0: 3281.4, 1: 3281.6. Samples: 1465058. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:34,498][99566] Avg episode reward: [(0, '137138.699'), (1, '143246.225')]
+[2023-09-21 15:14:34,499][101035] Saving new best policy, reward=143246.225!
+[2023-09-21 15:14:39,496][99566] Fps is (10 sec: 7372.9, 60 sec: 6553.6, 300 sec: 6589.2). Total num frames: 1515520. Throughput: 0: 3298.9, 1: 3303.3. Samples: 1506962. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:39,497][99566] Avg episode reward: [(0, '138323.470'), (1, '146247.815')]
+[2023-09-21 15:14:39,505][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001480_757760.pth...
+[2023-09-21 15:14:39,505][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001480_757760.pth...
+[2023-09-21 15:14:39,514][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001288_659456.pth
+[2023-09-21 15:14:39,515][101035] Saving new best policy, reward=146247.815!
+[2023-09-21 15:14:39,515][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001288_659456.pth
+[2023-09-21 15:14:44,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6588.5). Total num frames: 1548288. Throughput: 0: 3242.7, 1: 3242.9. Samples: 1543918. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:44,497][99566] Avg episode reward: [(0, '146316.476'), (1, '147786.818')]
+[2023-09-21 15:14:44,498][101034] Saving new best policy, reward=146316.476!
+[2023-09-21 15:14:44,498][101035] Saving new best policy, reward=147786.818!
+[2023-09-21 15:14:46,115][101115] Updated weights for policy 0, policy_version 1520 (0.0014)
+[2023-09-21 15:14:46,115][101117] Updated weights for policy 1, policy_version 1520 (0.0015)
+[2023-09-21 15:14:49,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 1572864. Throughput: 0: 3228.2, 1: 3221.9. Samples: 1560134. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:49,497][99566] Avg episode reward: [(0, '148683.548'), (1, '148519.388')]
+[2023-09-21 15:14:49,498][101034] Saving new best policy, reward=148683.548!
+[2023-09-21 15:14:49,498][101035] Saving new best policy, reward=148519.388!
+[2023-09-21 15:14:54,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 1605632. Throughput: 0: 3169.4, 1: 3167.6. Samples: 1596444. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:14:54,497][99566] Avg episode reward: [(0, '154239.879'), (1, '148233.580')]
+[2023-09-21 15:14:54,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001568_802816.pth...
+[2023-09-21 15:14:54,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001568_802816.pth...
+[2023-09-21 15:14:54,512][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001376_704512.pth
+[2023-09-21 15:14:54,514][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001376_704512.pth
+[2023-09-21 15:14:54,514][101034] Saving new best policy, reward=154239.879!
+[2023-09-21 15:14:59,496][99566] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6520.8). Total num frames: 1630208. Throughput: 0: 3099.2, 1: 3100.9. Samples: 1632322. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-21 15:14:59,498][99566] Avg episode reward: [(0, '154239.879'), (1, '148584.617')]
+[2023-09-21 15:14:59,499][101035] Saving new best policy, reward=148584.617!
+[2023-09-21 15:14:59,981][101117] Updated weights for policy 1, policy_version 1600 (0.0013)
+[2023-09-21 15:14:59,981][101115] Updated weights for policy 0, policy_version 1600 (0.0013)
+[2023-09-21 15:15:04,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6521.5). Total num frames: 1662976. Throughput: 0: 3053.0, 1: 3054.0. Samples: 1649926.
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-21 15:15:04,498][99566] Avg episode reward: [(0, '155604.981'), (1, '151719.185')] +[2023-09-21 15:15:04,499][101034] Saving new best policy, reward=155604.981! +[2023-09-21 15:15:04,499][101035] Saving new best policy, reward=151719.185! +[2023-09-21 15:15:09,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6522.1). Total num frames: 1695744. Throughput: 0: 3104.8, 1: 3104.4. Samples: 1687760. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-21 15:15:09,498][99566] Avg episode reward: [(0, '155604.981'), (1, '151719.185')] +[2023-09-21 15:15:09,508][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001656_847872.pth... +[2023-09-21 15:15:09,508][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001656_847872.pth... +[2023-09-21 15:15:09,514][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001480_757760.pth +[2023-09-21 15:15:09,517][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001480_757760.pth +[2023-09-21 15:15:12,938][101115] Updated weights for policy 0, policy_version 1680 (0.0015) +[2023-09-21 15:15:12,938][101117] Updated weights for policy 1, policy_version 1680 (0.0013) +[2023-09-21 15:15:14,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6522.7). Total num frames: 1728512. Throughput: 0: 3141.0, 1: 3140.8. Samples: 1725508. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-21 15:15:14,497][99566] Avg episode reward: [(0, '155183.801'), (1, '155086.429')] +[2023-09-21 15:15:14,498][101035] Saving new best policy, reward=155086.429! +[2023-09-21 15:15:19,497][99566] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6492.9). Total num frames: 1753088. Throughput: 0: 3084.1, 1: 3084.4. Samples: 1742640. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-21 15:15:19,498][99566] Avg episode reward: [(0, '155183.801'), (1, '155086.429')] +[2023-09-21 15:15:24,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6494.0). Total num frames: 1785856. Throughput: 0: 3047.1, 1: 3042.9. Samples: 1781016. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-21 15:15:24,497][99566] Avg episode reward: [(0, '156547.028'), (1, '156937.777')] +[2023-09-21 15:15:24,507][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001744_892928.pth... +[2023-09-21 15:15:24,507][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001744_892928.pth... +[2023-09-21 15:15:24,514][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001568_802816.pth +[2023-09-21 15:15:24,514][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001568_802816.pth +[2023-09-21 15:15:24,515][101035] Saving new best policy, reward=156937.777! +[2023-09-21 15:15:24,515][101034] Saving new best policy, reward=156547.028! +[2023-09-21 15:15:25,925][101115] Updated weights for policy 0, policy_version 1760 (0.0016) +[2023-09-21 15:15:25,926][101117] Updated weights for policy 1, policy_version 1760 (0.0016) +[2023-09-21 15:15:29,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6495.1). Total num frames: 1818624. Throughput: 0: 3045.6, 1: 3055.0. Samples: 1818448. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-21 15:15:29,497][99566] Avg episode reward: [(0, '156547.028'), (1, '156937.777')] +[2023-09-21 15:15:34,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6496.1). Total num frames: 1851392. Throughput: 0: 3056.4, 1: 3058.8. Samples: 1835316. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-21 15:15:34,497][99566] Avg episode reward: [(0, '157397.752'), (1, '159312.801')] +[2023-09-21 15:15:34,498][101034] Saving new best policy, reward=157397.752! 
+[2023-09-21 15:15:34,498][101035] Saving new best policy, reward=159312.801!
+[2023-09-21 15:15:39,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6468.9). Total num frames: 1875968. Throughput: 0: 3089.1, 1: 3090.0. Samples: 1874502. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:15:39,497][99566] Avg episode reward: [(0, '156903.730'), (1, '159312.801')]
+[2023-09-21 15:15:39,554][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001840_942080.pth...
+[2023-09-21 15:15:39,555][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001840_942080.pth...
+[2023-09-21 15:15:39,558][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001656_847872.pth
+[2023-09-21 15:15:39,559][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001656_847872.pth
+[2023-09-21 15:15:39,559][101117] Updated weights for policy 1, policy_version 1840 (0.0011)
+[2023-09-21 15:15:39,560][101115] Updated weights for policy 0, policy_version 1840 (0.0013)
+[2023-09-21 15:15:44,496][99566] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6498.1). Total num frames: 1916928. Throughput: 0: 3111.8, 1: 3110.3. Samples: 1912318. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:15:44,498][99566] Avg episode reward: [(0, '157395.614'), (1, '159854.521')]
+[2023-09-21 15:15:44,499][101035] Saving new best policy, reward=159854.521!
+[2023-09-21 15:15:49,496][99566] Fps is (10 sec: 7372.8, 60 sec: 6280.5, 300 sec: 6553.6). Total num frames: 1949696. Throughput: 0: 3142.5, 1: 3145.9. Samples: 1932904. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-21 15:15:49,497][99566] Avg episode reward: [(0, '157502.563'), (1, '159854.521')]
+[2023-09-21 15:15:49,498][101034] Saving new best policy, reward=157502.563!
+[2023-09-21 15:15:51,986][101115] Updated weights for policy 0, policy_version 1920 (0.0012)
+[2023-09-21 15:15:51,986][101117] Updated weights for policy 1, policy_version 1920 (0.0012)
+[2023-09-21 15:15:54,496][99566] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6498.1). Total num frames: 1974272. Throughput: 0: 3146.4, 1: 3140.3. Samples: 1970662. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:15:54,497][99566] Avg episode reward: [(0, '154117.366'), (1, '160110.948')]
+[2023-09-21 15:15:54,503][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000001928_987136.pth...
+[2023-09-21 15:15:54,503][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000001928_987136.pth...
+[2023-09-21 15:15:54,507][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001744_892928.pth
+[2023-09-21 15:15:54,507][101035] Saving new best policy, reward=160110.948!
+[2023-09-21 15:15:54,512][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001744_892928.pth
+[2023-09-21 15:15:59,497][99566] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 2015232. Throughput: 0: 3161.0, 1: 3160.7. Samples: 2009984. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:15:59,498][99566] Avg episode reward: [(0, '152354.094'), (1, '160110.948')]
+[2023-09-21 15:16:04,496][99566] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6525.8). Total num frames: 2039808. Throughput: 0: 3201.8, 1: 3202.4. Samples: 2030826. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:16:04,497][99566] Avg episode reward: [(0, '145305.518'), (1, '161252.171')]
+[2023-09-21 15:16:04,498][101035] Saving new best policy, reward=161252.171!
+[2023-09-21 15:16:04,810][101117] Updated weights for policy 1, policy_version 2000 (0.0015)
+[2023-09-21 15:16:04,810][101115] Updated weights for policy 0, policy_version 2000 (0.0012)
+[2023-09-21 15:16:09,497][99566] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6498.1). Total num frames: 2072576. Throughput: 0: 3152.8, 1: 3155.5. Samples: 2064886. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:16:09,498][99566] Avg episode reward: [(0, '144668.785'), (1, '161252.171')]
+[2023-09-21 15:16:09,506][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002024_1036288.pth...
+[2023-09-21 15:16:09,506][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002024_1036288.pth...
+[2023-09-21 15:16:09,511][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001840_942080.pth
+[2023-09-21 15:16:09,513][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001840_942080.pth
+[2023-09-21 15:16:14,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6470.3). Total num frames: 2097152. Throughput: 0: 3118.9, 1: 3111.8. Samples: 2098830. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-21 15:16:14,498][99566] Avg episode reward: [(0, '143485.830'), (1, '162275.771')]
+[2023-09-21 15:16:14,500][101035] Saving new best policy, reward=162275.771!
+[2023-09-21 15:16:18,984][101115] Updated weights for policy 0, policy_version 2080 (0.0014)
+[2023-09-21 15:16:18,985][101117] Updated weights for policy 1, policy_version 2080 (0.0014)
+[2023-09-21 15:16:19,497][99566] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6442.5). Total num frames: 2129920. Throughput: 0: 3111.4, 1: 3107.0. Samples: 2115142. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:19,498][99566] Avg episode reward: [(0, '141474.267'), (1, '162345.285')]
+[2023-09-21 15:16:19,499][101035] Saving new best policy, reward=162345.285!
+[2023-09-21 15:16:24,497][99566] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6442.5). Total num frames: 2162688. Throughput: 0: 3106.0, 1: 3115.1. Samples: 2154452. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:24,498][99566] Avg episode reward: [(0, '140839.183'), (1, '163464.763')]
+[2023-09-21 15:16:24,507][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002112_1081344.pth...
+[2023-09-21 15:16:24,507][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002112_1081344.pth...
+[2023-09-21 15:16:24,516][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000001928_987136.pth
+[2023-09-21 15:16:24,516][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000001928_987136.pth
+[2023-09-21 15:16:24,517][101035] Saving new best policy, reward=163464.763!
+[2023-09-21 15:16:29,496][99566] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6428.6). Total num frames: 2195456. Throughput: 0: 3110.6, 1: 3111.2. Samples: 2192296. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-21 15:16:29,497][99566] Avg episode reward: [(0, '141474.326'), (1, '163612.163')]
+[2023-09-21 15:16:29,498][101035] Saving new best policy, reward=163612.163!
+[2023-09-21 15:16:32,534][101115] Updated weights for policy 0, policy_version 2160 (0.0013)
+[2023-09-21 15:16:32,534][101117] Updated weights for policy 1, policy_version 2160 (0.0015)
+[2023-09-21 15:16:34,496][99566] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6387.0). Total num frames: 2220032. Throughput: 0: 3060.2, 1: 3058.9. Samples: 2208266. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:34,497][99566] Avg episode reward: [(0, '143510.663'), (1, '161964.692')]
+[2023-09-21 15:16:39,497][99566] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6387.0). Total num frames: 2252800. Throughput: 0: 3029.4, 1: 3028.4. Samples: 2243264. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:39,498][99566] Avg episode reward: [(0, '144449.158'), (1, '159709.222')]
+[2023-09-21 15:16:39,507][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002200_1126400.pth...
+[2023-09-21 15:16:39,507][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002200_1126400.pth...
+[2023-09-21 15:16:39,513][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002024_1036288.pth
+[2023-09-21 15:16:39,514][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002024_1036288.pth
+[2023-09-21 15:16:44,496][99566] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6359.2). Total num frames: 2277376. Throughput: 0: 2993.8, 1: 2994.1. Samples: 2279440. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:44,497][99566] Avg episode reward: [(0, '144500.336'), (1, '159960.034')]
+[2023-09-21 15:16:46,205][101117] Updated weights for policy 1, policy_version 2240 (0.0015)
+[2023-09-21 15:16:46,206][101115] Updated weights for policy 0, policy_version 2240 (0.0013)
+[2023-09-21 15:16:49,496][99566] Fps is (10 sec: 5734.5, 60 sec: 6007.5, 300 sec: 6331.4). Total num frames: 2310144. Throughput: 0: 2960.0, 1: 2959.9. Samples: 2297222. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-21 15:16:49,497][99566] Avg episode reward: [(0, '148419.870'), (1, '157559.143')]
+[2023-09-21 15:16:54,497][99566] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6359.2). Total num frames: 2342912. Throughput: 0: 2975.2, 1: 2973.6. Samples: 2332580. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-21 15:16:54,498][99566] Avg episode reward: [(0, '148104.147'), (1, '157533.887')]
+[2023-09-21 15:16:54,507][101034] Saving ./train_dir/Standup/checkpoint_p0/checkpoint_000002288_1171456.pth...
+[2023-09-21 15:16:54,507][101035] Saving ./train_dir/Standup/checkpoint_p1/checkpoint_000002288_1171456.pth...
+[2023-09-21 15:16:54,512][101034] Removing ./train_dir/Standup/checkpoint_p0/checkpoint_000002112_1081344.pth
+[2023-09-21 15:16:54,513][101035] Removing ./train_dir/Standup/checkpoint_p1/checkpoint_000002112_1081344.pth
+[2023-09-21 15:16:59,496][99566] Fps is (10 sec: 5734.5, 60 sec: 5871.0, 300 sec: 6331.4). Total num frames: 2367488. Throughput: 0: 3009.8, 1: 3008.9. Samples: 2369672. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-21 15:16:59,497][99566] Avg episode reward: [(0, '149878.068'), (1, '156400.338')]
+[2023-09-21 15:16:59,928][101115] Updated weights for policy 0, policy_vers