[2023-07-04 11:28:34,438][00179] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-07-04 11:28:34,442][00179] Rollout worker 0 uses device cpu
[2023-07-04 11:28:34,444][00179] Rollout worker 1 uses device cpu
[2023-07-04 11:28:34,445][00179] Rollout worker 2 uses device cpu
[2023-07-04 11:28:34,447][00179] Rollout worker 3 uses device cpu
[2023-07-04 11:28:34,448][00179] Rollout worker 4 uses device cpu
[2023-07-04 11:28:34,449][00179] Rollout worker 5 uses device cpu
[2023-07-04 11:28:34,451][00179] Rollout worker 6 uses device cpu
[2023-07-04 11:28:34,452][00179] Rollout worker 7 uses device cpu
[2023-07-04 11:28:34,610][00179] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 11:28:34,611][00179] InferenceWorker_p0-w0: min num requests: 2
[2023-07-04 11:28:34,643][00179] Starting all processes...
[2023-07-04 11:28:34,645][00179] Starting process learner_proc0
[2023-07-04 11:28:34,693][00179] Starting all processes...
[2023-07-04 11:28:34,707][00179] Starting process inference_proc0-0
[2023-07-04 11:28:34,708][00179] Starting process rollout_proc0
[2023-07-04 11:28:34,709][00179] Starting process rollout_proc1
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc2
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc3
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc4
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc5
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc6
[2023-07-04 11:28:34,713][00179] Starting process rollout_proc7
[2023-07-04 11:28:50,065][11008] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 11:28:50,072][11008] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-07-04 11:28:50,138][11008] Num visible devices: 1
[2023-07-04 11:28:50,185][11014] Worker 5 uses CPU cores [1]
[2023-07-04 11:28:50,361][11012] Worker 3 uses CPU cores [1]
[2023-07-04 11:28:50,362][10995] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 11:28:50,370][10995] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-07-04 11:28:50,380][11015] Worker 4 uses CPU cores [0]
[2023-07-04 11:28:50,410][10995] Num visible devices: 1
[2023-07-04 11:28:50,411][11010] Worker 1 uses CPU cores [1]
[2023-07-04 11:28:50,421][11013] Worker 6 uses CPU cores [0]
[2023-07-04 11:28:50,422][11009] Worker 0 uses CPU cores [0]
[2023-07-04 11:28:50,428][10995] Starting seed is not provided
[2023-07-04 11:28:50,429][10995] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 11:28:50,430][10995] Initializing actor-critic model on device cuda:0
[2023-07-04 11:28:50,430][10995] RunningMeanStd input shape: (3, 72, 128)
[2023-07-04 11:28:50,433][10995] RunningMeanStd input shape: (1,)
[2023-07-04 11:28:50,451][10995] ConvEncoder: input_channels=3
[2023-07-04 11:28:50,496][11016] Worker 7 uses CPU cores [1]
[2023-07-04 11:28:50,525][11011] Worker 2 uses CPU cores [0]
[2023-07-04 11:28:50,711][10995] Conv encoder output size: 512
[2023-07-04 11:28:50,712][10995] Policy head output size: 512
[2023-07-04 11:28:50,765][10995] Created Actor Critic model with architecture:
[2023-07-04 11:28:50,765][10995] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
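
For reference, a minimal PyTorch sketch of the module tree printed above. The conv filter sizes (32/64/128 channels with 8/4, 4/2, 3/2 kernel/stride pairs) are an assumption based on Sample Factory's default VizDoom encoder; the log itself confirms only the 3-channel input, the 512-unit encoder and policy-head outputs, the GRU(512, 512) core, the 1-unit critic head, and the 5-logit action head.

```python
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, obs_shape=(3, 72, 128), num_actions=5, hidden=512):
        super().__init__()
        # conv_head mirrors the three Conv2d+ELU pairs in the printout;
        # the filter sizes here are assumed, not taken from the log.
        self.conv_head = nn.Sequential(
            nn.Conv2d(obs_shape[0], 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # Infer the flattened conv output size with a dummy forward pass.
        with torch.no_grad():
            n_flat = self.conv_head(torch.zeros(1, *obs_shape)).flatten(1).shape[1]
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)                  # ModelCoreRNN
        self.critic_linear = nn.Linear(hidden, 1)           # value head
        self.distribution_linear = nn.Linear(hidden, num_actions)  # action logits

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq_len = 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

logits, value, _ = ActorCriticSketch()(torch.zeros(4, 3, 72, 128))
print(logits.shape, value.shape)  # torch.Size([4, 5]) torch.Size([4, 1])
```
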
[2023-07-04 11:28:54,589][00179] Heartbeat connected on Batcher_0
[2023-07-04 11:28:54,610][00179] Heartbeat connected on InferenceWorker_p0-w0
[2023-07-04 11:28:54,620][00179] Heartbeat connected on RolloutWorker_w0
[2023-07-04 11:28:54,623][00179] Heartbeat connected on RolloutWorker_w1
[2023-07-04 11:28:54,627][00179] Heartbeat connected on RolloutWorker_w2
[2023-07-04 11:28:54,630][00179] Heartbeat connected on RolloutWorker_w3
[2023-07-04 11:28:54,637][00179] Heartbeat connected on RolloutWorker_w4
[2023-07-04 11:28:54,639][00179] Heartbeat connected on RolloutWorker_w5
[2023-07-04 11:28:54,641][00179] Heartbeat connected on RolloutWorker_w6
[2023-07-04 11:28:54,643][00179] Heartbeat connected on RolloutWorker_w7
[2023-07-04 11:28:58,675][10995] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-07-04 11:28:58,675][10995] No checkpoints found
[2023-07-04 11:28:58,676][10995] Did not load from checkpoint, starting from scratch!
[2023-07-04 11:28:58,676][10995] Initialized policy 0 weights for model version 0
[2023-07-04 11:28:58,681][10995] LearnerWorker_p0 finished initialization!
[2023-07-04 11:28:58,682][10995] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-07-04 11:28:58,682][00179] Heartbeat connected on LearnerWorker_p0
[2023-07-04 11:28:58,909][11008] RunningMeanStd input shape: (3, 72, 128)
[2023-07-04 11:28:58,911][11008] RunningMeanStd input shape: (1,)
[2023-07-04 11:28:58,930][11008] ConvEncoder: input_channels=3
[2023-07-04 11:28:59,089][11008] Conv encoder output size: 512
[2023-07-04 11:28:59,090][11008] Policy head output size: 512
[2023-07-04 11:28:59,214][00179] Inference worker 0-0 is ready!
[2023-07-04 11:28:59,216][00179] All inference workers are ready! Signal rollout workers to start!
[2023-07-04 11:28:59,312][11013] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,314][11009] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,315][11015] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,318][11011] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,376][11016] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,377][11012] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,381][11010] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:28:59,381][11014] Doom resolution: 160x120, resize resolution: (128, 72)
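
The eight lines above describe the observation pipeline: each worker renders Doom at 160x120 and resizes to 128x72, matching the channels-first (3, 72, 128) RunningMeanStd input shape logged during model init. A sketch of that preprocessing, assuming an OpenCV resize backend:

```python
import numpy as np
import cv2

frame = np.zeros((120, 160, 3), dtype=np.uint8)   # native Doom frame (H, W, C)
resized = cv2.resize(frame, (128, 72))            # cv2 dsize is (width, height)
obs = np.transpose(resized, (2, 0, 1))            # -> channels-first
assert obs.shape == (3, 72, 128)
```
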
[2023-07-04 11:29:00,824][11012] Decorrelating experience for 0 frames...
[2023-07-04 11:29:00,826][11014] Decorrelating experience for 0 frames...
[2023-07-04 11:29:01,459][11015] Decorrelating experience for 0 frames...
[2023-07-04 11:29:01,479][11013] Decorrelating experience for 0 frames...
[2023-07-04 11:29:01,476][11011] Decorrelating experience for 0 frames...
[2023-07-04 11:29:01,489][11009] Decorrelating experience for 0 frames...
[2023-07-04 11:29:02,638][11012] Decorrelating experience for 32 frames...
[2023-07-04 11:29:02,666][11010] Decorrelating experience for 0 frames...
[2023-07-04 11:29:03,057][11015] Decorrelating experience for 32 frames...
[2023-07-04 11:29:03,066][11013] Decorrelating experience for 32 frames...
[2023-07-04 11:29:03,182][11014] Decorrelating experience for 32 frames...
[2023-07-04 11:29:03,187][11016] Decorrelating experience for 0 frames...
[2023-07-04 11:29:03,417][00179] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 11:29:04,013][11011] Decorrelating experience for 32 frames...
[2023-07-04 11:29:04,187][11009] Decorrelating experience for 32 frames...
[2023-07-04 11:29:04,378][11010] Decorrelating experience for 32 frames...
[2023-07-04 11:29:04,763][11012] Decorrelating experience for 64 frames...
[2023-07-04 11:29:04,802][11013] Decorrelating experience for 64 frames...
[2023-07-04 11:29:04,971][11014] Decorrelating experience for 64 frames...
[2023-07-04 11:29:05,248][11011] Decorrelating experience for 64 frames...
[2023-07-04 11:29:05,340][11016] Decorrelating experience for 32 frames...
[2023-07-04 11:29:05,838][11010] Decorrelating experience for 64 frames...
[2023-07-04 11:29:05,965][11014] Decorrelating experience for 96 frames...
[2023-07-04 11:29:06,612][11013] Decorrelating experience for 96 frames...
[2023-07-04 11:29:06,612][11010] Decorrelating experience for 96 frames...
[2023-07-04 11:29:06,623][11011] Decorrelating experience for 96 frames...
[2023-07-04 11:29:06,858][11009] Decorrelating experience for 64 frames...
[2023-07-04 11:29:07,095][11015] Decorrelating experience for 64 frames...
[2023-07-04 11:29:07,699][11016] Decorrelating experience for 64 frames...
[2023-07-04 11:29:08,417][00179] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 152.4. Samples: 762. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 11:29:08,424][00179] Avg episode reward: [(0, '1.408')]
[2023-07-04 11:29:09,376][11016] Decorrelating experience for 96 frames...
[2023-07-04 11:29:10,221][11009] Decorrelating experience for 96 frames...
[2023-07-04 11:29:10,257][11015] Decorrelating experience for 96 frames...
[2023-07-04 11:29:11,130][10995] Signal inference workers to stop experience collection...
[2023-07-04 11:29:11,145][11008] InferenceWorker_p0-w0: stopping experience collection
[2023-07-04 11:29:11,438][11012] Decorrelating experience for 96 frames...
[2023-07-04 11:29:13,417][00179] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 255.0. Samples: 2550. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-07-04 11:29:13,420][00179] Avg episode reward: [(0, '3.545')]
[2023-07-04 11:29:14,968][10995] Signal inference workers to resume experience collection...
[2023-07-04 11:29:14,969][11008] InferenceWorker_p0-w0: resuming experience collection
[2023-07-04 11:29:18,417][00179] Fps is (10 sec: 819.2, 60 sec: 546.1, 300 sec: 546.1). Total num frames: 8192. Throughput: 0: 220.4. Samples: 3306. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
[2023-07-04 11:29:18,419][00179] Avg episode reward: [(0, '3.411')]
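
The throughput figures above are internally consistent: this report comes 15 s after the first one at 11:29:03, so 8192 total frames give 8192/15 ≈ 546.1 FPS for the 60 s and 300 s windows (apparently clipped to the elapsed time), while all of those frames fell inside the last 10 s, giving 8192/10 = 819.2 FPS. A quick check:

```python
# Sanity check of the FPS arithmetic in the report above; the window
# clipping behavior is inferred from the matching 60 s / 300 s values.
frames, elapsed = 8192, 15.0
print(frames / 10.0)     # 819.2            -> "10 sec" window
print(frames / elapsed)  # 546.13... ~546.1 -> "60 sec" / "300 sec" windows
```
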
[2023-07-04 11:29:23,417][00179] Fps is (10 sec: 2457.6, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 24576. Throughput: 0: 341.7. Samples: 6834. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:29:23,422][00179] Avg episode reward: [(0, '3.946')]
[2023-07-04 11:29:27,062][11008] Updated weights for policy 0, policy_version 10 (0.0034)
[2023-07-04 11:29:28,417][00179] Fps is (10 sec: 3686.3, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 389.4. Samples: 9734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:29:28,419][00179] Avg episode reward: [(0, '4.224')]
[2023-07-04 11:29:33,417][00179] Fps is (10 sec: 4096.0, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 546.9. Samples: 16406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:29:33,419][00179] Avg episode reward: [(0, '4.232')]
[2023-07-04 11:29:37,788][11008] Updated weights for policy 0, policy_version 20 (0.0025)
[2023-07-04 11:29:38,417][00179] Fps is (10 sec: 3686.5, 60 sec: 2340.6, 300 sec: 2340.6). Total num frames: 81920. Throughput: 0: 610.6. Samples: 21370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:29:38,424][00179] Avg episode reward: [(0, '4.256')]
[2023-07-04 11:29:43,417][00179] Fps is (10 sec: 2867.0, 60 sec: 2355.2, 300 sec: 2355.2). Total num frames: 94208. Throughput: 0: 586.5. Samples: 23460. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:29:43,423][00179] Avg episode reward: [(0, '4.320')]
[2023-07-04 11:29:48,417][00179] Fps is (10 sec: 3686.4, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 646.4. Samples: 29086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:29:48,424][00179] Avg episode reward: [(0, '4.371')]
[2023-07-04 11:29:48,431][10995] Saving new best policy, reward=4.371!
[2023-07-04 11:29:49,300][11008] Updated weights for policy 0, policy_version 30 (0.0012)
[2023-07-04 11:29:53,417][00179] Fps is (10 sec: 4505.9, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 139264. Throughput: 0: 780.0. Samples: 35864. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-07-04 11:29:53,418][00179] Avg episode reward: [(0, '4.373')]
[2023-07-04 11:29:53,424][10995] Saving new best policy, reward=4.373!
[2023-07-04 11:29:58,417][00179] Fps is (10 sec: 3686.4, 60 sec: 2830.0, 300 sec: 2830.0). Total num frames: 155648. Throughput: 0: 794.6. Samples: 38308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:29:58,428][00179] Avg episode reward: [(0, '4.424')]
[2023-07-04 11:29:58,438][10995] Saving new best policy, reward=4.424!
[2023-07-04 11:30:00,792][11008] Updated weights for policy 0, policy_version 40 (0.0019)
[2023-07-04 11:30:03,417][00179] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2798.9). Total num frames: 167936. Throughput: 0: 871.5. Samples: 42522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:30:03,419][00179] Avg episode reward: [(0, '4.395')]
[2023-07-04 11:30:08,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 2961.7). Total num frames: 192512. Throughput: 0: 927.0. Samples: 48548. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:30:08,424][00179] Avg episode reward: [(0, '4.303')]
[2023-07-04 11:30:10,976][11008] Updated weights for policy 0, policy_version 50 (0.0023)
[2023-07-04 11:30:13,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3042.7). Total num frames: 212992. Throughput: 0: 937.2. Samples: 51908. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:30:13,419][00179] Avg episode reward: [(0, '4.359')]
[2023-07-04 11:30:18,418][00179] Fps is (10 sec: 3685.8, 60 sec: 3686.3, 300 sec: 3058.3). Total num frames: 229376. Throughput: 0: 910.5. Samples: 57380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:30:18,421][00179] Avg episode reward: [(0, '4.329')]
[2023-07-04 11:30:23,419][00179] Fps is (10 sec: 2866.4, 60 sec: 3618.0, 300 sec: 3020.7). Total num frames: 241664. Throughput: 0: 893.0. Samples: 61556. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:30:23,424][00179] Avg episode reward: [(0, '4.523')]
[2023-07-04 11:30:23,426][10995] Saving new best policy, reward=4.523!
[2023-07-04 11:30:24,046][11008] Updated weights for policy 0, policy_version 60 (0.0034)
[2023-07-04 11:30:28,417][00179] Fps is (10 sec: 3277.3, 60 sec: 3618.1, 300 sec: 3084.0). Total num frames: 262144. Throughput: 0: 911.2. Samples: 64462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:30:28,422][00179] Avg episode reward: [(0, '4.537')]
[2023-07-04 11:30:28,431][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000064_262144.pth...
[2023-07-04 11:30:28,582][10995] Saving new best policy, reward=4.537!
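
Checkpoints like the one saved above appear to be named `checkpoint_{policy_version}_{env_steps}.pth` (version 64 at 262144 frames here) and are ordinary `torch.save` files. A sketch for inspecting one offline; the key layout inside the dict is an assumption, only the path comes from the log:

```python
# Minimal offline inspection of a saved checkpoint; the exact keys stored
# in the dict (model weights, optimizer state, step counters) are assumed.
import torch

path = ("/content/train_dir/default_experiment/"
        "checkpoint_p0/checkpoint_000000064_262144.pth")
ckpt = torch.load(path, map_location="cpu")
print(sorted(ckpt.keys()))
```
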
[2023-07-04 11:30:33,203][11008] Updated weights for policy 0, policy_version 70 (0.0022)
[2023-07-04 11:30:33,417][00179] Fps is (10 sec: 4506.8, 60 sec: 3686.4, 300 sec: 3185.8). Total num frames: 286720. Throughput: 0: 934.0. Samples: 71118. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:30:33,422][00179] Avg episode reward: [(0, '4.557')]
[2023-07-04 11:30:33,427][10995] Saving new best policy, reward=4.557!
[2023-07-04 11:30:38,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3147.5). Total num frames: 299008. Throughput: 0: 893.8. Samples: 76086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:30:38,422][00179] Avg episode reward: [(0, '4.697')]
[2023-07-04 11:30:38,459][10995] Saving new best policy, reward=4.697!
[2023-07-04 11:30:43,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3153.9). Total num frames: 315392. Throughput: 0: 884.2. Samples: 78096. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:30:43,427][00179] Avg episode reward: [(0, '4.614')]
[2023-07-04 11:30:46,319][11008] Updated weights for policy 0, policy_version 80 (0.0022)
[2023-07-04 11:30:48,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3198.8). Total num frames: 335872. Throughput: 0: 912.4. Samples: 83582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:30:48,422][00179] Avg episode reward: [(0, '4.500')]
[2023-07-04 11:30:53,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3239.6). Total num frames: 356352. Throughput: 0: 928.5. Samples: 90332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:30:53,419][00179] Avg episode reward: [(0, '4.342')]
[2023-07-04 11:30:55,928][11008] Updated weights for policy 0, policy_version 90 (0.0022)
[2023-07-04 11:30:58,420][00179] Fps is (10 sec: 3685.1, 60 sec: 3617.9, 300 sec: 3241.1). Total num frames: 372736. Throughput: 0: 908.7. Samples: 92802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:30:58,425][00179] Avg episode reward: [(0, '4.283')]
[2023-07-04 11:31:03,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3242.7). Total num frames: 389120. Throughput: 0: 879.4. Samples: 96950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:31:03,421][00179] Avg episode reward: [(0, '4.318')]
[2023-07-04 11:31:08,267][11008] Updated weights for policy 0, policy_version 100 (0.0020)
[2023-07-04 11:31:08,417][00179] Fps is (10 sec: 3687.7, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 409600. Throughput: 0: 920.2. Samples: 102962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:31:08,424][00179] Avg episode reward: [(0, '4.613')]
[2023-07-04 11:31:13,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3308.3). Total num frames: 430080. Throughput: 0: 929.3. Samples: 106282. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:31:13,419][00179] Avg episode reward: [(0, '4.633')]
[2023-07-04 11:31:18,417][00179] Fps is (10 sec: 3686.2, 60 sec: 3618.2, 300 sec: 3307.1). Total num frames: 446464. Throughput: 0: 900.5. Samples: 111640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:31:18,424][00179] Avg episode reward: [(0, '4.642')]
[2023-07-04 11:31:19,346][11008] Updated weights for policy 0, policy_version 110 (0.0022)
[2023-07-04 11:31:23,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.3, 300 sec: 3276.8). Total num frames: 458752. Throughput: 0: 884.6. Samples: 115892. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:31:23,419][00179] Avg episode reward: [(0, '4.505')]
[2023-07-04 11:31:28,417][00179] Fps is (10 sec: 3686.6, 60 sec: 3686.4, 300 sec: 3333.3). Total num frames: 483328. Throughput: 0: 907.6. Samples: 118936. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:31:28,419][00179] Avg episode reward: [(0, '4.518')]
[2023-07-04 11:31:30,196][11008] Updated weights for policy 0, policy_version 120 (0.0012)
[2023-07-04 11:31:33,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3358.7). Total num frames: 503808. Throughput: 0: 937.6. Samples: 125776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:31:33,423][00179] Avg episode reward: [(0, '4.634')]
[2023-07-04 11:31:38,417][00179] Fps is (10 sec: 3686.2, 60 sec: 3686.4, 300 sec: 3356.1). Total num frames: 520192. Throughput: 0: 899.6. Samples: 130814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:31:38,422][00179] Avg episode reward: [(0, '4.779')]
[2023-07-04 11:31:38,437][10995] Saving new best policy, reward=4.779!
[2023-07-04 11:31:42,399][11008] Updated weights for policy 0, policy_version 130 (0.0015)
[2023-07-04 11:31:43,418][00179] Fps is (10 sec: 2866.7, 60 sec: 3618.0, 300 sec: 3328.0). Total num frames: 532480. Throughput: 0: 891.6. Samples: 132920. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:31:43,424][00179] Avg episode reward: [(0, '4.686')]
[2023-07-04 11:31:48,417][00179] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3351.3). Total num frames: 552960. Throughput: 0: 921.1. Samples: 138400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:31:48,418][00179] Avg episode reward: [(0, '4.387')]
[2023-07-04 11:31:52,162][11008] Updated weights for policy 0, policy_version 140 (0.0013)
[2023-07-04 11:31:53,417][00179] Fps is (10 sec: 4506.3, 60 sec: 3686.4, 300 sec: 3397.3). Total num frames: 577536. Throughput: 0: 938.1. Samples: 145178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:31:53,419][00179] Avg episode reward: [(0, '4.404')]
[2023-07-04 11:31:58,418][00179] Fps is (10 sec: 4095.3, 60 sec: 3686.5, 300 sec: 3393.8). Total num frames: 593920. Throughput: 0: 918.9. Samples: 147636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:31:58,421][00179] Avg episode reward: [(0, '4.639')]
[2023-07-04 11:32:03,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3367.8). Total num frames: 606208. Throughput: 0: 892.6. Samples: 151806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:32:03,421][00179] Avg episode reward: [(0, '4.752')]
[2023-07-04 11:32:05,056][11008] Updated weights for policy 0, policy_version 150 (0.0020)
[2023-07-04 11:32:08,417][00179] Fps is (10 sec: 3277.3, 60 sec: 3618.1, 300 sec: 3387.5). Total num frames: 626688. Throughput: 0: 934.3. Samples: 157934. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:32:08,422][00179] Avg episode reward: [(0, '4.649')]
[2023-07-04 11:32:13,417][00179] Fps is (10 sec: 4505.7, 60 sec: 3686.4, 300 sec: 3427.7). Total num frames: 651264. Throughput: 0: 941.3. Samples: 161294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:32:13,419][00179] Avg episode reward: [(0, '4.541')]
[2023-07-04 11:32:14,087][11008] Updated weights for policy 0, policy_version 160 (0.0019)
[2023-07-04 11:32:18,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3423.8). Total num frames: 667648. Throughput: 0: 910.4. Samples: 166744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:32:18,423][00179] Avg episode reward: [(0, '4.498')]
[2023-07-04 11:32:23,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3399.7). Total num frames: 679936. Throughput: 0: 892.0. Samples: 170952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:32:23,422][00179] Avg episode reward: [(0, '4.618')]
[2023-07-04 11:32:27,025][11008] Updated weights for policy 0, policy_version 170 (0.0015)
[2023-07-04 11:32:28,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3416.7). Total num frames: 700416. Throughput: 0: 911.1. Samples: 173918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:32:28,419][00179] Avg episode reward: [(0, '4.577')]
[2023-07-04 11:32:28,427][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth...
[2023-07-04 11:32:33,424][00179] Fps is (10 sec: 4502.3, 60 sec: 3685.9, 300 sec: 3452.2). Total num frames: 724992. Throughput: 0: 938.8. Samples: 180654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:32:33,431][00179] Avg episode reward: [(0, '4.684')]
[2023-07-04 11:32:36,934][11008] Updated weights for policy 0, policy_version 180 (0.0017)
[2023-07-04 11:32:38,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3448.3). Total num frames: 741376. Throughput: 0: 902.3. Samples: 185780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:32:38,422][00179] Avg episode reward: [(0, '4.798')]
[2023-07-04 11:32:38,433][10995] Saving new best policy, reward=4.798!
[2023-07-04 11:32:43,417][00179] Fps is (10 sec: 2869.3, 60 sec: 3686.5, 300 sec: 3425.7). Total num frames: 753664. Throughput: 0: 893.8. Samples: 187854. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:32:43,419][00179] Avg episode reward: [(0, '4.602')]
[2023-07-04 11:32:48,418][00179] Fps is (10 sec: 3276.5, 60 sec: 3686.3, 300 sec: 3440.6). Total num frames: 774144. Throughput: 0: 922.9. Samples: 193336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:32:48,420][00179] Avg episode reward: [(0, '4.689')]
[2023-07-04 11:32:48,881][11008] Updated weights for policy 0, policy_version 190 (0.0020)
[2023-07-04 11:32:53,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3472.7). Total num frames: 798720. Throughput: 0: 941.6. Samples: 200304. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:32:53,419][00179] Avg episode reward: [(0, '5.029')]
[2023-07-04 11:32:53,425][10995] Saving new best policy, reward=5.029!
[2023-07-04 11:32:58,423][00179] Fps is (10 sec: 4093.6, 60 sec: 3686.1, 300 sec: 3468.4). Total num frames: 815104. Throughput: 0: 924.2. Samples: 202890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:32:58,427][00179] Avg episode reward: [(0, '5.018')]
[2023-07-04 11:32:59,847][11008] Updated weights for policy 0, policy_version 200 (0.0020)
[2023-07-04 11:33:03,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3447.5). Total num frames: 827392. Throughput: 0: 896.1. Samples: 207068. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:33:03,423][00179] Avg episode reward: [(0, '4.842')]
[2023-07-04 11:33:08,417][00179] Fps is (10 sec: 3279.0, 60 sec: 3686.4, 300 sec: 3460.7). Total num frames: 847872. Throughput: 0: 930.3. Samples: 212814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:33:08,422][00179] Avg episode reward: [(0, '4.582')]
[2023-07-04 11:33:10,548][11008] Updated weights for policy 0, policy_version 210 (0.0030)
[2023-07-04 11:33:13,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3473.4). Total num frames: 868352. Throughput: 0: 940.5. Samples: 216240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:33:13,423][00179] Avg episode reward: [(0, '4.619')]
[2023-07-04 11:33:18,417][00179] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3469.6). Total num frames: 884736. Throughput: 0: 913.3. Samples: 221746. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:33:18,423][00179] Avg episode reward: [(0, '4.645')]
[2023-07-04 11:33:22,961][11008] Updated weights for policy 0, policy_version 220 (0.0018)
[2023-07-04 11:33:23,418][00179] Fps is (10 sec: 3276.5, 60 sec: 3686.3, 300 sec: 3465.8). Total num frames: 901120. Throughput: 0: 892.2. Samples: 225928. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:33:23,425][00179] Avg episode reward: [(0, '4.651')]
[2023-07-04 11:33:28,417][00179] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3477.7). Total num frames: 921600. Throughput: 0: 907.4. Samples: 228688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:33:28,419][00179] Avg episode reward: [(0, '4.634')]
[2023-07-04 11:33:33,056][11008] Updated weights for policy 0, policy_version 230 (0.0026)
[2023-07-04 11:33:33,417][00179] Fps is (10 sec: 4096.4, 60 sec: 3618.6, 300 sec: 3489.2). Total num frames: 942080. Throughput: 0: 932.1. Samples: 235280. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:33:33,422][00179] Avg episode reward: [(0, '4.863')]
[2023-07-04 11:33:38,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.3). Total num frames: 958464. Throughput: 0: 894.1. Samples: 240538. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:33:38,419][00179] Avg episode reward: [(0, '4.889')]
[2023-07-04 11:33:43,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3467.0). Total num frames: 970752. Throughput: 0: 881.6. Samples: 242554. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:33:43,426][00179] Avg episode reward: [(0, '4.809')]
[2023-07-04 11:33:45,959][11008] Updated weights for policy 0, policy_version 240 (0.0019)
[2023-07-04 11:33:48,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3478.0). Total num frames: 991232. Throughput: 0: 903.0. Samples: 247702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:33:48,419][00179] Avg episode reward: [(0, '4.705')]
[2023-07-04 11:33:53,417][00179] Fps is (10 sec: 4505.5, 60 sec: 3618.1, 300 sec: 3502.8). Total num frames: 1015808. Throughput: 0: 923.9. Samples: 254388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:33:53,420][00179] Avg episode reward: [(0, '4.625')]
[2023-07-04 11:33:55,750][11008] Updated weights for policy 0, policy_version 250 (0.0015)
[2023-07-04 11:33:58,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3550.3, 300 sec: 3485.1). Total num frames: 1028096. Throughput: 0: 904.7. Samples: 256952. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:33:58,423][00179] Avg episode reward: [(0, '4.615')]
[2023-07-04 11:34:03,417][00179] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1044480. Throughput: 0: 873.9. Samples: 261070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:34:03,419][00179] Avg episode reward: [(0, '4.747')]
[2023-07-04 11:34:08,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1060864. Throughput: 0: 904.8. Samples: 266642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:34:08,424][00179] Avg episode reward: [(0, '4.628')]
[2023-07-04 11:34:08,529][11008] Updated weights for policy 0, policy_version 260 (0.0023)
[2023-07-04 11:34:13,417][00179] Fps is (10 sec: 4096.3, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 1085440. Throughput: 0: 918.3. Samples: 270012. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:34:13,424][00179] Avg episode reward: [(0, '4.590')]
[2023-07-04 11:34:18,427][00179] Fps is (10 sec: 4091.6, 60 sec: 3617.5, 300 sec: 3651.6). Total num frames: 1101824. Throughput: 0: 893.0. Samples: 275474. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:34:18,431][00179] Avg episode reward: [(0, '4.982')]
[2023-07-04 11:34:19,492][11008] Updated weights for policy 0, policy_version 270 (0.0034)
[2023-07-04 11:34:23,417][00179] Fps is (10 sec: 2867.1, 60 sec: 3549.9, 300 sec: 3623.9). Total num frames: 1114112. Throughput: 0: 864.3. Samples: 279430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:34:23,420][00179] Avg episode reward: [(0, '5.108')]
[2023-07-04 11:34:23,422][10995] Saving new best policy, reward=5.108!
[2023-07-04 11:34:28,417][00179] Fps is (10 sec: 3280.3, 60 sec: 3549.9, 300 sec: 3623.9). Total num frames: 1134592. Throughput: 0: 877.6. Samples: 282044. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:34:28,419][00179] Avg episode reward: [(0, '5.292')]
[2023-07-04 11:34:28,431][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000277_1134592.pth...
[2023-07-04 11:34:28,583][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000064_262144.pth
[2023-07-04 11:34:28,594][10995] Saving new best policy, reward=5.292!
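
The save/remove pair above shows the checkpoint retention behavior: each periodic save drops the oldest rolling checkpoint, while best-reward policies are tracked separately. A sketch of equivalent pruning logic (`keep_n` and the glob pattern are assumptions, not Sample Factory's actual implementation):

```python
from pathlib import Path

def prune_checkpoints(ckpt_dir: str, keep_n: int = 2) -> None:
    # Zero-padded names sort by policy version, so lexicographic order
    # equals chronological order (checkpoint_000000064_... is oldest).
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep_n]:
        old.unlink()
```
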
[2023-07-04 11:34:31,130][11008] Updated weights for policy 0, policy_version 280 (0.0024)
[2023-07-04 11:34:33,417][00179] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1155072. Throughput: 0: 905.0. Samples: 288426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:34:33,419][00179] Avg episode reward: [(0, '4.934')]
[2023-07-04 11:34:38,420][00179] Fps is (10 sec: 3685.2, 60 sec: 3549.7, 300 sec: 3651.7). Total num frames: 1171456. Throughput: 0: 871.8. Samples: 293622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:34:38,422][00179] Avg episode reward: [(0, '5.006')]
[2023-07-04 11:34:43,419][00179] Fps is (10 sec: 2866.5, 60 sec: 3549.7, 300 sec: 3610.0). Total num frames: 1183744. Throughput: 0: 858.7. Samples: 295594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:34:43,422][00179] Avg episode reward: [(0, '5.054')]
[2023-07-04 11:34:44,201][11008] Updated weights for policy 0, policy_version 290 (0.0014)
[2023-07-04 11:34:48,417][00179] Fps is (10 sec: 3277.9, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1204224. Throughput: 0: 880.9. Samples: 300710. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:34:48,423][00179] Avg episode reward: [(0, '4.934')]
[2023-07-04 11:34:53,417][00179] Fps is (10 sec: 4097.1, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 1224704. Throughput: 0: 901.8. Samples: 307222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:34:53,421][00179] Avg episode reward: [(0, '5.029')]
[2023-07-04 11:34:53,787][11008] Updated weights for policy 0, policy_version 300 (0.0020)
[2023-07-04 11:34:58,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1241088. Throughput: 0: 888.0. Samples: 309972. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-07-04 11:34:58,421][00179] Avg episode reward: [(0, '5.202')]
[2023-07-04 11:35:03,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1257472. Throughput: 0: 858.2. Samples: 314084. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:35:03,420][00179] Avg episode reward: [(0, '5.572')]
[2023-07-04 11:35:03,426][10995] Saving new best policy, reward=5.572!
[2023-07-04 11:35:06,827][11008] Updated weights for policy 0, policy_version 310 (0.0018)
[2023-07-04 11:35:08,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1273856. Throughput: 0: 891.0. Samples: 319524. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:35:08,419][00179] Avg episode reward: [(0, '5.642')]
[2023-07-04 11:35:08,426][10995] Saving new best policy, reward=5.642!
[2023-07-04 11:35:13,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3610.1). Total num frames: 1294336. Throughput: 0: 904.9. Samples: 322764. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:35:13,422][00179] Avg episode reward: [(0, '5.170')]
[2023-07-04 11:35:16,863][11008] Updated weights for policy 0, policy_version 320 (0.0013)
[2023-07-04 11:35:18,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3482.2, 300 sec: 3624.0). Total num frames: 1310720. Throughput: 0: 888.0. Samples: 328388. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:35:18,423][00179] Avg episode reward: [(0, '5.222')]
[2023-07-04 11:35:23,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1327104. Throughput: 0: 862.8. Samples: 332444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:35:23,420][00179] Avg episode reward: [(0, '5.089')]
[2023-07-04 11:35:28,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1343488. Throughput: 0: 874.5. Samples: 334942. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:35:28,425][00179] Avg episode reward: [(0, '5.261')]
[2023-07-04 11:35:29,282][11008] Updated weights for policy 0, policy_version 330 (0.0041)
[2023-07-04 11:35:33,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3623.9). Total num frames: 1368064. Throughput: 0: 906.0. Samples: 341478. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:35:33,421][00179] Avg episode reward: [(0, '4.827')]
[2023-07-04 11:35:38,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3550.1, 300 sec: 3623.9). Total num frames: 1384448. Throughput: 0: 881.1. Samples: 346872. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2023-07-04 11:35:38,427][00179] Avg episode reward: [(0, '4.856')]
[2023-07-04 11:35:40,726][11008] Updated weights for policy 0, policy_version 340 (0.0013)
[2023-07-04 11:35:43,417][00179] Fps is (10 sec: 2867.1, 60 sec: 3550.0, 300 sec: 3596.1). Total num frames: 1396736. Throughput: 0: 866.8. Samples: 348980. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:35:43,426][00179] Avg episode reward: [(0, '5.049')]
[2023-07-04 11:35:48,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 1417216. Throughput: 0: 883.5. Samples: 353842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:35:48,421][00179] Avg episode reward: [(0, '5.239')]
[2023-07-04 11:35:52,118][11008] Updated weights for policy 0, policy_version 350 (0.0017)
[2023-07-04 11:35:53,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3610.1). Total num frames: 1437696. Throughput: 0: 903.8. Samples: 360196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:35:53,420][00179] Avg episode reward: [(0, '5.068')]
[2023-07-04 11:35:58,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1454080. Throughput: 0: 891.6. Samples: 362884. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:35:58,423][00179] Avg episode reward: [(0, '4.952')]
[2023-07-04 11:36:03,417][00179] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1466368. Throughput: 0: 854.2. Samples: 366828. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-07-04 11:36:03,419][00179] Avg episode reward: [(0, '5.152')]
[2023-07-04 11:36:05,615][11008] Updated weights for policy 0, policy_version 360 (0.0012)
[2023-07-04 11:36:08,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 1482752. Throughput: 0: 880.6. Samples: 372070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:36:08,419][00179] Avg episode reward: [(0, '5.103')]
[2023-07-04 11:36:13,417][00179] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3596.2). Total num frames: 1507328. Throughput: 0: 897.7. Samples: 375338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:36:13,419][00179] Avg episode reward: [(0, '5.021')]
[2023-07-04 11:36:15,046][11008] Updated weights for policy 0, policy_version 370 (0.0041)
[2023-07-04 11:36:18,420][00179] Fps is (10 sec: 4094.5, 60 sec: 3549.6, 300 sec: 3610.0). Total num frames: 1523712. Throughput: 0: 879.7. Samples: 381070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:36:18,423][00179] Avg episode reward: [(0, '5.087')]
[2023-07-04 11:36:23,417][00179] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 1536000. Throughput: 0: 852.1. Samples: 385216. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:36:23,422][00179] Avg episode reward: [(0, '5.090')]
[2023-07-04 11:36:28,155][11008] Updated weights for policy 0, policy_version 380 (0.0028)
[2023-07-04 11:36:28,417][00179] Fps is (10 sec: 3277.9, 60 sec: 3549.8, 300 sec: 3568.4). Total num frames: 1556480. Throughput: 0: 856.3. Samples: 387512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:36:28,420][00179] Avg episode reward: [(0, '4.959')]
[2023-07-04 11:36:28,433][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000380_1556480.pth...
[2023-07-04 11:36:28,540][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth
[2023-07-04 11:36:33,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1576960. Throughput: 0: 888.8. Samples: 393838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:36:33,420][00179] Avg episode reward: [(0, '5.009')]
[2023-07-04 11:36:38,417][00179] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 1593344. Throughput: 0: 867.6. Samples: 399236. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:36:38,419][00179] Avg episode reward: [(0, '5.168')]
[2023-07-04 11:36:39,338][11008] Updated weights for policy 0, policy_version 390 (0.0023)
[2023-07-04 11:36:43,418][00179] Fps is (10 sec: 2866.7, 60 sec: 3481.5, 300 sec: 3568.4). Total num frames: 1605632. Throughput: 0: 851.3. Samples: 401194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:36:43,421][00179] Avg episode reward: [(0, '5.336')]
[2023-07-04 11:36:48,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 1626112. Throughput: 0: 867.0. Samples: 405844. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:36:48,419][00179] Avg episode reward: [(0, '5.256')]
[2023-07-04 11:36:51,340][11008] Updated weights for policy 0, policy_version 400 (0.0021)
[2023-07-04 11:36:53,417][00179] Fps is (10 sec: 4096.7, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 1646592. Throughput: 0: 889.2. Samples: 412084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:36:53,422][00179] Avg episode reward: [(0, '4.804')]
[2023-07-04 11:36:58,418][00179] Fps is (10 sec: 3276.4, 60 sec: 3413.3, 300 sec: 3568.4). Total num frames: 1658880. Throughput: 0: 875.5. Samples: 414738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:36:58,420][00179] Avg episode reward: [(0, '4.919')]
[2023-07-04 11:37:03,417][00179] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3554.5). Total num frames: 1675264. Throughput: 0: 835.2. Samples: 418652. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:37:03,419][00179] Avg episode reward: [(0, '4.976')]
[2023-07-04 11:37:04,762][11008] Updated weights for policy 0, policy_version 410 (0.0022)
[2023-07-04 11:37:08,417][00179] Fps is (10 sec: 3277.2, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1691648. Throughput: 0: 851.6. Samples: 423538. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:37:08,424][00179] Avg episode reward: [(0, '5.122')]
[2023-07-04 11:37:13,417][00179] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 1712128. Throughput: 0: 874.4. Samples: 426860. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:37:13,418][00179] Avg episode reward: [(0, '5.059')]
[2023-07-04 11:37:14,771][11008] Updated weights for policy 0, policy_version 420 (0.0018)
[2023-07-04 11:37:18,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3413.5, 300 sec: 3554.5). Total num frames: 1728512. Throughput: 0: 866.1. Samples: 432812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:37:18,426][00179] Avg episode reward: [(0, '5.358')]
[2023-07-04 11:37:23,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1744896. Throughput: 0: 832.8. Samples: 436714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:37:23,421][00179] Avg episode reward: [(0, '5.201')]
[2023-07-04 11:37:28,244][11008] Updated weights for policy 0, policy_version 430 (0.0025)
[2023-07-04 11:37:28,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 1761280. Throughput: 0: 836.4. Samples: 438830. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:37:28,425][00179] Avg episode reward: [(0, '5.220')]
[2023-07-04 11:37:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 1781760. Throughput: 0: 872.4. Samples: 445102. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:37:33,420][00179] Avg episode reward: [(0, '5.328')]
[2023-07-04 11:37:38,417][00179] Fps is (10 sec: 3686.2, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 1798144. Throughput: 0: 861.2. Samples: 450840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:37:38,421][00179] Avg episode reward: [(0, '5.601')]
[2023-07-04 11:37:38,767][11008] Updated weights for policy 0, policy_version 440 (0.0027)
[2023-07-04 11:37:43,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3481.7, 300 sec: 3526.7). Total num frames: 1814528. Throughput: 0: 845.2. Samples: 452772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:37:43,420][00179] Avg episode reward: [(0, '5.472')]
[2023-07-04 11:37:48,417][00179] Fps is (10 sec: 2867.3, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 1826816. Throughput: 0: 851.3. Samples: 456958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:37:48,419][00179] Avg episode reward: [(0, '5.748')]
[2023-07-04 11:37:48,478][10995] Saving new best policy, reward=5.748!
[2023-07-04 11:37:51,521][11008] Updated weights for policy 0, policy_version 450 (0.0012)
[2023-07-04 11:37:53,417][00179] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 1847296. Throughput: 0: 879.9. Samples: 463134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:37:53,419][00179] Avg episode reward: [(0, '5.788')]
[2023-07-04 11:37:53,457][10995] Saving new best policy, reward=5.788!
[2023-07-04 11:37:58,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3481.7, 300 sec: 3526.7). Total num frames: 1867776. Throughput: 0: 872.0. Samples: 466100. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:37:58,420][00179] Avg episode reward: [(0, '5.562')]
[2023-07-04 11:38:03,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3413.4, 300 sec: 3499.0). Total num frames: 1880064. Throughput: 0: 828.1. Samples: 470078. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:38:03,419][00179] Avg episode reward: [(0, '5.321')]
[2023-07-04 11:38:05,021][11008] Updated weights for policy 0, policy_version 460 (0.0013)
[2023-07-04 11:38:08,417][00179] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 1892352. Throughput: 0: 839.6. Samples: 474494. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-07-04 11:38:08,421][00179] Avg episode reward: [(0, '5.205')]
[2023-07-04 11:38:13,417][00179] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3485.1). Total num frames: 1912832. Throughput: 0: 860.6. Samples: 477556. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:38:13,421][00179] Avg episode reward: [(0, '5.177')]
[2023-07-04 11:38:15,469][11008] Updated weights for policy 0, policy_version 470 (0.0019)
[2023-07-04 11:38:18,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 1933312. Throughput: 0: 856.4. Samples: 483640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:38:18,418][00179] Avg episode reward: [(0, '5.250')]
[2023-07-04 11:38:23,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 1945600. Throughput: 0: 819.0. Samples: 487694. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:38:23,419][00179] Avg episode reward: [(0, '5.358')]
[2023-07-04 11:38:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 1961984. Throughput: 0: 821.6. Samples: 489742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:38:28,425][00179] Avg episode reward: [(0, '5.411')]
[2023-07-04 11:38:28,433][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth...
[2023-07-04 11:38:28,583][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000277_1134592.pth
[2023-07-04 11:38:28,789][11008] Updated weights for policy 0, policy_version 480 (0.0018)
[2023-07-04 11:38:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 1982464. Throughput: 0: 865.7. Samples: 495916. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:38:33,418][00179] Avg episode reward: [(0, '5.374')]
[2023-07-04 11:38:38,422][00179] Fps is (10 sec: 4093.8, 60 sec: 3413.1, 300 sec: 3498.9). Total num frames: 2002944. Throughput: 0: 864.5. Samples: 502040. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:38:38,430][00179] Avg episode reward: [(0, '5.627')]
[2023-07-04 11:38:39,088][11008] Updated weights for policy 0, policy_version 490 (0.0019)
[2023-07-04 11:38:43,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3485.1). Total num frames: 2019328. Throughput: 0: 843.0. Samples: 504034. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:38:43,426][00179] Avg episode reward: [(0, '5.479')]
[2023-07-04 11:38:48,417][00179] Fps is (10 sec: 2868.7, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2031616. Throughput: 0: 846.4. Samples: 508164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:38:48,419][00179] Avg episode reward: [(0, '5.646')]
[2023-07-04 11:38:51,361][11008] Updated weights for policy 0, policy_version 500 (0.0031)
[2023-07-04 11:38:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2056192. Throughput: 0: 892.5. Samples: 514658. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-07-04 11:38:53,424][00179] Avg episode reward: [(0, '5.619')]
[2023-07-04 11:38:58,419][00179] Fps is (10 sec: 4504.5, 60 sec: 3481.5, 300 sec: 3498.9). Total num frames: 2076672. Throughput: 0: 893.9. Samples: 517782. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:38:58,422][00179] Avg episode reward: [(0, '5.592')]
[2023-07-04 11:39:02,718][11008] Updated weights for policy 0, policy_version 510 (0.0021)
[2023-07-04 11:39:03,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2088960. Throughput: 0: 862.1. Samples: 522434. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:39:03,421][00179] Avg episode reward: [(0, '5.771')]
[2023-07-04 11:39:08,417][00179] Fps is (10 sec: 2458.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2101248. Throughput: 0: 865.3. Samples: 526632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:39:08,419][00179] Avg episode reward: [(0, '6.176')]
[2023-07-04 11:39:08,431][10995] Saving new best policy, reward=6.176!
[2023-07-04 11:39:13,417][00179] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3471.3). Total num frames: 2125824. Throughput: 0: 891.2. Samples: 529844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:39:13,424][00179] Avg episode reward: [(0, '6.043')]
[2023-07-04 11:39:14,246][11008] Updated weights for policy 0, policy_version 520 (0.0037)
[2023-07-04 11:39:18,422][00179] Fps is (10 sec: 4503.3, 60 sec: 3549.6, 300 sec: 3498.9). Total num frames: 2146304. Throughput: 0: 897.9. Samples: 536324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:39:18,426][00179] Avg episode reward: [(0, '5.298')]
[2023-07-04 11:39:23,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2158592. Throughput: 0: 854.8. Samples: 540500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:39:23,426][00179] Avg episode reward: [(0, '5.153')]
[2023-07-04 11:39:27,101][11008] Updated weights for policy 0, policy_version 530 (0.0018)
[2023-07-04 11:39:28,417][00179] Fps is (10 sec: 2868.6, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2174976. Throughput: 0: 855.3. Samples: 542524. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:39:28,419][00179] Avg episode reward: [(0, '5.693')]
[2023-07-04 11:39:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2195456. Throughput: 0: 895.2. Samples: 548446. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:39:33,423][00179] Avg episode reward: [(0, '5.650')]
[2023-07-04 11:39:36,722][11008] Updated weights for policy 0, policy_version 540 (0.0016)
[2023-07-04 11:39:38,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3550.2, 300 sec: 3499.0). Total num frames: 2215936. Throughput: 0: 894.9. Samples: 554928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:39:38,421][00179] Avg episode reward: [(0, '5.235')]
[2023-07-04 11:39:43,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2228224. Throughput: 0: 869.8. Samples: 556920. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:39:43,427][00179] Avg episode reward: [(0, '5.316')]
[2023-07-04 11:39:48,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2244608. Throughput: 0: 860.4. Samples: 561154. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:39:48,419][00179] Avg episode reward: [(0, '5.384')]
[2023-07-04 11:39:49,783][11008] Updated weights for policy 0, policy_version 550 (0.0016)
[2023-07-04 11:39:53,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2269184. Throughput: 0: 913.6. Samples: 567744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:39:53,419][00179] Avg episode reward: [(0, '5.673')]
[2023-07-04 11:39:58,417][00179] Fps is (10 sec: 4505.3, 60 sec: 3550.0, 300 sec: 3498.9). Total num frames: 2289664. Throughput: 0: 918.0. Samples: 571156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:39:58,428][00179] Avg episode reward: [(0, '5.891')]
[2023-07-04 11:39:59,487][11008] Updated weights for policy 0, policy_version 560 (0.0016)
[2023-07-04 11:40:03,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2301952. Throughput: 0: 882.1. Samples: 576012. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:03,419][00179] Avg episode reward: [(0, '5.976')]
[2023-07-04 11:40:08,417][00179] Fps is (10 sec: 2867.4, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2318336. Throughput: 0: 884.0. Samples: 580278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:08,423][00179] Avg episode reward: [(0, '5.932')]
[2023-07-04 11:40:11,744][11008] Updated weights for policy 0, policy_version 570 (0.0028)
[2023-07-04 11:40:13,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2338816. Throughput: 0: 914.6. Samples: 583680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:13,424][00179] Avg episode reward: [(0, '6.216')]
[2023-07-04 11:40:13,497][10995] Saving new best policy, reward=6.216!
[2023-07-04 11:40:18,419][00179] Fps is (10 sec: 4095.1, 60 sec: 3550.0, 300 sec: 3498.9). Total num frames: 2359296. Throughput: 0: 932.0. Samples: 590388. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:18,421][00179] Avg episode reward: [(0, '6.335')]
[2023-07-04 11:40:18,518][10995] Saving new best policy, reward=6.335!
[2023-07-04 11:40:22,974][11008] Updated weights for policy 0, policy_version 580 (0.0016)
[2023-07-04 11:40:23,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2375680. Throughput: 0: 884.0. Samples: 594710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:23,420][00179] Avg episode reward: [(0, '6.331')]
[2023-07-04 11:40:28,417][00179] Fps is (10 sec: 2867.8, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2387968. Throughput: 0: 885.9. Samples: 596786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:28,419][00179] Avg episode reward: [(0, '6.269')]
[2023-07-04 11:40:28,448][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000584_2392064.pth...
[2023-07-04 11:40:28,553][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000380_1556480.pth
[2023-07-04 11:40:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2412544. Throughput: 0: 923.5. Samples: 602712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:40:33,419][00179] Avg episode reward: [(0, '5.975')]
[2023-07-04 11:40:33,989][11008] Updated weights for policy 0, policy_version 590 (0.0022)
[2023-07-04 11:40:38,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2433024. Throughput: 0: 922.0. Samples: 609236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:40:38,423][00179] Avg episode reward: [(0, '5.798')]
[2023-07-04 11:40:43,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2445312. Throughput: 0: 892.3. Samples: 611310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:40:43,428][00179] Avg episode reward: [(0, '5.998')]
[2023-07-04 11:40:46,508][11008] Updated weights for policy 0, policy_version 600 (0.0012)
[2023-07-04 11:40:48,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2461696. Throughput: 0: 877.9. Samples: 615516. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:40:48,425][00179] Avg episode reward: [(0, '6.179')]
[2023-07-04 11:40:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2482176. Throughput: 0: 924.0. Samples: 621858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:40:53,418][00179] Avg episode reward: [(0, '6.625')]
[2023-07-04 11:40:53,491][10995] Saving new best policy, reward=6.625!
[2023-07-04 11:40:56,396][11008] Updated weights for policy 0, policy_version 610 (0.0020)
[2023-07-04 11:40:58,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.2, 300 sec: 3526.7). Total num frames: 2506752. Throughput: 0: 918.3. Samples: 625002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:40:58,423][00179] Avg episode reward: [(0, '6.632')]
[2023-07-04 11:40:58,436][10995] Saving new best policy, reward=6.632!
[2023-07-04 11:41:03,418][00179] Fps is (10 sec: 3685.8, 60 sec: 3618.0, 300 sec: 3512.8). Total num frames: 2519040. Throughput: 0: 878.1. Samples: 629904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:41:03,421][00179] Avg episode reward: [(0, '6.825')]
[2023-07-04 11:41:03,424][10995] Saving new best policy, reward=6.825!
[2023-07-04 11:41:08,417][00179] Fps is (10 sec: 2457.6, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2531328. Throughput: 0: 873.2. Samples: 634002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:41:08,419][00179] Avg episode reward: [(0, '6.904')]
[2023-07-04 11:41:08,427][10995] Saving new best policy, reward=6.904!
[2023-07-04 11:41:09,616][11008] Updated weights for policy 0, policy_version 620 (0.0016)
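Each "Updated weights for policy 0, policy_version N (...)" line marks the inference worker reloading its copy of the policy after the learner published a newer version; the parenthesized figure is plausibly the time the update took, in seconds. A minimal sketch of the version check, where param_store and its accessors are hypothetical:

def maybe_refresh_weights(policy, local_version, param_store):
    """Reload weights only if the learner has published a newer policy version."""
    latest = param_store.latest_version()                 # hypothetical accessor
    if latest > local_version:
        policy.load_state_dict(param_store.state_dict())  # hypothetical accessor
        local_version = latest
    return local_version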
[2023-07-04 11:41:13,417][00179] Fps is (10 sec: 3687.0, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 2555904. Throughput: 0: 900.3. Samples: 637298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:41:13,419][00179] Avg episode reward: [(0, '6.770')]
[2023-07-04 11:41:18,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.3, 300 sec: 3526.7). Total num frames: 2576384. Throughput: 0: 919.0. Samples: 644068. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:41:18,419][00179] Avg episode reward: [(0, '6.635')]
[2023-07-04 11:41:19,162][11008] Updated weights for policy 0, policy_version 630 (0.0027)
[2023-07-04 11:41:23,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3549.8, 300 sec: 3499.0). Total num frames: 2588672. Throughput: 0: 874.6. Samples: 648592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:41:23,424][00179] Avg episode reward: [(0, '6.990')]
[2023-07-04 11:41:23,430][10995] Saving new best policy, reward=6.990!
[2023-07-04 11:41:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 2605056. Throughput: 0: 872.3. Samples: 650562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:41:28,425][00179] Avg episode reward: [(0, '6.960')]
[2023-07-04 11:41:31,855][11008] Updated weights for policy 0, policy_version 640 (0.0026)
[2023-07-04 11:41:33,417][00179] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 2625536. Throughput: 0: 909.2. Samples: 656428. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:41:33,425][00179] Avg episode reward: [(0, '6.862')]
[2023-07-04 11:41:38,417][00179] Fps is (10 sec: 4505.5, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 2650112. Throughput: 0: 918.0. Samples: 663170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:41:38,419][00179] Avg episode reward: [(0, '6.721')]
[2023-07-04 11:41:42,707][11008] Updated weights for policy 0, policy_version 650 (0.0011)
[2023-07-04 11:41:43,418][00179] Fps is (10 sec: 3686.1, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2662400. Throughput: 0: 894.7. Samples: 665264. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:41:43,421][00179] Avg episode reward: [(0, '7.576')]
[2023-07-04 11:41:43,427][10995] Saving new best policy, reward=7.576!
[2023-07-04 11:41:48,417][00179] Fps is (10 sec: 2457.7, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2674688. Throughput: 0: 874.7. Samples: 669264. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:41:48,426][00179] Avg episode reward: [(0, '7.423')]
[2023-07-04 11:41:53,417][00179] Fps is (10 sec: 3686.7, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 2699264. Throughput: 0: 923.9. Samples: 675578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:41:53,421][00179] Avg episode reward: [(0, '7.059')]
[2023-07-04 11:41:54,044][11008] Updated weights for policy 0, policy_version 660 (0.0016)
[2023-07-04 11:41:58,417][00179] Fps is (10 sec: 4505.5, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2719744. Throughput: 0: 926.1. Samples: 678972. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:41:58,421][00179] Avg episode reward: [(0, '6.878')]
[2023-07-04 11:42:03,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3540.6). Total num frames: 2736128. Throughput: 0: 887.0. Samples: 683982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:42:03,420][00179] Avg episode reward: [(0, '7.295')]
[2023-07-04 11:42:05,977][11008] Updated weights for policy 0, policy_version 670 (0.0014)
[2023-07-04 11:42:08,417][00179] Fps is (10 sec: 2867.3, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 2748416. Throughput: 0: 880.9. Samples: 688230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:08,419][00179] Avg episode reward: [(0, '7.432')]
[2023-07-04 11:42:13,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 2772992. Throughput: 0: 912.0. Samples: 691600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:13,419][00179] Avg episode reward: [(0, '7.842')]
[2023-07-04 11:42:13,427][10995] Saving new best policy, reward=7.842!
[2023-07-04 11:42:15,935][11008] Updated weights for policy 0, policy_version 680 (0.0019)
[2023-07-04 11:42:18,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2793472. Throughput: 0: 932.2. Samples: 698376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:18,419][00179] Avg episode reward: [(0, '7.925')]
[2023-07-04 11:42:18,433][10995] Saving new best policy, reward=7.925!
[2023-07-04 11:42:23,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 2809856. Throughput: 0: 883.4. Samples: 702922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:23,422][00179] Avg episode reward: [(0, '8.184')]
[2023-07-04 11:42:23,425][10995] Saving new best policy, reward=8.184!
[2023-07-04 11:42:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3526.7). Total num frames: 2822144. Throughput: 0: 883.2. Samples: 705008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:42:28,423][00179] Avg episode reward: [(0, '8.321')]
[2023-07-04 11:42:28,432][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000689_2822144.pth...
[2023-07-04 11:42:28,544][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth
[2023-07-04 11:42:28,560][10995] Saving new best policy, reward=8.321!
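"Saving new best policy, reward=X!" fires whenever the running average episode reward exceeds the best value seen so far; unlike the rotating checkpoints, this snapshot is not deleted. A minimal sketch, with the best_policy.pth filename as an assumption:

import os

import torch

class BestPolicyTracker:
    def __init__(self, ckpt_dir):
        self.ckpt_dir = ckpt_dir
        self.best_reward = float("-inf")

    def update(self, avg_episode_reward, state_dict):
        if avg_episode_reward > self.best_reward:
            self.best_reward = avg_episode_reward
            # Overwrite the single "best" checkpoint; it is never rotated away.
            torch.save(state_dict, os.path.join(self.ckpt_dir, "best_policy.pth"))
            print(f"Saving new best policy, reward={avg_episode_reward:.3f}!")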
[2023-07-04 11:42:29,076][11008] Updated weights for policy 0, policy_version 690 (0.0012)
[2023-07-04 11:42:33,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 2842624. Throughput: 0: 918.8. Samples: 710610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:33,422][00179] Avg episode reward: [(0, '8.317')]
[2023-07-04 11:42:38,191][11008] Updated weights for policy 0, policy_version 700 (0.0024)
[2023-07-04 11:42:38,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.2, 300 sec: 3568.4). Total num frames: 2867200. Throughput: 0: 928.1. Samples: 717342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:42:38,428][00179] Avg episode reward: [(0, '8.304')]
[2023-07-04 11:42:43,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3568.4). Total num frames: 2879488. Throughput: 0: 898.6. Samples: 719410. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:42:43,419][00179] Avg episode reward: [(0, '8.669')]
[2023-07-04 11:42:43,421][10995] Saving new best policy, reward=8.669!
[2023-07-04 11:42:48,418][00179] Fps is (10 sec: 2866.9, 60 sec: 3686.3, 300 sec: 3554.5). Total num frames: 2895872. Throughput: 0: 881.2. Samples: 723636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:42:48,421][00179] Avg episode reward: [(0, '8.463')]
[2023-07-04 11:42:51,050][11008] Updated weights for policy 0, policy_version 710 (0.0034)
[2023-07-04 11:42:53,417][00179] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2916352. Throughput: 0: 930.8. Samples: 730116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:42:53,421][00179] Avg episode reward: [(0, '7.874')]
[2023-07-04 11:42:58,417][00179] Fps is (10 sec: 4506.0, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 2940928. Throughput: 0: 930.5. Samples: 733474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:42:58,422][00179] Avg episode reward: [(0, '7.523')]
[2023-07-04 11:43:01,113][11008] Updated weights for policy 0, policy_version 720 (0.0027)
[2023-07-04 11:43:03,420][00179] Fps is (10 sec: 3685.1, 60 sec: 3617.9, 300 sec: 3596.1). Total num frames: 2953216. Throughput: 0: 896.0. Samples: 738700. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-07-04 11:43:03,423][00179] Avg episode reward: [(0, '8.128')]
[2023-07-04 11:43:08,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 2969600. Throughput: 0: 891.0. Samples: 743016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:08,419][00179] Avg episode reward: [(0, '8.341')]
[2023-07-04 11:43:12,630][11008] Updated weights for policy 0, policy_version 730 (0.0019)
[2023-07-04 11:43:13,417][00179] Fps is (10 sec: 3687.7, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2990080. Throughput: 0: 922.6. Samples: 746524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:43:13,419][00179] Avg episode reward: [(0, '8.783')]
[2023-07-04 11:43:13,432][10995] Saving new best policy, reward=8.783!
[2023-07-04 11:43:18,419][00179] Fps is (10 sec: 4504.5, 60 sec: 3686.2, 300 sec: 3623.9). Total num frames: 3014656. Throughput: 0: 950.1. Samples: 753368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:43:18,422][00179] Avg episode reward: [(0, '8.531')]
[2023-07-04 11:43:23,424][00179] Fps is (10 sec: 3683.6, 60 sec: 3617.7, 300 sec: 3609.9). Total num frames: 3026944. Throughput: 0: 901.8. Samples: 757932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:43:23,432][00179] Avg episode reward: [(0, '8.055')]
[2023-07-04 11:43:23,942][11008] Updated weights for policy 0, policy_version 740 (0.0020)
[2023-07-04 11:43:28,417][00179] Fps is (10 sec: 2867.9, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 3043328. Throughput: 0: 903.7. Samples: 760074. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:28,419][00179] Avg episode reward: [(0, '7.673')]
[2023-07-04 11:43:33,417][00179] Fps is (10 sec: 3689.2, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3063808. Throughput: 0: 942.3. Samples: 766038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:33,423][00179] Avg episode reward: [(0, '7.773')]
[2023-07-04 11:43:34,447][11008] Updated weights for policy 0, policy_version 750 (0.0013)
[2023-07-04 11:43:38,419][00179] Fps is (10 sec: 4504.4, 60 sec: 3686.2, 300 sec: 3623.9). Total num frames: 3088384. Throughput: 0: 950.8. Samples: 772904. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:38,427][00179] Avg episode reward: [(0, '8.037')]
[2023-07-04 11:43:43,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3100672. Throughput: 0: 921.6. Samples: 774944. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:43,423][00179] Avg episode reward: [(0, '8.125')]
[2023-07-04 11:43:46,518][11008] Updated weights for policy 0, policy_version 760 (0.0018)
[2023-07-04 11:43:48,417][00179] Fps is (10 sec: 2867.9, 60 sec: 3686.5, 300 sec: 3596.1). Total num frames: 3117056. Throughput: 0: 901.4. Samples: 779258. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:48,419][00179] Avg episode reward: [(0, '7.595')]
[2023-07-04 11:43:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3137536. Throughput: 0: 945.3. Samples: 785554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:43:53,424][00179] Avg episode reward: [(0, '7.756')]
[2023-07-04 11:43:56,360][11008] Updated weights for policy 0, policy_version 770 (0.0011)
[2023-07-04 11:43:58,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3162112. Throughput: 0: 942.6. Samples: 788940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:43:58,420][00179] Avg episode reward: [(0, '7.989')]
[2023-07-04 11:44:03,417][00179] Fps is (10 sec: 3686.3, 60 sec: 3686.6, 300 sec: 3637.8). Total num frames: 3174400. Throughput: 0: 899.9. Samples: 793860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:44:03,419][00179] Avg episode reward: [(0, '8.737')]
[2023-07-04 11:44:08,417][00179] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3186688. Throughput: 0: 886.1. Samples: 797800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:44:08,419][00179] Avg episode reward: [(0, '8.644')]
[2023-07-04 11:44:09,656][11008] Updated weights for policy 0, policy_version 780 (0.0040)
[2023-07-04 11:44:13,420][00179] Fps is (10 sec: 3685.3, 60 sec: 3686.2, 300 sec: 3610.1). Total num frames: 3211264. Throughput: 0: 909.8. Samples: 801020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:13,422][00179] Avg episode reward: [(0, '8.634')]
[2023-07-04 11:44:18,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.3, 300 sec: 3637.8). Total num frames: 3231744. Throughput: 0: 923.9. Samples: 807614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:18,419][00179] Avg episode reward: [(0, '9.009')]
[2023-07-04 11:44:18,436][10995] Saving new best policy, reward=9.009!
[2023-07-04 11:44:19,654][11008] Updated weights for policy 0, policy_version 790 (0.0027)
[2023-07-04 11:44:23,417][00179] Fps is (10 sec: 3277.9, 60 sec: 3618.6, 300 sec: 3623.9). Total num frames: 3244032. Throughput: 0: 873.3. Samples: 812200. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:23,426][00179] Avg episode reward: [(0, '9.753')]
[2023-07-04 11:44:23,427][10995] Saving new best policy, reward=9.753!
[2023-07-04 11:44:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3260416. Throughput: 0: 872.8. Samples: 814222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:28,419][00179] Avg episode reward: [(0, '9.634')]
[2023-07-04 11:44:28,429][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000796_3260416.pth...
[2023-07-04 11:44:28,543][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000584_2392064.pth
[2023-07-04 11:44:31,973][11008] Updated weights for policy 0, policy_version 800 (0.0032)
[2023-07-04 11:44:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3280896. Throughput: 0: 909.2. Samples: 820172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:33,421][00179] Avg episode reward: [(0, '9.128')]
[2023-07-04 11:44:38,419][00179] Fps is (10 sec: 4504.4, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3305472. Throughput: 0: 922.9. Samples: 827086. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:44:38,421][00179] Avg episode reward: [(0, '9.090')]
[2023-07-04 11:44:42,339][11008] Updated weights for policy 0, policy_version 810 (0.0023)
[2023-07-04 11:44:43,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3317760. Throughput: 0: 893.6. Samples: 829154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:43,423][00179] Avg episode reward: [(0, '9.444')]
[2023-07-04 11:44:48,417][00179] Fps is (10 sec: 2868.0, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3334144. Throughput: 0: 880.8. Samples: 833494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:44:48,420][00179] Avg episode reward: [(0, '10.005')]
[2023-07-04 11:44:48,433][10995] Saving new best policy, reward=10.005!
[2023-07-04 11:44:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3354624. Throughput: 0: 933.9. Samples: 839824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:44:53,420][00179] Avg episode reward: [(0, '9.353')]
[2023-07-04 11:44:53,826][11008] Updated weights for policy 0, policy_version 820 (0.0020)
[2023-07-04 11:44:58,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3379200. Throughput: 0: 938.2. Samples: 843238. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:44:58,427][00179] Avg episode reward: [(0, '9.308')]
[2023-07-04 11:45:03,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3637.8). Total num frames: 3391488. Throughput: 0: 904.2. Samples: 848304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:45:03,420][00179] Avg episode reward: [(0, '8.936')]
[2023-07-04 11:45:05,263][11008] Updated weights for policy 0, policy_version 830 (0.0023)
[2023-07-04 11:45:08,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3407872. Throughput: 0: 896.9. Samples: 852562. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:45:08,419][00179] Avg episode reward: [(0, '9.466')]
[2023-07-04 11:45:13,418][00179] Fps is (10 sec: 3685.9, 60 sec: 3618.3, 300 sec: 3623.9). Total num frames: 3428352. Throughput: 0: 924.5. Samples: 855824. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-07-04 11:45:13,426][00179] Avg episode reward: [(0, '9.895')]
[2023-07-04 11:45:15,807][11008] Updated weights for policy 0, policy_version 840 (0.0021)
[2023-07-04 11:45:18,420][00179] Fps is (10 sec: 4094.5, 60 sec: 3617.9, 300 sec: 3637.8). Total num frames: 3448832. Throughput: 0: 936.7. Samples: 862328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:45:18,422][00179] Avg episode reward: [(0, '10.477')]
[2023-07-04 11:45:18,436][10995] Saving new best policy, reward=10.477!
[2023-07-04 11:45:23,417][00179] Fps is (10 sec: 3686.9, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 3465216. Throughput: 0: 881.0. Samples: 866730. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:45:23,419][00179] Avg episode reward: [(0, '10.901')]
[2023-07-04 11:45:23,422][10995] Saving new best policy, reward=10.901!
[2023-07-04 11:45:28,417][00179] Fps is (10 sec: 2868.1, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3477504. Throughput: 0: 878.9. Samples: 868704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:45:28,423][00179] Avg episode reward: [(0, '11.068')]
[2023-07-04 11:45:28,432][10995] Saving new best policy, reward=11.068!
[2023-07-04 11:45:28,932][11008] Updated weights for policy 0, policy_version 850 (0.0013)
[2023-07-04 11:45:33,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3497984. Throughput: 0: 911.4. Samples: 874506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:45:33,423][00179] Avg episode reward: [(0, '10.707')]
[2023-07-04 11:45:38,216][11008] Updated weights for policy 0, policy_version 860 (0.0019)
[2023-07-04 11:45:38,417][00179] Fps is (10 sec: 4505.7, 60 sec: 3618.3, 300 sec: 3651.7). Total num frames: 3522560. Throughput: 0: 917.3. Samples: 881104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:45:38,425][00179] Avg episode reward: [(0, '10.609')]
[2023-07-04 11:45:43,418][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3534848. Throughput: 0: 886.9. Samples: 883150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:45:43,424][00179] Avg episode reward: [(0, '10.119')]
[2023-07-04 11:45:48,417][00179] Fps is (10 sec: 2867.3, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3551232. Throughput: 0: 866.2. Samples: 887282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:45:48,422][00179] Avg episode reward: [(0, '10.208')]
[2023-07-04 11:45:51,260][11008] Updated weights for policy 0, policy_version 870 (0.0031)
[2023-07-04 11:45:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3571712. Throughput: 0: 912.0. Samples: 893602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:45:53,424][00179] Avg episode reward: [(0, '10.755')]
[2023-07-04 11:45:58,417][00179] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 3592192. Throughput: 0: 906.3. Samples: 896608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:45:58,422][00179] Avg episode reward: [(0, '11.168')]
[2023-07-04 11:45:58,433][10995] Saving new best policy, reward=11.168!
[2023-07-04 11:46:02,924][11008] Updated weights for policy 0, policy_version 880 (0.0012)
[2023-07-04 11:46:03,423][00179] Fps is (10 sec: 3274.8, 60 sec: 3549.5, 300 sec: 3637.7). Total num frames: 3604480. Throughput: 0: 862.6. Samples: 901146. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:46:03,429][00179] Avg episode reward: [(0, '12.366')]
[2023-07-04 11:46:03,433][10995] Saving new best policy, reward=12.366!
[2023-07-04 11:46:08,417][00179] Fps is (10 sec: 2457.5, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3616768. Throughput: 0: 854.0. Samples: 905160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:46:08,423][00179] Avg episode reward: [(0, '12.930')]
[2023-07-04 11:46:08,431][10995] Saving new best policy, reward=12.930!
[2023-07-04 11:46:13,417][00179] Fps is (10 sec: 3278.8, 60 sec: 3481.7, 300 sec: 3596.1). Total num frames: 3637248. Throughput: 0: 879.9. Samples: 908300. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:46:13,424][00179] Avg episode reward: [(0, '12.943')]
[2023-07-04 11:46:13,427][10995] Saving new best policy, reward=12.943!
[2023-07-04 11:46:14,825][11008] Updated weights for policy 0, policy_version 890 (0.0017)
[2023-07-04 11:46:18,417][00179] Fps is (10 sec: 4096.1, 60 sec: 3481.8, 300 sec: 3623.9). Total num frames: 3657728. Throughput: 0: 893.9. Samples: 914732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:46:18,420][00179] Avg episode reward: [(0, '13.704')]
[2023-07-04 11:46:18,435][10995] Saving new best policy, reward=13.704!
[2023-07-04 11:46:23,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 3674112. Throughput: 0: 842.5. Samples: 919018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:46:23,422][00179] Avg episode reward: [(0, '14.235')]
[2023-07-04 11:46:23,428][10995] Saving new best policy, reward=14.235!
[2023-07-04 11:46:27,754][11008] Updated weights for policy 0, policy_version 900 (0.0014)
[2023-07-04 11:46:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3596.1). Total num frames: 3686400. Throughput: 0: 841.7. Samples: 921026. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-07-04 11:46:28,421][00179] Avg episode reward: [(0, '13.946')]
[2023-07-04 11:46:28,432][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000900_3686400.pth...
[2023-07-04 11:46:28,551][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000689_2822144.pth
[2023-07-04 11:46:33,417][00179] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 3706880. Throughput: 0: 878.6. Samples: 926820. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:46:33,422][00179] Avg episode reward: [(0, '14.208')]
[2023-07-04 11:46:37,336][11008] Updated weights for policy 0, policy_version 910 (0.0011)
[2023-07-04 11:46:38,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 3731456. Throughput: 0: 884.0. Samples: 933380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:46:38,419][00179] Avg episode reward: [(0, '13.609')]
[2023-07-04 11:46:43,420][00179] Fps is (10 sec: 3685.2, 60 sec: 3481.4, 300 sec: 3623.9). Total num frames: 3743744. Throughput: 0: 862.3. Samples: 935414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-07-04 11:46:43,423][00179] Avg episode reward: [(0, '13.131')]
[2023-07-04 11:46:48,417][00179] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3582.3). Total num frames: 3756032. Throughput: 0: 853.7. Samples: 939558. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:46:48,419][00179] Avg episode reward: [(0, '13.004')]
[2023-07-04 11:46:50,321][11008] Updated weights for policy 0, policy_version 920 (0.0036)
[2023-07-04 11:46:53,417][00179] Fps is (10 sec: 3687.7, 60 sec: 3481.6, 300 sec: 3596.2). Total num frames: 3780608. Throughput: 0: 903.7. Samples: 945828. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:46:53,419][00179] Avg episode reward: [(0, '12.501')]
[2023-07-04 11:46:58,418][00179] Fps is (10 sec: 4505.2, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 3801088. Throughput: 0: 908.8. Samples: 949196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:46:58,425][00179] Avg episode reward: [(0, '12.672')]
[2023-07-04 11:47:00,459][11008] Updated weights for policy 0, policy_version 930 (0.0028)
[2023-07-04 11:47:03,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3550.2, 300 sec: 3623.9). Total num frames: 3817472. Throughput: 0: 874.4. Samples: 954078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:47:03,419][00179] Avg episode reward: [(0, '12.477')]
[2023-07-04 11:47:08,417][00179] Fps is (10 sec: 2867.4, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3829760. Throughput: 0: 875.8. Samples: 958430. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:47:08,419][00179] Avg episode reward: [(0, '13.004')]
[2023-07-04 11:47:12,346][11008] Updated weights for policy 0, policy_version 940 (0.0024)
[2023-07-04 11:47:13,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3854336. Throughput: 0: 907.1. Samples: 961846. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:47:13,422][00179] Avg episode reward: [(0, '13.535')]
[2023-07-04 11:47:18,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3874816. Throughput: 0: 925.8. Samples: 968480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:47:18,421][00179] Avg episode reward: [(0, '14.686')]
[2023-07-04 11:47:18,433][10995] Saving new best policy, reward=14.686!
[2023-07-04 11:47:23,417][00179] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 3887104. Throughput: 0: 881.0. Samples: 973024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:47:23,423][00179] Avg episode reward: [(0, '15.385')]
[2023-07-04 11:47:23,425][10995] Saving new best policy, reward=15.385!
[2023-07-04 11:47:23,809][11008] Updated weights for policy 0, policy_version 950 (0.0021)
[2023-07-04 11:47:28,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3903488. Throughput: 0: 881.0. Samples: 975058. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:47:28,418][00179] Avg episode reward: [(0, '15.771')]
[2023-07-04 11:47:28,435][10995] Saving new best policy, reward=15.771!
[2023-07-04 11:47:33,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3582.3). Total num frames: 3923968. Throughput: 0: 920.4. Samples: 980978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:47:33,420][00179] Avg episode reward: [(0, '16.049')]
[2023-07-04 11:47:33,426][10995] Saving new best policy, reward=16.049!
[2023-07-04 11:47:34,634][11008] Updated weights for policy 0, policy_version 960 (0.0019)
[2023-07-04 11:47:38,417][00179] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3948544. Throughput: 0: 931.6. Samples: 987750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:47:38,423][00179] Avg episode reward: [(0, '16.329')]
[2023-07-04 11:47:38,452][10995] Saving new best policy, reward=16.329!
[2023-07-04 11:47:43,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.4, 300 sec: 3610.0). Total num frames: 3960832. Throughput: 0: 901.3. Samples: 989752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-07-04 11:47:43,424][00179] Avg episode reward: [(0, '16.369')]
[2023-07-04 11:47:43,427][10995] Saving new best policy, reward=16.369!
[2023-07-04 11:47:47,012][11008] Updated weights for policy 0, policy_version 970 (0.0012)
[2023-07-04 11:47:48,417][00179] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 3977216. Throughput: 0: 886.2. Samples: 993956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-07-04 11:47:48,423][00179] Avg episode reward: [(0, '16.564')]
[2023-07-04 11:47:48,432][10995] Saving new best policy, reward=16.564!
[2023-07-04 11:47:53,417][00179] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3997696. Throughput: 0: 930.4. Samples: 1000300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-07-04 11:47:53,425][00179] Avg episode reward: [(0, '17.205')]
[2023-07-04 11:47:53,428][10995] Saving new best policy, reward=17.205!
[2023-07-04 11:47:54,790][10995] Stopping Batcher_0...
[2023-07-04 11:47:54,798][10995] Loop batcher_evt_loop terminating...
[2023-07-04 11:47:54,791][00179] Component Batcher_0 stopped!
[2023-07-04 11:47:54,793][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-07-04 11:47:54,847][00179] Component RolloutWorker_w7 stopped!
[2023-07-04 11:47:54,853][11016] Stopping RolloutWorker_w7...
[2023-07-04 11:47:54,856][00179] Component RolloutWorker_w4 stopped!
[2023-07-04 11:47:54,856][11015] Stopping RolloutWorker_w4...
[2023-07-04 11:47:54,869][11016] Loop rollout_proc7_evt_loop terminating...
[2023-07-04 11:47:54,871][11009] Stopping RolloutWorker_w0...
[2023-07-04 11:47:54,871][00179] Component RolloutWorker_w0 stopped!
[2023-07-04 11:47:54,867][11015] Loop rollout_proc4_evt_loop terminating...
[2023-07-04 11:47:54,875][11009] Loop rollout_proc0_evt_loop terminating...
[2023-07-04 11:47:54,889][11008] Weights refcount: 2 0
[2023-07-04 11:47:54,897][11011] Stopping RolloutWorker_w2...
[2023-07-04 11:47:54,900][11013] Stopping RolloutWorker_w6...
[2023-07-04 11:47:54,895][00179] Component RolloutWorker_w3 stopped!
[2023-07-04 11:47:54,900][11013] Loop rollout_proc6_evt_loop terminating...
[2023-07-04 11:47:54,898][11011] Loop rollout_proc2_evt_loop terminating...
[2023-07-04 11:47:54,900][00179] Component RolloutWorker_w2 stopped!
[2023-07-04 11:47:54,901][00179] Component RolloutWorker_w6 stopped!
[2023-07-04 11:47:54,907][11012] Stopping RolloutWorker_w3...
[2023-07-04 11:47:54,908][11012] Loop rollout_proc3_evt_loop terminating...
[2023-07-04 11:47:54,910][11010] Stopping RolloutWorker_w1...
[2023-07-04 11:47:54,911][11010] Loop rollout_proc1_evt_loop terminating...
[2023-07-04 11:47:54,916][11014] Stopping RolloutWorker_w5...
[2023-07-04 11:47:54,917][11014] Loop rollout_proc5_evt_loop terminating...
[2023-07-04 11:47:54,916][00179] Component RolloutWorker_w1 stopped!
[2023-07-04 11:47:54,920][00179] Component RolloutWorker_w5 stopped!
[2023-07-04 11:47:54,928][11008] Stopping InferenceWorker_p0-w0...
[2023-07-04 11:47:54,928][11008] Loop inference_proc0-0_evt_loop terminating...
[2023-07-04 11:47:54,926][00179] Component InferenceWorker_p0-w0 stopped!
[2023-07-04 11:47:54,941][10995] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000796_3260416.pth
[2023-07-04 11:47:54,965][10995] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-07-04 11:47:55,117][00179] Component LearnerWorker_p0 stopped!
[2023-07-04 11:47:55,117][10995] Stopping LearnerWorker_p0...
[2023-07-04 11:47:55,119][10995] Loop learner_proc0_evt_loop terminating...
[2023-07-04 11:47:55,119][00179] Waiting for process learner_proc0 to stop...
[2023-07-04 11:47:56,884][00179] Waiting for process inference_proc0-0 to join...
[2023-07-04 11:47:56,890][00179] Waiting for process rollout_proc0 to join...
[2023-07-04 11:47:58,759][00179] Waiting for process rollout_proc1 to join...
[2023-07-04 11:47:58,761][00179] Waiting for process rollout_proc2 to join...
[2023-07-04 11:47:58,764][00179] Waiting for process rollout_proc3 to join...
[2023-07-04 11:47:58,771][00179] Waiting for process rollout_proc4 to join...
[2023-07-04 11:47:58,773][00179] Waiting for process rollout_proc5 to join...
[2023-07-04 11:47:58,774][00179] Waiting for process rollout_proc6 to join...
[2023-07-04 11:47:58,776][00179] Waiting for process rollout_proc7 to join...
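The sequence above is a standard multi-process shutdown: every component is told to stop, each event loop terminates, and the runner joins the child processes one by one. A minimal sketch with multiprocessing; the shared stop event is illustrative, not necessarily how this framework signals its workers:

import multiprocessing as mp
import time

def component(stop_event, name):
    # Simplified event loop: do useful work until the runner asks everyone to stop.
    while not stop_event.is_set():
        time.sleep(0.01)  # ... rollout / inference / learning step ...
    print(f"Loop {name}_evt_loop terminating...")

if __name__ == "__main__":
    stop = mp.Event()
    procs = [mp.Process(target=component, args=(stop, f"rollout_proc{i}")) for i in range(8)]
    for p in procs:
        p.start()
    stop.set()          # signal all components to stop
    for p in procs:
        p.join()        # then wait for each process to join, as in the lines above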
[2023-07-04 11:47:58,778][00179] Batcher 0 profile tree view:
batching: 28.7735, releasing_batches: 0.0212
[2023-07-04 11:47:58,781][00179] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0001
  wait_policy_total: 519.8282
update_model: 8.0300
  weight_update: 0.0012
one_step: 0.0057
  handle_policy_step: 564.9241
    deserialize: 15.7827, stack: 2.9412, obs_to_device_normalize: 112.1957, forward: 308.5563, send_messages: 26.7324
    prepare_outputs: 73.3165
      to_cpu: 42.4894
[2023-07-04 11:47:58,782][00179] Learner 0 profile tree view:
misc: 0.0047, prepare_batch: 19.5520
train: 75.4637
  epoch_init: 0.0053, minibatch_init: 0.0074, losses_postprocess: 0.6075, kl_divergence: 0.6473, after_optimizer: 3.7725
  calculate_losses: 26.2643
    losses_init: 0.0041, forward_head: 1.1368, bptt_initial: 17.4425, tail: 1.1223, advantages_returns: 0.2866, losses: 3.8737
    bptt: 2.0952
      bptt_forward_core: 2.0294
  update: 43.5645
    clip: 32.6802
[2023-07-04 11:47:58,785][00179] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.2406, enqueue_policy_requests: 138.6646, env_step: 855.3495, overhead: 23.1933, complete_rollouts: 7.4568
save_policy_outputs: 19.5551
  split_output_tensors: 9.4165
[2023-07-04 11:47:58,786][00179] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3813, enqueue_policy_requests: 150.8346, env_step: 843.0122, overhead: 22.1565, complete_rollouts: 6.6442
save_policy_outputs: 20.2061
  split_output_tensors: 9.2457
[2023-07-04 11:47:58,790][00179] Loop Runner_EvtLoop terminating...
[2023-07-04 11:47:58,791][00179] Runner profile tree view:
main_loop: 1164.1484
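The profile tree views above are nested wall-clock totals: each label is a timed section of a worker's loop, and indentation reflects nesting (e.g. handle_policy_step contains deserialize, forward, and prepare_outputs). A minimal sketch of such a profiler built from a context manager; the class is illustrative, not the framework's actual Timing implementation:

import time
from contextlib import contextmanager

class TimingTree:
    """Nested wall-clock timings, printed like the profile tree views above."""

    def __init__(self):
        self.totals = {}   # path tuple -> accumulated seconds
        self.stack = []

    @contextmanager
    def timeit(self, name):
        self.stack.append(name)
        path = tuple(self.stack)
        start = time.monotonic()
        try:
            yield
        finally:
            self.totals[path] = self.totals.get(path, 0.0) + time.monotonic() - start
            self.stack.pop()

    def report(self):
        for path, total in sorted(self.totals.items()):
            print("  " * (len(path) - 1) + f"{path[-1]}: {total:.4f}")

Nesting `with t.timeit("train"):` around `with t.timeit("calculate_losses"):` accumulates totals shaped like the learner view above.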
[2023-07-04 11:47:58,792][00179] Collected {0: 4005888}, FPS: 3441.0
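As a consistency check, the summary line follows from the profile: 4005888 frames over the 1164.1484 s main loop gives 4005888 / 1164.1484 ≈ 3441.0 FPS, matching the reported value.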
[2023-07-04 11:48:04,374][00179] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-07-04 11:48:04,376][00179] Overriding arg 'num_workers' with value 1 passed from command line
[2023-07-04 11:48:04,380][00179] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-07-04 11:48:04,382][00179] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-07-04 11:48:04,384][00179] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-07-04 11:48:04,386][00179] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-07-04 11:48:04,388][00179] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-07-04 11:48:04,389][00179] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-07-04 11:48:04,390][00179] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-07-04 11:48:04,391][00179] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-07-04 11:48:04,392][00179] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-07-04 11:48:04,393][00179] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-07-04 11:48:04,394][00179] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-07-04 11:48:04,396][00179] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-07-04 11:48:04,397][00179] Using frameskip 1 and render_action_repeat=4 for evaluation
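The evaluation run reloads the training config.json and layers command-line arguments on top, overriding keys that exist in the saved file and warning about ones that do not. A minimal sketch of that merge, assuming plain dicts; the helper name is illustrative:

import json

def load_config_with_overrides(config_path, cli_args):
    """Load a saved config and apply CLI overrides, logging as the lines above do."""
    with open(config_path) as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg '{key}' with value {value} passed from command line")
        else:
            print(f"Adding new argument '{key}'={value} that is not in the saved config file!")
        cfg[key] = value
    return cfg

# e.g. load_config_with_overrides("/content/train_dir/default_experiment/config.json",
#                                 {"num_workers": 1, "no_render": True, "save_video": True})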
[2023-07-04 11:48:04,423][00179] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-07-04 11:48:04,425][00179] RunningMeanStd input shape: (3, 72, 128)
[2023-07-04 11:48:04,429][00179] RunningMeanStd input shape: (1,)
[2023-07-04 11:48:04,444][00179] ConvEncoder: input_channels=3
[2023-07-04 11:48:04,568][00179] Conv encoder output size: 512
[2023-07-04 11:48:04,572][00179] Policy head output size: 512
[2023-07-04 11:48:06,975][00179] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-07-04 11:48:08,275][00179] Num frames 100...
[2023-07-04 11:48:08,398][00179] Num frames 200...
[2023-07-04 11:48:08,517][00179] Num frames 300...
[2023-07-04 11:48:08,635][00179] Num frames 400...
[2023-07-04 11:48:08,755][00179] Num frames 500...
[2023-07-04 11:48:08,873][00179] Num frames 600...
[2023-07-04 11:48:08,995][00179] Num frames 700...
[2023-07-04 11:48:09,131][00179] Avg episode rewards: #0: 14.680, true rewards: #0: 7.680
[2023-07-04 11:48:09,132][00179] Avg episode reward: 14.680, avg true_objective: 7.680
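Each finished evaluation episode reprints two figures whose pattern is consistent with running means over all episodes so far: the (possibly shaped) episode reward the agent was trained on and the true objective, whose gap reflects reward terms excluded from the true objective. A sketch of the bookkeeping, assuming per-episode totals are appended to two lists:

def report_running_averages(episode_rewards, episode_true_objectives):
    """Means over all episodes finished so far, printed after each episode."""
    n = len(episode_rewards)
    avg_reward = sum(episode_rewards) / n
    avg_true = sum(episode_true_objectives) / n
    print(f"Avg episode rewards: #0: {avg_reward:.3f}, true rewards: #0: {avg_true:.3f}")
    print(f"Avg episode reward: {avg_reward:.3f}, avg true_objective: {avg_true:.3f}")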
[2023-07-04 11:48:09,182][00179] Num frames 800...
[2023-07-04 11:48:09,308][00179] Num frames 900...
[2023-07-04 11:48:09,427][00179] Num frames 1000...
[2023-07-04 11:48:09,543][00179] Num frames 1100...
[2023-07-04 11:48:09,662][00179] Num frames 1200...
[2023-07-04 11:48:09,787][00179] Num frames 1300...
[2023-07-04 11:48:09,909][00179] Num frames 1400...
[2023-07-04 11:48:10,014][00179] Avg episode rewards: #0: 13.205, true rewards: #0: 7.205
[2023-07-04 11:48:10,016][00179] Avg episode reward: 13.205, avg true_objective: 7.205
[2023-07-04 11:48:10,093][00179] Num frames 1500...
[2023-07-04 11:48:10,214][00179] Num frames 1600...
[2023-07-04 11:48:10,340][00179] Num frames 1700...
[2023-07-04 11:48:10,459][00179] Num frames 1800...
[2023-07-04 11:48:10,580][00179] Num frames 1900...
[2023-07-04 11:48:10,710][00179] Num frames 2000...
[2023-07-04 11:48:10,848][00179] Num frames 2100...
[2023-07-04 11:48:10,980][00179] Num frames 2200...
[2023-07-04 11:48:11,099][00179] Num frames 2300...
[2023-07-04 11:48:11,237][00179] Num frames 2400...
[2023-07-04 11:48:11,419][00179] Num frames 2500...
[2023-07-04 11:48:11,597][00179] Num frames 2600...
[2023-07-04 11:48:11,776][00179] Num frames 2700...
[2023-07-04 11:48:11,952][00179] Num frames 2800...
[2023-07-04 11:48:12,127][00179] Num frames 2900...
[2023-07-04 11:48:12,324][00179] Avg episode rewards: #0: 19.923, true rewards: #0: 9.923
[2023-07-04 11:48:12,326][00179] Avg episode reward: 19.923, avg true_objective: 9.923
[2023-07-04 11:48:12,372][00179] Num frames 3000...
[2023-07-04 11:48:12,571][00179] Num frames 3100...
[2023-07-04 11:48:12,754][00179] Num frames 3200...
[2023-07-04 11:48:12,932][00179] Num frames 3300...
[2023-07-04 11:48:13,117][00179] Num frames 3400...
[2023-07-04 11:48:13,325][00179] Avg episode rewards: #0: 16.723, true rewards: #0: 8.722
[2023-07-04 11:48:13,327][00179] Avg episode reward: 16.723, avg true_objective: 8.722
[2023-07-04 11:48:13,349][00179] Num frames 3500...
[2023-07-04 11:48:13,524][00179] Num frames 3600...
[2023-07-04 11:48:13,707][00179] Num frames 3700...
[2023-07-04 11:48:13,882][00179] Num frames 3800...
[2023-07-04 11:48:14,059][00179] Num frames 3900...
[2023-07-04 11:48:14,232][00179] Num frames 4000...
[2023-07-04 11:48:14,408][00179] Num frames 4100...
[2023-07-04 11:48:14,584][00179] Num frames 4200...
[2023-07-04 11:48:14,762][00179] Num frames 4300...
[2023-07-04 11:48:14,941][00179] Num frames 4400...
[2023-07-04 11:48:15,090][00179] Avg episode rewards: #0: 17.698, true rewards: #0: 8.898
[2023-07-04 11:48:15,092][00179] Avg episode reward: 17.698, avg true_objective: 8.898
[2023-07-04 11:48:15,162][00179] Num frames 4500...
[2023-07-04 11:48:15,287][00179] Num frames 4600...
[2023-07-04 11:48:15,417][00179] Num frames 4700...
[2023-07-04 11:48:15,540][00179] Num frames 4800...
[2023-07-04 11:48:15,662][00179] Num frames 4900...
[2023-07-04 11:48:15,785][00179] Num frames 5000...
[2023-07-04 11:48:15,904][00179] Num frames 5100...
[2023-07-04 11:48:16,034][00179] Num frames 5200...
[2023-07-04 11:48:16,155][00179] Num frames 5300...
[2023-07-04 11:48:16,275][00179] Num frames 5400...
[2023-07-04 11:48:16,403][00179] Num frames 5500...
[2023-07-04 11:48:16,525][00179] Num frames 5600...
[2023-07-04 11:48:16,653][00179] Num frames 5700...
[2023-07-04 11:48:16,778][00179] Num frames 5800...
[2023-07-04 11:48:16,900][00179] Num frames 5900...
[2023-07-04 11:48:17,000][00179] Avg episode rewards: #0: 20.223, true rewards: #0: 9.890
[2023-07-04 11:48:17,001][00179] Avg episode reward: 20.223, avg true_objective: 9.890
[2023-07-04 11:48:17,086][00179] Num frames 6000...
[2023-07-04 11:48:17,206][00179] Num frames 6100...
[2023-07-04 11:48:17,329][00179] Num frames 6200...
[2023-07-04 11:48:17,467][00179] Num frames 6300...
[2023-07-04 11:48:17,588][00179] Num frames 6400...
[2023-07-04 11:48:17,712][00179] Num frames 6500...
[2023-07-04 11:48:17,801][00179] Avg episode rewards: #0: 18.607, true rewards: #0: 9.321
[2023-07-04 11:48:17,803][00179] Avg episode reward: 18.607, avg true_objective: 9.321
[2023-07-04 11:48:17,895][00179] Num frames 6600...
[2023-07-04 11:48:18,015][00179] Num frames 6700...
[2023-07-04 11:48:18,134][00179] Num frames 6800...
[2023-07-04 11:48:18,255][00179] Num frames 6900...
[2023-07-04 11:48:18,388][00179] Num frames 7000...
[2023-07-04 11:48:18,513][00179] Num frames 7100...
[2023-07-04 11:48:18,671][00179] Num frames 7200...
[2023-07-04 11:48:18,867][00179] Num frames 7300...
[2023-07-04 11:48:19,074][00179] Num frames 7400...
[2023-07-04 11:48:19,312][00179] Num frames 7500...
[2023-07-04 11:48:19,455][00179] Num frames 7600...
[2023-07-04 11:48:19,527][00179] Avg episode rewards: #0: 18.766, true rewards: #0: 9.516
[2023-07-04 11:48:19,529][00179] Avg episode reward: 18.766, avg true_objective: 9.516
[2023-07-04 11:48:19,635][00179] Num frames 7700...
[2023-07-04 11:48:19,766][00179] Num frames 7800...
[2023-07-04 11:48:19,922][00179] Num frames 7900...
[2023-07-04 11:48:20,041][00179] Num frames 8000...
[2023-07-04 11:48:20,158][00179] Num frames 8100...
[2023-07-04 11:48:20,277][00179] Num frames 8200...
[2023-07-04 11:48:20,396][00179] Num frames 8300...
[2023-07-04 11:48:20,523][00179] Num frames 8400...
[2023-07-04 11:48:20,640][00179] Num frames 8500...
[2023-07-04 11:48:20,749][00179] Avg episode rewards: #0: 18.935, true rewards: #0: 9.490
[2023-07-04 11:48:20,751][00179] Avg episode reward: 18.935, avg true_objective: 9.490
[2023-07-04 11:48:20,821][00179] Num frames 8600...
[2023-07-04 11:48:20,940][00179] Num frames 8700...
[2023-07-04 11:48:21,061][00179] Num frames 8800...
[2023-07-04 11:48:21,177][00179] Num frames 8900...
[2023-07-04 11:48:21,302][00179] Num frames 9000...
[2023-07-04 11:48:21,424][00179] Num frames 9100...
[2023-07-04 11:48:21,553][00179] Avg episode rewards: #0: 17.949, true rewards: #0: 9.149
[2023-07-04 11:48:21,555][00179] Avg episode reward: 17.949, avg true_objective: 9.149
[2023-07-04 11:49:15,433][00179] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
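The frames rendered during evaluation are encoded into replay.mp4 at the end of the run. A minimal sketch of that step using imageio as the encoder; the library choice and fps value are assumptions, not necessarily what this tool uses:

import imageio
import numpy as np

def save_replay(frames, path="replay.mp4", fps=35):
    """Encode a sequence of HxWx3 uint8 frames into an mp4 file."""
    with imageio.get_writer(path, fps=fps) as writer:
        for frame in frames:
            writer.append_data(np.asarray(frame, dtype=np.uint8))
    print(f"Replay video saved to {path}!")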
[2023-07-04 11:51:56,775][00179] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-07-04 11:51:56,778][00179] Overriding arg 'num_workers' with value 1 passed from command line
[2023-07-04 11:51:56,782][00179] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-07-04 11:51:56,786][00179] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-07-04 11:51:56,788][00179] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-07-04 11:51:56,789][00179] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-07-04 11:51:56,794][00179] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-07-04 11:51:56,795][00179] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-07-04 11:51:56,796][00179] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-07-04 11:51:56,798][00179] Adding new argument 'hf_repository'='fatcat22/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-07-04 11:51:56,802][00179] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-07-04 11:51:56,803][00179] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-07-04 11:51:56,804][00179] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-07-04 11:51:56,805][00179] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-07-04 11:51:56,806][00179] Using frameskip 1 and render_action_repeat=4 for evaluation
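This second evaluation pass mirrors the first but with push_to_hub=True and hf_repository set, so after the replay below is re-recorded the experiment folder can be uploaded to the Hugging Face Hub. A minimal sketch using huggingface_hub; the exact upload path the tool takes is an assumption:

from huggingface_hub import HfApi

def push_experiment(train_dir, repo_id):
    """Upload the experiment folder (config, checkpoints, replay.mp4) to the Hub."""
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)   # no-op if the repo already exists
    api.upload_folder(folder_path=train_dir, repo_id=repo_id)

# e.g. push_experiment("/content/train_dir/default_experiment",
#                      "fatcat22/rl_course_vizdoom_health_gathering_supreme")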
[2023-07-04 11:51:56,832][00179] RunningMeanStd input shape: (3, 72, 128)
[2023-07-04 11:51:56,834][00179] RunningMeanStd input shape: (1,)
[2023-07-04 11:51:56,853][00179] ConvEncoder: input_channels=3
[2023-07-04 11:51:56,909][00179] Conv encoder output size: 512
[2023-07-04 11:51:56,911][00179] Policy head output size: 512
[2023-07-04 11:51:56,939][00179] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-07-04 11:51:57,625][00179] Num frames 100...
[2023-07-04 11:51:57,808][00179] Num frames 200...
[2023-07-04 11:51:57,952][00179] Num frames 300...
[2023-07-04 11:51:58,085][00179] Num frames 400...
[2023-07-04 11:51:58,195][00179] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
[2023-07-04 11:51:58,197][00179] Avg episode reward: 5.480, avg true_objective: 4.480
[2023-07-04 11:51:58,268][00179] Num frames 500...
[2023-07-04 11:51:58,395][00179] Num frames 600...
[2023-07-04 11:51:58,516][00179] Num frames 700...
[2023-07-04 11:51:58,640][00179] Num frames 800...
[2023-07-04 11:51:58,766][00179] Num frames 900...
[2023-07-04 11:51:58,888][00179] Num frames 1000...
[2023-07-04 11:51:59,008][00179] Num frames 1100...
[2023-07-04 11:51:59,138][00179] Num frames 1200...
[2023-07-04 11:51:59,250][00179] Avg episode rewards: #0: 10.240, true rewards: #0: 6.240
[2023-07-04 11:51:59,252][00179] Avg episode reward: 10.240, avg true_objective: 6.240
[2023-07-04 11:51:59,322][00179] Num frames 1300...
[2023-07-04 11:51:59,445][00179] Num frames 1400...
[2023-07-04 11:51:59,576][00179] Num frames 1500...
[2023-07-04 11:51:59,696][00179] Num frames 1600...
[2023-07-04 11:51:59,816][00179] Num frames 1700...
[2023-07-04 11:51:59,933][00179] Num frames 1800...
[2023-07-04 11:52:00,018][00179] Avg episode rewards: #0: 10.080, true rewards: #0: 6.080
[2023-07-04 11:52:00,020][00179] Avg episode reward: 10.080, avg true_objective: 6.080
[2023-07-04 11:52:00,125][00179] Num frames 1900...
[2023-07-04 11:52:00,242][00179] Num frames 2000...
[2023-07-04 11:52:00,380][00179] Num frames 2100...
[2023-07-04 11:52:00,515][00179] Num frames 2200...
[2023-07-04 11:52:00,635][00179] Num frames 2300...
[2023-07-04 11:52:00,754][00179] Num frames 2400...
[2023-07-04 11:52:00,876][00179] Num frames 2500...
[2023-07-04 11:52:00,993][00179] Num frames 2600...
[2023-07-04 11:52:01,116][00179] Avg episode rewards: #0: 11.140, true rewards: #0: 6.640
[2023-07-04 11:52:01,118][00179] Avg episode reward: 11.140, avg true_objective: 6.640
[2023-07-04 11:52:01,174][00179] Num frames 2700...
[2023-07-04 11:52:01,311][00179] Num frames 2800...
[2023-07-04 11:52:01,428][00179] Num frames 2900...
[2023-07-04 11:52:01,547][00179] Num frames 3000...
[2023-07-04 11:52:01,671][00179] Num frames 3100...
[2023-07-04 11:52:01,795][00179] Num frames 3200...
[2023-07-04 11:52:01,921][00179] Num frames 3300...
[2023-07-04 11:52:02,046][00179] Num frames 3400...
[2023-07-04 11:52:02,184][00179] Num frames 3500...
[2023-07-04 11:52:02,306][00179] Avg episode rewards: #0: 12.104, true rewards: #0: 7.104
[2023-07-04 11:52:02,308][00179] Avg episode reward: 12.104, avg true_objective: 7.104
[2023-07-04 11:52:02,377][00179] Num frames 3600...
[2023-07-04 11:52:02,502][00179] Num frames 3700...
[2023-07-04 11:52:02,622][00179] Num frames 3800...
[2023-07-04 11:52:02,744][00179] Num frames 3900...
[2023-07-04 11:52:02,867][00179] Num frames 4000...
[2023-07-04 11:52:03,039][00179] Avg episode rewards: #0: 11.327, true rewards: #0: 6.827
[2023-07-04 11:52:03,041][00179] Avg episode reward: 11.327, avg true_objective: 6.827
[2023-07-04 11:52:03,051][00179] Num frames 4100...
[2023-07-04 11:52:03,183][00179] Num frames 4200...
[2023-07-04 11:52:03,321][00179] Num frames 4300...
[2023-07-04 11:52:03,445][00179] Num frames 4400...
[2023-07-04 11:52:03,567][00179] Num frames 4500...
[2023-07-04 11:52:03,639][00179] Avg episode rewards: #0: 10.589, true rewards: #0: 6.446
[2023-07-04 11:52:03,640][00179] Avg episode reward: 10.589, avg true_objective: 6.446
[2023-07-04 11:52:03,753][00179] Num frames 4600...
[2023-07-04 11:52:03,879][00179] Num frames 4700...
[2023-07-04 11:52:04,001][00179] Num frames 4800...
[2023-07-04 11:52:04,157][00179] Num frames 4900...
[2023-07-04 11:52:04,288][00179] Num frames 5000...
[2023-07-04 11:52:04,411][00179] Num frames 5100...
[2023-07-04 11:52:04,533][00179] Num frames 5200...
[2023-07-04 11:52:04,656][00179] Num frames 5300...
[2023-07-04 11:52:04,776][00179] Num frames 5400...
[2023-07-04 11:52:04,897][00179] Num frames 5500...
[2023-07-04 11:52:05,039][00179] Avg episode rewards: #0: 12.585, true rewards: #0: 6.960
[2023-07-04 11:52:05,040][00179] Avg episode reward: 12.585, avg true_objective: 6.960
[2023-07-04 11:52:05,086][00179] Num frames 5600...
[2023-07-04 11:52:05,212][00179] Num frames 5700...
[2023-07-04 11:52:05,336][00179] Num frames 5800...
[2023-07-04 11:52:05,460][00179] Num frames 5900...
[2023-07-04 11:52:05,589][00179] Num frames 6000...
[2023-07-04 11:52:05,709][00179] Num frames 6100...
[2023-07-04 11:52:05,832][00179] Num frames 6200...
[2023-07-04 11:52:05,964][00179] Num frames 6300...
[2023-07-04 11:52:06,088][00179] Num frames 6400...
[2023-07-04 11:52:06,217][00179] Num frames 6500...
[2023-07-04 11:52:06,345][00179] Num frames 6600...
[2023-07-04 11:52:06,454][00179] Avg episode rewards: #0: 13.819, true rewards: #0: 7.374
[2023-07-04 11:52:06,455][00179] Avg episode reward: 13.819, avg true_objective: 7.374
[2023-07-04 11:52:06,536][00179] Num frames 6700...
[2023-07-04 11:52:06,659][00179] Num frames 6800...
[2023-07-04 11:52:06,785][00179] Num frames 6900...
[2023-07-04 11:52:06,906][00179] Num frames 7000...
[2023-07-04 11:52:07,028][00179] Num frames 7100...
[2023-07-04 11:52:07,151][00179] Num frames 7200...
[2023-07-04 11:52:07,291][00179] Num frames 7300...
[2023-07-04 11:52:07,414][00179] Num frames 7400...
[2023-07-04 11:52:07,545][00179] Avg episode rewards: #0: 14.163, true rewards: #0: 7.463
[2023-07-04 11:52:07,547][00179] Avg episode reward: 14.163, avg true_objective: 7.463
[2023-07-04 11:52:52,111][00179] Replay video saved to /content/train_dir/default_experiment/replay.mp4!