[2023-02-25 19:18:07,534][14226] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-25 19:18:07,536][14226] Rollout worker 0 uses device cpu
[2023-02-25 19:18:07,537][14226] Rollout worker 1 uses device cpu
[2023-02-25 19:18:07,542][14226] Rollout worker 2 uses device cpu
[2023-02-25 19:18:07,543][14226] Rollout worker 3 uses device cpu
[2023-02-25 19:18:07,544][14226] Rollout worker 4 uses device cpu
[2023-02-25 19:18:07,545][14226] Rollout worker 5 uses device cpu
[2023-02-25 19:18:07,547][14226] Rollout worker 6 uses device cpu
[2023-02-25 19:18:07,549][14226] Rollout worker 7 uses device cpu
[2023-02-25 19:18:07,782][14226] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:07,787][14226] InferenceWorker_p0-w0: min num requests: 2
[2023-02-25 19:18:07,834][14226] Starting all processes...
[2023-02-25 19:18:07,838][14226] Starting process learner_proc0
[2023-02-25 19:18:07,939][14226] Starting all processes...
[2023-02-25 19:18:07,987][14226] Starting process inference_proc0-0
[2023-02-25 19:18:07,989][14226] Starting process rollout_proc0
[2023-02-25 19:18:07,989][14226] Starting process rollout_proc1
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc2
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc3
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc4
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc5
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc6
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc7
[2023-02-25 19:18:17,980][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:17,981][19851] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-25 19:18:18,158][19873] Worker 3 uses CPU cores [1]
[2023-02-25 19:18:18,346][19866] Worker 0 uses CPU cores [0]
[2023-02-25 19:18:18,548][19867] Worker 1 uses CPU cores [1]
[2023-02-25 19:18:18,981][19874] Worker 4 uses CPU cores [0]
[2023-02-25 19:18:19,012][19877] Worker 7 uses CPU cores [1]
[2023-02-25 19:18:19,048][19865] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:19,055][19865] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-25 19:18:19,062][19872] Worker 2 uses CPU cores [0]
[2023-02-25 19:18:19,091][19875] Worker 5 uses CPU cores [1]
[2023-02-25 19:18:19,121][19876] Worker 6 uses CPU cores [0]
[2023-02-25 19:18:19,172][19851] Num visible devices: 1
[2023-02-25 19:18:19,173][19865] Num visible devices: 1
[2023-02-25 19:18:19,188][19851] Starting seed is not provided
[2023-02-25 19:18:19,188][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:19,189][19851] Initializing actor-critic model on device cuda:0
[2023-02-25 19:18:19,189][19851] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:19,191][19851] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:19,203][19851] ConvEncoder: input_channels=3
[2023-02-25 19:18:19,483][19851] Conv encoder output size: 512
[2023-02-25 19:18:19,483][19851] Policy head output size: 512
[2023-02-25 19:18:19,530][19851] Created Actor Critic model with architecture:
[2023-02-25 19:18:19,530][19851] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
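The architecture dump above can be sanity-checked with a little arithmetic. The filter spec below (channels, kernel, stride per conv layer) is an assumption based on Sample Factory's default `convnet_simple` encoder; it is not printed in this log, but it reproduces the reported shapes: a (3, 72, 128) observation flattens to 2304 features, which the final `Linear -> ELU` maps to the reported encoder output size of 512.

```python
# Sanity-check the conv stack in the architecture dump.
# ASSUMPTION: Sample Factory's default "convnet_simple" filters
# [(32, 8, 4), (64, 4, 2), (128, 3, 2)] as (channels, kernel, stride);
# the exact filter spec is not shown in the log above.

def conv_out(size, kernel, stride):
    """Output length of a valid (no padding) convolution along one axis."""
    return (size - kernel) // stride + 1

h, w, c = 72, 128, 3  # from "RunningMeanStd input shape: (3, 72, 128)"
for channels, kernel, stride in [(32, 8, 4), (64, 4, 2), (128, 3, 2)]:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
    c = channels

flat = h * w * c  # features fed into the mlp_layers Linear -> ELU
print(h, w, c, flat)  # 3 6 128 2304
```

Under that assumption the flattened conv output is 2304, and the `mlp_layers` block (`Linear(2304, 512)`) accounts for the logged "Conv encoder output size: 512".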
[2023-02-25 19:18:26,966][19851] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-25 19:18:26,967][19851] No checkpoints found
[2023-02-25 19:18:26,967][19851] Did not load from checkpoint, starting from scratch!
[2023-02-25 19:18:26,968][19851] Initialized policy 0 weights for model version 0
[2023-02-25 19:18:26,971][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:26,977][19851] LearnerWorker_p0 finished initialization!
[2023-02-25 19:18:27,072][19865] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:27,074][19865] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:27,089][19865] ConvEncoder: input_channels=3
[2023-02-25 19:18:27,190][19865] Conv encoder output size: 512
[2023-02-25 19:18:27,190][19865] Policy head output size: 512
[2023-02-25 19:18:27,772][14226] Heartbeat connected on Batcher_0
[2023-02-25 19:18:27,782][14226] Heartbeat connected on LearnerWorker_p0
[2023-02-25 19:18:27,797][14226] Heartbeat connected on RolloutWorker_w0
[2023-02-25 19:18:27,804][14226] Heartbeat connected on RolloutWorker_w1
[2023-02-25 19:18:27,811][14226] Heartbeat connected on RolloutWorker_w2
[2023-02-25 19:18:27,816][14226] Heartbeat connected on RolloutWorker_w3
[2023-02-25 19:18:27,825][14226] Heartbeat connected on RolloutWorker_w4
[2023-02-25 19:18:27,828][14226] Heartbeat connected on RolloutWorker_w5
[2023-02-25 19:18:27,831][14226] Heartbeat connected on RolloutWorker_w6
[2023-02-25 19:18:27,837][14226] Heartbeat connected on RolloutWorker_w7
[2023-02-25 19:18:28,315][14226] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:29,503][14226] Inference worker 0-0 is ready!
[2023-02-25 19:18:29,510][14226] All inference workers are ready! Signal rollout workers to start!
[2023-02-25 19:18:29,517][14226] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-25 19:18:29,619][19877] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,630][19875] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,640][19867] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,680][19873] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,692][19866] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,698][19872] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,698][19876] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,710][19874] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:30,552][19872] Decorrelating experience for 0 frames...
[2023-02-25 19:18:30,554][19876] Decorrelating experience for 0 frames...
[2023-02-25 19:18:30,891][19872] Decorrelating experience for 32 frames...
[2023-02-25 19:18:31,103][19875] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,115][19867] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,112][19877] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,122][19873] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,306][19872] Decorrelating experience for 64 frames...
[2023-02-25 19:18:31,713][19872] Decorrelating experience for 96 frames...
[2023-02-25 19:18:32,285][19875] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,287][19877] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,319][19867] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,330][19873] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,763][19866] Decorrelating experience for 0 frames...
[2023-02-25 19:18:33,315][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:33,473][19866] Decorrelating experience for 32 frames...
[2023-02-25 19:18:33,540][19876] Decorrelating experience for 32 frames...
[2023-02-25 19:18:34,002][19874] Decorrelating experience for 0 frames...
[2023-02-25 19:18:34,600][19875] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,603][19877] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,677][19874] Decorrelating experience for 32 frames...
[2023-02-25 19:18:34,693][19873] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,772][19876] Decorrelating experience for 64 frames...
[2023-02-25 19:18:35,594][19874] Decorrelating experience for 64 frames...
[2023-02-25 19:18:35,651][19876] Decorrelating experience for 96 frames...
[2023-02-25 19:18:36,732][19867] Decorrelating experience for 64 frames...
[2023-02-25 19:18:36,870][19877] Decorrelating experience for 96 frames...
[2023-02-25 19:18:37,010][19873] Decorrelating experience for 96 frames...
[2023-02-25 19:18:38,315][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 1.6. Samples: 16. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:38,320][14226] Avg episode reward: [(0, '1.792')]
[2023-02-25 19:18:40,717][19867] Decorrelating experience for 96 frames...
[2023-02-25 19:18:40,719][19875] Decorrelating experience for 96 frames...
[2023-02-25 19:18:40,754][19874] Decorrelating experience for 96 frames...
[2023-02-25 19:18:43,318][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 119.0. Samples: 1786. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:43,321][14226] Avg episode reward: [(0, '2.936')]
[2023-02-25 19:18:43,816][19851] Signal inference workers to stop experience collection...
[2023-02-25 19:18:43,832][19865] InferenceWorker_p0-w0: stopping experience collection
[2023-02-25 19:18:44,047][19866] Decorrelating experience for 64 frames...
[2023-02-25 19:18:44,670][19866] Decorrelating experience for 96 frames...
[2023-02-25 19:18:45,338][19851] Signal inference workers to resume experience collection...
[2023-02-25 19:18:45,341][19865] InferenceWorker_p0-w0: resuming experience collection
[2023-02-25 19:18:48,317][14226] Fps is (10 sec: 1638.0, 60 sec: 819.1, 300 sec: 819.1). Total num frames: 16384. Throughput: 0: 204.3. Samples: 4086. Policy #0 lag: (min: 0.0, avg: 1.3, max: 3.0)
[2023-02-25 19:18:48,319][14226] Avg episode reward: [(0, '3.181')]
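The periodic "Fps is ..." status lines above follow a fixed format, so they can be pulled apart with a regular expression. This is a minimal parsing sketch; the field names (`fps_10s`, `total_frames`, and so on) are my own labels, and the pattern deliberately skips the warm-up lines where the FPS fields are `nan`.

```python
import re

# Parse one numeric "Fps is ..." status line from the training log.
# Field names here are descriptive labels chosen for this sketch.
STATUS = re.compile(
    r"Fps is \(10 sec: ([\d.]+), 60 sec: ([\d.]+), 300 sec: ([\d.]+)\)\. "
    r"Total num frames: (\d+)\. Throughput: 0: ([\d.]+)\. Samples: (\d+)\."
)

line = (
    "[2023-02-25 19:18:48,317][14226] Fps is (10 sec: 1638.0, 60 sec: 819.1, "
    "300 sec: 819.1). Total num frames: 16384. Throughput: 0: 204.3. Samples: 4086."
)

m = STATUS.search(line)
fps_10s, fps_60s, fps_300s, total_frames, throughput, samples = m.groups()
print(float(fps_10s), int(total_frames), int(samples))  # 1638.0 16384 4086
```

Applied across the whole log, this gives a time series of throughput and frame counts suitable for plotting.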
[2023-02-25 19:18:53,315][14226] Fps is (10 sec: 3687.7, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 36864. Throughput: 0: 292.2. Samples: 7304. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:18:53,322][14226] Avg episode reward: [(0, '3.828')]
[2023-02-25 19:18:53,896][19865] Updated weights for policy 0, policy_version 10 (0.0367)
[2023-02-25 19:18:58,315][14226] Fps is (10 sec: 3687.3, 60 sec: 1774.9, 300 sec: 1774.9). Total num frames: 53248. Throughput: 0: 444.7. Samples: 13342. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:18:58,318][14226] Avg episode reward: [(0, '4.401')]
[2023-02-25 19:19:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 1989.5, 300 sec: 1989.5). Total num frames: 69632. Throughput: 0: 500.6. Samples: 17520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:19:03,321][14226] Avg episode reward: [(0, '4.379')]
[2023-02-25 19:19:06,605][19865] Updated weights for policy 0, policy_version 20 (0.0026)
[2023-02-25 19:19:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 2150.4, 300 sec: 2150.4). Total num frames: 86016. Throughput: 0: 495.5. Samples: 19822. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:08,322][14226] Avg episode reward: [(0, '4.350')]
[2023-02-25 19:19:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 2366.6, 300 sec: 2366.6). Total num frames: 106496. Throughput: 0: 580.7. Samples: 26132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:13,323][14226] Avg episode reward: [(0, '4.350')]
[2023-02-25 19:19:13,376][19851] Saving new best policy, reward=4.350!
[2023-02-25 19:19:17,093][19865] Updated weights for policy 0, policy_version 30 (0.0021)
[2023-02-25 19:19:18,316][14226] Fps is (10 sec: 3686.2, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 122880. Throughput: 0: 701.1. Samples: 31548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:18,320][14226] Avg episode reward: [(0, '4.438')]
[2023-02-25 19:19:18,394][19851] Saving new best policy, reward=4.438!
[2023-02-25 19:19:23,317][14226] Fps is (10 sec: 3276.1, 60 sec: 2532.0, 300 sec: 2532.0). Total num frames: 139264. Throughput: 0: 745.0. Samples: 33544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:19:23,320][14226] Avg episode reward: [(0, '4.402')]
[2023-02-25 19:19:28,315][14226] Fps is (10 sec: 3277.0, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 155648. Throughput: 0: 813.5. Samples: 38392. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:28,323][14226] Avg episode reward: [(0, '4.139')]
[2023-02-25 19:19:29,571][19865] Updated weights for policy 0, policy_version 40 (0.0026)
[2023-02-25 19:19:33,315][14226] Fps is (10 sec: 4096.9, 60 sec: 3003.7, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 907.8. Samples: 44934. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:19:33,318][14226] Avg episode reward: [(0, '4.429')]
[2023-02-25 19:19:38,317][14226] Fps is (10 sec: 4095.1, 60 sec: 3276.7, 300 sec: 2808.6). Total num frames: 196608. Throughput: 0: 903.9. Samples: 47980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:38,322][14226] Avg episode reward: [(0, '4.523')]
[2023-02-25 19:19:38,326][19851] Saving new best policy, reward=4.523!
[2023-02-25 19:19:40,773][19865] Updated weights for policy 0, policy_version 50 (0.0022)
[2023-02-25 19:19:43,315][14226] Fps is (10 sec: 2867.1, 60 sec: 3481.8, 300 sec: 2785.3). Total num frames: 208896. Throughput: 0: 861.2. Samples: 52098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:43,322][14226] Avg episode reward: [(0, '4.520')]
[2023-02-25 19:19:48,315][14226] Fps is (10 sec: 3277.5, 60 sec: 3550.0, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 887.5. Samples: 57458. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:48,318][14226] Avg episode reward: [(0, '4.441')]
[2023-02-25 19:19:51,553][19865] Updated weights for policy 0, policy_version 60 (0.0013)
[2023-02-25 19:19:53,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 2939.5). Total num frames: 249856. Throughput: 0: 909.9. Samples: 60768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:53,323][14226] Avg episode reward: [(0, '4.447')]
[2023-02-25 19:19:58,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 2958.2). Total num frames: 266240. Throughput: 0: 900.1. Samples: 66636. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:58,318][14226] Avg episode reward: [(0, '4.563')]
[2023-02-25 19:19:58,321][19851] Saving new best policy, reward=4.563!
[2023-02-25 19:20:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2975.0). Total num frames: 282624. Throughput: 0: 870.8. Samples: 70734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:20:03,325][14226] Avg episode reward: [(0, '4.480')]
[2023-02-25 19:20:03,343][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth...
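The checkpoint filename saved above encodes two numbers: the policy version and the cumulative environment frame count. A small decoding sketch follows; note that the "one 4096-frame batch per policy update" reading is inferred from the numbers in this log (282624 / 69 = 4096), not something the log states explicitly.

```python
# Decode a Sample Factory checkpoint filename of the form
# checkpoint_<policy_version>_<env_frames>.pth.
# The frames-per-version interpretation is inferred from this log's numbers.
def parse_checkpoint(name):
    stem = name.rsplit(".", 1)[0]      # drop the ".pth" extension
    _, version, frames = stem.split("_")
    return int(version), int(frames)

version, frames = parse_checkpoint("checkpoint_000000069_282624.pth")
print(version, frames, frames // version)  # 69 282624 4096
```

The same ratio holds for the later checkpoints in this log (168 × 4096 = 688128, 272 × 4096 = 1114112), which is consistent with a constant training batch of 4096 frames per policy update.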
[2023-02-25 19:20:04,563][19865] Updated weights for policy 0, policy_version 70 (0.0044)
[2023-02-25 19:20:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2990.1). Total num frames: 299008. Throughput: 0: 879.2. Samples: 73106. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:08,317][14226] Avg episode reward: [(0, '4.543')]
[2023-02-25 19:20:13,316][14226] Fps is (10 sec: 3686.2, 60 sec: 3549.8, 300 sec: 3042.7). Total num frames: 319488. Throughput: 0: 915.1. Samples: 79570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:13,320][14226] Avg episode reward: [(0, '4.348')]
[2023-02-25 19:20:14,322][19865] Updated weights for policy 0, policy_version 80 (0.0016)
[2023-02-25 19:20:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3053.4). Total num frames: 335872. Throughput: 0: 887.9. Samples: 84888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:20:18,318][14226] Avg episode reward: [(0, '4.296')]
[2023-02-25 19:20:23,315][14226] Fps is (10 sec: 3277.0, 60 sec: 3550.0, 300 sec: 3063.1). Total num frames: 352256. Throughput: 0: 864.8. Samples: 86896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:23,323][14226] Avg episode reward: [(0, '4.383')]
[2023-02-25 19:20:27,503][19865] Updated weights for policy 0, policy_version 90 (0.0019)
[2023-02-25 19:20:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3072.0). Total num frames: 368640. Throughput: 0: 882.7. Samples: 91818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:28,331][14226] Avg episode reward: [(0, '4.467')]
[2023-02-25 19:20:33,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3145.7). Total num frames: 393216. Throughput: 0: 906.7. Samples: 98260. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:20:33,324][14226] Avg episode reward: [(0, '4.713')]
[2023-02-25 19:20:33,336][19851] Saving new best policy, reward=4.713!
[2023-02-25 19:20:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3119.3). Total num frames: 405504. Throughput: 0: 893.5. Samples: 100974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:38,322][14226] Avg episode reward: [(0, '4.881')]
[2023-02-25 19:20:38,332][19851] Saving new best policy, reward=4.881!
[2023-02-25 19:20:38,703][19865] Updated weights for policy 0, policy_version 100 (0.0017)
[2023-02-25 19:20:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3125.1). Total num frames: 421888. Throughput: 0: 852.2. Samples: 104986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:43,323][14226] Avg episode reward: [(0, '4.685')]
[2023-02-25 19:20:48,321][14226] Fps is (10 sec: 3274.7, 60 sec: 3481.2, 300 sec: 3130.4). Total num frames: 438272. Throughput: 0: 864.5. Samples: 109644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:20:48,324][14226] Avg episode reward: [(0, '4.658')]
[2023-02-25 19:20:52,425][19865] Updated weights for policy 0, policy_version 110 (0.0038)
[2023-02-25 19:20:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3107.3). Total num frames: 450560. Throughput: 0: 858.8. Samples: 111750. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:53,320][14226] Avg episode reward: [(0, '4.510')]
[2023-02-25 19:20:58,315][14226] Fps is (10 sec: 2459.2, 60 sec: 3276.8, 300 sec: 3085.7). Total num frames: 462848. Throughput: 0: 800.9. Samples: 115612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:58,319][14226] Avg episode reward: [(0, '4.457')]
[2023-02-25 19:21:03,316][14226] Fps is (10 sec: 2866.8, 60 sec: 3276.7, 300 sec: 3091.8). Total num frames: 479232. Throughput: 0: 774.5. Samples: 119740. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:21:03,325][14226] Avg episode reward: [(0, '4.546')]
[2023-02-25 19:21:06,587][19865] Updated weights for policy 0, policy_version 120 (0.0025)
[2023-02-25 19:21:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3097.6). Total num frames: 495616. Throughput: 0: 783.9. Samples: 122172. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:21:08,322][14226] Avg episode reward: [(0, '4.692')]
[2023-02-25 19:21:13,324][14226] Fps is (10 sec: 3683.6, 60 sec: 3276.3, 300 sec: 3127.7). Total num frames: 516096. Throughput: 0: 815.8. Samples: 128536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:13,327][14226] Avg episode reward: [(0, '4.668')]
[2023-02-25 19:21:17,189][19865] Updated weights for policy 0, policy_version 130 (0.0013)
[2023-02-25 19:21:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3132.2). Total num frames: 532480. Throughput: 0: 790.4. Samples: 133828. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:21:18,319][14226] Avg episode reward: [(0, '4.697')]
[2023-02-25 19:21:23,315][14226] Fps is (10 sec: 2869.6, 60 sec: 3208.5, 300 sec: 3113.0). Total num frames: 544768. Throughput: 0: 775.0. Samples: 135850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:23,319][14226] Avg episode reward: [(0, '4.660')]
[2023-02-25 19:21:28,315][14226] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 3140.3). Total num frames: 565248. Throughput: 0: 791.7. Samples: 140612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:28,318][14226] Avg episode reward: [(0, '4.568')]
[2023-02-25 19:21:29,687][19865] Updated weights for policy 0, policy_version 140 (0.0023)
[2023-02-25 19:21:33,315][14226] Fps is (10 sec: 4096.2, 60 sec: 3208.5, 300 sec: 3166.1). Total num frames: 585728. Throughput: 0: 833.5. Samples: 147146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:33,318][14226] Avg episode reward: [(0, '4.555')]
[2023-02-25 19:21:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3169.0). Total num frames: 602112. Throughput: 0: 850.8. Samples: 150036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:38,321][14226] Avg episode reward: [(0, '4.598')]
[2023-02-25 19:21:41,465][19865] Updated weights for policy 0, policy_version 150 (0.0015)
[2023-02-25 19:21:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3171.8). Total num frames: 618496. Throughput: 0: 854.6. Samples: 154070. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:21:43,323][14226] Avg episode reward: [(0, '4.711')]
[2023-02-25 19:21:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3277.2, 300 sec: 3174.4). Total num frames: 634880. Throughput: 0: 881.4. Samples: 159402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:48,318][14226] Avg episode reward: [(0, '4.881')]
[2023-02-25 19:21:52,130][19865] Updated weights for policy 0, policy_version 160 (0.0015)
[2023-02-25 19:21:53,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3216.9). Total num frames: 659456. Throughput: 0: 898.2. Samples: 162590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:53,318][14226] Avg episode reward: [(0, '4.745')]
[2023-02-25 19:21:58,319][14226] Fps is (10 sec: 3684.7, 60 sec: 3481.3, 300 sec: 3198.7). Total num frames: 671744. Throughput: 0: 880.6. Samples: 168160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:58,322][14226] Avg episode reward: [(0, '4.497')]
[2023-02-25 19:22:03,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.7, 300 sec: 3200.6). Total num frames: 688128. Throughput: 0: 854.0. Samples: 172258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:03,323][14226] Avg episode reward: [(0, '4.471')]
[2023-02-25 19:22:03,342][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000168_688128.pth...
[2023-02-25 19:22:05,269][19865] Updated weights for policy 0, policy_version 170 (0.0015)
[2023-02-25 19:22:08,315][14226] Fps is (10 sec: 3688.1, 60 sec: 3549.9, 300 sec: 3220.9). Total num frames: 708608. Throughput: 0: 865.9. Samples: 174814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:08,318][14226] Avg episode reward: [(0, '4.460')]
[2023-02-25 19:22:13,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3550.4, 300 sec: 3240.4). Total num frames: 729088. Throughput: 0: 906.2. Samples: 181390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:13,322][14226] Avg episode reward: [(0, '4.657')]
[2023-02-25 19:22:15,300][19865] Updated weights for policy 0, policy_version 180 (0.0017)
[2023-02-25 19:22:18,316][14226] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3241.2). Total num frames: 745472. Throughput: 0: 874.2. Samples: 186484. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:18,320][14226] Avg episode reward: [(0, '4.803')]
[2023-02-25 19:22:23,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3224.5). Total num frames: 757760. Throughput: 0: 854.8. Samples: 188500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:23,318][14226] Avg episode reward: [(0, '4.814')]
[2023-02-25 19:22:27,865][19865] Updated weights for policy 0, policy_version 190 (0.0034)
[2023-02-25 19:22:28,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3549.9, 300 sec: 3242.7). Total num frames: 778240. Throughput: 0: 880.7. Samples: 193700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:28,318][14226] Avg episode reward: [(0, '4.906')]
[2023-02-25 19:22:28,321][19851] Saving new best policy, reward=4.906!
[2023-02-25 19:22:33,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3260.1). Total num frames: 798720. Throughput: 0: 906.2. Samples: 200182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:33,320][14226] Avg episode reward: [(0, '5.080')]
[2023-02-25 19:22:33,431][19851] Saving new best policy, reward=5.080!
[2023-02-25 19:22:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3260.4). Total num frames: 815104. Throughput: 0: 892.2. Samples: 202740. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:22:38,317][14226] Avg episode reward: [(0, '5.052')]
[2023-02-25 19:22:39,268][19865] Updated weights for policy 0, policy_version 200 (0.0018)
[2023-02-25 19:22:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3244.7). Total num frames: 827392. Throughput: 0: 860.5. Samples: 206880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:43,323][14226] Avg episode reward: [(0, '5.128')]
[2023-02-25 19:22:43,340][19851] Saving new best policy, reward=5.128!
[2023-02-25 19:22:48,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3549.8, 300 sec: 3261.0). Total num frames: 847872. Throughput: 0: 896.3. Samples: 212594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:48,318][14226] Avg episode reward: [(0, '5.166')]
[2023-02-25 19:22:48,324][19851] Saving new best policy, reward=5.166!
[2023-02-25 19:22:50,370][19865] Updated weights for policy 0, policy_version 210 (0.0030)
[2023-02-25 19:22:53,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3292.3). Total num frames: 872448. Throughput: 0: 908.7. Samples: 215706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:53,320][14226] Avg episode reward: [(0, '5.270')]
[2023-02-25 19:22:53,333][19851] Saving new best policy, reward=5.270!
[2023-02-25 19:22:58,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3550.1, 300 sec: 3276.8). Total num frames: 884736. Throughput: 0: 879.1. Samples: 220950. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:58,318][14226] Avg episode reward: [(0, '5.427')]
[2023-02-25 19:22:58,322][19851] Saving new best policy, reward=5.427!
[2023-02-25 19:23:03,320][14226] Fps is (10 sec: 2456.3, 60 sec: 3481.3, 300 sec: 3261.8). Total num frames: 897024. Throughput: 0: 855.3. Samples: 224974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:03,322][14226] Avg episode reward: [(0, '5.288')]
[2023-02-25 19:23:03,648][19865] Updated weights for policy 0, policy_version 220 (0.0015)
[2023-02-25 19:23:08,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 917504. Throughput: 0: 875.9. Samples: 227914. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:08,318][14226] Avg episode reward: [(0, '5.646')]
[2023-02-25 19:23:08,322][19851] Saving new best policy, reward=5.646!
[2023-02-25 19:23:13,274][19865] Updated weights for policy 0, policy_version 230 (0.0013)
[2023-02-25 19:23:13,315][14226] Fps is (10 sec: 4507.9, 60 sec: 3549.9, 300 sec: 3305.5). Total num frames: 942080. Throughput: 0: 902.0. Samples: 234292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:13,325][14226] Avg episode reward: [(0, '5.589')]
[2023-02-25 19:23:18,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3481.7, 300 sec: 3290.9). Total num frames: 954368. Throughput: 0: 864.4. Samples: 239080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:18,319][14226] Avg episode reward: [(0, '5.338')]
[2023-02-25 19:23:23,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 966656. Throughput: 0: 851.6. Samples: 241064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:23,318][14226] Avg episode reward: [(0, '5.171')]
[2023-02-25 19:23:26,260][19865] Updated weights for policy 0, policy_version 240 (0.0027)
[2023-02-25 19:23:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3360.1). Total num frames: 991232. Throughput: 0: 883.4. Samples: 246634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:28,318][14226] Avg episode reward: [(0, '5.395')]
[2023-02-25 19:23:33,318][14226] Fps is (10 sec: 4504.0, 60 sec: 3549.7, 300 sec: 3429.5). Total num frames: 1011712. Throughput: 0: 903.2. Samples: 253240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:23:33,321][14226] Avg episode reward: [(0, '5.912')]
[2023-02-25 19:23:33,337][19851] Saving new best policy, reward=5.912!
[2023-02-25 19:23:36,899][19865] Updated weights for policy 0, policy_version 250 (0.0013)
[2023-02-25 19:23:38,318][14226] Fps is (10 sec: 3276.0, 60 sec: 3481.4, 300 sec: 3471.2). Total num frames: 1024000. Throughput: 0: 884.1. Samples: 255494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:38,320][14226] Avg episode reward: [(0, '6.284')]
[2023-02-25 19:23:38,335][19851] Saving new best policy, reward=6.284!
[2023-02-25 19:23:43,315][14226] Fps is (10 sec: 2868.1, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1040384. Throughput: 0: 861.0. Samples: 259696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:23:43,318][14226] Avg episode reward: [(0, '6.602')]
[2023-02-25 19:23:43,327][19851] Saving new best policy, reward=6.602!
[2023-02-25 19:23:48,315][14226] Fps is (10 sec: 3687.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1060864. Throughput: 0: 905.8. Samples: 265732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:48,317][14226] Avg episode reward: [(0, '6.752')]
[2023-02-25 19:23:48,325][19851] Saving new best policy, reward=6.752!
[2023-02-25 19:23:48,720][19865] Updated weights for policy 0, policy_version 260 (0.0013)
[2023-02-25 19:23:53,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1081344. Throughput: 0: 909.8. Samples: 268856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:53,324][14226] Avg episode reward: [(0, '6.650')]
[2023-02-25 19:23:58,317][14226] Fps is (10 sec: 3685.8, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 1097728. Throughput: 0: 883.3. Samples: 274042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:58,323][14226] Avg episode reward: [(0, '6.485')]
[2023-02-25 19:24:01,036][19865] Updated weights for policy 0, policy_version 270 (0.0028)
[2023-02-25 19:24:03,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3550.2, 300 sec: 3471.2). Total num frames: 1110016. Throughput: 0: 869.6. Samples: 278210. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:24:03,324][14226] Avg episode reward: [(0, '6.814')]
[2023-02-25 19:24:03,415][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth...
[2023-02-25 19:24:03,532][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth
[2023-02-25 19:24:03,550][19851] Saving new best policy, reward=6.814!
[2023-02-25 19:24:08,315][14226] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1130496. Throughput: 0: 893.4. Samples: 281268. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:24:08,320][14226] Avg episode reward: [(0, '7.101')]
[2023-02-25 19:24:08,326][19851] Saving new best policy, reward=7.101!
[2023-02-25 19:24:11,144][19865] Updated weights for policy 0, policy_version 280 (0.0020)
[2023-02-25 19:24:13,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1155072. Throughput: 0: 910.8. Samples: 287620. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:24:13,317][14226] Avg episode reward: [(0, '7.348')]
[2023-02-25 19:24:13,332][19851] Saving new best policy, reward=7.348!
[2023-02-25 19:24:18,316][14226] Fps is (10 sec: 3685.8, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 1167360. Throughput: 0: 866.6. Samples: 292234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:18,321][14226] Avg episode reward: [(0, '7.290')]
[2023-02-25 19:24:23,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1183744. Throughput: 0: 862.0. Samples: 294282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:24:23,321][14226] Avg episode reward: [(0, '7.303')]
[2023-02-25 19:24:24,133][19865] Updated weights for policy 0, policy_version 290 (0.0021)
[2023-02-25 19:24:28,315][14226] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1204224. Throughput: 0: 898.5. Samples: 300128. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:24:28,320][14226] Avg episode reward: [(0, '7.302')]
[2023-02-25 19:24:33,315][14226] Fps is (10 sec: 4095.9, 60 sec: 3550.1, 300 sec: 3485.1). Total num frames: 1224704. Throughput: 0: 910.7. Samples: 306712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:33,320][14226] Avg episode reward: [(0, '7.641')]
[2023-02-25 19:24:33,340][19851] Saving new best policy, reward=7.641!
[2023-02-25 19:24:34,070][19865] Updated weights for policy 0, policy_version 300 (0.0033)
[2023-02-25 19:24:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3485.1). Total num frames: 1236992. Throughput: 0: 885.3. Samples: 308696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:24:38,322][14226] Avg episode reward: [(0, '7.532')]
[2023-02-25 19:24:43,315][14226] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1253376. Throughput: 0: 861.9. Samples: 312824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:24:43,322][14226] Avg episode reward: [(0, '7.387')]
[2023-02-25 19:24:46,358][19865] Updated weights for policy 0, policy_version 310 (0.0037)
[2023-02-25 19:24:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1277952. Throughput: 0: 913.9. Samples: 319336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:48,318][14226] Avg episode reward: [(0, '6.834')]
[2023-02-25 19:24:53,322][14226] Fps is (10 sec: 4502.6, 60 sec: 3617.7, 300 sec: 3498.9). Total num frames: 1298432. Throughput: 0: 918.1. Samples: 322588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:53,323][14226] Avg episode reward: [(0, '6.901')]
[2023-02-25 19:24:57,731][19865] Updated weights for policy 0, policy_version 320 (0.0022)
[2023-02-25 19:24:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3485.1). Total num frames: 1310720. Throughput: 0: 884.2. Samples: 327410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:58,317][14226] Avg episode reward: [(0, '7.164')]
[2023-02-25 19:25:03,315][14226] Fps is (10 sec: 2869.1, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1327104. Throughput: 0: 884.1. Samples: 332018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:03,317][14226] Avg episode reward: [(0, '7.855')]
[2023-02-25 19:25:03,334][19851] Saving new best policy, reward=7.855!
[2023-02-25 19:25:08,295][19865] Updated weights for policy 0, policy_version 330 (0.0021)
[2023-02-25 19:25:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 1351680. Throughput: 0: 913.4. Samples: 335386. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:08,318][14226] Avg episode reward: [(0, '8.127')]
[2023-02-25 19:25:08,328][19851] Saving new best policy, reward=8.127!
[2023-02-25 19:25:13,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1368064. Throughput: 0: 929.8. Samples: 341970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:13,321][14226] Avg episode reward: [(0, '9.417')]
[2023-02-25 19:25:13,332][19851] Saving new best policy, reward=9.417!
[2023-02-25 19:25:18,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3499.0). Total num frames: 1384448. Throughput: 0: 881.5. Samples: 346378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:18,323][14226] Avg episode reward: [(0, '9.889')]
[2023-02-25 19:25:18,329][19851] Saving new best policy, reward=9.889!
[2023-02-25 19:25:21,017][19865] Updated weights for policy 0, policy_version 340 (0.0018)
[2023-02-25 19:25:23,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3498.9). Total num frames: 1400832. Throughput: 0: 882.8. Samples: 348422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:23,319][14226] Avg episode reward: [(0, '10.379')]
[2023-02-25 19:25:23,340][19851] Saving new best policy, reward=10.379!
[2023-02-25 19:25:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1421312. Throughput: 0: 929.0. Samples: 354628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:25:28,323][14226] Avg episode reward: [(0, '10.857')]
[2023-02-25 19:25:28,329][19851] Saving new best policy, reward=10.857!
[2023-02-25 19:25:30,421][19865] Updated weights for policy 0, policy_version 350 (0.0013)
[2023-02-25 19:25:33,315][14226] Fps is (10 sec: 4096.3, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 1441792. Throughput: 0: 924.9. Samples: 360956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:33,320][14226] Avg episode reward: [(0, '10.658')]
[2023-02-25 19:25:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 1458176. Throughput: 0: 900.2. Samples: 363092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:38,318][14226] Avg episode reward: [(0, '10.712')]
[2023-02-25 19:25:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 1470464. Throughput: 0: 893.9. Samples: 367636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:43,318][14226] Avg episode reward: [(0, '10.512')]
[2023-02-25 19:25:43,550][19865] Updated weights for policy 0, policy_version 360 (0.0033)
[2023-02-25 19:25:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 1486848. Throughput: 0: 891.8. Samples: 372148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:48,318][14226] Avg episode reward: [(0, '11.118')]
[2023-02-25 19:25:48,325][19851] Saving new best policy, reward=11.118!
[2023-02-25 19:25:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.4, 300 sec: 3512.8). Total num frames: 1499136. Throughput: 0: 862.4. Samples: 374194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:53,322][14226] Avg episode reward: [(0, '11.432')]
[2023-02-25 19:25:53,335][19851] Saving new best policy, reward=11.432!
[2023-02-25 19:25:58,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 1511424. Throughput: 0: 805.6. Samples: 378220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:58,317][14226] Avg episode reward: [(0, '11.527')]
[2023-02-25 19:25:58,402][19851] Saving new best policy, reward=11.527!
[2023-02-25 19:25:58,416][19865] Updated weights for policy 0, policy_version 370 (0.0037)
[2023-02-25 19:26:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 1531904. Throughput: 0: 816.9. Samples: 383140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:03,318][14226] Avg episode reward: [(0, '12.076')]
[2023-02-25 19:26:03,330][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000374_1531904.pth...
[2023-02-25 19:26:03,453][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000168_688128.pth
[2023-02-25 19:26:03,462][19851] Saving new best policy, reward=12.076!
[2023-02-25 19:26:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3512.9). Total num frames: 1552384. Throughput: 0: 842.9. Samples: 386352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:08,318][14226] Avg episode reward: [(0, '12.471')]
[2023-02-25 19:26:08,323][19851] Saving new best policy, reward=12.471!
[2023-02-25 19:26:08,650][19865] Updated weights for policy 0, policy_version 380 (0.0027)
[2023-02-25 19:26:13,321][14226] Fps is (10 sec: 4093.7, 60 sec: 3413.0, 300 sec: 3526.7). Total num frames: 1572864. Throughput: 0: 846.5. Samples: 392724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:26:13,326][14226] Avg episode reward: [(0, '12.906')]
[2023-02-25 19:26:13,344][19851] Saving new best policy, reward=12.906!
[2023-02-25 19:26:18,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 1585152. Throughput: 0: 799.5. Samples: 396934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:26:18,320][14226] Avg episode reward: [(0, '13.514')]
[2023-02-25 19:26:18,326][19851] Saving new best policy, reward=13.514!
[2023-02-25 19:26:21,565][19865] Updated weights for policy 0, policy_version 390 (0.0029)
[2023-02-25 19:26:23,315][14226] Fps is (10 sec: 2868.8, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 1601536. Throughput: 0: 797.9. Samples: 398998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:23,324][14226] Avg episode reward: [(0, '14.463')]
[2023-02-25 19:26:23,388][19851] Saving new best policy, reward=14.463!
[2023-02-25 19:26:28,316][14226] Fps is (10 sec: 4095.7, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 1626112. Throughput: 0: 841.9. Samples: 405520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:28,319][14226] Avg episode reward: [(0, '13.096')]
[2023-02-25 19:26:30,593][19865] Updated weights for policy 0, policy_version 400 (0.0022)
[2023-02-25 19:26:33,317][14226] Fps is (10 sec: 4095.3, 60 sec: 3345.0, 300 sec: 3526.7). Total num frames: 1642496. Throughput: 0: 875.6. Samples: 411550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:33,322][14226] Avg episode reward: [(0, '13.859')]
[2023-02-25 19:26:38,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 1658880. Throughput: 0: 877.4. Samples: 413678. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:38,318][14226] Avg episode reward: [(0, '13.539')]
[2023-02-25 19:26:43,051][19865] Updated weights for policy 0, policy_version 410 (0.0018)
[2023-02-25 19:26:43,315][14226] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1679360. Throughput: 0: 895.9. Samples: 418534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:26:43,317][14226] Avg episode reward: [(0, '13.623')]
[2023-02-25 19:26:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1699840. Throughput: 0: 936.2. Samples: 425268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:48,324][14226] Avg episode reward: [(0, '15.830')]
[2023-02-25 19:26:48,328][19851] Saving new best policy, reward=15.830!
[2023-02-25 19:26:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3540.7). Total num frames: 1716224. Throughput: 0: 934.4. Samples: 428402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:53,327][14226] Avg episode reward: [(0, '16.726')]
[2023-02-25 19:26:53,340][19851] Saving new best policy, reward=16.726!
[2023-02-25 19:26:53,771][19865] Updated weights for policy 0, policy_version 420 (0.0015)
[2023-02-25 19:26:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1732608. Throughput: 0: 884.3. Samples: 432512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:26:58,322][14226] Avg episode reward: [(0, '17.417')]
[2023-02-25 19:26:58,330][19851] Saving new best policy, reward=17.417!
[2023-02-25 19:27:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1753088. Throughput: 0: 908.5. Samples: 437816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:03,318][14226] Avg episode reward: [(0, '17.333')]
[2023-02-25 19:27:05,135][19865] Updated weights for policy 0, policy_version 430 (0.0017)
[2023-02-25 19:27:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1773568. Throughput: 0: 938.8. Samples: 441246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:27:08,318][14226] Avg episode reward: [(0, '17.300')]
[2023-02-25 19:27:13,318][14226] Fps is (10 sec: 3685.4, 60 sec: 3618.3, 300 sec: 3540.6). Total num frames: 1789952. Throughput: 0: 925.6. Samples: 447172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:13,321][14226] Avg episode reward: [(0, '17.437')]
[2023-02-25 19:27:13,333][19851] Saving new best policy, reward=17.437!
[2023-02-25 19:27:17,270][19865] Updated weights for policy 0, policy_version 440 (0.0017)
[2023-02-25 19:27:18,316][14226] Fps is (10 sec: 2867.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1802240. Throughput: 0: 884.6. Samples: 451354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:18,319][14226] Avg episode reward: [(0, '15.806')]
[2023-02-25 19:27:23,315][14226] Fps is (10 sec: 3277.6, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1822720. Throughput: 0: 890.6. Samples: 453756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:27:23,325][14226] Avg episode reward: [(0, '15.729')]
[2023-02-25 19:27:27,769][19865] Updated weights for policy 0, policy_version 450 (0.0013)
[2023-02-25 19:27:28,315][14226] Fps is (10 sec: 4096.3, 60 sec: 3618.2, 300 sec: 3540.6). Total num frames: 1843200. Throughput: 0: 922.1. Samples: 460030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:27:28,326][14226] Avg episode reward: [(0, '15.815')]
[2023-02-25 19:27:33,317][14226] Fps is (10 sec: 3685.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1859584. Throughput: 0: 891.2. Samples: 465372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:33,321][14226] Avg episode reward: [(0, '15.677')]
[2023-02-25 19:27:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1871872. Throughput: 0: 866.0. Samples: 467372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:38,321][14226] Avg episode reward: [(0, '15.586')]
[2023-02-25 19:27:40,867][19865] Updated weights for policy 0, policy_version 460 (0.0021)
[2023-02-25 19:27:43,315][14226] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1892352. Throughput: 0: 882.2. Samples: 472210. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:43,318][14226] Avg episode reward: [(0, '16.347')]
[2023-02-25 19:27:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1912832. Throughput: 0: 904.7. Samples: 478526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:27:48,326][14226] Avg episode reward: [(0, '17.477')]
[2023-02-25 19:27:48,328][19851] Saving new best policy, reward=17.477!
[2023-02-25 19:27:51,636][19865] Updated weights for policy 0, policy_version 470 (0.0025)
[2023-02-25 19:27:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1929216. Throughput: 0: 888.0. Samples: 481206. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:27:53,323][14226] Avg episode reward: [(0, '18.722')]
[2023-02-25 19:27:53,342][19851] Saving new best policy, reward=18.722!
[2023-02-25 19:27:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3540.7). Total num frames: 1941504. Throughput: 0: 843.3. Samples: 485120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:58,323][14226] Avg episode reward: [(0, '19.037')]
[2023-02-25 19:27:58,327][19851] Saving new best policy, reward=19.037!
[2023-02-25 19:28:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1961984. Throughput: 0: 866.9. Samples: 490362. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:03,317][14226] Avg episode reward: [(0, '18.261')]
[2023-02-25 19:28:03,330][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth...
[2023-02-25 19:28:03,467][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth
[2023-02-25 19:28:04,134][19865] Updated weights for policy 0, policy_version 480 (0.0014)
[2023-02-25 19:28:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1982464. Throughput: 0: 881.6. Samples: 493430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:08,318][14226] Avg episode reward: [(0, '19.426')]
[2023-02-25 19:28:08,326][19851] Saving new best policy, reward=19.426!
[2023-02-25 19:28:13,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3526.7). Total num frames: 1994752. Throughput: 0: 860.7. Samples: 498760. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:13,322][14226] Avg episode reward: [(0, '19.911')]
[2023-02-25 19:28:13,330][19851] Saving new best policy, reward=19.911!
[2023-02-25 19:28:16,968][19865] Updated weights for policy 0, policy_version 490 (0.0027)
[2023-02-25 19:28:18,315][14226] Fps is (10 sec: 2457.5, 60 sec: 3413.4, 300 sec: 3526.7). Total num frames: 2007040. Throughput: 0: 828.9. Samples: 502672. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:18,318][14226] Avg episode reward: [(0, '19.128')]
[2023-02-25 19:28:23,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2027520. Throughput: 0: 844.0. Samples: 505352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:23,323][14226] Avg episode reward: [(0, '17.767')]
[2023-02-25 19:28:27,496][19865] Updated weights for policy 0, policy_version 500 (0.0031)
[2023-02-25 19:28:28,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 2048000. Throughput: 0: 875.2. Samples: 511592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:28,323][14226] Avg episode reward: [(0, '19.024')]
[2023-02-25 19:28:33,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3526.8). Total num frames: 2064384. Throughput: 0: 843.1. Samples: 516466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:33,321][14226] Avg episode reward: [(0, '19.443')]
[2023-02-25 19:28:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2076672. Throughput: 0: 828.1. Samples: 518472. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:38,326][14226] Avg episode reward: [(0, '18.573')]
[2023-02-25 19:28:40,596][19865] Updated weights for policy 0, policy_version 510 (0.0023)
[2023-02-25 19:28:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2097152. Throughput: 0: 856.9. Samples: 523680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:43,322][14226] Avg episode reward: [(0, '20.181')]
[2023-02-25 19:28:43,339][19851] Saving new best policy, reward=20.181!
[2023-02-25 19:28:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2117632. Throughput: 0: 882.4. Samples: 530072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:48,323][14226] Avg episode reward: [(0, '21.558')]
[2023-02-25 19:28:48,329][19851] Saving new best policy, reward=21.558!
[2023-02-25 19:28:51,388][19865] Updated weights for policy 0, policy_version 520 (0.0012)
[2023-02-25 19:28:53,316][14226] Fps is (10 sec: 3686.0, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2134016. Throughput: 0: 865.3. Samples: 532370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:53,321][14226] Avg episode reward: [(0, '22.117')]
[2023-02-25 19:28:53,334][19851] Saving new best policy, reward=22.117!
[2023-02-25 19:28:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2146304. Throughput: 0: 833.4. Samples: 536264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:58,318][14226] Avg episode reward: [(0, '21.822')]
[2023-02-25 19:29:03,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2166784. Throughput: 0: 877.2. Samples: 542148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:29:03,318][14226] Avg episode reward: [(0, '22.573')]
[2023-02-25 19:29:03,328][19851] Saving new best policy, reward=22.573!
[2023-02-25 19:29:03,840][19865] Updated weights for policy 0, policy_version 530 (0.0014)
[2023-02-25 19:29:08,316][14226] Fps is (10 sec: 4095.4, 60 sec: 3413.2, 300 sec: 3498.9). Total num frames: 2187264. Throughput: 0: 885.4. Samples: 545196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:29:08,322][14226] Avg episode reward: [(0, '21.887')]
[2023-02-25 19:29:13,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 2199552. Throughput: 0: 851.0. Samples: 549888. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:29:13,323][14226] Avg episode reward: [(0, '20.853')]
[2023-02-25 19:29:16,885][19865] Updated weights for policy 0, policy_version 540 (0.0016)
[2023-02-25 19:29:18,317][14226] Fps is (10 sec: 2867.0, 60 sec: 3481.5, 300 sec: 3498.9). Total num frames: 2215936. Throughput: 0: 832.8. Samples: 553946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:18,320][14226] Avg episode reward: [(0, '20.671')]
[2023-02-25 19:29:23,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 2236416. Throughput: 0: 857.2. Samples: 557048. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-02-25 19:29:23,318][14226] Avg episode reward: [(0, '20.370')]
[2023-02-25 19:29:27,194][19865] Updated weights for policy 0, policy_version 550 (0.0020)
[2023-02-25 19:29:28,315][14226] Fps is (10 sec: 3687.3, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2252800. Throughput: 0: 878.7. Samples: 563220. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:28,318][14226] Avg episode reward: [(0, '19.554')]
[2023-02-25 19:29:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 2269184. Throughput: 0: 831.7. Samples: 567500. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:29:33,321][14226] Avg episode reward: [(0, '19.293')]
[2023-02-25 19:29:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2281472. Throughput: 0: 822.5. Samples: 569382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:38,318][14226] Avg episode reward: [(0, '20.078')]
[2023-02-25 19:29:40,343][19865] Updated weights for policy 0, policy_version 560 (0.0024)
[2023-02-25 19:29:43,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2306048. Throughput: 0: 867.2. Samples: 575286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:43,325][14226] Avg episode reward: [(0, '19.684')]
[2023-02-25 19:29:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3471.3). Total num frames: 2322432. Throughput: 0: 870.8. Samples: 581334. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:48,318][14226] Avg episode reward: [(0, '20.644')]
[2023-02-25 19:29:51,933][19865] Updated weights for policy 0, policy_version 570 (0.0016)
[2023-02-25 19:29:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2334720. Throughput: 0: 846.7. Samples: 583296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:53,321][14226] Avg episode reward: [(0, '20.446')]
[2023-02-25 19:29:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 2351104. Throughput: 0: 830.3. Samples: 587252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:58,318][14226] Avg episode reward: [(0, '22.169')]
[2023-02-25 19:30:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2371584. Throughput: 0: 879.8. Samples: 593534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:03,318][14226] Avg episode reward: [(0, '21.614')]
[2023-02-25 19:30:03,399][19865] Updated weights for policy 0, policy_version 580 (0.0027)
[2023-02-25 19:30:03,405][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000580_2375680.pth...
[2023-02-25 19:30:03,529][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000374_1531904.pth
[2023-02-25 19:30:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 2392064. Throughput: 0: 877.5. Samples: 596536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:08,325][14226] Avg episode reward: [(0, '22.972')]
[2023-02-25 19:30:08,332][19851] Saving new best policy, reward=22.972!
[2023-02-25 19:30:13,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2404352. Throughput: 0: 832.5. Samples: 600682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:13,318][14226] Avg episode reward: [(0, '21.815')]
[2023-02-25 19:30:16,965][19865] Updated weights for policy 0, policy_version 590 (0.0029)
[2023-02-25 19:30:18,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.5, 300 sec: 3457.3). Total num frames: 2420736. Throughput: 0: 840.2. Samples: 605310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:18,318][14226] Avg episode reward: [(0, '21.502')]
[2023-02-25 19:30:23,315][14226] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2441216. Throughput: 0: 866.4. Samples: 608372. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:30:23,317][14226] Avg episode reward: [(0, '23.034')]
[2023-02-25 19:30:23,341][19851] Saving new best policy, reward=23.034!
[2023-02-25 19:30:27,343][19865] Updated weights for policy 0, policy_version 600 (0.0036)
[2023-02-25 19:30:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2457600. Throughput: 0: 868.0. Samples: 614348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:28,318][14226] Avg episode reward: [(0, '22.007')]
[2023-02-25 19:30:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 2469888. Throughput: 0: 824.2. Samples: 618422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:33,318][14226] Avg episode reward: [(0, '22.557')]
[2023-02-25 19:30:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2490368. Throughput: 0: 825.8. Samples: 620456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:38,318][14226] Avg episode reward: [(0, '22.130')]
[2023-02-25 19:30:41,184][19865] Updated weights for policy 0, policy_version 610 (0.0022)
[2023-02-25 19:30:43,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 2502656. Throughput: 0: 847.1. Samples: 625370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:43,318][14226] Avg episode reward: [(0, '23.367')]
[2023-02-25 19:30:43,328][19851] Saving new best policy, reward=23.367!
[2023-02-25 19:30:48,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3443.4). Total num frames: 2514944. Throughput: 0: 789.7. Samples: 629070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:48,319][14226] Avg episode reward: [(0, '23.750')]
[2023-02-25 19:30:48,324][19851] Saving new best policy, reward=23.750!
[2023-02-25 19:30:53,315][14226] Fps is (10 sec: 2457.7, 60 sec: 3208.5, 300 sec: 3443.4). Total num frames: 2527232. Throughput: 0: 766.0. Samples: 631006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:53,323][14226] Avg episode reward: [(0, '22.551')]
[2023-02-25 19:30:56,738][19865] Updated weights for policy 0, policy_version 620 (0.0026)
[2023-02-25 19:30:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2543616. Throughput: 0: 766.2. Samples: 635162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:58,324][14226] Avg episode reward: [(0, '21.819')]
[2023-02-25 19:31:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2564096. Throughput: 0: 803.5. Samples: 641468. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:03,318][14226] Avg episode reward: [(0, '21.845')]
[2023-02-25 19:31:06,737][19865] Updated weights for policy 0, policy_version 630 (0.0030)
[2023-02-25 19:31:08,315][14226] Fps is (10 sec: 3686.2, 60 sec: 3140.2, 300 sec: 3415.7). Total num frames: 2580480. Throughput: 0: 805.6. Samples: 644626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:08,318][14226] Avg episode reward: [(0, '22.819')]
[2023-02-25 19:31:13,316][14226] Fps is (10 sec: 3276.3, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2596864. Throughput: 0: 764.7. Samples: 648760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:13,322][14226] Avg episode reward: [(0, '22.386')]
[2023-02-25 19:31:18,315][14226] Fps is (10 sec: 3277.0, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2613248. Throughput: 0: 776.7. Samples: 653374. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:18,324][14226] Avg episode reward: [(0, '22.061')]
[2023-02-25 19:31:20,082][19865] Updated weights for policy 0, policy_version 640 (0.0025)
[2023-02-25 19:31:23,315][14226] Fps is (10 sec: 3686.9, 60 sec: 3208.5, 300 sec: 3415.7). Total num frames: 2633728. Throughput: 0: 800.1. Samples: 656460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:23,318][14226] Avg episode reward: [(0, '23.373')]
[2023-02-25 19:31:28,319][14226] Fps is (10 sec: 3685.1, 60 sec: 3208.3, 300 sec: 3415.6). Total num frames: 2650112. Throughput: 0: 820.8. Samples: 662308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:28,321][14226] Avg episode reward: [(0, '22.192')]
[2023-02-25 19:31:32,371][19865] Updated weights for policy 0, policy_version 650 (0.0035)
[2023-02-25 19:31:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 2662400. Throughput: 0: 824.4. Samples: 666170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:31:33,319][14226] Avg episode reward: [(0, '21.930')]
[2023-02-25 19:31:38,315][14226] Fps is (10 sec: 2868.2, 60 sec: 3140.3, 300 sec: 3387.9). Total num frames: 2678784. Throughput: 0: 829.5. Samples: 668334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:38,322][14226] Avg episode reward: [(0, '20.644')]
[2023-02-25 19:31:43,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2699264. Throughput: 0: 875.9. Samples: 674576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:31:43,317][14226] Avg episode reward: [(0, '21.910')]
[2023-02-25 19:31:43,425][19865] Updated weights for policy 0, policy_version 660 (0.0013)
[2023-02-25 19:31:48,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 2715648. Throughput: 0: 850.8. Samples: 679752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:31:48,319][14226] Avg episode reward: [(0, '22.429')]
[2023-02-25 19:31:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 2727936. Throughput: 0: 824.0. Samples: 681706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:53,325][14226] Avg episode reward: [(0, '21.283')]
[2023-02-25 19:31:56,682][19865] Updated weights for policy 0, policy_version 670 (0.0064)
[2023-02-25 19:31:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2748416. Throughput: 0: 839.9. Samples: 686556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:31:58,318][14226] Avg episode reward: [(0, '21.590')]
[2023-02-25 19:32:03,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2768896. Throughput: 0: 876.1. Samples: 692800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:03,317][14226] Avg episode reward: [(0, '22.223')]
[2023-02-25 19:32:03,334][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000676_2768896.pth...
[2023-02-25 19:32:03,456][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth
[2023-02-25 19:32:07,739][19865] Updated weights for policy 0, policy_version 680 (0.0013)
[2023-02-25 19:32:08,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3374.0). Total num frames: 2785280. Throughput: 0: 864.3. Samples: 695352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:08,320][14226] Avg episode reward: [(0, '21.635')]
[2023-02-25 19:32:13,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 2797568. Throughput: 0: 822.9. Samples: 699336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:13,322][14226] Avg episode reward: [(0, '21.014')]
[2023-02-25 19:32:18,317][14226] Fps is (10 sec: 3276.3, 60 sec: 3413.2, 300 sec: 3374.0). Total num frames: 2818048. Throughput: 0: 861.8. Samples: 704954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:18,323][14226] Avg episode reward: [(0, '20.343')]
[2023-02-25 19:32:19,631][19865] Updated weights for policy 0, policy_version 690 (0.0030)
[2023-02-25 19:32:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2838528. Throughput: 0: 884.5. Samples: 708136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:23,318][14226] Avg episode reward: [(0, '20.192')]
[2023-02-25 19:32:28,315][14226] Fps is (10 sec: 3686.9, 60 sec: 3413.5, 300 sec: 3374.0). Total num frames: 2854912. Throughput: 0: 858.8. Samples: 713222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:28,321][14226] Avg episode reward: [(0, '21.425')]
[2023-02-25 19:32:32,733][19865] Updated weights for policy 0, policy_version 700 (0.0026)
[2023-02-25 19:32:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2867200. Throughput: 0: 831.2. Samples: 717154. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:33,318][14226] Avg episode reward: [(0, '20.984')]
[2023-02-25 19:32:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2887680. Throughput: 0: 851.0. Samples: 720002. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:38,324][14226] Avg episode reward: [(0, '22.227')]
[2023-02-25 19:32:42,668][19865] Updated weights for policy 0, policy_version 710 (0.0018)
[2023-02-25 19:32:43,318][14226] Fps is (10 sec: 4094.9, 60 sec: 3481.4, 300 sec: 3374.0). Total num frames: 2908160. Throughput: 0: 884.3. Samples: 726350. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:32:43,325][14226] Avg episode reward: [(0, '22.079')]
[2023-02-25 19:32:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 2920448. Throughput: 0: 848.9. Samples: 731000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:48,320][14226] Avg episode reward: [(0, '23.618')]
[2023-02-25 19:32:53,315][14226] Fps is (10 sec: 2868.0, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2936832. Throughput: 0: 837.0. Samples: 733018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:53,318][14226] Avg episode reward: [(0, '23.949')]
[2023-02-25 19:32:53,327][19851] Saving new best policy, reward=23.949!
[2023-02-25 19:32:55,899][19865] Updated weights for policy 0, policy_version 720 (0.0018)
[2023-02-25 19:32:58,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2957312. Throughput: 0: 869.0. Samples: 738440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:32:58,318][14226] Avg episode reward: [(0, '23.313')]
[2023-02-25 19:33:03,315][14226] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2977792. Throughput: 0: 884.2. Samples: 744744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:03,321][14226] Avg episode reward: [(0, '24.735')]
[2023-02-25 19:33:03,333][19851] Saving new best policy, reward=24.735!
[2023-02-25 19:33:07,268][19865] Updated weights for policy 0, policy_version 730 (0.0016)
[2023-02-25 19:33:08,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2990080. Throughput: 0: 860.3. Samples: 746848. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:08,319][14226] Avg episode reward: [(0, '24.567')]
[2023-02-25 19:33:13,315][14226] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3006464. Throughput: 0: 833.3. Samples: 750722. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:33:13,317][14226] Avg episode reward: [(0, '25.508')]
[2023-02-25 19:33:13,332][19851] Saving new best policy, reward=25.508!
[2023-02-25 19:33:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3387.9). Total num frames: 3026944. Throughput: 0: 878.0. Samples: 756666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:18,318][14226] Avg episode reward: [(0, '25.210')]
[2023-02-25 19:33:19,139][19865] Updated weights for policy 0, policy_version 740 (0.0017)
[2023-02-25 19:33:23,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3043328. Throughput: 0: 882.9. Samples: 759732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:23,321][14226] Avg episode reward: [(0, '25.449')]
[2023-02-25 19:33:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3059712. Throughput: 0: 844.1. Samples: 764330. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:33:28,320][14226] Avg episode reward: [(0, '24.391')]
[2023-02-25 19:33:32,440][19865] Updated weights for policy 0, policy_version 750 (0.0012)
[2023-02-25 19:33:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3072000. Throughput: 0: 837.0. Samples: 768666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:33,318][14226] Avg episode reward: [(0, '24.438')]
[2023-02-25 19:33:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3096576. Throughput: 0: 862.8. Samples: 771842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:33:38,318][14226] Avg episode reward: [(0, '23.098')]
[2023-02-25 19:33:42,148][19865] Updated weights for policy 0, policy_version 760 (0.0017)
[2023-02-25 19:33:43,318][14226] Fps is (10 sec: 4094.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3112960. Throughput: 0: 884.2. Samples: 778230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:43,321][14226] Avg episode reward: [(0, '22.200')]
[2023-02-25 19:33:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 3129344. Throughput: 0: 835.8. Samples: 782354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:33:48,318][14226] Avg episode reward: [(0, '22.994')]
[2023-02-25 19:33:53,315][14226] Fps is (10 sec: 2868.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3141632. Throughput: 0: 832.4. Samples: 784308. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:33:53,321][14226] Avg episode reward: [(0, '21.716')]
[2023-02-25 19:33:55,444][19865] Updated weights for policy 0, policy_version 770 (0.0027)
[2023-02-25 19:33:58,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3166208. Throughput: 0: 876.6. Samples: 790170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:58,318][14226] Avg episode reward: [(0, '21.671')]
[2023-02-25 19:34:03,322][14226] Fps is (10 sec: 4093.3, 60 sec: 3413.0, 300 sec: 3373.9). Total num frames: 3182592. Throughput: 0: 876.2. Samples: 796102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:03,324][14226] Avg episode reward: [(0, '21.513')]
[2023-02-25 19:34:03,351][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth...
[2023-02-25 19:34:03,563][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000580_2375680.pth
[2023-02-25 19:34:07,374][19865] Updated weights for policy 0, policy_version 780 (0.0016)
[2023-02-25 19:34:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3194880. Throughput: 0: 851.5. Samples: 798048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:08,321][14226] Avg episode reward: [(0, '21.635')]
[2023-02-25 19:34:13,315][14226] Fps is (10 sec: 2869.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3211264. Throughput: 0: 844.0. Samples: 802310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:13,321][14226] Avg episode reward: [(0, '19.901')]
[2023-02-25 19:34:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3231744. Throughput: 0: 888.0. Samples: 808628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:18,318][14226] Avg episode reward: [(0, '20.496')]
[2023-02-25 19:34:18,430][19865] Updated weights for policy 0, policy_version 790 (0.0019)
[2023-02-25 19:34:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3252224. Throughput: 0: 886.5. Samples: 811736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:23,323][14226] Avg episode reward: [(0, '21.328')]
[2023-02-25 19:34:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3264512. Throughput: 0: 840.3. Samples: 816042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:28,318][14226] Avg episode reward: [(0, '21.576')]
[2023-02-25 19:34:31,316][19865] Updated weights for policy 0, policy_version 800 (0.0041)
[2023-02-25 19:34:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3284992. Throughput: 0: 862.5. Samples: 821168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:33,318][14226] Avg episode reward: [(0, '22.463')]
[2023-02-25 19:34:38,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3305472. Throughput: 0: 895.9. Samples: 824624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:38,322][14226] Avg episode reward: [(0, '23.978')]
[2023-02-25 19:34:40,239][19865] Updated weights for policy 0, policy_version 810 (0.0039)
[2023-02-25 19:34:43,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3550.1, 300 sec: 3401.8). Total num frames: 3325952. Throughput: 0: 908.7. Samples: 831060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:43,320][14226] Avg episode reward: [(0, '24.812')]
[2023-02-25 19:34:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 3338240. Throughput: 0: 874.3. Samples: 835440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:48,321][14226] Avg episode reward: [(0, '24.109')]
[2023-02-25 19:34:52,742][19865] Updated weights for policy 0, policy_version 820 (0.0021)
[2023-02-25 19:34:53,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3415.6). Total num frames: 3358720. Throughput: 0: 883.7. Samples: 837816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:53,322][14226] Avg episode reward: [(0, '22.731')]
[2023-02-25 19:34:58,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3429.5). Total num frames: 3383296. Throughput: 0: 937.0. Samples: 844474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:34:58,325][14226] Avg episode reward: [(0, '22.719')]
[2023-02-25 19:35:02,693][19865] Updated weights for policy 0, policy_version 830 (0.0015)
[2023-02-25 19:35:03,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3618.5, 300 sec: 3415.6). Total num frames: 3399680. Throughput: 0: 927.6. Samples: 850368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:03,321][14226] Avg episode reward: [(0, '23.358')]
[2023-02-25 19:35:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3415.7). Total num frames: 3411968. Throughput: 0: 904.2. Samples: 852426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:08,319][14226] Avg episode reward: [(0, '23.036')]
[2023-02-25 19:35:13,315][14226] Fps is (10 sec: 3276.6, 60 sec: 3686.4, 300 sec: 3429.5). Total num frames: 3432448. Throughput: 0: 922.3. Samples: 857546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:13,324][14226] Avg episode reward: [(0, '22.818')]
[2023-02-25 19:35:14,426][19865] Updated weights for policy 0, policy_version 840 (0.0028)
[2023-02-25 19:35:18,323][14226] Fps is (10 sec: 4502.0, 60 sec: 3754.2, 300 sec: 3443.3). Total num frames: 3457024. Throughput: 0: 956.0. Samples: 864196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:18,326][14226] Avg episode reward: [(0, '23.309')]
[2023-02-25 19:35:23,315][14226] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3473408. Throughput: 0: 944.8. Samples: 867142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:23,318][14226] Avg episode reward: [(0, '22.427')]
[2023-02-25 19:35:25,739][19865] Updated weights for policy 0, policy_version 850 (0.0021)
[2023-02-25 19:35:28,315][14226] Fps is (10 sec: 2869.5, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3485696. Throughput: 0: 899.5. Samples: 871536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:28,323][14226] Avg episode reward: [(0, '22.797')]
[2023-02-25 19:35:33,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3506176. Throughput: 0: 924.6. Samples: 877046. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:35:33,321][14226] Avg episode reward: [(0, '22.067')]
[2023-02-25 19:35:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3518464. Throughput: 0: 918.0. Samples: 879128. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:38,317][14226] Avg episode reward: [(0, '23.399')]
[2023-02-25 19:35:38,338][19865] Updated weights for policy 0, policy_version 860 (0.0013)
[2023-02-25 19:35:43,315][14226] Fps is (10 sec: 2457.7, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3530752. Throughput: 0: 857.9. Samples: 883080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:43,323][14226] Avg episode reward: [(0, '23.213')]
[2023-02-25 19:35:48,315][14226] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3547136. Throughput: 0: 817.6. Samples: 887162. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:48,322][14226] Avg episode reward: [(0, '21.848')]
[2023-02-25 19:35:52,161][19865] Updated weights for policy 0, policy_version 870 (0.0023)
[2023-02-25 19:35:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3567616. Throughput: 0: 827.9. Samples: 889682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:53,318][14226] Avg episode reward: [(0, '22.204')]
[2023-02-25 19:35:58,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3588096. Throughput: 0: 863.7. Samples: 896414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:58,318][14226] Avg episode reward: [(0, '22.700')]
[2023-02-25 19:36:01,954][19865] Updated weights for policy 0, policy_version 880 (0.0014)
[2023-02-25 19:36:03,315][14226] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3604480. Throughput: 0: 843.8. Samples: 902160. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:36:03,320][14226] Avg episode reward: [(0, '22.883')]
[2023-02-25 19:36:03,332][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth...
[2023-02-25 19:36:03,514][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000676_2768896.pth
[2023-02-25 19:36:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3620864. Throughput: 0: 823.0. Samples: 904178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:08,318][14226] Avg episode reward: [(0, '22.916')]
[2023-02-25 19:36:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3641344. Throughput: 0: 845.2. Samples: 909570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:13,317][14226] Avg episode reward: [(0, '22.524')]
[2023-02-25 19:36:13,555][19865] Updated weights for policy 0, policy_version 890 (0.0012)
[2023-02-25 19:36:18,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3482.1, 300 sec: 3499.0). Total num frames: 3665920. Throughput: 0: 874.8. Samples: 916410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:36:18,318][14226] Avg episode reward: [(0, '23.046')]
[2023-02-25 19:36:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3682304. Throughput: 0: 893.7. Samples: 919344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:23,325][14226] Avg episode reward: [(0, '22.642')]
[2023-02-25 19:36:24,639][19865] Updated weights for policy 0, policy_version 900 (0.0012)
[2023-02-25 19:36:28,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3694592. Throughput: 0: 900.4. Samples: 923598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:28,318][14226] Avg episode reward: [(0, '22.364')]
[2023-02-25 19:36:33,315][14226] Fps is (10 sec: 3686.2, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3719168. Throughput: 0: 939.0. Samples: 929418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:33,324][14226] Avg episode reward: [(0, '22.105')]
[2023-02-25 19:36:35,400][19865] Updated weights for policy 0, policy_version 910 (0.0029)
[2023-02-25 19:36:38,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3739648. Throughput: 0: 955.6. Samples: 932684. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:36:38,318][14226] Avg episode reward: [(0, '23.615')]
[2023-02-25 19:36:43,315][14226] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 3751936. Throughput: 0: 931.9. Samples: 938350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:43,318][14226] Avg episode reward: [(0, '23.424')]
[2023-02-25 19:36:47,799][19865] Updated weights for policy 0, policy_version 920 (0.0038)
[2023-02-25 19:36:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3768320. Throughput: 0: 897.8. Samples: 942560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:48,318][14226] Avg episode reward: [(0, '24.465')]
[2023-02-25 19:36:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3788800. Throughput: 0: 916.3. Samples: 945412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:53,318][14226] Avg episode reward: [(0, '24.379')]
[2023-02-25 19:36:57,361][19865] Updated weights for policy 0, policy_version 930 (0.0025)
[2023-02-25 19:36:58,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3540.6). Total num frames: 3813376. Throughput: 0: 943.6. Samples: 952034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:58,317][14226] Avg episode reward: [(0, '25.000')]
[2023-02-25 19:37:03,315][14226] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3825664. Throughput: 0: 908.0. Samples: 957268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:03,326][14226] Avg episode reward: [(0, '25.342')]
[2023-02-25 19:37:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3842048. Throughput: 0: 887.8. Samples: 959294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:37:08,321][14226] Avg episode reward: [(0, '24.691')]
[2023-02-25 19:37:10,035][19865] Updated weights for policy 0, policy_version 940 (0.0025)
[2023-02-25 19:37:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3862528. Throughput: 0: 908.0. Samples: 964460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:37:13,318][14226] Avg episode reward: [(0, '24.170')]
[2023-02-25 19:37:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3878912. Throughput: 0: 913.7. Samples: 970536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:18,318][14226] Avg episode reward: [(0, '24.228')]
[2023-02-25 19:37:21,494][19865] Updated weights for policy 0, policy_version 950 (0.0013)
[2023-02-25 19:37:23,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3895296. Throughput: 0: 891.3. Samples: 972794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:23,322][14226] Avg episode reward: [(0, '26.323')]
[2023-02-25 19:37:23,338][19851] Saving new best policy, reward=26.323!
[2023-02-25 19:37:28,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3907584. Throughput: 0: 846.6. Samples: 976448. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:37:28,318][14226] Avg episode reward: [(0, '26.837')]
[2023-02-25 19:37:28,324][19851] Saving new best policy, reward=26.837!
[2023-02-25 19:37:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3928064. Throughput: 0: 873.6. Samples: 981874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:37:33,323][14226] Avg episode reward: [(0, '25.361')]
[2023-02-25 19:37:34,138][19865] Updated weights for policy 0, policy_version 960 (0.0019)
[2023-02-25 19:37:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 3944448. Throughput: 0: 878.2. Samples: 984932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:38,318][14226] Avg episode reward: [(0, '26.093')]
[2023-02-25 19:37:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3960832. Throughput: 0: 838.4. Samples: 989760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:43,317][14226] Avg episode reward: [(0, '26.741')]
[2023-02-25 19:37:47,713][19865] Updated weights for policy 0, policy_version 970 (0.0021)
[2023-02-25 19:37:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3973120. Throughput: 0: 808.9. Samples: 993668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:48,318][14226] Avg episode reward: [(0, '26.553')]
[2023-02-25 19:37:53,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3993600. Throughput: 0: 830.7. Samples: 996674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:53,318][14226] Avg episode reward: [(0, '24.478')]
[2023-02-25 19:37:55,607][19851] Stopping Batcher_0...
[2023-02-25 19:37:55,608][19851] Loop batcher_evt_loop terminating...
[2023-02-25 19:37:55,609][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:37:55,607][14226] Component Batcher_0 stopped!
[2023-02-25 19:37:55,657][19865] Weights refcount: 2 0
[2023-02-25 19:37:55,671][19865] Stopping InferenceWorker_p0-w0...
[2023-02-25 19:37:55,672][19865] Loop inference_proc0-0_evt_loop terminating...
[2023-02-25 19:37:55,671][14226] Component InferenceWorker_p0-w0 stopped!
[2023-02-25 19:37:55,699][19875] Stopping RolloutWorker_w5...
[2023-02-25 19:37:55,699][14226] Component RolloutWorker_w2 stopped!
[2023-02-25 19:37:55,708][14226] Component RolloutWorker_w5 stopped!
[2023-02-25 19:37:55,716][19873] Stopping RolloutWorker_w3...
[2023-02-25 19:37:55,717][19873] Loop rollout_proc3_evt_loop terminating...
[2023-02-25 19:37:55,717][19872] Stopping RolloutWorker_w2...
[2023-02-25 19:37:55,717][19872] Loop rollout_proc2_evt_loop terminating...
[2023-02-25 19:37:55,718][19875] Loop rollout_proc5_evt_loop terminating...
[2023-02-25 19:37:55,716][14226] Component RolloutWorker_w3 stopped!
[2023-02-25 19:37:55,726][14226] Component RolloutWorker_w7 stopped!
[2023-02-25 19:37:55,726][19877] Stopping RolloutWorker_w7...
[2023-02-25 19:37:55,734][19877] Loop rollout_proc7_evt_loop terminating...
[2023-02-25 19:37:55,741][14226] Component RolloutWorker_w1 stopped!
[2023-02-25 19:37:55,741][19867] Stopping RolloutWorker_w1...
[2023-02-25 19:37:55,744][19867] Loop rollout_proc1_evt_loop terminating...
[2023-02-25 19:37:55,745][19876] Stopping RolloutWorker_w6...
[2023-02-25 19:37:55,745][14226] Component RolloutWorker_w6 stopped!
[2023-02-25 19:37:55,745][19876] Loop rollout_proc6_evt_loop terminating...
[2023-02-25 19:37:55,756][19874] Stopping RolloutWorker_w4...
[2023-02-25 19:37:55,758][19874] Loop rollout_proc4_evt_loop terminating...
[2023-02-25 19:37:55,756][14226] Component RolloutWorker_w4 stopped!
[2023-02-25 19:37:55,760][19866] Stopping RolloutWorker_w0...
[2023-02-25 19:37:55,760][14226] Component RolloutWorker_w0 stopped!
[2023-02-25 19:37:55,773][19866] Loop rollout_proc0_evt_loop terminating...
[2023-02-25 19:37:55,809][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth
[2023-02-25 19:37:55,826][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:37:56,028][19851] Stopping LearnerWorker_p0...
[2023-02-25 19:37:56,028][19851] Loop learner_proc0_evt_loop terminating...
[2023-02-25 19:37:56,027][14226] Component LearnerWorker_p0 stopped!
[2023-02-25 19:37:56,031][14226] Waiting for process learner_proc0 to stop...
[2023-02-25 19:37:57,892][14226] Waiting for process inference_proc0-0 to join...
[2023-02-25 19:37:58,433][14226] Waiting for process rollout_proc0 to join...
[2023-02-25 19:37:59,138][14226] Waiting for process rollout_proc1 to join...
[2023-02-25 19:37:59,147][14226] Waiting for process rollout_proc2 to join...
[2023-02-25 19:37:59,148][14226] Waiting for process rollout_proc3 to join...
[2023-02-25 19:37:59,149][14226] Waiting for process rollout_proc4 to join...
[2023-02-25 19:37:59,151][14226] Waiting for process rollout_proc5 to join...
[2023-02-25 19:37:59,152][14226] Waiting for process rollout_proc6 to join...
[2023-02-25 19:37:59,153][14226] Waiting for process rollout_proc7 to join...
[2023-02-25 19:37:59,154][14226] Batcher 0 profile tree view:
batching: 26.7102, releasing_batches: 0.0260
[2023-02-25 19:37:59,157][14226] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
wait_policy_total: 559.5307
update_model: 8.4848
weight_update: 0.0017
one_step: 0.0129
handle_policy_step: 552.9041
deserialize: 15.6834, stack: 3.2341, obs_to_device_normalize: 119.6087, forward: 271.1924, send_messages: 27.0588
prepare_outputs: 88.4472
to_cpu: 55.1625
[2023-02-25 19:37:59,159][14226] Learner 0 profile tree view:
misc: 0.0056, prepare_batch: 18.1858
train: 76.5167
epoch_init: 0.0061, minibatch_init: 0.0215, losses_postprocess: 0.5958, kl_divergence: 0.6215, after_optimizer: 33.0838
calculate_losses: 26.8819
losses_init: 0.0037, forward_head: 1.7387, bptt_initial: 17.6697, tail: 1.1877, advantages_returns: 0.2623, losses: 3.3503
bptt: 2.2678
bptt_forward_core: 2.1907
update: 14.5884
clip: 1.4210
[2023-02-25 19:37:59,160][14226] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.4103, enqueue_policy_requests: 156.8018, env_step: 869.3712, overhead: 23.9531, complete_rollouts: 6.7658
save_policy_outputs: 22.0836
split_output_tensors: 10.9491
[2023-02-25 19:37:59,163][14226] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3762, enqueue_policy_requests: 159.3640, env_step: 873.8569, overhead: 24.0627, complete_rollouts: 7.6767
save_policy_outputs: 21.8641
split_output_tensors: 10.5020
[2023-02-25 19:37:59,164][14226] Loop Runner_EvtLoop terminating...
[2023-02-25 19:37:59,166][14226] Runner profile tree view:
main_loop: 1191.3321
[2023-02-25 19:37:59,167][14226] Collected {0: 4005888}, FPS: 3362.5
[2023-02-25 19:37:59,347][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:37:59,350][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:37:59,354][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:37:59,355][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:37:59,357][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:37:59,358][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:37:59,359][14226] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:37:59,360][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:37:59,363][14226] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-25 19:37:59,365][14226] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-25 19:37:59,366][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:37:59,368][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:37:59,369][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:37:59,371][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:37:59,372][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 19:37:59,405][14226] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:37:59,412][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:37:59,415][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:37:59,443][14226] ConvEncoder: input_channels=3
[2023-02-25 19:38:00,241][14226] Conv encoder output size: 512
[2023-02-25 19:38:00,246][14226] Policy head output size: 512
[2023-02-25 19:38:02,793][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:38:04,054][14226] Num frames 100...
[2023-02-25 19:38:04,166][14226] Num frames 200...
[2023-02-25 19:38:04,277][14226] Num frames 300...
[2023-02-25 19:38:04,386][14226] Num frames 400...
[2023-02-25 19:38:04,503][14226] Num frames 500...
[2023-02-25 19:38:04,619][14226] Num frames 600...
[2023-02-25 19:38:04,737][14226] Num frames 700...
[2023-02-25 19:38:04,855][14226] Num frames 800...
[2023-02-25 19:38:04,968][14226] Num frames 900...
[2023-02-25 19:38:05,081][14226] Num frames 1000...
[2023-02-25 19:38:05,205][14226] Num frames 1100...
[2023-02-25 19:38:05,322][14226] Num frames 1200...
[2023-02-25 19:38:05,447][14226] Num frames 1300...
[2023-02-25 19:38:05,576][14226] Num frames 1400...
[2023-02-25 19:38:05,702][14226] Num frames 1500...
[2023-02-25 19:38:05,823][14226] Num frames 1600...
[2023-02-25 19:38:05,985][14226] Avg episode rewards: #0: 39.840, true rewards: #0: 16.840
[2023-02-25 19:38:05,986][14226] Avg episode reward: 39.840, avg true_objective: 16.840
[2023-02-25 19:38:06,012][14226] Num frames 1700...
[2023-02-25 19:38:06,142][14226] Num frames 1800...
[2023-02-25 19:38:06,277][14226] Num frames 1900...
[2023-02-25 19:38:06,404][14226] Num frames 2000...
[2023-02-25 19:38:06,531][14226] Num frames 2100...
[2023-02-25 19:38:06,649][14226] Num frames 2200...
[2023-02-25 19:38:06,770][14226] Num frames 2300...
[2023-02-25 19:38:06,890][14226] Num frames 2400...
[2023-02-25 19:38:07,008][14226] Num frames 2500...
[2023-02-25 19:38:07,128][14226] Num frames 2600...
[2023-02-25 19:38:07,250][14226] Num frames 2700...
[2023-02-25 19:38:07,352][14226] Avg episode rewards: #0: 33.700, true rewards: #0: 13.700
[2023-02-25 19:38:07,356][14226] Avg episode reward: 33.700, avg true_objective: 13.700
[2023-02-25 19:38:07,430][14226] Num frames 2800...
[2023-02-25 19:38:07,549][14226] Num frames 2900...
[2023-02-25 19:38:07,663][14226] Num frames 3000...
[2023-02-25 19:38:07,782][14226] Num frames 3100...
[2023-02-25 19:38:07,901][14226] Num frames 3200...
[2023-02-25 19:38:08,016][14226] Num frames 3300...
[2023-02-25 19:38:08,135][14226] Num frames 3400...
[2023-02-25 19:38:08,256][14226] Num frames 3500...
[2023-02-25 19:38:08,368][14226] Num frames 3600...
[2023-02-25 19:38:08,483][14226] Num frames 3700...
[2023-02-25 19:38:08,602][14226] Num frames 3800...
[2023-02-25 19:38:08,715][14226] Num frames 3900...
[2023-02-25 19:38:08,835][14226] Num frames 4000...
[2023-02-25 19:38:08,953][14226] Num frames 4100...
[2023-02-25 19:38:09,065][14226] Num frames 4200...
[2023-02-25 19:38:09,182][14226] Avg episode rewards: #0: 34.500, true rewards: #0: 14.167
[2023-02-25 19:38:09,184][14226] Avg episode reward: 34.500, avg true_objective: 14.167
[2023-02-25 19:38:09,245][14226] Num frames 4300...
[2023-02-25 19:38:09,362][14226] Num frames 4400...
[2023-02-25 19:38:09,482][14226] Num frames 4500...
[2023-02-25 19:38:09,599][14226] Num frames 4600...
[2023-02-25 19:38:09,714][14226] Num frames 4700...
[2023-02-25 19:38:09,836][14226] Num frames 4800...
[2023-02-25 19:38:09,949][14226] Num frames 4900...
[2023-02-25 19:38:10,066][14226] Num frames 5000...
[2023-02-25 19:38:10,181][14226] Num frames 5100...
[2023-02-25 19:38:10,293][14226] Num frames 5200...
[2023-02-25 19:38:10,409][14226] Num frames 5300...
[2023-02-25 19:38:10,529][14226] Num frames 5400...
[2023-02-25 19:38:10,642][14226] Num frames 5500...
[2023-02-25 19:38:10,760][14226] Num frames 5600...
[2023-02-25 19:38:10,879][14226] Num frames 5700...
[2023-02-25 19:38:11,002][14226] Num frames 5800...
[2023-02-25 19:38:11,118][14226] Num frames 5900...
[2023-02-25 19:38:11,240][14226] Num frames 6000...
[2023-02-25 19:38:11,403][14226] Num frames 6100...
[2023-02-25 19:38:11,475][14226] Avg episode rewards: #0: 35.765, true rewards: #0: 15.265
[2023-02-25 19:38:11,477][14226] Avg episode reward: 35.765, avg true_objective: 15.265
[2023-02-25 19:38:11,633][14226] Num frames 6200...
[2023-02-25 19:38:11,801][14226] Num frames 6300...
[2023-02-25 19:38:11,958][14226] Num frames 6400...
[2023-02-25 19:38:12,119][14226] Num frames 6500...
[2023-02-25 19:38:12,278][14226] Num frames 6600...
[2023-02-25 19:38:12,438][14226] Num frames 6700...
[2023-02-25 19:38:12,607][14226] Num frames 6800...
[2023-02-25 19:38:12,772][14226] Num frames 6900...
[2023-02-25 19:38:12,948][14226] Num frames 7000...
[2023-02-25 19:38:13,107][14226] Num frames 7100...
[2023-02-25 19:38:13,271][14226] Num frames 7200...
[2023-02-25 19:38:13,440][14226] Num frames 7300...
[2023-02-25 19:38:13,608][14226] Num frames 7400...
[2023-02-25 19:38:13,772][14226] Num frames 7500...
[2023-02-25 19:38:13,938][14226] Num frames 7600...
[2023-02-25 19:38:14,093][14226] Num frames 7700...
[2023-02-25 19:38:14,207][14226] Num frames 7800...
[2023-02-25 19:38:14,327][14226] Num frames 7900...
[2023-02-25 19:38:14,424][14226] Avg episode rewards: #0: 38.670, true rewards: #0: 15.870
[2023-02-25 19:38:14,426][14226] Avg episode reward: 38.670, avg true_objective: 15.870
[2023-02-25 19:38:14,510][14226] Num frames 8000...
[2023-02-25 19:38:14,633][14226] Num frames 8100...
[2023-02-25 19:38:14,750][14226] Num frames 8200...
[2023-02-25 19:38:14,870][14226] Num frames 8300...
[2023-02-25 19:38:14,996][14226] Num frames 8400...
[2023-02-25 19:38:15,112][14226] Num frames 8500...
[2023-02-25 19:38:15,228][14226] Num frames 8600...
[2023-02-25 19:38:15,348][14226] Num frames 8700...
[2023-02-25 19:38:15,463][14226] Num frames 8800...
[2023-02-25 19:38:15,580][14226] Num frames 8900...
[2023-02-25 19:38:15,698][14226] Num frames 9000...
[2023-02-25 19:38:15,816][14226] Num frames 9100...
[2023-02-25 19:38:15,946][14226] Num frames 9200...
[2023-02-25 19:38:16,062][14226] Num frames 9300...
[2023-02-25 19:38:16,180][14226] Num frames 9400...
[2023-02-25 19:38:16,295][14226] Num frames 9500...
[2023-02-25 19:38:16,412][14226] Num frames 9600...
[2023-02-25 19:38:16,532][14226] Num frames 9700...
[2023-02-25 19:38:16,650][14226] Num frames 9800...
[2023-02-25 19:38:16,745][14226] Avg episode rewards: #0: 40.383, true rewards: #0: 16.383
[2023-02-25 19:38:16,750][14226] Avg episode reward: 40.383, avg true_objective: 16.383
[2023-02-25 19:38:16,843][14226] Num frames 9900...
[2023-02-25 19:38:16,971][14226] Num frames 10000...
[2023-02-25 19:38:17,094][14226] Num frames 10100...
[2023-02-25 19:38:17,223][14226] Num frames 10200...
[2023-02-25 19:38:17,347][14226] Num frames 10300...
[2023-02-25 19:38:17,471][14226] Num frames 10400...
[2023-02-25 19:38:17,596][14226] Num frames 10500...
[2023-02-25 19:38:17,714][14226] Num frames 10600...
[2023-02-25 19:38:17,833][14226] Num frames 10700...
[2023-02-25 19:38:17,948][14226] Num frames 10800...
[2023-02-25 19:38:18,078][14226] Num frames 10900...
[2023-02-25 19:38:18,194][14226] Num frames 11000...
[2023-02-25 19:38:18,313][14226] Num frames 11100...
[2023-02-25 19:38:18,437][14226] Num frames 11200...
[2023-02-25 19:38:18,552][14226] Num frames 11300...
[2023-02-25 19:38:18,667][14226] Num frames 11400...
[2023-02-25 19:38:18,789][14226] Num frames 11500...
[2023-02-25 19:38:18,908][14226] Num frames 11600...
[2023-02-25 19:38:19,040][14226] Num frames 11700...
[2023-02-25 19:38:19,162][14226] Num frames 11800...
[2023-02-25 19:38:19,279][14226] Num frames 11900...
[2023-02-25 19:38:19,372][14226] Avg episode rewards: #0: 43.899, true rewards: #0: 17.043
[2023-02-25 19:38:19,373][14226] Avg episode reward: 43.899, avg true_objective: 17.043
[2023-02-25 19:38:19,464][14226] Num frames 12000...
[2023-02-25 19:38:19,594][14226] Num frames 12100...
[2023-02-25 19:38:19,714][14226] Num frames 12200...
[2023-02-25 19:38:19,832][14226] Num frames 12300...
[2023-02-25 19:38:19,947][14226] Num frames 12400...
[2023-02-25 19:38:20,070][14226] Num frames 12500...
[2023-02-25 19:38:20,192][14226] Num frames 12600...
[2023-02-25 19:38:20,307][14226] Num frames 12700...
[2023-02-25 19:38:20,421][14226] Num frames 12800...
[2023-02-25 19:38:20,546][14226] Num frames 12900...
[2023-02-25 19:38:20,666][14226] Num frames 13000...
[2023-02-25 19:38:20,782][14226] Num frames 13100...
[2023-02-25 19:38:20,903][14226] Num frames 13200...
[2023-02-25 19:38:21,025][14226] Num frames 13300...
[2023-02-25 19:38:21,142][14226] Num frames 13400...
[2023-02-25 19:38:21,264][14226] Num frames 13500...
[2023-02-25 19:38:21,383][14226] Num frames 13600...
[2023-02-25 19:38:21,501][14226] Num frames 13700...
[2023-02-25 19:38:21,676][14226] Avg episode rewards: #0: 44.496, true rewards: #0: 17.246
[2023-02-25 19:38:21,678][14226] Avg episode reward: 44.496, avg true_objective: 17.246
[2023-02-25 19:38:21,685][14226] Num frames 13800...
[2023-02-25 19:38:21,799][14226] Num frames 13900...
[2023-02-25 19:38:21,920][14226] Num frames 14000...
[2023-02-25 19:38:22,043][14226] Num frames 14100...
[2023-02-25 19:38:22,162][14226] Num frames 14200...
[2023-02-25 19:38:22,282][14226] Avg episode rewards: #0: 40.494, true rewards: #0: 15.828
[2023-02-25 19:38:22,284][14226] Avg episode reward: 40.494, avg true_objective: 15.828
[2023-02-25 19:38:22,366][14226] Num frames 14300...
[2023-02-25 19:38:22,493][14226] Num frames 14400...
[2023-02-25 19:38:22,617][14226] Num frames 14500...
[2023-02-25 19:38:22,734][14226] Num frames 14600...
[2023-02-25 19:38:22,850][14226] Num frames 14700...
[2023-02-25 19:38:22,962][14226] Num frames 14800...
[2023-02-25 19:38:23,080][14226] Num frames 14900...
[2023-02-25 19:38:23,198][14226] Num frames 15000...
[2023-02-25 19:38:23,322][14226] Num frames 15100...
[2023-02-25 19:38:23,442][14226] Num frames 15200...
[2023-02-25 19:38:23,570][14226] Num frames 15300...
[2023-02-25 19:38:23,691][14226] Num frames 15400...
[2023-02-25 19:38:23,815][14226] Avg episode rewards: #0: 39.358, true rewards: #0: 15.458
[2023-02-25 19:38:23,817][14226] Avg episode reward: 39.358, avg true_objective: 15.458
[2023-02-25 19:40:03,313][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 19:40:04,191][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:40:04,193][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:40:04,195][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:40:04,198][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:40:04,199][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:40:04,201][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:40:04,203][14226] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 19:40:04,204][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:40:04,205][14226] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 19:40:04,206][14226] Adding new argument 'hf_repository'='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 19:40:04,207][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:40:04,208][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:40:04,209][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:40:04,211][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:40:04,212][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 19:40:04,238][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:40:04,241][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:40:04,261][14226] ConvEncoder: input_channels=3
[2023-02-25 19:40:04,325][14226] Conv encoder output size: 512
[2023-02-25 19:40:04,328][14226] Policy head output size: 512
[2023-02-25 19:40:04,358][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:40:05,060][14226] Num frames 100...
[2023-02-25 19:40:05,225][14226] Num frames 200...
[2023-02-25 19:40:05,335][14226] Num frames 300...
[2023-02-25 19:40:05,451][14226] Num frames 400...
[2023-02-25 19:40:05,579][14226] Num frames 500...
[2023-02-25 19:40:05,691][14226] Num frames 600...
[2023-02-25 19:40:05,808][14226] Num frames 700...
[2023-02-25 19:40:05,922][14226] Num frames 800...
[2023-02-25 19:40:06,043][14226] Num frames 900...
[2023-02-25 19:40:06,179][14226] Avg episode rewards: #0: 20.560, true rewards: #0: 9.560
[2023-02-25 19:40:06,181][14226] Avg episode reward: 20.560, avg true_objective: 9.560
[2023-02-25 19:40:06,237][14226] Num frames 1000...
[2023-02-25 19:40:06,352][14226] Num frames 1100...
[2023-02-25 19:40:06,468][14226] Num frames 1200...
[2023-02-25 19:40:06,583][14226] Num frames 1300...
[2023-02-25 19:40:06,699][14226] Num frames 1400...
[2023-02-25 19:40:06,820][14226] Num frames 1500...
[2023-02-25 19:40:06,935][14226] Num frames 1600...
[2023-02-25 19:40:07,048][14226] Num frames 1700...
[2023-02-25 19:40:07,171][14226] Num frames 1800...
[2023-02-25 19:40:07,285][14226] Num frames 1900...
[2023-02-25 19:40:07,436][14226] Avg episode rewards: #0: 22.400, true rewards: #0: 9.900
[2023-02-25 19:40:07,440][14226] Avg episode reward: 22.400, avg true_objective: 9.900
[2023-02-25 19:40:07,471][14226] Num frames 2000...
[2023-02-25 19:40:07,596][14226] Num frames 2100...
[2023-02-25 19:40:07,716][14226] Num frames 2200...
[2023-02-25 19:40:07,832][14226] Num frames 2300...
[2023-02-25 19:40:07,947][14226] Num frames 2400...
[2023-02-25 19:40:08,076][14226] Num frames 2500...
[2023-02-25 19:40:08,206][14226] Num frames 2600...
[2023-02-25 19:40:08,333][14226] Num frames 2700...
[2023-02-25 19:40:08,450][14226] Avg episode rewards: #0: 20.493, true rewards: #0: 9.160
[2023-02-25 19:40:08,452][14226] Avg episode reward: 20.493, avg true_objective: 9.160
[2023-02-25 19:40:08,526][14226] Num frames 2800...
[2023-02-25 19:40:08,665][14226] Num frames 2900...
[2023-02-25 19:40:08,779][14226] Num frames 3000...
[2023-02-25 19:40:08,889][14226] Num frames 3100...
[2023-02-25 19:40:09,053][14226] Avg episode rewards: #0: 16.740, true rewards: #0: 7.990
[2023-02-25 19:40:09,056][14226] Avg episode reward: 16.740, avg true_objective: 7.990
[2023-02-25 19:40:09,066][14226] Num frames 3200...
[2023-02-25 19:40:09,202][14226] Num frames 3300...
[2023-02-25 19:40:09,336][14226] Num frames 3400...
[2023-02-25 19:40:09,450][14226] Num frames 3500...
[2023-02-25 19:40:09,565][14226] Num frames 3600...
[2023-02-25 19:40:09,681][14226] Num frames 3700...
[2023-02-25 19:40:09,796][14226] Num frames 3800...
[2023-02-25 19:40:09,908][14226] Num frames 3900...
[2023-02-25 19:40:09,960][14226] Avg episode rewards: #0: 16.200, true rewards: #0: 7.800
[2023-02-25 19:40:09,962][14226] Avg episode reward: 16.200, avg true_objective: 7.800
[2023-02-25 19:40:10,091][14226] Num frames 4000...
[2023-02-25 19:40:10,209][14226] Num frames 4100...
[2023-02-25 19:40:10,321][14226] Num frames 4200...
[2023-02-25 19:40:10,434][14226] Num frames 4300...
[2023-02-25 19:40:10,548][14226] Num frames 4400...
[2023-02-25 19:40:10,661][14226] Num frames 4500...
[2023-02-25 19:40:10,784][14226] Num frames 4600...
[2023-02-25 19:40:10,881][14226] Avg episode rewards: #0: 16.060, true rewards: #0: 7.727
[2023-02-25 19:40:10,883][14226] Avg episode reward: 16.060, avg true_objective: 7.727
[2023-02-25 19:40:10,959][14226] Num frames 4700...
[2023-02-25 19:40:11,092][14226] Num frames 4800...
[2023-02-25 19:40:11,214][14226] Num frames 4900...
[2023-02-25 19:40:11,330][14226] Num frames 5000...
[2023-02-25 19:40:11,446][14226] Num frames 5100...
[2023-02-25 19:40:11,559][14226] Num frames 5200...
[2023-02-25 19:40:11,671][14226] Num frames 5300...
[2023-02-25 19:40:11,795][14226] Num frames 5400...
[2023-02-25 19:40:11,910][14226] Num frames 5500...
[2023-02-25 19:40:12,023][14226] Num frames 5600...
[2023-02-25 19:40:12,145][14226] Num frames 5700...
[2023-02-25 19:40:12,267][14226] Num frames 5800...
[2023-02-25 19:40:12,379][14226] Num frames 5900...
[2023-02-25 19:40:12,494][14226] Num frames 6000...
[2023-02-25 19:40:12,611][14226] Num frames 6100...
[2023-02-25 19:40:12,720][14226] Avg episode rewards: #0: 19.486, true rewards: #0: 8.771
[2023-02-25 19:40:12,722][14226] Avg episode reward: 19.486, avg true_objective: 8.771
[2023-02-25 19:40:12,794][14226] Num frames 6200...
[2023-02-25 19:40:12,909][14226] Num frames 6300...
[2023-02-25 19:40:13,053][14226] Num frames 6400...
[2023-02-25 19:40:13,220][14226] Num frames 6500...
[2023-02-25 19:40:13,382][14226] Num frames 6600...
[2023-02-25 19:40:13,540][14226] Num frames 6700...
[2023-02-25 19:40:13,702][14226] Num frames 6800...
[2023-02-25 19:40:13,895][14226] Avg episode rewards: #0: 18.609, true rewards: #0: 8.609
[2023-02-25 19:40:13,900][14226] Avg episode reward: 18.609, avg true_objective: 8.609
[2023-02-25 19:40:13,931][14226] Num frames 6900...
[2023-02-25 19:40:14,093][14226] Num frames 7000...
[2023-02-25 19:40:14,259][14226] Num frames 7100...
[2023-02-25 19:40:14,420][14226] Num frames 7200...
[2023-02-25 19:40:14,580][14226] Num frames 7300...
[2023-02-25 19:40:14,744][14226] Num frames 7400...
[2023-02-25 19:40:14,908][14226] Num frames 7500...
[2023-02-25 19:40:15,077][14226] Num frames 7600...
[2023-02-25 19:40:15,239][14226] Num frames 7700...
[2023-02-25 19:40:15,403][14226] Num frames 7800...
[2023-02-25 19:40:15,492][14226] Avg episode rewards: #0: 19.020, true rewards: #0: 8.687
[2023-02-25 19:40:15,494][14226] Avg episode reward: 19.020, avg true_objective: 8.687
[2023-02-25 19:40:15,626][14226] Num frames 7900...
[2023-02-25 19:40:15,789][14226] Num frames 8000...
[2023-02-25 19:40:15,925][14226] Num frames 8100...
[2023-02-25 19:40:16,036][14226] Num frames 8200...
[2023-02-25 19:40:16,151][14226] Num frames 8300...
[2023-02-25 19:40:16,263][14226] Num frames 8400...
[2023-02-25 19:40:16,378][14226] Num frames 8500...
[2023-02-25 19:40:16,489][14226] Num frames 8600...
[2023-02-25 19:40:16,648][14226] Avg episode rewards: #0: 19.394, true rewards: #0: 8.694
[2023-02-25 19:40:16,650][14226] Avg episode reward: 19.394, avg true_objective: 8.694
[2023-02-25 19:41:13,877][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 19:59:09,396][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:59:09,400][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:59:09,403][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:59:09,405][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:59:09,407][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:59:09,410][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:59:09,411][14226] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 19:59:09,413][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:59:09,415][14226] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 19:59:09,416][14226] Adding new argument 'hf_repository'='SergejSchweizer/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 19:59:09,417][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:59:09,418][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:59:09,420][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:59:09,422][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:59:09,423][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 19:59:09,453][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:59:09,457][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:59:09,479][14226] ConvEncoder: input_channels=3
[2023-02-25 19:59:09,535][14226] Conv encoder output size: 512
[2023-02-25 19:59:09,536][14226] Policy head output size: 512
[2023-02-25 19:59:09,557][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:59:10,011][14226] Num frames 100...
[2023-02-25 19:59:10,134][14226] Num frames 200...
[2023-02-25 19:59:10,257][14226] Num frames 300...
[2023-02-25 19:59:10,375][14226] Num frames 400...
[2023-02-25 19:59:10,495][14226] Num frames 500...
[2023-02-25 19:59:10,606][14226] Num frames 600...
[2023-02-25 19:59:10,721][14226] Num frames 700...
[2023-02-25 19:59:10,838][14226] Num frames 800...
[2023-02-25 19:59:10,956][14226] Num frames 900...
[2023-02-25 19:59:11,074][14226] Num frames 1000...
[2023-02-25 19:59:11,246][14226] Avg episode rewards: #0: 30.920, true rewards: #0: 10.920
[2023-02-25 19:59:11,248][14226] Avg episode reward: 30.920, avg true_objective: 10.920
[2023-02-25 19:59:11,262][14226] Num frames 1100...
[2023-02-25 19:59:11,381][14226] Num frames 1200...
[2023-02-25 19:59:11,496][14226] Num frames 1300...
[2023-02-25 19:59:11,618][14226] Num frames 1400...
[2023-02-25 19:59:11,729][14226] Num frames 1500...
[2023-02-25 19:59:11,840][14226] Num frames 1600...
[2023-02-25 19:59:11,963][14226] Num frames 1700...
[2023-02-25 19:59:12,085][14226] Num frames 1800...
[2023-02-25 19:59:12,207][14226] Num frames 1900...
[2023-02-25 19:59:12,321][14226] Num frames 2000...
[2023-02-25 19:59:12,446][14226] Num frames 2100...
[2023-02-25 19:59:12,561][14226] Num frames 2200...
[2023-02-25 19:59:12,681][14226] Num frames 2300...
[2023-02-25 19:59:12,791][14226] Num frames 2400...
[2023-02-25 19:59:12,901][14226] Num frames 2500...
[2023-02-25 19:59:13,017][14226] Num frames 2600...
[2023-02-25 19:59:13,128][14226] Num frames 2700...
[2023-02-25 19:59:13,247][14226] Num frames 2800...
[2023-02-25 19:59:13,360][14226] Num frames 2900...
[2023-02-25 19:59:13,475][14226] Num frames 3000...
[2023-02-25 19:59:13,590][14226] Num frames 3100...
[2023-02-25 19:59:13,755][14226] Avg episode rewards: #0: 44.460, true rewards: #0: 15.960
[2023-02-25 19:59:13,756][14226] Avg episode reward: 44.460, avg true_objective: 15.960
[2023-02-25 19:59:13,782][14226] Num frames 3200...
[2023-02-25 19:59:13,891][14226] Num frames 3300...
[2023-02-25 19:59:14,007][14226] Num frames 3400...
[2023-02-25 19:59:14,120][14226] Num frames 3500...
[2023-02-25 19:59:14,239][14226] Num frames 3600...
[2023-02-25 19:59:14,308][14226] Avg episode rewards: #0: 31.373, true rewards: #0: 12.040
[2023-02-25 19:59:14,310][14226] Avg episode reward: 31.373, avg true_objective: 12.040
[2023-02-25 19:59:14,415][14226] Num frames 3700...
[2023-02-25 19:59:14,533][14226] Num frames 3800...
[2023-02-25 19:59:14,651][14226] Num frames 3900...
[2023-02-25 19:59:14,765][14226] Num frames 4000...
[2023-02-25 19:59:14,886][14226] Num frames 4100...
[2023-02-25 19:59:15,002][14226] Num frames 4200...
[2023-02-25 19:59:15,118][14226] Num frames 4300...
[2023-02-25 19:59:15,242][14226] Num frames 4400...
[2023-02-25 19:59:15,354][14226] Num frames 4500...
[2023-02-25 19:59:15,465][14226] Num frames 4600...
[2023-02-25 19:59:15,588][14226] Num frames 4700...
[2023-02-25 19:59:15,698][14226] Num frames 4800...
[2023-02-25 19:59:15,812][14226] Num frames 4900...
[2023-02-25 19:59:15,928][14226] Num frames 5000...
[2023-02-25 19:59:16,052][14226] Num frames 5100...
[2023-02-25 19:59:16,189][14226] Avg episode rewards: #0: 34.677, true rewards: #0: 12.927
[2023-02-25 19:59:16,191][14226] Avg episode reward: 34.677, avg true_objective: 12.927
[2023-02-25 19:59:16,236][14226] Num frames 5200...
[2023-02-25 19:59:16,358][14226] Num frames 5300...
[2023-02-25 19:59:16,473][14226] Num frames 5400...
[2023-02-25 19:59:16,596][14226] Num frames 5500...
[2023-02-25 19:59:16,707][14226] Num frames 5600...
[2023-02-25 19:59:16,826][14226] Num frames 5700...
[2023-02-25 19:59:16,940][14226] Num frames 5800...
[2023-02-25 19:59:17,053][14226] Num frames 5900...
[2023-02-25 19:59:17,170][14226] Num frames 6000...
[2023-02-25 19:59:17,296][14226] Num frames 6100...
[2023-02-25 19:59:17,408][14226] Num frames 6200...
[2023-02-25 19:59:17,521][14226] Num frames 6300...
[2023-02-25 19:59:17,644][14226] Num frames 6400...
[2023-02-25 19:59:17,708][14226] Avg episode rewards: #0: 33.610, true rewards: #0: 12.810
[2023-02-25 19:59:17,710][14226] Avg episode reward: 33.610, avg true_objective: 12.810
[2023-02-25 19:59:17,818][14226] Num frames 6500...
[2023-02-25 19:59:17,931][14226] Num frames 6600...
[2023-02-25 19:59:18,048][14226] Num frames 6700...
[2023-02-25 19:59:18,161][14226] Num frames 6800...
[2023-02-25 19:59:18,279][14226] Num frames 6900...
[2023-02-25 19:59:18,400][14226] Num frames 7000...
[2023-02-25 19:59:18,516][14226] Num frames 7100...
[2023-02-25 19:59:18,632][14226] Num frames 7200...
[2023-02-25 19:59:18,751][14226] Num frames 7300...
[2023-02-25 19:59:18,928][14226] Avg episode rewards: #0: 31.828, true rewards: #0: 12.328
[2023-02-25 19:59:18,930][14226] Avg episode reward: 31.828, avg true_objective: 12.328
[2023-02-25 19:59:18,937][14226] Num frames 7400...
[2023-02-25 19:59:19,052][14226] Num frames 7500...
[2023-02-25 19:59:19,161][14226] Num frames 7600...
[2023-02-25 19:59:19,281][14226] Num frames 7700...
[2023-02-25 19:59:19,393][14226] Num frames 7800...
[2023-02-25 19:59:19,510][14226] Num frames 7900...
[2023-02-25 19:59:19,671][14226] Num frames 8000...
[2023-02-25 19:59:19,830][14226] Num frames 8100...
[2023-02-25 19:59:19,986][14226] Num frames 8200...
[2023-02-25 19:59:20,142][14226] Num frames 8300...
[2023-02-25 19:59:20,304][14226] Num frames 8400...
[2023-02-25 19:59:20,459][14226] Num frames 8500...
[2023-02-25 19:59:20,615][14226] Num frames 8600...
[2023-02-25 19:59:20,774][14226] Num frames 8700...
[2023-02-25 19:59:20,935][14226] Num frames 8800...
[2023-02-25 19:59:21,099][14226] Num frames 8900...
[2023-02-25 19:59:21,228][14226] Avg episode rewards: #0: 32.920, true rewards: #0: 12.777
[2023-02-25 19:59:21,231][14226] Avg episode reward: 32.920, avg true_objective: 12.777
[2023-02-25 19:59:21,343][14226] Num frames 9000...
[2023-02-25 19:59:21,504][14226] Num frames 9100...
[2023-02-25 19:59:21,671][14226] Num frames 9200...
[2023-02-25 19:59:21,838][14226] Num frames 9300...
[2023-02-25 19:59:21,998][14226] Num frames 9400...
[2023-02-25 19:59:22,166][14226] Num frames 9500...
[2023-02-25 19:59:22,306][14226] Avg episode rewards: #0: 30.315, true rewards: #0: 11.940
[2023-02-25 19:59:22,308][14226] Avg episode reward: 30.315, avg true_objective: 11.940
[2023-02-25 19:59:22,393][14226] Num frames 9600...
[2023-02-25 19:59:22,562][14226] Num frames 9700...
[2023-02-25 19:59:22,727][14226] Num frames 9800...
[2023-02-25 19:59:22,904][14226] Num frames 9900...
[2023-02-25 19:59:23,061][14226] Num frames 10000...
[2023-02-25 19:59:23,171][14226] Num frames 10100...
[2023-02-25 19:59:23,283][14226] Num frames 10200...
[2023-02-25 19:59:23,398][14226] Num frames 10300...
[2023-02-25 19:59:23,510][14226] Num frames 10400...
[2023-02-25 19:59:23,625][14226] Num frames 10500...
[2023-02-25 19:59:23,739][14226] Num frames 10600...
[2023-02-25 19:59:23,856][14226] Num frames 10700...
[2023-02-25 19:59:23,970][14226] Num frames 10800...
[2023-02-25 19:59:24,056][14226] Avg episode rewards: #0: 30.251, true rewards: #0: 12.029
[2023-02-25 19:59:24,057][14226] Avg episode reward: 30.251, avg true_objective: 12.029
[2023-02-25 19:59:24,153][14226] Num frames 10900...
[2023-02-25 19:59:24,278][14226] Num frames 11000...
[2023-02-25 19:59:24,404][14226] Num frames 11100...
[2023-02-25 19:59:24,530][14226] Num frames 11200...
[2023-02-25 19:59:24,682][14226] Avg episode rewards: #0: 28.076, true rewards: #0: 11.276
[2023-02-25 19:59:24,684][14226] Avg episode reward: 28.076, avg true_objective: 11.276
[2023-02-25 20:00:37,511][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!