[2023-02-25 19:18:07,534][14226] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-25 19:18:07,536][14226] Rollout worker 0 uses device cpu
[2023-02-25 19:18:07,537][14226] Rollout worker 1 uses device cpu
[2023-02-25 19:18:07,542][14226] Rollout worker 2 uses device cpu
[2023-02-25 19:18:07,543][14226] Rollout worker 3 uses device cpu
[2023-02-25 19:18:07,544][14226] Rollout worker 4 uses device cpu
[2023-02-25 19:18:07,545][14226] Rollout worker 5 uses device cpu
[2023-02-25 19:18:07,547][14226] Rollout worker 6 uses device cpu
[2023-02-25 19:18:07,549][14226] Rollout worker 7 uses device cpu
[2023-02-25 19:18:07,782][14226] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:07,787][14226] InferenceWorker_p0-w0: min num requests: 2
[2023-02-25 19:18:07,834][14226] Starting all processes...
[2023-02-25 19:18:07,838][14226] Starting process learner_proc0
[2023-02-25 19:18:07,939][14226] Starting all processes...
[2023-02-25 19:18:07,987][14226] Starting process inference_proc0-0
[2023-02-25 19:18:07,989][14226] Starting process rollout_proc0
[2023-02-25 19:18:07,989][14226] Starting process rollout_proc1
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc2
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc3
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc4
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc5
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc6
[2023-02-25 19:18:07,990][14226] Starting process rollout_proc7
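The startup above (config written to the train_dir, eight CPU rollout workers, one learner and one inference worker pinned to GPU 0) is the standard Sample Factory 2.x APPO layout. A minimal sketch of launching an equivalent run is below; the module paths follow the Sample Factory 2.x VizDoom examples and may differ across versions, and the scenario name is a placeholder, since the log itself does not record which Doom environment was used.

```python
# Minimal launch sketch (Sample Factory 2.x VizDoom examples; paths/flags may
# differ by version). Reproduces the setup logged above: experiment
# "default_experiment" under /content/train_dir with 8 rollout workers.
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components


def main():
    register_vizdoom_components()  # registers the Doom envs and VizdoomEncoder
    argv = [
        "--env=doom_health_gathering_supreme",  # placeholder scenario name
        "--experiment=default_experiment",
        "--train_dir=/content/train_dir",
        "--num_workers=8",
    ]
    parser, _ = parse_sf_args(argv)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    cfg = parse_full_cfg(parser, argv)
    run_rl(cfg)


if __name__ == "__main__":
    main()
```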
[2023-02-25 19:18:17,980][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:17,981][19851] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-25 19:18:18,158][19873] Worker 3 uses CPU cores [1]
[2023-02-25 19:18:18,346][19866] Worker 0 uses CPU cores [0]
[2023-02-25 19:18:18,548][19867] Worker 1 uses CPU cores [1]
[2023-02-25 19:18:18,981][19874] Worker 4 uses CPU cores [0]
[2023-02-25 19:18:19,012][19877] Worker 7 uses CPU cores [1]
[2023-02-25 19:18:19,048][19865] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:19,055][19865] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-25 19:18:19,062][19872] Worker 2 uses CPU cores [0]
[2023-02-25 19:18:19,091][19875] Worker 5 uses CPU cores [1]
[2023-02-25 19:18:19,121][19876] Worker 6 uses CPU cores [0]
[2023-02-25 19:18:19,172][19851] Num visible devices: 1
[2023-02-25 19:18:19,173][19865] Num visible devices: 1
[2023-02-25 19:18:19,188][19851] Starting seed is not provided
[2023-02-25 19:18:19,188][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:19,189][19851] Initializing actor-critic model on device cuda:0
[2023-02-25 19:18:19,189][19851] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:19,191][19851] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:19,203][19851] ConvEncoder: input_channels=3
[2023-02-25 19:18:19,483][19851] Conv encoder output size: 512
[2023-02-25 19:18:19,483][19851] Policy head output size: 512
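The two RunningMeanStd entries show that image observations of shape (3, 72, 128) and scalar returns are normalized with running statistics before use. A minimal sketch of such a normalizer is below, using the standard parallel mean/variance combination (Chan et al.); it illustrates the idea and is not Sample Factory's exact RunningMeanStdInPlace code.

```python
import numpy as np


class RunningMeanStdSketch:
    """Illustrative running mean/std tracker for inputs of a fixed shape,
    e.g. (3, 72, 128) observations or (1,) returns. Uses the parallel
    mean/variance combination; not Sample Factory's RunningMeanStdInPlace."""

    def __init__(self, shape, epsilon=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = epsilon  # avoids division by zero before the first batch

    def update(self, batch):
        # batch: (N, *shape) -- fold batch statistics into the running ones
        batch_mean, batch_var, n = batch.mean(0), batch.var(0), batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        m_a = self.var * self.count
        m_b = batch_var * n
        self.var = (m_a + m_b + delta ** 2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x, clip=5.0):
        return np.clip((x - self.mean) / np.sqrt(self.var + 1e-8), -clip, clip)
```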
[2023-02-25 19:18:19,530][19851] Created Actor Critic model with architecture:
ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
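Read top to bottom, the printed model is: per-element observation normalization, a three-layer Conv2d+ELU head followed by a Linear+ELU projection to 512 features, a GRU core (512 to 512), an identity decoder, a scalar critic head, and a 5-way action logit head, with actor and critic sharing the same trunk. A plain-PyTorch sketch with the same shapes is below; the conv kernel sizes and strides are assumptions, since the log records layer types but not their hyperparameters.

```python
import torch
from torch import nn


class ActorCriticSketch(nn.Module):
    """Shape-compatible sketch of the logged ActorCriticSharedWeights model.
    Conv kernel sizes/strides are guesses; the log prints only layer types."""

    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        # (conv_head): three Conv2d+ELU blocks over 3x72x128 inputs
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():
            conv_out = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        # (mlp_layers): Linear+ELU projecting to the 512-dim encoder output
        self.mlp_layers = nn.Sequential(nn.Linear(conv_out, hidden), nn.ELU())
        # (core): GRU(512, 512)
        self.core = nn.GRU(hidden, hidden)
        # (critic_linear) and (action_parameterization.distribution_linear)
        self.critic_linear = nn.Linear(hidden, 1)
        self.distribution_linear = nn.Linear(hidden, num_actions)

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```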
[2023-02-25 19:18:26,966][19851] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-25 19:18:26,967][19851] No checkpoints found
[2023-02-25 19:18:26,967][19851] Did not load from checkpoint, starting from scratch!
[2023-02-25 19:18:26,968][19851] Initialized policy 0 weights for model version 0
[2023-02-25 19:18:26,971][19851] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:26,977][19851] LearnerWorker_p0 finished initialization!
[2023-02-25 19:18:27,072][19865] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:27,074][19865] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:27,089][19865] ConvEncoder: input_channels=3
[2023-02-25 19:18:27,190][19865] Conv encoder output size: 512
[2023-02-25 19:18:27,190][19865] Policy head output size: 512
[2023-02-25 19:18:27,772][14226] Heartbeat connected on Batcher_0
[2023-02-25 19:18:27,782][14226] Heartbeat connected on LearnerWorker_p0
[2023-02-25 19:18:27,797][14226] Heartbeat connected on RolloutWorker_w0
[2023-02-25 19:18:27,804][14226] Heartbeat connected on RolloutWorker_w1
[2023-02-25 19:18:27,811][14226] Heartbeat connected on RolloutWorker_w2
[2023-02-25 19:18:27,816][14226] Heartbeat connected on RolloutWorker_w3
[2023-02-25 19:18:27,825][14226] Heartbeat connected on RolloutWorker_w4
[2023-02-25 19:18:27,828][14226] Heartbeat connected on RolloutWorker_w5
[2023-02-25 19:18:27,831][14226] Heartbeat connected on RolloutWorker_w6
[2023-02-25 19:18:27,837][14226] Heartbeat connected on RolloutWorker_w7
[2023-02-25 19:18:28,315][14226] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:29,503][14226] Inference worker 0-0 is ready!
[2023-02-25 19:18:29,510][14226] All inference workers are ready! Signal rollout workers to start!
[2023-02-25 19:18:29,517][14226] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-25 19:18:29,619][19877] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,630][19875] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,640][19867] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,680][19873] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,692][19866] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,698][19872] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,698][19876] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:29,710][19874] Doom resolution: 160x120, resize resolution: (128, 72)
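Each worker renders Doom at 160x120 and resizes frames to 128x72, matching the (3, 72, 128) channels-first input shape the model was initialized with. A minimal sketch of that transform with OpenCV is below; Sample Factory applies it inside its own environment wrappers, so this is only an illustration.

```python
import cv2
import numpy as np


def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Resize a 120x160x3 (HWC, uint8) Doom frame to the model's (3, 72, 128) input."""
    assert frame.shape == (120, 160, 3)
    # cv2.resize takes (width, height), hence (128, 72) for a 72x128 output
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return resized.transpose(2, 0, 1)  # HWC -> CHW
```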
[2023-02-25 19:18:30,552][19872] Decorrelating experience for 0 frames...
[2023-02-25 19:18:30,554][19876] Decorrelating experience for 0 frames...
[2023-02-25 19:18:30,891][19872] Decorrelating experience for 32 frames...
[2023-02-25 19:18:31,103][19875] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,115][19867] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,112][19877] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,122][19873] Decorrelating experience for 0 frames...
[2023-02-25 19:18:31,306][19872] Decorrelating experience for 64 frames...
[2023-02-25 19:18:31,713][19872] Decorrelating experience for 96 frames...
[2023-02-25 19:18:32,285][19875] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,287][19877] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,319][19867] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,330][19873] Decorrelating experience for 32 frames...
[2023-02-25 19:18:32,763][19866] Decorrelating experience for 0 frames...
[2023-02-25 19:18:33,315][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:33,473][19866] Decorrelating experience for 32 frames...
[2023-02-25 19:18:33,540][19876] Decorrelating experience for 32 frames...
[2023-02-25 19:18:34,002][19874] Decorrelating experience for 0 frames...
[2023-02-25 19:18:34,600][19875] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,603][19877] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,677][19874] Decorrelating experience for 32 frames...
[2023-02-25 19:18:34,693][19873] Decorrelating experience for 64 frames...
[2023-02-25 19:18:34,772][19876] Decorrelating experience for 64 frames...
[2023-02-25 19:18:35,594][19874] Decorrelating experience for 64 frames...
[2023-02-25 19:18:35,651][19876] Decorrelating experience for 96 frames...
[2023-02-25 19:18:36,732][19867] Decorrelating experience for 64 frames...
[2023-02-25 19:18:36,870][19877] Decorrelating experience for 96 frames...
[2023-02-25 19:18:37,010][19873] Decorrelating experience for 96 frames...
[2023-02-25 19:18:38,315][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 1.6. Samples: 16. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:38,320][14226] Avg episode reward: [(0, '1.792')]
[2023-02-25 19:18:40,717][19867] Decorrelating experience for 96 frames...
[2023-02-25 19:18:40,719][19875] Decorrelating experience for 96 frames...
[2023-02-25 19:18:40,754][19874] Decorrelating experience for 96 frames...
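Before regular collection starts, every rollout worker "decorrelates" its experience in 32-frame increments up to 96 frames, so that episode boundaries across workers and environments do not line up. A conceptual sketch of the idea is below; it is an illustration only, not Sample Factory's actual implementation.

```python
import random


def decorrelate(envs, max_frames=96):
    """Conceptual sketch only (not Sample Factory's code): advance each env a
    different number of random-action frames before collection begins, so
    episodes across envs and workers start out of phase."""
    for env in envs:
        env.reset()
        for _ in range(random.randrange(0, max_frames + 1)):
            _, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                env.reset()
```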
[2023-02-25 19:18:43,318][14226] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 119.0. Samples: 1786. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:43,321][14226] Avg episode reward: [(0, '2.936')]
[2023-02-25 19:18:43,816][19851] Signal inference workers to stop experience collection...
[2023-02-25 19:18:43,832][19865] InferenceWorker_p0-w0: stopping experience collection
[2023-02-25 19:18:44,047][19866] Decorrelating experience for 64 frames...
[2023-02-25 19:18:44,670][19866] Decorrelating experience for 96 frames...
[2023-02-25 19:18:45,338][19851] Signal inference workers to resume experience collection...
[2023-02-25 19:18:45,341][19865] InferenceWorker_p0-w0: resuming experience collection
[2023-02-25 19:18:48,317][14226] Fps is (10 sec: 1638.0, 60 sec: 819.1, 300 sec: 819.1). Total num frames: 16384. Throughput: 0: 204.3. Samples: 4086. Policy #0 lag: (min: 0.0, avg: 1.3, max: 3.0)
[2023-02-25 19:18:48,319][14226] Avg episode reward: [(0, '3.181')]
[2023-02-25 19:18:53,315][14226] Fps is (10 sec: 3687.7, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 36864. Throughput: 0: 292.2. Samples: 7304. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:18:53,322][14226] Avg episode reward: [(0, '3.828')]
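From here the log settles into a fixed rhythm: every five seconds an "Fps is ..." line with 10/60/300-second throughput windows, total frames, sample counts, and policy lag, followed by the running average episode reward. A small parser for those two line formats (matching the literal text above) is sketched below for plotting training progress offline.

```python
import re

# Matches the "Fps is (...)" and "Avg episode reward: ..." lines verbatim.
FPS_RE = re.compile(
    r"Fps is \(10 sec: ([\d.]+|nan), 60 sec: ([\d.]+|nan), 300 sec: ([\d.]+|nan)\)\. "
    r"Total num frames: (\d+)\."
)
REWARD_RE = re.compile(r"Avg episode reward: \[\(0, '([-\d.]+)'\)\]")


def parse_log(path):
    """Extract (total_frames, 10-second fps) pairs and avg episode rewards."""
    frames, fps, rewards = [], [], []
    with open(path) as f:
        for line in f:
            if m := FPS_RE.search(line):
                fps.append(float(m[1]))  # float("nan") parses fine
                frames.append(int(m[4]))
            elif m := REWARD_RE.search(line):
                rewards.append(float(m[1]))
    return frames, fps, rewards
```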
[2023-02-25 19:18:53,896][19865] Updated weights for policy 0, policy_version 10 (0.0367)
[2023-02-25 19:18:58,315][14226] Fps is (10 sec: 3687.3, 60 sec: 1774.9, 300 sec: 1774.9). Total num frames: 53248. Throughput: 0: 444.7. Samples: 13342. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:18:58,318][14226] Avg episode reward: [(0, '4.401')]
[2023-02-25 19:19:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 1989.5, 300 sec: 1989.5). Total num frames: 69632. Throughput: 0: 500.6. Samples: 17520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:19:03,321][14226] Avg episode reward: [(0, '4.379')]
[2023-02-25 19:19:06,605][19865] Updated weights for policy 0, policy_version 20 (0.0026)
[2023-02-25 19:19:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 2150.4, 300 sec: 2150.4). Total num frames: 86016. Throughput: 0: 495.5. Samples: 19822. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:08,322][14226] Avg episode reward: [(0, '4.350')]
[2023-02-25 19:19:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 2366.6, 300 sec: 2366.6). Total num frames: 106496. Throughput: 0: 580.7. Samples: 26132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:13,323][14226] Avg episode reward: [(0, '4.350')]
[2023-02-25 19:19:13,376][19851] Saving new best policy, reward=4.350!
[2023-02-25 19:19:17,093][19865] Updated weights for policy 0, policy_version 30 (0.0021)
[2023-02-25 19:19:18,316][14226] Fps is (10 sec: 3686.2, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 122880. Throughput: 0: 701.1. Samples: 31548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:18,320][14226] Avg episode reward: [(0, '4.438')]
[2023-02-25 19:19:18,394][19851] Saving new best policy, reward=4.438!
[2023-02-25 19:19:23,317][14226] Fps is (10 sec: 3276.1, 60 sec: 2532.0, 300 sec: 2532.0). Total num frames: 139264. Throughput: 0: 745.0. Samples: 33544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:19:23,320][14226] Avg episode reward: [(0, '4.402')]
[2023-02-25 19:19:28,315][14226] Fps is (10 sec: 3277.0, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 155648. Throughput: 0: 813.5. Samples: 38392. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:28,323][14226] Avg episode reward: [(0, '4.139')]
[2023-02-25 19:19:29,571][19865] Updated weights for policy 0, policy_version 40 (0.0026)
[2023-02-25 19:19:33,315][14226] Fps is (10 sec: 4096.9, 60 sec: 3003.7, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 907.8. Samples: 44934. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:19:33,318][14226] Avg episode reward: [(0, '4.429')]
[2023-02-25 19:19:38,317][14226] Fps is (10 sec: 4095.1, 60 sec: 3276.7, 300 sec: 2808.6). Total num frames: 196608. Throughput: 0: 903.9. Samples: 47980. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:38,322][14226] Avg episode reward: [(0, '4.523')]
[2023-02-25 19:19:38,326][19851] Saving new best policy, reward=4.523!
[2023-02-25 19:19:40,773][19865] Updated weights for policy 0, policy_version 50 (0.0022)
[2023-02-25 19:19:43,315][14226] Fps is (10 sec: 2867.1, 60 sec: 3481.8, 300 sec: 2785.3). Total num frames: 208896. Throughput: 0: 861.2. Samples: 52098. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:43,322][14226] Avg episode reward: [(0, '4.520')]
[2023-02-25 19:19:48,315][14226] Fps is (10 sec: 3277.5, 60 sec: 3550.0, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 887.5. Samples: 57458. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:48,318][14226] Avg episode reward: [(0, '4.441')]
[2023-02-25 19:19:51,553][19865] Updated weights for policy 0, policy_version 60 (0.0013)
[2023-02-25 19:19:53,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 2939.5). Total num frames: 249856. Throughput: 0: 909.9. Samples: 60768. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:53,323][14226] Avg episode reward: [(0, '4.447')]
[2023-02-25 19:19:58,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 2958.2). Total num frames: 266240. Throughput: 0: 900.1. Samples: 66636. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:58,318][14226] Avg episode reward: [(0, '4.563')]
[2023-02-25 19:19:58,321][19851] Saving new best policy, reward=4.563!
[2023-02-25 19:20:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2975.0). Total num frames: 282624. Throughput: 0: 870.8. Samples: 70734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:20:03,325][14226] Avg episode reward: [(0, '4.480')]
[2023-02-25 19:20:03,343][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth...
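Checkpoints are named checkpoint_<policy_version>_<env_frames>.pth, so this first one records policy version 69 after 282,624 environment frames. A hedged sketch for inspecting one offline is below; the dictionary layout inside the file is a Sample Factory implementation detail that can change between versions, so the code only loads the file and reports what it contains.

```python
import torch

# Path copied from the log line above.
ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

# Print the top-level structure rather than assuming specific keys.
if isinstance(ckpt, dict):
    for key, value in ckpt.items():
        desc = type(value).__name__
        if isinstance(value, dict):
            desc += f" with {len(value)} entries"
        print(f"{key}: {desc}")
else:
    print(type(ckpt))
```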
[2023-02-25 19:20:04,563][19865] Updated weights for policy 0, policy_version 70 (0.0044)
[2023-02-25 19:20:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 2990.1). Total num frames: 299008. Throughput: 0: 879.2. Samples: 73106. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:08,317][14226] Avg episode reward: [(0, '4.543')]
[2023-02-25 19:20:13,316][14226] Fps is (10 sec: 3686.2, 60 sec: 3549.8, 300 sec: 3042.7). Total num frames: 319488. Throughput: 0: 915.1. Samples: 79570. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:13,320][14226] Avg episode reward: [(0, '4.348')]
[2023-02-25 19:20:14,322][19865] Updated weights for policy 0, policy_version 80 (0.0016)
[2023-02-25 19:20:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3053.4). Total num frames: 335872. Throughput: 0: 887.9. Samples: 84888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:20:18,318][14226] Avg episode reward: [(0, '4.296')]
[2023-02-25 19:20:23,315][14226] Fps is (10 sec: 3277.0, 60 sec: 3550.0, 300 sec: 3063.1). Total num frames: 352256. Throughput: 0: 864.8. Samples: 86896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:23,323][14226] Avg episode reward: [(0, '4.383')]
[2023-02-25 19:20:27,503][19865] Updated weights for policy 0, policy_version 90 (0.0019)
[2023-02-25 19:20:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3072.0). Total num frames: 368640. Throughput: 0: 882.7. Samples: 91818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:28,331][14226] Avg episode reward: [(0, '4.467')]
[2023-02-25 19:20:33,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3145.7). Total num frames: 393216. Throughput: 0: 906.7. Samples: 98260. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:20:33,324][14226] Avg episode reward: [(0, '4.713')]
[2023-02-25 19:20:33,336][19851] Saving new best policy, reward=4.713!
[2023-02-25 19:20:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3119.3). Total num frames: 405504. Throughput: 0: 893.5. Samples: 100974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:38,322][14226] Avg episode reward: [(0, '4.881')]
[2023-02-25 19:20:38,332][19851] Saving new best policy, reward=4.881!
[2023-02-25 19:20:38,703][19865] Updated weights for policy 0, policy_version 100 (0.0017)
[2023-02-25 19:20:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3125.1). Total num frames: 421888. Throughput: 0: 852.2. Samples: 104986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:20:43,323][14226] Avg episode reward: [(0, '4.685')]
[2023-02-25 19:20:48,321][14226] Fps is (10 sec: 3274.7, 60 sec: 3481.2, 300 sec: 3130.4). Total num frames: 438272. Throughput: 0: 864.5. Samples: 109644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:20:48,324][14226] Avg episode reward: [(0, '4.658')]
[2023-02-25 19:20:52,425][19865] Updated weights for policy 0, policy_version 110 (0.0038)
[2023-02-25 19:20:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3107.3). Total num frames: 450560. Throughput: 0: 858.8. Samples: 111750. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:53,320][14226] Avg episode reward: [(0, '4.510')]
[2023-02-25 19:20:58,315][14226] Fps is (10 sec: 2459.2, 60 sec: 3276.8, 300 sec: 3085.7). Total num frames: 462848. Throughput: 0: 800.9. Samples: 115612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:58,319][14226] Avg episode reward: [(0, '4.457')]
[2023-02-25 19:21:03,316][14226] Fps is (10 sec: 2866.8, 60 sec: 3276.7, 300 sec: 3091.8). Total num frames: 479232. Throughput: 0: 774.5. Samples: 119740. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:21:03,325][14226] Avg episode reward: [(0, '4.546')]
[2023-02-25 19:21:06,587][19865] Updated weights for policy 0, policy_version 120 (0.0025)
[2023-02-25 19:21:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3097.6). Total num frames: 495616. Throughput: 0: 783.9. Samples: 122172. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:21:08,322][14226] Avg episode reward: [(0, '4.692')]
[2023-02-25 19:21:13,324][14226] Fps is (10 sec: 3683.6, 60 sec: 3276.3, 300 sec: 3127.7). Total num frames: 516096. Throughput: 0: 815.8. Samples: 128536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:13,327][14226] Avg episode reward: [(0, '4.668')]
[2023-02-25 19:21:17,189][19865] Updated weights for policy 0, policy_version 130 (0.0013)
[2023-02-25 19:21:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3132.2). Total num frames: 532480. Throughput: 0: 790.4. Samples: 133828. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:21:18,319][14226] Avg episode reward: [(0, '4.697')]
[2023-02-25 19:21:23,315][14226] Fps is (10 sec: 2869.6, 60 sec: 3208.5, 300 sec: 3113.0). Total num frames: 544768. Throughput: 0: 775.0. Samples: 135850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:23,319][14226] Avg episode reward: [(0, '4.660')]
[2023-02-25 19:21:28,315][14226] Fps is (10 sec: 3276.9, 60 sec: 3276.8, 300 sec: 3140.3). Total num frames: 565248. Throughput: 0: 791.7. Samples: 140612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:28,318][14226] Avg episode reward: [(0, '4.568')]
[2023-02-25 19:21:29,687][19865] Updated weights for policy 0, policy_version 140 (0.0023)
[2023-02-25 19:21:33,315][14226] Fps is (10 sec: 4096.2, 60 sec: 3208.5, 300 sec: 3166.1). Total num frames: 585728. Throughput: 0: 833.5. Samples: 147146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:33,318][14226] Avg episode reward: [(0, '4.555')]
[2023-02-25 19:21:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3169.0). Total num frames: 602112. Throughput: 0: 850.8. Samples: 150036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:38,321][14226] Avg episode reward: [(0, '4.598')]
[2023-02-25 19:21:41,465][19865] Updated weights for policy 0, policy_version 150 (0.0015)
[2023-02-25 19:21:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3171.8). Total num frames: 618496. Throughput: 0: 854.6. Samples: 154070. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:21:43,323][14226] Avg episode reward: [(0, '4.711')]
[2023-02-25 19:21:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3277.2, 300 sec: 3174.4). Total num frames: 634880. Throughput: 0: 881.4. Samples: 159402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:48,318][14226] Avg episode reward: [(0, '4.881')]
[2023-02-25 19:21:52,130][19865] Updated weights for policy 0, policy_version 160 (0.0015)
[2023-02-25 19:21:53,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3216.9). Total num frames: 659456. Throughput: 0: 898.2. Samples: 162590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:53,318][14226] Avg episode reward: [(0, '4.745')]
[2023-02-25 19:21:58,319][14226] Fps is (10 sec: 3684.7, 60 sec: 3481.3, 300 sec: 3198.7). Total num frames: 671744. Throughput: 0: 880.6. Samples: 168160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:58,322][14226] Avg episode reward: [(0, '4.497')]
[2023-02-25 19:22:03,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.7, 300 sec: 3200.6). Total num frames: 688128. Throughput: 0: 854.0. Samples: 172258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:03,323][14226] Avg episode reward: [(0, '4.471')]
[2023-02-25 19:22:03,342][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000168_688128.pth...
[2023-02-25 19:22:05,269][19865] Updated weights for policy 0, policy_version 170 (0.0015)
[2023-02-25 19:22:08,315][14226] Fps is (10 sec: 3688.1, 60 sec: 3549.9, 300 sec: 3220.9). Total num frames: 708608. Throughput: 0: 865.9. Samples: 174814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:08,318][14226] Avg episode reward: [(0, '4.460')]
[2023-02-25 19:22:13,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3550.4, 300 sec: 3240.4). Total num frames: 729088. Throughput: 0: 906.2. Samples: 181390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:13,322][14226] Avg episode reward: [(0, '4.657')]
[2023-02-25 19:22:15,300][19865] Updated weights for policy 0, policy_version 180 (0.0017)
[2023-02-25 19:22:18,316][14226] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3241.2). Total num frames: 745472. Throughput: 0: 874.2. Samples: 186484. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:18,320][14226] Avg episode reward: [(0, '4.803')]
[2023-02-25 19:22:23,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3224.5). Total num frames: 757760. Throughput: 0: 854.8. Samples: 188500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:23,318][14226] Avg episode reward: [(0, '4.814')]
[2023-02-25 19:22:27,865][19865] Updated weights for policy 0, policy_version 190 (0.0034)
[2023-02-25 19:22:28,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3549.9, 300 sec: 3242.7). Total num frames: 778240. Throughput: 0: 880.7. Samples: 193700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:28,318][14226] Avg episode reward: [(0, '4.906')]
[2023-02-25 19:22:28,321][19851] Saving new best policy, reward=4.906!
[2023-02-25 19:22:33,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3260.1). Total num frames: 798720. Throughput: 0: 906.2. Samples: 200182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:33,320][14226] Avg episode reward: [(0, '5.080')]
[2023-02-25 19:22:33,431][19851] Saving new best policy, reward=5.080!
[2023-02-25 19:22:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3260.4). Total num frames: 815104. Throughput: 0: 892.2. Samples: 202740. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:22:38,317][14226] Avg episode reward: [(0, '5.052')]
[2023-02-25 19:22:39,268][19865] Updated weights for policy 0, policy_version 200 (0.0018)
[2023-02-25 19:22:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3244.7). Total num frames: 827392. Throughput: 0: 860.5. Samples: 206880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:43,323][14226] Avg episode reward: [(0, '5.128')]
[2023-02-25 19:22:43,340][19851] Saving new best policy, reward=5.128!
[2023-02-25 19:22:48,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3549.8, 300 sec: 3261.0). Total num frames: 847872. Throughput: 0: 896.3. Samples: 212594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:48,318][14226] Avg episode reward: [(0, '5.166')]
[2023-02-25 19:22:48,324][19851] Saving new best policy, reward=5.166!
[2023-02-25 19:22:50,370][19865] Updated weights for policy 0, policy_version 210 (0.0030)
[2023-02-25 19:22:53,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3292.3). Total num frames: 872448. Throughput: 0: 908.7. Samples: 215706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:53,320][14226] Avg episode reward: [(0, '5.270')]
[2023-02-25 19:22:53,333][19851] Saving new best policy, reward=5.270!
[2023-02-25 19:22:58,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3550.1, 300 sec: 3276.8). Total num frames: 884736. Throughput: 0: 879.1. Samples: 220950. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:58,318][14226] Avg episode reward: [(0, '5.427')]
[2023-02-25 19:22:58,322][19851] Saving new best policy, reward=5.427!
[2023-02-25 19:23:03,320][14226] Fps is (10 sec: 2456.3, 60 sec: 3481.3, 300 sec: 3261.8). Total num frames: 897024. Throughput: 0: 855.3. Samples: 224974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:03,322][14226] Avg episode reward: [(0, '5.288')]
[2023-02-25 19:23:03,648][19865] Updated weights for policy 0, policy_version 220 (0.0015)
[2023-02-25 19:23:08,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 917504. Throughput: 0: 875.9. Samples: 227914. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:08,318][14226] Avg episode reward: [(0, '5.646')]
[2023-02-25 19:23:08,322][19851] Saving new best policy, reward=5.646!
[2023-02-25 19:23:13,274][19865] Updated weights for policy 0, policy_version 230 (0.0013)
[2023-02-25 19:23:13,315][14226] Fps is (10 sec: 4507.9, 60 sec: 3549.9, 300 sec: 3305.5). Total num frames: 942080. Throughput: 0: 902.0. Samples: 234292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:13,325][14226] Avg episode reward: [(0, '5.589')]
[2023-02-25 19:23:18,315][14226] Fps is (10 sec: 3686.5, 60 sec: 3481.7, 300 sec: 3290.9). Total num frames: 954368. Throughput: 0: 864.4. Samples: 239080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:18,319][14226] Avg episode reward: [(0, '5.338')]
[2023-02-25 19:23:23,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 966656. Throughput: 0: 851.6. Samples: 241064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:23,318][14226] Avg episode reward: [(0, '5.171')]
[2023-02-25 19:23:26,260][19865] Updated weights for policy 0, policy_version 240 (0.0027)
[2023-02-25 19:23:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3360.1). Total num frames: 991232. Throughput: 0: 883.4. Samples: 246634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:28,318][14226] Avg episode reward: [(0, '5.395')]
[2023-02-25 19:23:33,318][14226] Fps is (10 sec: 4504.0, 60 sec: 3549.7, 300 sec: 3429.5). Total num frames: 1011712. Throughput: 0: 903.2. Samples: 253240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:23:33,321][14226] Avg episode reward: [(0, '5.912')]
[2023-02-25 19:23:33,337][19851] Saving new best policy, reward=5.912!
[2023-02-25 19:23:36,899][19865] Updated weights for policy 0, policy_version 250 (0.0013)
[2023-02-25 19:23:38,318][14226] Fps is (10 sec: 3276.0, 60 sec: 3481.4, 300 sec: 3471.2). Total num frames: 1024000. Throughput: 0: 884.1. Samples: 255494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:38,320][14226] Avg episode reward: [(0, '6.284')]
[2023-02-25 19:23:38,335][19851] Saving new best policy, reward=6.284!
[2023-02-25 19:23:43,315][14226] Fps is (10 sec: 2868.1, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1040384. Throughput: 0: 861.0. Samples: 259696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:23:43,318][14226] Avg episode reward: [(0, '6.602')]
[2023-02-25 19:23:43,327][19851] Saving new best policy, reward=6.602!
[2023-02-25 19:23:48,315][14226] Fps is (10 sec: 3687.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1060864. Throughput: 0: 905.8. Samples: 265732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:48,317][14226] Avg episode reward: [(0, '6.752')]
[2023-02-25 19:23:48,325][19851] Saving new best policy, reward=6.752!
[2023-02-25 19:23:48,720][19865] Updated weights for policy 0, policy_version 260 (0.0013)
[2023-02-25 19:23:53,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 1081344. Throughput: 0: 909.8. Samples: 268856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:53,324][14226] Avg episode reward: [(0, '6.650')]
[2023-02-25 19:23:58,317][14226] Fps is (10 sec: 3685.8, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 1097728. Throughput: 0: 883.3. Samples: 274042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:58,323][14226] Avg episode reward: [(0, '6.485')]
[2023-02-25 19:24:01,036][19865] Updated weights for policy 0, policy_version 270 (0.0028)
[2023-02-25 19:24:03,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3550.2, 300 sec: 3471.2). Total num frames: 1110016. Throughput: 0: 869.6. Samples: 278210. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:24:03,324][14226] Avg episode reward: [(0, '6.814')]
[2023-02-25 19:24:03,415][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth...
[2023-02-25 19:24:03,532][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth
[2023-02-25 19:24:03,550][19851] Saving new best policy, reward=6.814!
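This save/remove pair shows the retention policy in action: each periodic save deletes the oldest regular checkpoint (throughout this log the two newest are kept), while "best policy" snapshots are tracked separately by reward. A sketch of that rotation, with keep=2 inferred from the log rather than read from the config, is below.

```python
from pathlib import Path


def rotate_checkpoints(ckpt_dir: str, keep: int = 2) -> None:
    """Keep only the newest checkpoints, mirroring the rotation logged above.
    keep=2 is inferred from the log (one old checkpoint removed per save);
    the actual Sample Factory setting may differ."""
    # Filenames sort chronologically because the version field is zero-padded.
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep]:
        print(f"Removing {old}")
        old.unlink()
```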
[2023-02-25 19:24:08,315][14226] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1130496. Throughput: 0: 893.4. Samples: 281268. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:24:08,320][14226] Avg episode reward: [(0, '7.101')]
[2023-02-25 19:24:08,326][19851] Saving new best policy, reward=7.101!
[2023-02-25 19:24:11,144][19865] Updated weights for policy 0, policy_version 280 (0.0020)
[2023-02-25 19:24:13,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1155072. Throughput: 0: 910.8. Samples: 287620. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:24:13,317][14226] Avg episode reward: [(0, '7.348')]
[2023-02-25 19:24:13,332][19851] Saving new best policy, reward=7.348!
[2023-02-25 19:24:18,316][14226] Fps is (10 sec: 3685.8, 60 sec: 3549.8, 300 sec: 3485.1). Total num frames: 1167360. Throughput: 0: 866.6. Samples: 292234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:18,321][14226] Avg episode reward: [(0, '7.290')]
[2023-02-25 19:24:23,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1183744. Throughput: 0: 862.0. Samples: 294282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:24:23,321][14226] Avg episode reward: [(0, '7.303')]
[2023-02-25 19:24:24,133][19865] Updated weights for policy 0, policy_version 290 (0.0021)
[2023-02-25 19:24:28,315][14226] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1204224. Throughput: 0: 898.5. Samples: 300128. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:24:28,320][14226] Avg episode reward: [(0, '7.302')]
[2023-02-25 19:24:33,315][14226] Fps is (10 sec: 4095.9, 60 sec: 3550.1, 300 sec: 3485.1). Total num frames: 1224704. Throughput: 0: 910.7. Samples: 306712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:33,320][14226] Avg episode reward: [(0, '7.641')]
[2023-02-25 19:24:33,340][19851] Saving new best policy, reward=7.641!
[2023-02-25 19:24:34,070][19865] Updated weights for policy 0, policy_version 300 (0.0033)
[2023-02-25 19:24:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3485.1). Total num frames: 1236992. Throughput: 0: 885.3. Samples: 308696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:24:38,322][14226] Avg episode reward: [(0, '7.532')]
[2023-02-25 19:24:43,315][14226] Fps is (10 sec: 2867.3, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 1253376. Throughput: 0: 861.9. Samples: 312824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:24:43,322][14226] Avg episode reward: [(0, '7.387')]
[2023-02-25 19:24:46,358][19865] Updated weights for policy 0, policy_version 310 (0.0037)
[2023-02-25 19:24:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1277952. Throughput: 0: 913.9. Samples: 319336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:48,318][14226] Avg episode reward: [(0, '6.834')]
[2023-02-25 19:24:53,322][14226] Fps is (10 sec: 4502.6, 60 sec: 3617.7, 300 sec: 3498.9). Total num frames: 1298432. Throughput: 0: 918.1. Samples: 322588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:53,323][14226] Avg episode reward: [(0, '6.901')]
[2023-02-25 19:24:57,731][19865] Updated weights for policy 0, policy_version 320 (0.0022)
[2023-02-25 19:24:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3485.1). Total num frames: 1310720. Throughput: 0: 884.2. Samples: 327410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:58,317][14226] Avg episode reward: [(0, '7.164')]
[2023-02-25 19:25:03,315][14226] Fps is (10 sec: 2869.1, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1327104. Throughput: 0: 884.1. Samples: 332018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:03,317][14226] Avg episode reward: [(0, '7.855')]
[2023-02-25 19:25:03,334][19851] Saving new best policy, reward=7.855!
[2023-02-25 19:25:08,295][19865] Updated weights for policy 0, policy_version 330 (0.0021)
[2023-02-25 19:25:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3499.0). Total num frames: 1351680. Throughput: 0: 913.4. Samples: 335386. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:08,318][14226] Avg episode reward: [(0, '8.127')]
[2023-02-25 19:25:08,328][19851] Saving new best policy, reward=8.127!
[2023-02-25 19:25:13,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3499.0). Total num frames: 1368064. Throughput: 0: 929.8. Samples: 341970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:13,321][14226] Avg episode reward: [(0, '9.417')]
[2023-02-25 19:25:13,332][19851] Saving new best policy, reward=9.417!
[2023-02-25 19:25:18,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3499.0). Total num frames: 1384448. Throughput: 0: 881.5. Samples: 346378. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:18,323][14226] Avg episode reward: [(0, '9.889')]
[2023-02-25 19:25:18,329][19851] Saving new best policy, reward=9.889!
[2023-02-25 19:25:21,017][19865] Updated weights for policy 0, policy_version 340 (0.0018)
[2023-02-25 19:25:23,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3498.9). Total num frames: 1400832. Throughput: 0: 882.8. Samples: 348422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:23,319][14226] Avg episode reward: [(0, '10.379')]
[2023-02-25 19:25:23,340][19851] Saving new best policy, reward=10.379!
[2023-02-25 19:25:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 1421312. Throughput: 0: 929.0. Samples: 354628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:25:28,323][14226] Avg episode reward: [(0, '10.857')]
[2023-02-25 19:25:28,329][19851] Saving new best policy, reward=10.857!
[2023-02-25 19:25:30,421][19865] Updated weights for policy 0, policy_version 350 (0.0013)
[2023-02-25 19:25:33,315][14226] Fps is (10 sec: 4096.3, 60 sec: 3618.2, 300 sec: 3512.8). Total num frames: 1441792. Throughput: 0: 924.9. Samples: 360956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:33,320][14226] Avg episode reward: [(0, '10.658')]
[2023-02-25 19:25:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 1458176. Throughput: 0: 900.2. Samples: 363092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:38,318][14226] Avg episode reward: [(0, '10.712')]
[2023-02-25 19:25:43,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 1470464. Throughput: 0: 893.9. Samples: 367636. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:43,318][14226] Avg episode reward: [(0, '10.512')]
[2023-02-25 19:25:43,550][19865] Updated weights for policy 0, policy_version 360 (0.0033)
[2023-02-25 19:25:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3512.8). Total num frames: 1486848. Throughput: 0: 891.8. Samples: 372148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:48,318][14226] Avg episode reward: [(0, '11.118')]
[2023-02-25 19:25:48,325][19851] Saving new best policy, reward=11.118!
[2023-02-25 19:25:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.4, 300 sec: 3512.8). Total num frames: 1499136. Throughput: 0: 862.4. Samples: 374194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:53,322][14226] Avg episode reward: [(0, '11.432')]
[2023-02-25 19:25:53,335][19851] Saving new best policy, reward=11.432!
[2023-02-25 19:25:58,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 1511424. Throughput: 0: 805.6. Samples: 378220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:58,317][14226] Avg episode reward: [(0, '11.527')]
[2023-02-25 19:25:58,402][19851] Saving new best policy, reward=11.527!
[2023-02-25 19:25:58,416][19865] Updated weights for policy 0, policy_version 370 (0.0037)
[2023-02-25 19:26:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 1531904. Throughput: 0: 816.9. Samples: 383140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:03,318][14226] Avg episode reward: [(0, '12.076')]
[2023-02-25 19:26:03,330][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000374_1531904.pth...
[2023-02-25 19:26:03,453][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000168_688128.pth
[2023-02-25 19:26:03,462][19851] Saving new best policy, reward=12.076!
[2023-02-25 19:26:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3512.9). Total num frames: 1552384. Throughput: 0: 842.9. Samples: 386352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:08,318][14226] Avg episode reward: [(0, '12.471')]
[2023-02-25 19:26:08,323][19851] Saving new best policy, reward=12.471!
[2023-02-25 19:26:08,650][19865] Updated weights for policy 0, policy_version 380 (0.0027)
[2023-02-25 19:26:13,321][14226] Fps is (10 sec: 4093.7, 60 sec: 3413.0, 300 sec: 3526.7). Total num frames: 1572864. Throughput: 0: 846.5. Samples: 392724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:26:13,326][14226] Avg episode reward: [(0, '12.906')]
[2023-02-25 19:26:13,344][19851] Saving new best policy, reward=12.906!
[2023-02-25 19:26:18,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 1585152. Throughput: 0: 799.5. Samples: 396934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:26:18,320][14226] Avg episode reward: [(0, '13.514')]
[2023-02-25 19:26:18,326][19851] Saving new best policy, reward=13.514!
[2023-02-25 19:26:21,565][19865] Updated weights for policy 0, policy_version 390 (0.0029)
[2023-02-25 19:26:23,315][14226] Fps is (10 sec: 2868.8, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 1601536. Throughput: 0: 797.9. Samples: 398998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:23,324][14226] Avg episode reward: [(0, '14.463')]
[2023-02-25 19:26:23,388][19851] Saving new best policy, reward=14.463!
[2023-02-25 19:26:28,316][14226] Fps is (10 sec: 4095.7, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 1626112. Throughput: 0: 841.9. Samples: 405520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:28,319][14226] Avg episode reward: [(0, '13.096')]
[2023-02-25 19:26:30,593][19865] Updated weights for policy 0, policy_version 400 (0.0022)
[2023-02-25 19:26:33,317][14226] Fps is (10 sec: 4095.3, 60 sec: 3345.0, 300 sec: 3526.7). Total num frames: 1642496. Throughput: 0: 875.6. Samples: 411550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:33,322][14226] Avg episode reward: [(0, '13.859')]
[2023-02-25 19:26:38,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 1658880. Throughput: 0: 877.4. Samples: 413678. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:38,318][14226] Avg episode reward: [(0, '13.539')]
[2023-02-25 19:26:43,051][19865] Updated weights for policy 0, policy_version 410 (0.0018)
[2023-02-25 19:26:43,315][14226] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1679360. Throughput: 0: 895.9. Samples: 418534. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:26:43,317][14226] Avg episode reward: [(0, '13.623')]
[2023-02-25 19:26:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1699840. Throughput: 0: 936.2. Samples: 425268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:48,324][14226] Avg episode reward: [(0, '15.830')]
[2023-02-25 19:26:48,328][19851] Saving new best policy, reward=15.830!
[2023-02-25 19:26:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3540.7). Total num frames: 1716224. Throughput: 0: 934.4. Samples: 428402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:53,327][14226] Avg episode reward: [(0, '16.726')]
[2023-02-25 19:26:53,340][19851] Saving new best policy, reward=16.726!
[2023-02-25 19:26:53,771][19865] Updated weights for policy 0, policy_version 420 (0.0015)
[2023-02-25 19:26:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1732608. Throughput: 0: 884.3. Samples: 432512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:26:58,322][14226] Avg episode reward: [(0, '17.417')]
[2023-02-25 19:26:58,330][19851] Saving new best policy, reward=17.417!
[2023-02-25 19:27:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1753088. Throughput: 0: 908.5. Samples: 437816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:03,318][14226] Avg episode reward: [(0, '17.333')]
[2023-02-25 19:27:05,135][19865] Updated weights for policy 0, policy_version 430 (0.0017)
[2023-02-25 19:27:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1773568. Throughput: 0: 938.8. Samples: 441246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:27:08,318][14226] Avg episode reward: [(0, '17.300')]
[2023-02-25 19:27:13,318][14226] Fps is (10 sec: 3685.4, 60 sec: 3618.3, 300 sec: 3540.6). Total num frames: 1789952. Throughput: 0: 925.6. Samples: 447172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:13,321][14226] Avg episode reward: [(0, '17.437')]
[2023-02-25 19:27:13,333][19851] Saving new best policy, reward=17.437!
[2023-02-25 19:27:17,270][19865] Updated weights for policy 0, policy_version 440 (0.0017)
[2023-02-25 19:27:18,316][14226] Fps is (10 sec: 2867.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1802240. Throughput: 0: 884.6. Samples: 451354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:18,319][14226] Avg episode reward: [(0, '15.806')]
[2023-02-25 19:27:23,315][14226] Fps is (10 sec: 3277.6, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 1822720. Throughput: 0: 890.6. Samples: 453756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:27:23,325][14226] Avg episode reward: [(0, '15.729')]
[2023-02-25 19:27:27,769][19865] Updated weights for policy 0, policy_version 450 (0.0013)
[2023-02-25 19:27:28,315][14226] Fps is (10 sec: 4096.3, 60 sec: 3618.2, 300 sec: 3540.6). Total num frames: 1843200. Throughput: 0: 922.1. Samples: 460030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:27:28,326][14226] Avg episode reward: [(0, '15.815')]
[2023-02-25 19:27:33,317][14226] Fps is (10 sec: 3685.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1859584. Throughput: 0: 891.2. Samples: 465372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:33,321][14226] Avg episode reward: [(0, '15.677')]
[2023-02-25 19:27:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1871872. Throughput: 0: 866.0. Samples: 467372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:38,321][14226] Avg episode reward: [(0, '15.586')]
[2023-02-25 19:27:40,867][19865] Updated weights for policy 0, policy_version 460 (0.0021)
[2023-02-25 19:27:43,315][14226] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1892352. Throughput: 0: 882.2. Samples: 472210. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:27:43,318][14226] Avg episode reward: [(0, '16.347')]
[2023-02-25 19:27:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1912832. Throughput: 0: 904.7. Samples: 478526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:27:48,326][14226] Avg episode reward: [(0, '17.477')]
[2023-02-25 19:27:48,328][19851] Saving new best policy, reward=17.477!
[2023-02-25 19:27:51,636][19865] Updated weights for policy 0, policy_version 470 (0.0025)
[2023-02-25 19:27:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1929216. Throughput: 0: 888.0. Samples: 481206. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:27:53,323][14226] Avg episode reward: [(0, '18.722')]
[2023-02-25 19:27:53,342][19851] Saving new best policy, reward=18.722!
[2023-02-25 19:27:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3540.7). Total num frames: 1941504. Throughput: 0: 843.3. Samples: 485120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:58,323][14226] Avg episode reward: [(0, '19.037')]
[2023-02-25 19:27:58,327][19851] Saving new best policy, reward=19.037!
[2023-02-25 19:28:03,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1961984. Throughput: 0: 866.9. Samples: 490362. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:03,317][14226] Avg episode reward: [(0, '18.261')]
[2023-02-25 19:28:03,330][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth...
[2023-02-25 19:28:03,467][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth
[2023-02-25 19:28:04,134][19865] Updated weights for policy 0, policy_version 480 (0.0014)
[2023-02-25 19:28:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1982464. Throughput: 0: 881.6. Samples: 493430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:08,318][14226] Avg episode reward: [(0, '19.426')]
[2023-02-25 19:28:08,326][19851] Saving new best policy, reward=19.426!
[2023-02-25 19:28:13,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3526.7). Total num frames: 1994752. Throughput: 0: 860.7. Samples: 498760. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:13,322][14226] Avg episode reward: [(0, '19.911')]
[2023-02-25 19:28:13,330][19851] Saving new best policy, reward=19.911!
[2023-02-25 19:28:16,968][19865] Updated weights for policy 0, policy_version 490 (0.0027)
[2023-02-25 19:28:18,315][14226] Fps is (10 sec: 2457.5, 60 sec: 3413.4, 300 sec: 3526.7). Total num frames: 2007040. Throughput: 0: 828.9. Samples: 502672. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:18,318][14226] Avg episode reward: [(0, '19.128')]
[2023-02-25 19:28:23,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2027520. Throughput: 0: 844.0. Samples: 505352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:23,323][14226] Avg episode reward: [(0, '17.767')]
[2023-02-25 19:28:27,496][19865] Updated weights for policy 0, policy_version 500 (0.0031)
[2023-02-25 19:28:28,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 2048000. Throughput: 0: 875.2. Samples: 511592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:28,323][14226] Avg episode reward: [(0, '19.024')]
[2023-02-25 19:28:33,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3526.8). Total num frames: 2064384. Throughput: 0: 843.1. Samples: 516466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:33,321][14226] Avg episode reward: [(0, '19.443')]
[2023-02-25 19:28:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2076672. Throughput: 0: 828.1. Samples: 518472. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:38,326][14226] Avg episode reward: [(0, '18.573')]
[2023-02-25 19:28:40,596][19865] Updated weights for policy 0, policy_version 510 (0.0023)
[2023-02-25 19:28:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2097152. Throughput: 0: 856.9. Samples: 523680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:28:43,322][14226] Avg episode reward: [(0, '20.181')]
[2023-02-25 19:28:43,339][19851] Saving new best policy, reward=20.181!
[2023-02-25 19:28:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2117632. Throughput: 0: 882.4. Samples: 530072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:48,323][14226] Avg episode reward: [(0, '21.558')]
[2023-02-25 19:28:48,329][19851] Saving new best policy, reward=21.558!
[2023-02-25 19:28:51,388][19865] Updated weights for policy 0, policy_version 520 (0.0012)
[2023-02-25 19:28:53,316][14226] Fps is (10 sec: 3686.0, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2134016. Throughput: 0: 865.3. Samples: 532370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:53,321][14226] Avg episode reward: [(0, '22.117')]
[2023-02-25 19:28:53,334][19851] Saving new best policy, reward=22.117!
[2023-02-25 19:28:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2146304. Throughput: 0: 833.4. Samples: 536264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:58,318][14226] Avg episode reward: [(0, '21.822')]
[2023-02-25 19:29:03,315][14226] Fps is (10 sec: 3277.1, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 2166784. Throughput: 0: 877.2. Samples: 542148. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:29:03,318][14226] Avg episode reward: [(0, '22.573')]
[2023-02-25 19:29:03,328][19851] Saving new best policy, reward=22.573!
[2023-02-25 19:29:03,840][19865] Updated weights for policy 0, policy_version 530 (0.0014)
[2023-02-25 19:29:08,316][14226] Fps is (10 sec: 4095.4, 60 sec: 3413.2, 300 sec: 3498.9). Total num frames: 2187264. Throughput: 0: 885.4. Samples: 545196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:29:08,322][14226] Avg episode reward: [(0, '21.887')]
[2023-02-25 19:29:13,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 2199552. Throughput: 0: 851.0. Samples: 549888. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:29:13,323][14226] Avg episode reward: [(0, '20.853')]
[2023-02-25 19:29:16,885][19865] Updated weights for policy 0, policy_version 540 (0.0016)
[2023-02-25 19:29:18,317][14226] Fps is (10 sec: 2867.0, 60 sec: 3481.5, 300 sec: 3498.9). Total num frames: 2215936. Throughput: 0: 832.8. Samples: 553946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:18,320][14226] Avg episode reward: [(0, '20.671')]
[2023-02-25 19:29:23,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 2236416. Throughput: 0: 857.2. Samples: 557048. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-02-25 19:29:23,318][14226] Avg episode reward: [(0, '20.370')]
[2023-02-25 19:29:27,194][19865] Updated weights for policy 0, policy_version 550 (0.0020)
[2023-02-25 19:29:28,315][14226] Fps is (10 sec: 3687.3, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2252800. Throughput: 0: 878.7. Samples: 563220. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:28,318][14226] Avg episode reward: [(0, '19.554')]
[2023-02-25 19:29:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3499.0). Total num frames: 2269184. Throughput: 0: 831.7. Samples: 567500. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:29:33,321][14226] Avg episode reward: [(0, '19.293')]
[2023-02-25 19:29:38,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3485.1). Total num frames: 2281472. Throughput: 0: 822.5. Samples: 569382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:38,318][14226] Avg episode reward: [(0, '20.078')]
[2023-02-25 19:29:40,343][19865] Updated weights for policy 0, policy_version 560 (0.0024)
[2023-02-25 19:29:43,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2306048. Throughput: 0: 867.2. Samples: 575286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:43,325][14226] Avg episode reward: [(0, '19.684')]
[2023-02-25 19:29:48,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3471.3). Total num frames: 2322432. Throughput: 0: 870.8. Samples: 581334. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:48,318][14226] Avg episode reward: [(0, '20.644')]
[2023-02-25 19:29:51,933][19865] Updated weights for policy 0, policy_version 570 (0.0016)
[2023-02-25 19:29:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3471.2). Total num frames: 2334720. Throughput: 0: 846.7. Samples: 583296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:53,321][14226] Avg episode reward: [(0, '20.446')]
[2023-02-25 19:29:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 2351104. Throughput: 0: 830.3. Samples: 587252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:58,318][14226] Avg episode reward: [(0, '22.169')]
[2023-02-25 19:30:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2371584. Throughput: 0: 879.8. Samples: 593534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:03,318][14226] Avg episode reward: [(0, '21.614')]
[2023-02-25 19:30:03,399][19865] Updated weights for policy 0, policy_version 580 (0.0027)
[2023-02-25 19:30:03,405][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000580_2375680.pth...
[2023-02-25 19:30:03,529][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000374_1531904.pth
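
The save/remove pair above is a rolling-checkpoint scheme: each new checkpoint_<policy_version>_<env_frames>.pth is written, then the oldest surviving checkpoint is deleted so only the newest few remain on disk. A minimal sketch of that rotation in Python (the keep_count and the filename parsing are assumptions for illustration, not Sample Factory's exact internals):

    import re
    from pathlib import Path

    def rotate_checkpoints(ckpt_dir, keep_count=2):
        """Keep only the `keep_count` newest checkpoints, deleting older ones."""
        pattern = re.compile(r"checkpoint_(\d+)_(\d+)\.pth")
        ckpts = [p for p in Path(ckpt_dir).glob("checkpoint_*.pth")
                 if pattern.fullmatch(p.name)]
        # sort by policy version, the first number in the filename
        ckpts.sort(key=lambda p: int(pattern.fullmatch(p.name).group(1)))
        for old in ckpts[:-keep_count]:
            old.unlink()  # analogue of the "Removing ..." line above
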
[2023-02-25 19:30:08,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 2392064. Throughput: 0: 877.5. Samples: 596536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:08,325][14226] Avg episode reward: [(0, '22.972')]
[2023-02-25 19:30:08,332][19851] Saving new best policy, reward=22.972!
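
Entries like "Saving new best policy, reward=22.972!" appear whenever the average episode reward exceeds the best value seen so far; the comparison itself is simple bookkeeping, sketched here with illustrative names:

    class BestPolicyTracker:
        """Track the best average episode reward seen so far."""

        def __init__(self):
            self.best_reward = float("-inf")

        def update(self, avg_episode_reward):
            """Return True when the new reward beats the previous best (time to save)."""
            if avg_episode_reward > self.best_reward:
                self.best_reward = avg_episode_reward
                return True
            return False

    # mirroring the log: tracker.update(22.972) -> True -> "Saving new best policy, reward=22.972!"
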
[2023-02-25 19:30:13,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2404352. Throughput: 0: 832.5. Samples: 600682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:13,318][14226] Avg episode reward: [(0, '21.815')]
[2023-02-25 19:30:16,965][19865] Updated weights for policy 0, policy_version 590 (0.0029)
[2023-02-25 19:30:18,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.5, 300 sec: 3457.3). Total num frames: 2420736. Throughput: 0: 840.2. Samples: 605310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:18,318][14226] Avg episode reward: [(0, '21.502')]
[2023-02-25 19:30:23,315][14226] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2441216. Throughput: 0: 866.4. Samples: 608372. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:30:23,317][14226] Avg episode reward: [(0, '23.034')]
[2023-02-25 19:30:23,341][19851] Saving new best policy, reward=23.034!
[2023-02-25 19:30:27,343][19865] Updated weights for policy 0, policy_version 600 (0.0036)
[2023-02-25 19:30:28,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2457600. Throughput: 0: 868.0. Samples: 614348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:28,318][14226] Avg episode reward: [(0, '22.007')]
[2023-02-25 19:30:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 2469888. Throughput: 0: 824.2. Samples: 618422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:33,318][14226] Avg episode reward: [(0, '22.557')]
[2023-02-25 19:30:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2490368. Throughput: 0: 825.8. Samples: 620456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:38,318][14226] Avg episode reward: [(0, '22.130')]
[2023-02-25 19:30:41,184][19865] Updated weights for policy 0, policy_version 610 (0.0022)
[2023-02-25 19:30:43,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 2502656. Throughput: 0: 847.1. Samples: 625370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:43,318][14226] Avg episode reward: [(0, '23.367')]
[2023-02-25 19:30:43,328][19851] Saving new best policy, reward=23.367!
[2023-02-25 19:30:48,315][14226] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3443.4). Total num frames: 2514944. Throughput: 0: 789.7. Samples: 629070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:48,319][14226] Avg episode reward: [(0, '23.750')]
[2023-02-25 19:30:48,324][19851] Saving new best policy, reward=23.750!
[2023-02-25 19:30:53,315][14226] Fps is (10 sec: 2457.7, 60 sec: 3208.5, 300 sec: 3443.4). Total num frames: 2527232. Throughput: 0: 766.0. Samples: 631006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:53,323][14226] Avg episode reward: [(0, '22.551')]
[2023-02-25 19:30:56,738][19865] Updated weights for policy 0, policy_version 620 (0.0026)
[2023-02-25 19:30:58,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2543616. Throughput: 0: 766.2. Samples: 635162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:30:58,324][14226] Avg episode reward: [(0, '21.819')]
[2023-02-25 19:31:03,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2564096. Throughput: 0: 803.5. Samples: 641468. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:03,318][14226] Avg episode reward: [(0, '21.845')]
[2023-02-25 19:31:06,737][19865] Updated weights for policy 0, policy_version 630 (0.0030)
[2023-02-25 19:31:08,315][14226] Fps is (10 sec: 3686.2, 60 sec: 3140.2, 300 sec: 3415.7). Total num frames: 2580480. Throughput: 0: 805.6. Samples: 644626. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:08,318][14226] Avg episode reward: [(0, '22.819')]
[2023-02-25 19:31:13,316][14226] Fps is (10 sec: 3276.3, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2596864. Throughput: 0: 764.7. Samples: 648760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:13,322][14226] Avg episode reward: [(0, '22.386')]
[2023-02-25 19:31:18,315][14226] Fps is (10 sec: 3277.0, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 2613248. Throughput: 0: 776.7. Samples: 653374. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:18,324][14226] Avg episode reward: [(0, '22.061')]
[2023-02-25 19:31:20,082][19865] Updated weights for policy 0, policy_version 640 (0.0025)
[2023-02-25 19:31:23,315][14226] Fps is (10 sec: 3686.9, 60 sec: 3208.5, 300 sec: 3415.7). Total num frames: 2633728. Throughput: 0: 800.1. Samples: 656460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:23,318][14226] Avg episode reward: [(0, '23.373')]
[2023-02-25 19:31:28,319][14226] Fps is (10 sec: 3685.1, 60 sec: 3208.3, 300 sec: 3415.6). Total num frames: 2650112. Throughput: 0: 820.8. Samples: 662308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:28,321][14226] Avg episode reward: [(0, '22.192')]
[2023-02-25 19:31:32,371][19865] Updated weights for policy 0, policy_version 650 (0.0035)
[2023-02-25 19:31:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 2662400. Throughput: 0: 824.4. Samples: 666170. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:31:33,319][14226] Avg episode reward: [(0, '21.930')]
[2023-02-25 19:31:38,315][14226] Fps is (10 sec: 2868.2, 60 sec: 3140.3, 300 sec: 3387.9). Total num frames: 2678784. Throughput: 0: 829.5. Samples: 668334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:38,322][14226] Avg episode reward: [(0, '20.644')]
[2023-02-25 19:31:43,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2699264. Throughput: 0: 875.9. Samples: 674576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:31:43,317][14226] Avg episode reward: [(0, '21.910')]
[2023-02-25 19:31:43,425][19865] Updated weights for policy 0, policy_version 660 (0.0013)
[2023-02-25 19:31:48,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 2715648. Throughput: 0: 850.8. Samples: 679752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:31:48,319][14226] Avg episode reward: [(0, '22.429')]
[2023-02-25 19:31:53,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 2727936. Throughput: 0: 824.0. Samples: 681706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:31:53,325][14226] Avg episode reward: [(0, '21.283')]
[2023-02-25 19:31:56,682][19865] Updated weights for policy 0, policy_version 670 (0.0064)
[2023-02-25 19:31:58,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2748416. Throughput: 0: 839.9. Samples: 686556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:31:58,318][14226] Avg episode reward: [(0, '21.590')]
[2023-02-25 19:32:03,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2768896. Throughput: 0: 876.1. Samples: 692800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:03,317][14226] Avg episode reward: [(0, '22.223')]
[2023-02-25 19:32:03,334][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000676_2768896.pth...
[2023-02-25 19:32:03,456][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000479_1961984.pth
[2023-02-25 19:32:07,739][19865] Updated weights for policy 0, policy_version 680 (0.0013)
[2023-02-25 19:32:08,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3374.0). Total num frames: 2785280. Throughput: 0: 864.3. Samples: 695352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:08,320][14226] Avg episode reward: [(0, '21.635')]
[2023-02-25 19:32:13,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 2797568. Throughput: 0: 822.9. Samples: 699336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:13,322][14226] Avg episode reward: [(0, '21.014')]
[2023-02-25 19:32:18,317][14226] Fps is (10 sec: 3276.3, 60 sec: 3413.2, 300 sec: 3374.0). Total num frames: 2818048. Throughput: 0: 861.8. Samples: 704954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:18,323][14226] Avg episode reward: [(0, '20.343')]
[2023-02-25 19:32:19,631][19865] Updated weights for policy 0, policy_version 690 (0.0030)
[2023-02-25 19:32:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2838528. Throughput: 0: 884.5. Samples: 708136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:23,318][14226] Avg episode reward: [(0, '20.192')]
[2023-02-25 19:32:28,315][14226] Fps is (10 sec: 3686.9, 60 sec: 3413.5, 300 sec: 3374.0). Total num frames: 2854912. Throughput: 0: 858.8. Samples: 713222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:28,321][14226] Avg episode reward: [(0, '21.425')]
[2023-02-25 19:32:32,733][19865] Updated weights for policy 0, policy_version 700 (0.0026)
[2023-02-25 19:32:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2867200. Throughput: 0: 831.2. Samples: 717154. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:33,318][14226] Avg episode reward: [(0, '20.984')]
[2023-02-25 19:32:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2887680. Throughput: 0: 851.0. Samples: 720002. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:38,324][14226] Avg episode reward: [(0, '22.227')]
[2023-02-25 19:32:42,668][19865] Updated weights for policy 0, policy_version 710 (0.0018)
[2023-02-25 19:32:43,318][14226] Fps is (10 sec: 4094.9, 60 sec: 3481.4, 300 sec: 3374.0). Total num frames: 2908160. Throughput: 0: 884.3. Samples: 726350. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:32:43,325][14226] Avg episode reward: [(0, '22.079')]
[2023-02-25 19:32:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 2920448. Throughput: 0: 848.9. Samples: 731000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:48,320][14226] Avg episode reward: [(0, '23.618')]
[2023-02-25 19:32:53,315][14226] Fps is (10 sec: 2868.0, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2936832. Throughput: 0: 837.0. Samples: 733018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:32:53,318][14226] Avg episode reward: [(0, '23.949')]
[2023-02-25 19:32:53,327][19851] Saving new best policy, reward=23.949!
[2023-02-25 19:32:55,899][19865] Updated weights for policy 0, policy_version 720 (0.0018)
[2023-02-25 19:32:58,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2957312. Throughput: 0: 869.0. Samples: 738440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:32:58,318][14226] Avg episode reward: [(0, '23.313')]
[2023-02-25 19:33:03,315][14226] Fps is (10 sec: 4095.9, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 2977792. Throughput: 0: 884.2. Samples: 744744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:03,321][14226] Avg episode reward: [(0, '24.735')]
[2023-02-25 19:33:03,333][19851] Saving new best policy, reward=24.735!
[2023-02-25 19:33:07,268][19865] Updated weights for policy 0, policy_version 730 (0.0016)
[2023-02-25 19:33:08,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 2990080. Throughput: 0: 860.3. Samples: 746848. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:08,319][14226] Avg episode reward: [(0, '24.567')]
[2023-02-25 19:33:13,315][14226] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3006464. Throughput: 0: 833.3. Samples: 750722. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:33:13,317][14226] Avg episode reward: [(0, '25.508')]
[2023-02-25 19:33:13,332][19851] Saving new best policy, reward=25.508!
[2023-02-25 19:33:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3387.9). Total num frames: 3026944. Throughput: 0: 878.0. Samples: 756666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:18,318][14226] Avg episode reward: [(0, '25.210')]
[2023-02-25 19:33:19,139][19865] Updated weights for policy 0, policy_version 740 (0.0017)
[2023-02-25 19:33:23,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3043328. Throughput: 0: 882.9. Samples: 759732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:23,321][14226] Avg episode reward: [(0, '25.449')]
[2023-02-25 19:33:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3059712. Throughput: 0: 844.1. Samples: 764330. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:33:28,320][14226] Avg episode reward: [(0, '24.391')]
[2023-02-25 19:33:32,440][19865] Updated weights for policy 0, policy_version 750 (0.0012)
[2023-02-25 19:33:33,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3072000. Throughput: 0: 837.0. Samples: 768666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:33,318][14226] Avg episode reward: [(0, '24.438')]
[2023-02-25 19:33:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3096576. Throughput: 0: 862.8. Samples: 771842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:33:38,318][14226] Avg episode reward: [(0, '23.098')]
[2023-02-25 19:33:42,148][19865] Updated weights for policy 0, policy_version 760 (0.0017)
[2023-02-25 19:33:43,318][14226] Fps is (10 sec: 4094.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3112960. Throughput: 0: 884.2. Samples: 778230. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:43,321][14226] Avg episode reward: [(0, '22.200')]
[2023-02-25 19:33:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 3129344. Throughput: 0: 835.8. Samples: 782354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:33:48,318][14226] Avg episode reward: [(0, '22.994')]
[2023-02-25 19:33:53,315][14226] Fps is (10 sec: 2868.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3141632. Throughput: 0: 832.4. Samples: 784308. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:33:53,321][14226] Avg episode reward: [(0, '21.716')]
[2023-02-25 19:33:55,444][19865] Updated weights for policy 0, policy_version 770 (0.0027)
[2023-02-25 19:33:58,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3166208. Throughput: 0: 876.6. Samples: 790170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:58,318][14226] Avg episode reward: [(0, '21.671')]
[2023-02-25 19:34:03,322][14226] Fps is (10 sec: 4093.3, 60 sec: 3413.0, 300 sec: 3373.9). Total num frames: 3182592. Throughput: 0: 876.2. Samples: 796102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:03,324][14226] Avg episode reward: [(0, '21.513')]
[2023-02-25 19:34:03,351][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth...
[2023-02-25 19:34:03,563][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000580_2375680.pth
[2023-02-25 19:34:07,374][19865] Updated weights for policy 0, policy_version 780 (0.0016)
[2023-02-25 19:34:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3194880. Throughput: 0: 851.5. Samples: 798048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:08,321][14226] Avg episode reward: [(0, '21.635')]
[2023-02-25 19:34:13,315][14226] Fps is (10 sec: 2869.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3211264. Throughput: 0: 844.0. Samples: 802310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:13,321][14226] Avg episode reward: [(0, '19.901')]
[2023-02-25 19:34:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3231744. Throughput: 0: 888.0. Samples: 808628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:18,318][14226] Avg episode reward: [(0, '20.496')]
[2023-02-25 19:34:18,430][19865] Updated weights for policy 0, policy_version 790 (0.0019)
[2023-02-25 19:34:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3252224. Throughput: 0: 886.5. Samples: 811736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:23,323][14226] Avg episode reward: [(0, '21.328')]
[2023-02-25 19:34:28,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3264512. Throughput: 0: 840.3. Samples: 816042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:28,318][14226] Avg episode reward: [(0, '21.576')]
[2023-02-25 19:34:31,316][19865] Updated weights for policy 0, policy_version 800 (0.0041)
[2023-02-25 19:34:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3284992. Throughput: 0: 862.5. Samples: 821168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:33,318][14226] Avg episode reward: [(0, '22.463')]
[2023-02-25 19:34:38,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3305472. Throughput: 0: 895.9. Samples: 824624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:38,322][14226] Avg episode reward: [(0, '23.978')]
[2023-02-25 19:34:40,239][19865] Updated weights for policy 0, policy_version 810 (0.0039)
[2023-02-25 19:34:43,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3550.1, 300 sec: 3401.8). Total num frames: 3325952. Throughput: 0: 908.7. Samples: 831060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:43,320][14226] Avg episode reward: [(0, '24.812')]
[2023-02-25 19:34:48,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 3338240. Throughput: 0: 874.3. Samples: 835440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:48,321][14226] Avg episode reward: [(0, '24.109')]
[2023-02-25 19:34:52,742][19865] Updated weights for policy 0, policy_version 820 (0.0021)
[2023-02-25 19:34:53,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3415.6). Total num frames: 3358720. Throughput: 0: 883.7. Samples: 837816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:53,322][14226] Avg episode reward: [(0, '22.731')]
[2023-02-25 19:34:58,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3429.5). Total num frames: 3383296. Throughput: 0: 937.0. Samples: 844474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:34:58,325][14226] Avg episode reward: [(0, '22.719')]
[2023-02-25 19:35:02,693][19865] Updated weights for policy 0, policy_version 830 (0.0015)
[2023-02-25 19:35:03,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3618.5, 300 sec: 3415.6). Total num frames: 3399680. Throughput: 0: 927.6. Samples: 850368. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:03,321][14226] Avg episode reward: [(0, '23.358')]
[2023-02-25 19:35:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3415.7). Total num frames: 3411968. Throughput: 0: 904.2. Samples: 852426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:08,319][14226] Avg episode reward: [(0, '23.036')]
[2023-02-25 19:35:13,315][14226] Fps is (10 sec: 3276.6, 60 sec: 3686.4, 300 sec: 3429.5). Total num frames: 3432448. Throughput: 0: 922.3. Samples: 857546. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:13,324][14226] Avg episode reward: [(0, '22.818')]
[2023-02-25 19:35:14,426][19865] Updated weights for policy 0, policy_version 840 (0.0028)
[2023-02-25 19:35:18,323][14226] Fps is (10 sec: 4502.0, 60 sec: 3754.2, 300 sec: 3443.3). Total num frames: 3457024. Throughput: 0: 956.0. Samples: 864196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:18,326][14226] Avg episode reward: [(0, '23.309')]
[2023-02-25 19:35:23,315][14226] Fps is (10 sec: 4096.2, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3473408. Throughput: 0: 944.8. Samples: 867142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:23,318][14226] Avg episode reward: [(0, '22.427')]
[2023-02-25 19:35:25,739][19865] Updated weights for policy 0, policy_version 850 (0.0021)
[2023-02-25 19:35:28,315][14226] Fps is (10 sec: 2869.5, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3485696. Throughput: 0: 899.5. Samples: 871536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:28,323][14226] Avg episode reward: [(0, '22.797')]
[2023-02-25 19:35:33,316][14226] Fps is (10 sec: 3276.6, 60 sec: 3686.4, 300 sec: 3443.4). Total num frames: 3506176. Throughput: 0: 924.6. Samples: 877046. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:35:33,321][14226] Avg episode reward: [(0, '22.067')]
[2023-02-25 19:35:38,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3518464. Throughput: 0: 918.0. Samples: 879128. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:38,317][14226] Avg episode reward: [(0, '23.399')]
[2023-02-25 19:35:38,338][19865] Updated weights for policy 0, policy_version 860 (0.0013)
[2023-02-25 19:35:43,315][14226] Fps is (10 sec: 2457.7, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 3530752. Throughput: 0: 857.9. Samples: 883080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:43,323][14226] Avg episode reward: [(0, '23.213')]
[2023-02-25 19:35:48,315][14226] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3547136. Throughput: 0: 817.6. Samples: 887162. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:48,322][14226] Avg episode reward: [(0, '21.848')]
[2023-02-25 19:35:52,161][19865] Updated weights for policy 0, policy_version 870 (0.0023)
[2023-02-25 19:35:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3567616. Throughput: 0: 827.9. Samples: 889682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:53,318][14226] Avg episode reward: [(0, '22.204')]
[2023-02-25 19:35:58,315][14226] Fps is (10 sec: 4096.1, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3588096. Throughput: 0: 863.7. Samples: 896414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:58,318][14226] Avg episode reward: [(0, '22.700')]
[2023-02-25 19:36:01,954][19865] Updated weights for policy 0, policy_version 880 (0.0014)
[2023-02-25 19:36:03,315][14226] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3604480. Throughput: 0: 843.8. Samples: 902160. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:36:03,320][14226] Avg episode reward: [(0, '22.883')]
[2023-02-25 19:36:03,332][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth...
[2023-02-25 19:36:03,514][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000676_2768896.pth
[2023-02-25 19:36:08,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3620864. Throughput: 0: 823.0. Samples: 904178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:08,318][14226] Avg episode reward: [(0, '22.916')]
[2023-02-25 19:36:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3641344. Throughput: 0: 845.2. Samples: 909570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:13,317][14226] Avg episode reward: [(0, '22.524')]
[2023-02-25 19:36:13,555][19865] Updated weights for policy 0, policy_version 890 (0.0012)
[2023-02-25 19:36:18,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3482.1, 300 sec: 3499.0). Total num frames: 3665920. Throughput: 0: 874.8. Samples: 916410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:36:18,318][14226] Avg episode reward: [(0, '23.046')]
[2023-02-25 19:36:23,315][14226] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3682304. Throughput: 0: 893.7. Samples: 919344. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:23,325][14226] Avg episode reward: [(0, '22.642')]
[2023-02-25 19:36:24,639][19865] Updated weights for policy 0, policy_version 900 (0.0012)
[2023-02-25 19:36:28,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3499.0). Total num frames: 3694592. Throughput: 0: 900.4. Samples: 923598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:28,318][14226] Avg episode reward: [(0, '22.364')]
[2023-02-25 19:36:33,315][14226] Fps is (10 sec: 3686.2, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3719168. Throughput: 0: 939.0. Samples: 929418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:33,324][14226] Avg episode reward: [(0, '22.105')]
[2023-02-25 19:36:35,400][19865] Updated weights for policy 0, policy_version 910 (0.0029)
[2023-02-25 19:36:38,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3739648. Throughput: 0: 955.6. Samples: 932684. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:36:38,318][14226] Avg episode reward: [(0, '23.615')]
[2023-02-25 19:36:43,315][14226] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3512.8). Total num frames: 3751936. Throughput: 0: 931.9. Samples: 938350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:43,318][14226] Avg episode reward: [(0, '23.424')]
[2023-02-25 19:36:47,799][19865] Updated weights for policy 0, policy_version 920 (0.0038)
[2023-02-25 19:36:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3768320. Throughput: 0: 897.8. Samples: 942560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:48,318][14226] Avg episode reward: [(0, '24.465')]
[2023-02-25 19:36:53,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3788800. Throughput: 0: 916.3. Samples: 945412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:36:53,318][14226] Avg episode reward: [(0, '24.379')]
[2023-02-25 19:36:57,361][19865] Updated weights for policy 0, policy_version 930 (0.0025)
[2023-02-25 19:36:58,315][14226] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3540.6). Total num frames: 3813376. Throughput: 0: 943.6. Samples: 952034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:36:58,317][14226] Avg episode reward: [(0, '25.000')]
[2023-02-25 19:37:03,315][14226] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3526.7). Total num frames: 3825664. Throughput: 0: 908.0. Samples: 957268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:03,326][14226] Avg episode reward: [(0, '25.342')]
[2023-02-25 19:37:08,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3842048. Throughput: 0: 887.8. Samples: 959294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:37:08,321][14226] Avg episode reward: [(0, '24.691')]
[2023-02-25 19:37:10,035][19865] Updated weights for policy 0, policy_version 940 (0.0025)
[2023-02-25 19:37:13,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3540.6). Total num frames: 3862528. Throughput: 0: 908.0. Samples: 964460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:37:13,318][14226] Avg episode reward: [(0, '24.170')]
[2023-02-25 19:37:18,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3878912. Throughput: 0: 913.7. Samples: 970536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:18,318][14226] Avg episode reward: [(0, '24.228')]
[2023-02-25 19:37:21,494][19865] Updated weights for policy 0, policy_version 950 (0.0013)
[2023-02-25 19:37:23,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3895296. Throughput: 0: 891.3. Samples: 972794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:23,322][14226] Avg episode reward: [(0, '26.323')]
[2023-02-25 19:37:23,338][19851] Saving new best policy, reward=26.323!
[2023-02-25 19:37:28,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 3907584. Throughput: 0: 846.6. Samples: 976448. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:37:28,318][14226] Avg episode reward: [(0, '26.837')]
[2023-02-25 19:37:28,324][19851] Saving new best policy, reward=26.837!
[2023-02-25 19:37:33,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3928064. Throughput: 0: 873.6. Samples: 981874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:37:33,323][14226] Avg episode reward: [(0, '25.361')]
[2023-02-25 19:37:34,138][19865] Updated weights for policy 0, policy_version 960 (0.0019)
[2023-02-25 19:37:38,315][14226] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3512.9). Total num frames: 3944448. Throughput: 0: 878.2. Samples: 984932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:37:38,318][14226] Avg episode reward: [(0, '26.093')]
[2023-02-25 19:37:43,315][14226] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 3960832. Throughput: 0: 838.4. Samples: 989760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:43,317][14226] Avg episode reward: [(0, '26.741')]
[2023-02-25 19:37:47,713][19865] Updated weights for policy 0, policy_version 970 (0.0021)
[2023-02-25 19:37:48,315][14226] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3973120. Throughput: 0: 808.9. Samples: 993668. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:48,318][14226] Avg episode reward: [(0, '26.553')]
[2023-02-25 19:37:53,315][14226] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3512.8). Total num frames: 3993600. Throughput: 0: 830.7. Samples: 996674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:37:53,318][14226] Avg episode reward: [(0, '24.478')]
[2023-02-25 19:37:55,607][19851] Stopping Batcher_0...
[2023-02-25 19:37:55,608][19851] Loop batcher_evt_loop terminating...
[2023-02-25 19:37:55,609][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:37:55,607][14226] Component Batcher_0 stopped!
[2023-02-25 19:37:55,657][19865] Weights refcount: 2 0
[2023-02-25 19:37:55,671][19865] Stopping InferenceWorker_p0-w0...
[2023-02-25 19:37:55,672][19865] Loop inference_proc0-0_evt_loop terminating...
[2023-02-25 19:37:55,671][14226] Component InferenceWorker_p0-w0 stopped!
[2023-02-25 19:37:55,699][19875] Stopping RolloutWorker_w5...
[2023-02-25 19:37:55,699][14226] Component RolloutWorker_w2 stopped!
[2023-02-25 19:37:55,708][14226] Component RolloutWorker_w5 stopped!
[2023-02-25 19:37:55,716][19873] Stopping RolloutWorker_w3...
[2023-02-25 19:37:55,717][19873] Loop rollout_proc3_evt_loop terminating...
[2023-02-25 19:37:55,717][19872] Stopping RolloutWorker_w2...
[2023-02-25 19:37:55,717][19872] Loop rollout_proc2_evt_loop terminating...
[2023-02-25 19:37:55,718][19875] Loop rollout_proc5_evt_loop terminating...
[2023-02-25 19:37:55,716][14226] Component RolloutWorker_w3 stopped!
[2023-02-25 19:37:55,726][14226] Component RolloutWorker_w7 stopped!
[2023-02-25 19:37:55,726][19877] Stopping RolloutWorker_w7...
[2023-02-25 19:37:55,734][19877] Loop rollout_proc7_evt_loop terminating...
[2023-02-25 19:37:55,741][14226] Component RolloutWorker_w1 stopped!
[2023-02-25 19:37:55,741][19867] Stopping RolloutWorker_w1...
[2023-02-25 19:37:55,744][19867] Loop rollout_proc1_evt_loop terminating...
[2023-02-25 19:37:55,745][19876] Stopping RolloutWorker_w6...
[2023-02-25 19:37:55,745][14226] Component RolloutWorker_w6 stopped!
[2023-02-25 19:37:55,745][19876] Loop rollout_proc6_evt_loop terminating...
[2023-02-25 19:37:55,756][19874] Stopping RolloutWorker_w4...
[2023-02-25 19:37:55,758][19874] Loop rollout_proc4_evt_loop terminating...
[2023-02-25 19:37:55,756][14226] Component RolloutWorker_w4 stopped!
[2023-02-25 19:37:55,760][19866] Stopping RolloutWorker_w0...
[2023-02-25 19:37:55,760][14226] Component RolloutWorker_w0 stopped!
[2023-02-25 19:37:55,773][19866] Loop rollout_proc0_evt_loop terminating...
[2023-02-25 19:37:55,809][19851] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth
[2023-02-25 19:37:55,826][19851] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:37:56,028][19851] Stopping LearnerWorker_p0...
[2023-02-25 19:37:56,028][19851] Loop learner_proc0_evt_loop terminating...
[2023-02-25 19:37:56,027][14226] Component LearnerWorker_p0 stopped!
[2023-02-25 19:37:56,031][14226] Waiting for process learner_proc0 to stop...
[2023-02-25 19:37:57,892][14226] Waiting for process inference_proc0-0 to join...
[2023-02-25 19:37:58,433][14226] Waiting for process rollout_proc0 to join...
[2023-02-25 19:37:59,138][14226] Waiting for process rollout_proc1 to join...
[2023-02-25 19:37:59,147][14226] Waiting for process rollout_proc2 to join...
[2023-02-25 19:37:59,148][14226] Waiting for process rollout_proc3 to join...
[2023-02-25 19:37:59,149][14226] Waiting for process rollout_proc4 to join...
[2023-02-25 19:37:59,151][14226] Waiting for process rollout_proc5 to join...
[2023-02-25 19:37:59,152][14226] Waiting for process rollout_proc6 to join...
[2023-02-25 19:37:59,153][14226] Waiting for process rollout_proc7 to join...
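
The shutdown above is two-phase: every component first stops its event loop and reports "stopped", and only then does the runner wait for each child process to join. A minimal stop-then-join sketch with the standard library (an illustration of the pattern, not Sample Factory's actual event-loop code):

    import multiprocessing as mp

    def component_loop(stop_event):
        """Stand-in for a rollout/inference/learner event loop."""
        stop_event.wait()            # run until asked to stop
        print("component stopped")   # analogue of "Component ... stopped!"

    if __name__ == "__main__":
        stop_event = mp.Event()
        procs = [mp.Process(target=component_loop, args=(stop_event,))
                 for _ in range(8)]
        for p in procs:
            p.start()
        stop_event.set()             # phase 1: signal every loop to terminate
        for p in procs:
            p.join()                 # phase 2: "Waiting for process ... to join..."
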
[2023-02-25 19:37:59,154][14226] Batcher 0 profile tree view:
batching: 26.7102, releasing_batches: 0.0260
[2023-02-25 19:37:59,157][14226] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 559.5307
update_model: 8.4848
  weight_update: 0.0017
one_step: 0.0129
  handle_policy_step: 552.9041
    deserialize: 15.6834, stack: 3.2341, obs_to_device_normalize: 119.6087, forward: 271.1924, send_messages: 27.0588
    prepare_outputs: 88.4472
      to_cpu: 55.1625
[2023-02-25 19:37:59,159][14226] Learner 0 profile tree view:
misc: 0.0056, prepare_batch: 18.1858
train: 76.5167
  epoch_init: 0.0061, minibatch_init: 0.0215, losses_postprocess: 0.5958, kl_divergence: 0.6215, after_optimizer: 33.0838
  calculate_losses: 26.8819
    losses_init: 0.0037, forward_head: 1.7387, bptt_initial: 17.6697, tail: 1.1877, advantages_returns: 0.2623, losses: 3.3503
    bptt: 2.2678
      bptt_forward_core: 2.1907
  update: 14.5884
    clip: 1.4210
[2023-02-25 19:37:59,160][14226] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.4103, enqueue_policy_requests: 156.8018, env_step: 869.3712, overhead: 23.9531, complete_rollouts: 6.7658
save_policy_outputs: 22.0836
  split_output_tensors: 10.9491
[2023-02-25 19:37:59,163][14226] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3762, enqueue_policy_requests: 159.3640, env_step: 873.8569, overhead: 24.0627, complete_rollouts: 7.6767
save_policy_outputs: 21.8641
  split_output_tensors: 10.5020
[2023-02-25 19:37:59,164][14226] Loop Runner_EvtLoop terminating...
[2023-02-25 19:37:59,166][14226] Runner profile tree view:
main_loop: 1191.3321
[2023-02-25 19:37:59,167][14226] Collected {0: 4005888}, FPS: 3362.5
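
The final throughput line agrees with the runner profile just above it: overall FPS is simply total collected frames divided by main-loop wall time.

    total_frames = 4_005_888        # "Collected {0: 4005888}" from the log
    main_loop_seconds = 1191.3321   # "main_loop: 1191.3321" from the runner profile
    print(f"FPS: {total_frames / main_loop_seconds:.1f}")  # -> FPS: 3362.5
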
[2023-02-25 19:37:59,347][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:37:59,350][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:37:59,354][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:37:59,355][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:37:59,357][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:37:59,358][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:37:59,359][14226] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:37:59,360][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:37:59,363][14226] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-25 19:37:59,365][14226] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-25 19:37:59,366][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:37:59,368][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:37:59,369][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:37:59,371][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:37:59,372][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
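
The sequence above rebuilds the evaluation configuration: the saved config.json is loaded, keys passed again on the command line override the stored values, and evaluation-only keys that training never saw are added with a warning. A rough dictionary-based sketch of that merge (illustrative only, not the library's actual argument handling):

    import json

    def merge_eval_config(config_path, cli_overrides):
        """Load the saved training config and fold in evaluation-time CLI args."""
        with open(config_path) as f:
            cfg = json.load(f)
        for key, value in cli_overrides.items():
            if key in cfg:
                print(f"Overriding arg {key!r} with value {value!r} passed from command line")
            else:
                print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
            cfg[key] = value
        return cfg

    # e.g. merge_eval_config(".../config.json", {"num_workers": 1, "no_render": True, "save_video": True})
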
[2023-02-25 19:37:59,405][14226] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:37:59,412][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:37:59,415][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:37:59,443][14226] ConvEncoder: input_channels=3
[2023-02-25 19:38:00,241][14226] Conv encoder output size: 512
[2023-02-25 19:38:00,246][14226] Policy head output size: 512
[2023-02-25 19:38:02,793][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
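
Restoring the policy for evaluation amounts to loading the .pth file and copying its state dict into the freshly built model. A hedged PyTorch sketch; the "model" key used for the state dict is an assumption for illustration, not a confirmed checkpoint layout:

    import torch

    checkpoint = torch.load(
        "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth",
        map_location="cpu",
    )
    # actor_critic.load_state_dict(checkpoint["model"])  # "model" key is assumed
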
[2023-02-25 19:38:04,054][14226] Num frames 100...
[2023-02-25 19:38:04,166][14226] Num frames 200...
[2023-02-25 19:38:04,277][14226] Num frames 300...
[2023-02-25 19:38:04,386][14226] Num frames 400...
[2023-02-25 19:38:04,503][14226] Num frames 500...
[2023-02-25 19:38:04,619][14226] Num frames 600...
[2023-02-25 19:38:04,737][14226] Num frames 700...
[2023-02-25 19:38:04,855][14226] Num frames 800...
[2023-02-25 19:38:04,968][14226] Num frames 900...
[2023-02-25 19:38:05,081][14226] Num frames 1000...
[2023-02-25 19:38:05,205][14226] Num frames 1100...
[2023-02-25 19:38:05,322][14226] Num frames 1200...
[2023-02-25 19:38:05,447][14226] Num frames 1300...
[2023-02-25 19:38:05,576][14226] Num frames 1400...
[2023-02-25 19:38:05,702][14226] Num frames 1500...
[2023-02-25 19:38:05,823][14226] Num frames 1600...
[2023-02-25 19:38:05,985][14226] Avg episode rewards: #0: 39.840, true rewards: #0: 16.840
[2023-02-25 19:38:05,986][14226] Avg episode reward: 39.840, avg true_objective: 16.840
[2023-02-25 19:38:06,012][14226] Num frames 1700...
[2023-02-25 19:38:06,142][14226] Num frames 1800...
[2023-02-25 19:38:06,277][14226] Num frames 1900...
[2023-02-25 19:38:06,404][14226] Num frames 2000...
[2023-02-25 19:38:06,531][14226] Num frames 2100...
[2023-02-25 19:38:06,649][14226] Num frames 2200...
[2023-02-25 19:38:06,770][14226] Num frames 2300...
[2023-02-25 19:38:06,890][14226] Num frames 2400...
[2023-02-25 19:38:07,008][14226] Num frames 2500...
[2023-02-25 19:38:07,128][14226] Num frames 2600...
[2023-02-25 19:38:07,250][14226] Num frames 2700...
[2023-02-25 19:38:07,352][14226] Avg episode rewards: #0: 33.700, true rewards: #0: 13.700
[2023-02-25 19:38:07,356][14226] Avg episode reward: 33.700, avg true_objective: 13.700
[2023-02-25 19:38:07,430][14226] Num frames 2800...
[2023-02-25 19:38:07,549][14226] Num frames 2900...
[2023-02-25 19:38:07,663][14226] Num frames 3000...
[2023-02-25 19:38:07,782][14226] Num frames 3100...
[2023-02-25 19:38:07,901][14226] Num frames 3200...
[2023-02-25 19:38:08,016][14226] Num frames 3300...
[2023-02-25 19:38:08,135][14226] Num frames 3400...
[2023-02-25 19:38:08,256][14226] Num frames 3500...
[2023-02-25 19:38:08,368][14226] Num frames 3600...
[2023-02-25 19:38:08,483][14226] Num frames 3700...
[2023-02-25 19:38:08,602][14226] Num frames 3800...
[2023-02-25 19:38:08,715][14226] Num frames 3900...
[2023-02-25 19:38:08,835][14226] Num frames 4000...
[2023-02-25 19:38:08,953][14226] Num frames 4100...
[2023-02-25 19:38:09,065][14226] Num frames 4200...
[2023-02-25 19:38:09,182][14226] Avg episode rewards: #0: 34.500, true rewards: #0: 14.167
[2023-02-25 19:38:09,184][14226] Avg episode reward: 34.500, avg true_objective: 14.167
[2023-02-25 19:38:09,245][14226] Num frames 4300...
[2023-02-25 19:38:09,362][14226] Num frames 4400...
[2023-02-25 19:38:09,482][14226] Num frames 4500...
[2023-02-25 19:38:09,599][14226] Num frames 4600...
[2023-02-25 19:38:09,714][14226] Num frames 4700...
[2023-02-25 19:38:09,836][14226] Num frames 4800...
[2023-02-25 19:38:09,949][14226] Num frames 4900...
[2023-02-25 19:38:10,066][14226] Num frames 5000...
[2023-02-25 19:38:10,181][14226] Num frames 5100...
[2023-02-25 19:38:10,293][14226] Num frames 5200...
[2023-02-25 19:38:10,409][14226] Num frames 5300...
[2023-02-25 19:38:10,529][14226] Num frames 5400...
[2023-02-25 19:38:10,642][14226] Num frames 5500...
[2023-02-25 19:38:10,760][14226] Num frames 5600...
[2023-02-25 19:38:10,879][14226] Num frames 5700...
[2023-02-25 19:38:11,002][14226] Num frames 5800...
[2023-02-25 19:38:11,118][14226] Num frames 5900...
[2023-02-25 19:38:11,240][14226] Num frames 6000...
[2023-02-25 19:38:11,403][14226] Num frames 6100...
[2023-02-25 19:38:11,475][14226] Avg episode rewards: #0: 35.765, true rewards: #0: 15.265
[2023-02-25 19:38:11,477][14226] Avg episode reward: 35.765, avg true_objective: 15.265
[2023-02-25 19:38:11,633][14226] Num frames 6200...
[2023-02-25 19:38:11,801][14226] Num frames 6300...
[2023-02-25 19:38:11,958][14226] Num frames 6400...
[2023-02-25 19:38:12,119][14226] Num frames 6500...
[2023-02-25 19:38:12,278][14226] Num frames 6600...
[2023-02-25 19:38:12,438][14226] Num frames 6700...
[2023-02-25 19:38:12,607][14226] Num frames 6800...
[2023-02-25 19:38:12,772][14226] Num frames 6900...
[2023-02-25 19:38:12,948][14226] Num frames 7000...
[2023-02-25 19:38:13,107][14226] Num frames 7100...
[2023-02-25 19:38:13,271][14226] Num frames 7200...
[2023-02-25 19:38:13,440][14226] Num frames 7300...
[2023-02-25 19:38:13,608][14226] Num frames 7400...
[2023-02-25 19:38:13,772][14226] Num frames 7500...
[2023-02-25 19:38:13,938][14226] Num frames 7600...
[2023-02-25 19:38:14,093][14226] Num frames 7700...
[2023-02-25 19:38:14,207][14226] Num frames 7800...
[2023-02-25 19:38:14,327][14226] Num frames 7900...
[2023-02-25 19:38:14,424][14226] Avg episode rewards: #0: 38.670, true rewards: #0: 15.870
[2023-02-25 19:38:14,426][14226] Avg episode reward: 38.670, avg true_objective: 15.870
[2023-02-25 19:38:14,510][14226] Num frames 8000...
[2023-02-25 19:38:14,633][14226] Num frames 8100...
[2023-02-25 19:38:14,750][14226] Num frames 8200...
[2023-02-25 19:38:14,870][14226] Num frames 8300...
[2023-02-25 19:38:14,996][14226] Num frames 8400...
[2023-02-25 19:38:15,112][14226] Num frames 8500...
[2023-02-25 19:38:15,228][14226] Num frames 8600...
[2023-02-25 19:38:15,348][14226] Num frames 8700...
[2023-02-25 19:38:15,463][14226] Num frames 8800...
[2023-02-25 19:38:15,580][14226] Num frames 8900...
[2023-02-25 19:38:15,698][14226] Num frames 9000...
[2023-02-25 19:38:15,816][14226] Num frames 9100...
[2023-02-25 19:38:15,946][14226] Num frames 9200...
[2023-02-25 19:38:16,062][14226] Num frames 9300...
[2023-02-25 19:38:16,180][14226] Num frames 9400...
[2023-02-25 19:38:16,295][14226] Num frames 9500...
[2023-02-25 19:38:16,412][14226] Num frames 9600...
[2023-02-25 19:38:16,532][14226] Num frames 9700...
[2023-02-25 19:38:16,650][14226] Num frames 9800...
[2023-02-25 19:38:16,745][14226] Avg episode rewards: #0: 40.383, true rewards: #0: 16.383
[2023-02-25 19:38:16,750][14226] Avg episode reward: 40.383, avg true_objective: 16.383
[2023-02-25 19:38:16,843][14226] Num frames 9900...
[2023-02-25 19:38:16,971][14226] Num frames 10000...
[2023-02-25 19:38:17,094][14226] Num frames 10100...
[2023-02-25 19:38:17,223][14226] Num frames 10200...
[2023-02-25 19:38:17,347][14226] Num frames 10300...
[2023-02-25 19:38:17,471][14226] Num frames 10400...
[2023-02-25 19:38:17,596][14226] Num frames 10500...
[2023-02-25 19:38:17,714][14226] Num frames 10600...
[2023-02-25 19:38:17,833][14226] Num frames 10700...
[2023-02-25 19:38:17,948][14226] Num frames 10800...
[2023-02-25 19:38:18,078][14226] Num frames 10900...
[2023-02-25 19:38:18,194][14226] Num frames 11000...
[2023-02-25 19:38:18,313][14226] Num frames 11100...
[2023-02-25 19:38:18,437][14226] Num frames 11200...
[2023-02-25 19:38:18,552][14226] Num frames 11300...
[2023-02-25 19:38:18,667][14226] Num frames 11400...
[2023-02-25 19:38:18,789][14226] Num frames 11500...
[2023-02-25 19:38:18,908][14226] Num frames 11600...
[2023-02-25 19:38:19,040][14226] Num frames 11700...
[2023-02-25 19:38:19,162][14226] Num frames 11800...
[2023-02-25 19:38:19,279][14226] Num frames 11900...
[2023-02-25 19:38:19,372][14226] Avg episode rewards: #0: 43.899, true rewards: #0: 17.043
[2023-02-25 19:38:19,373][14226] Avg episode reward: 43.899, avg true_objective: 17.043
[2023-02-25 19:38:19,464][14226] Num frames 12000...
[2023-02-25 19:38:19,594][14226] Num frames 12100...
[2023-02-25 19:38:19,714][14226] Num frames 12200...
[2023-02-25 19:38:19,832][14226] Num frames 12300...
[2023-02-25 19:38:19,947][14226] Num frames 12400...
[2023-02-25 19:38:20,070][14226] Num frames 12500...
[2023-02-25 19:38:20,192][14226] Num frames 12600...
[2023-02-25 19:38:20,307][14226] Num frames 12700...
[2023-02-25 19:38:20,421][14226] Num frames 12800...
[2023-02-25 19:38:20,546][14226] Num frames 12900...
[2023-02-25 19:38:20,666][14226] Num frames 13000...
[2023-02-25 19:38:20,782][14226] Num frames 13100...
[2023-02-25 19:38:20,903][14226] Num frames 13200...
[2023-02-25 19:38:21,025][14226] Num frames 13300...
[2023-02-25 19:38:21,142][14226] Num frames 13400...
[2023-02-25 19:38:21,264][14226] Num frames 13500...
[2023-02-25 19:38:21,383][14226] Num frames 13600...
[2023-02-25 19:38:21,501][14226] Num frames 13700...
[2023-02-25 19:38:21,676][14226] Avg episode rewards: #0: 44.496, true rewards: #0: 17.246
[2023-02-25 19:38:21,678][14226] Avg episode reward: 44.496, avg true_objective: 17.246
[2023-02-25 19:38:21,685][14226] Num frames 13800...
[2023-02-25 19:38:21,799][14226] Num frames 13900...
[2023-02-25 19:38:21,920][14226] Num frames 14000...
[2023-02-25 19:38:22,043][14226] Num frames 14100...
[2023-02-25 19:38:22,162][14226] Num frames 14200...
[2023-02-25 19:38:22,282][14226] Avg episode rewards: #0: 40.494, true rewards: #0: 15.828
[2023-02-25 19:38:22,284][14226] Avg episode reward: 40.494, avg true_objective: 15.828
[2023-02-25 19:38:22,366][14226] Num frames 14300...
[2023-02-25 19:38:22,493][14226] Num frames 14400...
[2023-02-25 19:38:22,617][14226] Num frames 14500...
[2023-02-25 19:38:22,734][14226] Num frames 14600...
[2023-02-25 19:38:22,850][14226] Num frames 14700...
[2023-02-25 19:38:22,962][14226] Num frames 14800...
[2023-02-25 19:38:23,080][14226] Num frames 14900...
[2023-02-25 19:38:23,198][14226] Num frames 15000...
[2023-02-25 19:38:23,322][14226] Num frames 15100...
[2023-02-25 19:38:23,442][14226] Num frames 15200...
[2023-02-25 19:38:23,570][14226] Num frames 15300...
[2023-02-25 19:38:23,691][14226] Num frames 15400...
[2023-02-25 19:38:23,815][14226] Avg episode rewards: #0: 39.358, true rewards: #0: 15.458
[2023-02-25 19:38:23,817][14226] Avg episode reward: 39.358, avg true_objective: 15.458
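
Two details of the per-episode statistics above: "Avg episode rewards" is a running mean over all episodes completed so far (after one episode it equals that episode's reward, 39.840 at the top of this run), and "true rewards" tracks the raw objective separately from the shaped training reward, which is why the two columns differ. The running-mean bookkeeping is a one-liner per episode:

    def running_mean(values):
        """Yield the mean over all values seen so far, one per completed episode."""
        total = 0.0
        for count, v in enumerate(values, start=1):
            total += v
            yield total / count

    # Feeding episode rewards in one at a time reproduces the printed sequence:
    # the first yield is episode 1's reward, the last is the 10-episode average.
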
[2023-02-25 19:40:03,313][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 19:40:04,191][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:40:04,193][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:40:04,195][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:40:04,198][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:40:04,199][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:40:04,201][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:40:04,203][14226] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 19:40:04,204][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:40:04,205][14226] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 19:40:04,206][14226] Adding new argument 'hf_repository'='ThomasSimonini/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 19:40:04,207][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:40:04,208][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:40:04,209][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:40:04,211][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:40:04,212][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 19:40:04,238][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:40:04,241][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:40:04,261][14226] ConvEncoder: input_channels=3
[2023-02-25 19:40:04,325][14226] Conv encoder output size: 512
[2023-02-25 19:40:04,328][14226] Policy head output size: 512
[2023-02-25 19:40:04,358][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:40:05,060][14226] Num frames 100...
[2023-02-25 19:40:05,225][14226] Num frames 200...
[2023-02-25 19:40:05,335][14226] Num frames 300...
[2023-02-25 19:40:05,451][14226] Num frames 400...
[2023-02-25 19:40:05,579][14226] Num frames 500...
[2023-02-25 19:40:05,691][14226] Num frames 600...
[2023-02-25 19:40:05,808][14226] Num frames 700...
[2023-02-25 19:40:05,922][14226] Num frames 800...
[2023-02-25 19:40:06,043][14226] Num frames 900...
[2023-02-25 19:40:06,179][14226] Avg episode rewards: #0: 20.560, true rewards: #0: 9.560
[2023-02-25 19:40:06,181][14226] Avg episode reward: 20.560, avg true_objective: 9.560
[2023-02-25 19:40:06,237][14226] Num frames 1000...
[2023-02-25 19:40:06,352][14226] Num frames 1100...
[2023-02-25 19:40:06,468][14226] Num frames 1200...
[2023-02-25 19:40:06,583][14226] Num frames 1300...
[2023-02-25 19:40:06,699][14226] Num frames 1400...
[2023-02-25 19:40:06,820][14226] Num frames 1500...
[2023-02-25 19:40:06,935][14226] Num frames 1600...
[2023-02-25 19:40:07,048][14226] Num frames 1700...
[2023-02-25 19:40:07,171][14226] Num frames 1800...
[2023-02-25 19:40:07,285][14226] Num frames 1900...
[2023-02-25 19:40:07,436][14226] Avg episode rewards: #0: 22.400, true rewards: #0: 9.900
[2023-02-25 19:40:07,440][14226] Avg episode reward: 22.400, avg true_objective: 9.900
[2023-02-25 19:40:07,471][14226] Num frames 2000...
[2023-02-25 19:40:07,596][14226] Num frames 2100...
[2023-02-25 19:40:07,716][14226] Num frames 2200...
[2023-02-25 19:40:07,832][14226] Num frames 2300...
[2023-02-25 19:40:07,947][14226] Num frames 2400...
[2023-02-25 19:40:08,076][14226] Num frames 2500...
[2023-02-25 19:40:08,206][14226] Num frames 2600...
[2023-02-25 19:40:08,333][14226] Num frames 2700...
[2023-02-25 19:40:08,450][14226] Avg episode rewards: #0: 20.493, true rewards: #0: 9.160
[2023-02-25 19:40:08,452][14226] Avg episode reward: 20.493, avg true_objective: 9.160
[2023-02-25 19:40:08,526][14226] Num frames 2800...
[2023-02-25 19:40:08,665][14226] Num frames 2900...
[2023-02-25 19:40:08,779][14226] Num frames 3000...
[2023-02-25 19:40:08,889][14226] Num frames 3100...
[2023-02-25 19:40:09,053][14226] Avg episode rewards: #0: 16.740, true rewards: #0: 7.990
[2023-02-25 19:40:09,056][14226] Avg episode reward: 16.740, avg true_objective: 7.990
[2023-02-25 19:40:09,066][14226] Num frames 3200...
[2023-02-25 19:40:09,202][14226] Num frames 3300...
[2023-02-25 19:40:09,336][14226] Num frames 3400...
[2023-02-25 19:40:09,450][14226] Num frames 3500...
[2023-02-25 19:40:09,565][14226] Num frames 3600...
[2023-02-25 19:40:09,681][14226] Num frames 3700...
[2023-02-25 19:40:09,796][14226] Num frames 3800...
[2023-02-25 19:40:09,908][14226] Num frames 3900...
[2023-02-25 19:40:09,960][14226] Avg episode rewards: #0: 16.200, true rewards: #0: 7.800
[2023-02-25 19:40:09,962][14226] Avg episode reward: 16.200, avg true_objective: 7.800
[2023-02-25 19:40:10,091][14226] Num frames 4000...
[2023-02-25 19:40:10,209][14226] Num frames 4100...
[2023-02-25 19:40:10,321][14226] Num frames 4200...
[2023-02-25 19:40:10,434][14226] Num frames 4300...
[2023-02-25 19:40:10,548][14226] Num frames 4400...
[2023-02-25 19:40:10,661][14226] Num frames 4500...
[2023-02-25 19:40:10,784][14226] Num frames 4600...
[2023-02-25 19:40:10,881][14226] Avg episode rewards: #0: 16.060, true rewards: #0: 7.727
[2023-02-25 19:40:10,883][14226] Avg episode reward: 16.060, avg true_objective: 7.727
[2023-02-25 19:40:10,959][14226] Num frames 4700...
[2023-02-25 19:40:11,092][14226] Num frames 4800...
[2023-02-25 19:40:11,214][14226] Num frames 4900...
[2023-02-25 19:40:11,330][14226] Num frames 5000...
[2023-02-25 19:40:11,446][14226] Num frames 5100...
[2023-02-25 19:40:11,559][14226] Num frames 5200...
[2023-02-25 19:40:11,671][14226] Num frames 5300...
[2023-02-25 19:40:11,795][14226] Num frames 5400...
[2023-02-25 19:40:11,910][14226] Num frames 5500...
[2023-02-25 19:40:12,023][14226] Num frames 5600...
[2023-02-25 19:40:12,145][14226] Num frames 5700...
[2023-02-25 19:40:12,267][14226] Num frames 5800...
[2023-02-25 19:40:12,379][14226] Num frames 5900...
[2023-02-25 19:40:12,494][14226] Num frames 6000...
[2023-02-25 19:40:12,611][14226] Num frames 6100...
[2023-02-25 19:40:12,720][14226] Avg episode rewards: #0: 19.486, true rewards: #0: 8.771
[2023-02-25 19:40:12,722][14226] Avg episode reward: 19.486, avg true_objective: 8.771
[2023-02-25 19:40:12,794][14226] Num frames 6200...
[2023-02-25 19:40:12,909][14226] Num frames 6300...
[2023-02-25 19:40:13,053][14226] Num frames 6400...
[2023-02-25 19:40:13,220][14226] Num frames 6500...
[2023-02-25 19:40:13,382][14226] Num frames 6600...
[2023-02-25 19:40:13,540][14226] Num frames 6700...
[2023-02-25 19:40:13,702][14226] Num frames 6800...
[2023-02-25 19:40:13,895][14226] Avg episode rewards: #0: 18.609, true rewards: #0: 8.609
[2023-02-25 19:40:13,900][14226] Avg episode reward: 18.609, avg true_objective: 8.609
[2023-02-25 19:40:13,931][14226] Num frames 6900...
[2023-02-25 19:40:14,093][14226] Num frames 7000...
[2023-02-25 19:40:14,259][14226] Num frames 7100...
[2023-02-25 19:40:14,420][14226] Num frames 7200...
[2023-02-25 19:40:14,580][14226] Num frames 7300...
[2023-02-25 19:40:14,744][14226] Num frames 7400...
[2023-02-25 19:40:14,908][14226] Num frames 7500...
[2023-02-25 19:40:15,077][14226] Num frames 7600...
[2023-02-25 19:40:15,239][14226] Num frames 7700...
[2023-02-25 19:40:15,403][14226] Num frames 7800...
[2023-02-25 19:40:15,492][14226] Avg episode rewards: #0: 19.020, true rewards: #0: 8.687
[2023-02-25 19:40:15,494][14226] Avg episode reward: 19.020, avg true_objective: 8.687
[2023-02-25 19:40:15,626][14226] Num frames 7900...
[2023-02-25 19:40:15,789][14226] Num frames 8000...
[2023-02-25 19:40:15,925][14226] Num frames 8100...
[2023-02-25 19:40:16,036][14226] Num frames 8200...
[2023-02-25 19:40:16,151][14226] Num frames 8300...
[2023-02-25 19:40:16,263][14226] Num frames 8400...
[2023-02-25 19:40:16,378][14226] Num frames 8500...
[2023-02-25 19:40:16,489][14226] Num frames 8600...
[2023-02-25 19:40:16,648][14226] Avg episode rewards: #0: 19.394, true rewards: #0: 8.694
[2023-02-25 19:40:16,650][14226] Avg episode reward: 19.394, avg true_objective: 8.694
[2023-02-25 19:41:13,877][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 19:59:09,396][14226] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:59:09,400][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:59:09,403][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:59:09,405][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:59:09,407][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:59:09,410][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:59:09,411][14226] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 19:59:09,413][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:59:09,415][14226] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 19:59:09,416][14226] Adding new argument 'hf_repository'='SergejSchweizer/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 19:59:09,417][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:59:09,418][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:59:09,420][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:59:09,422][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:59:09,423][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
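`frameskip 1` with `render_action_repeat=4` means the engine renders every frame, but each policy action is held for 4 rendered frames. A hedged sketch of such an action-repeat evaluation loop, assuming a gym-style `env` and a `policy` callable (both placeholders, not Sample Factory's actual code):

```python
# Illustrative action-repeat loop: the policy is queried once per 4 frames
# and its action is repeated in between, matching render_action_repeat=4.
def evaluate_episode(env, policy, action_repeat: int = 4) -> float:
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = policy(obs)
        for _ in range(action_repeat):
            obs, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
    return total_reward
```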
[2023-02-25 19:59:09,453][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:59:09,457][14226] RunningMeanStd input shape: (1,)
[2023-02-25 19:59:09,479][14226] ConvEncoder: input_channels=3
[2023-02-25 19:59:09,535][14226] Conv encoder output size: 512
[2023-02-25 19:59:09,536][14226] Policy head output size: 512
[2023-02-25 19:59:09,557][14226] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
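Restoring from such a `.pth` checkpoint is a standard PyTorch state-dict load; a hedged sketch, assuming a `'model'` key in the checkpoint dict (the real checkpoint layout may differ):

```python
import torch

def load_policy_weights(model: torch.nn.Module, checkpoint_path: str) -> None:
    """Restore actor-critic weights from a .pth checkpoint. The 'model' key is
    an assumption about the file layout, not a documented Sample Factory API."""
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    model.eval()  # evaluation mode for replay generation
```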
[2023-02-25 19:59:10,011][14226] Num frames 100...
[2023-02-25 19:59:10,134][14226] Num frames 200...
[2023-02-25 19:59:10,257][14226] Num frames 300...
[2023-02-25 19:59:10,375][14226] Num frames 400...
[2023-02-25 19:59:10,495][14226] Num frames 500...
[2023-02-25 19:59:10,606][14226] Num frames 600...
[2023-02-25 19:59:10,721][14226] Num frames 700...
[2023-02-25 19:59:10,838][14226] Num frames 800...
[2023-02-25 19:59:10,956][14226] Num frames 900...
[2023-02-25 19:59:11,074][14226] Num frames 1000...
[2023-02-25 19:59:11,246][14226] Avg episode rewards: #0: 30.920, true rewards: #0: 10.920
[2023-02-25 19:59:11,248][14226] Avg episode reward: 30.920, avg true_objective: 10.920
[2023-02-25 19:59:11,262][14226] Num frames 1100...
[2023-02-25 19:59:11,381][14226] Num frames 1200...
[2023-02-25 19:59:11,496][14226] Num frames 1300...
[2023-02-25 19:59:11,618][14226] Num frames 1400...
[2023-02-25 19:59:11,729][14226] Num frames 1500...
[2023-02-25 19:59:11,840][14226] Num frames 1600...
[2023-02-25 19:59:11,963][14226] Num frames 1700...
[2023-02-25 19:59:12,085][14226] Num frames 1800...
[2023-02-25 19:59:12,207][14226] Num frames 1900...
[2023-02-25 19:59:12,321][14226] Num frames 2000...
[2023-02-25 19:59:12,446][14226] Num frames 2100...
[2023-02-25 19:59:12,561][14226] Num frames 2200...
[2023-02-25 19:59:12,681][14226] Num frames 2300...
[2023-02-25 19:59:12,791][14226] Num frames 2400...
[2023-02-25 19:59:12,901][14226] Num frames 2500...
[2023-02-25 19:59:13,017][14226] Num frames 2600...
[2023-02-25 19:59:13,128][14226] Num frames 2700...
[2023-02-25 19:59:13,247][14226] Num frames 2800...
[2023-02-25 19:59:13,360][14226] Num frames 2900...
[2023-02-25 19:59:13,475][14226] Num frames 3000...
[2023-02-25 19:59:13,590][14226] Num frames 3100...
[2023-02-25 19:59:13,755][14226] Avg episode rewards: #0: 44.460, true rewards: #0: 15.960
[2023-02-25 19:59:13,756][14226] Avg episode reward: 44.460, avg true_objective: 15.960
[2023-02-25 19:59:13,782][14226] Num frames 3200...
[2023-02-25 19:59:13,891][14226] Num frames 3300...
[2023-02-25 19:59:14,007][14226] Num frames 3400...
[2023-02-25 19:59:14,120][14226] Num frames 3500...
[2023-02-25 19:59:14,239][14226] Num frames 3600...
[2023-02-25 19:59:14,308][14226] Avg episode rewards: #0: 31.373, true rewards: #0: 12.040
[2023-02-25 19:59:14,310][14226] Avg episode reward: 31.373, avg true_objective: 12.040
[2023-02-25 19:59:14,415][14226] Num frames 3700...
[2023-02-25 19:59:14,533][14226] Num frames 3800...
[2023-02-25 19:59:14,651][14226] Num frames 3900...
[2023-02-25 19:59:14,765][14226] Num frames 4000...
[2023-02-25 19:59:14,886][14226] Num frames 4100...
[2023-02-25 19:59:15,002][14226] Num frames 4200...
[2023-02-25 19:59:15,118][14226] Num frames 4300...
[2023-02-25 19:59:15,242][14226] Num frames 4400...
[2023-02-25 19:59:15,354][14226] Num frames 4500...
[2023-02-25 19:59:15,465][14226] Num frames 4600...
[2023-02-25 19:59:15,588][14226] Num frames 4700...
[2023-02-25 19:59:15,698][14226] Num frames 4800...
[2023-02-25 19:59:15,812][14226] Num frames 4900...
[2023-02-25 19:59:15,928][14226] Num frames 5000...
[2023-02-25 19:59:16,052][14226] Num frames 5100...
[2023-02-25 19:59:16,189][14226] Avg episode rewards: #0: 34.677, true rewards: #0: 12.927
[2023-02-25 19:59:16,191][14226] Avg episode reward: 34.677, avg true_objective: 12.927
[2023-02-25 19:59:16,236][14226] Num frames 5200...
[2023-02-25 19:59:16,358][14226] Num frames 5300...
[2023-02-25 19:59:16,473][14226] Num frames 5400...
[2023-02-25 19:59:16,596][14226] Num frames 5500...
[2023-02-25 19:59:16,707][14226] Num frames 5600...
[2023-02-25 19:59:16,826][14226] Num frames 5700...
[2023-02-25 19:59:16,940][14226] Num frames 5800...
[2023-02-25 19:59:17,053][14226] Num frames 5900...
[2023-02-25 19:59:17,170][14226] Num frames 6000...
[2023-02-25 19:59:17,296][14226] Num frames 6100...
[2023-02-25 19:59:17,408][14226] Num frames 6200...
[2023-02-25 19:59:17,521][14226] Num frames 6300...
[2023-02-25 19:59:17,644][14226] Num frames 6400...
[2023-02-25 19:59:17,708][14226] Avg episode rewards: #0: 33.610, true rewards: #0: 12.810
[2023-02-25 19:59:17,710][14226] Avg episode reward: 33.610, avg true_objective: 12.810
[2023-02-25 19:59:17,818][14226] Num frames 6500...
[2023-02-25 19:59:17,931][14226] Num frames 6600...
[2023-02-25 19:59:18,048][14226] Num frames 6700...
[2023-02-25 19:59:18,161][14226] Num frames 6800...
[2023-02-25 19:59:18,279][14226] Num frames 6900...
[2023-02-25 19:59:18,400][14226] Num frames 7000...
[2023-02-25 19:59:18,516][14226] Num frames 7100...
[2023-02-25 19:59:18,632][14226] Num frames 7200...
[2023-02-25 19:59:18,751][14226] Num frames 7300...
[2023-02-25 19:59:18,928][14226] Avg episode rewards: #0: 31.828, true rewards: #0: 12.328
[2023-02-25 19:59:18,930][14226] Avg episode reward: 31.828, avg true_objective: 12.328
[2023-02-25 19:59:18,937][14226] Num frames 7400...
[2023-02-25 19:59:19,052][14226] Num frames 7500...
[2023-02-25 19:59:19,161][14226] Num frames 7600...
[2023-02-25 19:59:19,281][14226] Num frames 7700...
[2023-02-25 19:59:19,393][14226] Num frames 7800...
[2023-02-25 19:59:19,510][14226] Num frames 7900...
[2023-02-25 19:59:19,671][14226] Num frames 8000...
[2023-02-25 19:59:19,830][14226] Num frames 8100...
[2023-02-25 19:59:19,986][14226] Num frames 8200...
[2023-02-25 19:59:20,142][14226] Num frames 8300...
[2023-02-25 19:59:20,304][14226] Num frames 8400...
[2023-02-25 19:59:20,459][14226] Num frames 8500...
[2023-02-25 19:59:20,615][14226] Num frames 8600...
[2023-02-25 19:59:20,774][14226] Num frames 8700...
[2023-02-25 19:59:20,935][14226] Num frames 8800...
[2023-02-25 19:59:21,099][14226] Num frames 8900...
[2023-02-25 19:59:21,228][14226] Avg episode rewards: #0: 32.920, true rewards: #0: 12.777
[2023-02-25 19:59:21,231][14226] Avg episode reward: 32.920, avg true_objective: 12.777
[2023-02-25 19:59:21,343][14226] Num frames 9000...
[2023-02-25 19:59:21,504][14226] Num frames 9100...
[2023-02-25 19:59:21,671][14226] Num frames 9200...
[2023-02-25 19:59:21,838][14226] Num frames 9300...
[2023-02-25 19:59:21,998][14226] Num frames 9400...
[2023-02-25 19:59:22,166][14226] Num frames 9500...
[2023-02-25 19:59:22,306][14226] Avg episode rewards: #0: 30.315, true rewards: #0: 11.940
[2023-02-25 19:59:22,308][14226] Avg episode reward: 30.315, avg true_objective: 11.940
[2023-02-25 19:59:22,393][14226] Num frames 9600...
[2023-02-25 19:59:22,562][14226] Num frames 9700...
[2023-02-25 19:59:22,727][14226] Num frames 9800...
[2023-02-25 19:59:22,904][14226] Num frames 9900...
[2023-02-25 19:59:23,061][14226] Num frames 10000...
[2023-02-25 19:59:23,171][14226] Num frames 10100...
[2023-02-25 19:59:23,283][14226] Num frames 10200...
[2023-02-25 19:59:23,398][14226] Num frames 10300...
[2023-02-25 19:59:23,510][14226] Num frames 10400...
[2023-02-25 19:59:23,625][14226] Num frames 10500...
[2023-02-25 19:59:23,739][14226] Num frames 10600...
[2023-02-25 19:59:23,856][14226] Num frames 10700...
[2023-02-25 19:59:23,970][14226] Num frames 10800...
[2023-02-25 19:59:24,056][14226] Avg episode rewards: #0: 30.251, true rewards: #0: 12.029
[2023-02-25 19:59:24,057][14226] Avg episode reward: 30.251, avg true_objective: 12.029
[2023-02-25 19:59:24,153][14226] Num frames 10900...
[2023-02-25 19:59:24,278][14226] Num frames 11000...
[2023-02-25 19:59:24,404][14226] Num frames 11100...
[2023-02-25 19:59:24,530][14226] Num frames 11200...
[2023-02-25 19:59:24,682][14226] Avg episode rewards: #0: 28.076, true rewards: #0: 11.276
[2023-02-25 19:59:24,684][14226] Avg episode reward: 28.076, avg true_objective: 11.276
[2023-02-25 20:00:37,511][14226] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 20:00:43,486][14226] The model has been pushed to https://huggingface.co/SergejSchweizer/rl_course_vizdoom_health_gathering_supreme
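A hedged sketch of uploading the experiment directory to that repository with `huggingface_hub` (the course scripts wrap this differently; the repo id and folder path are taken from the log):

```python
from huggingface_hub import HfApi

# Create the repo if needed, then upload the experiment folder
# (checkpoints, config.json, replay.mp4).
api = HfApi()
repo_id = "SergejSchweizer/rl_course_vizdoom_health_gathering_supreme"
api.create_repo(repo_id=repo_id, exist_ok=True)
api.upload_folder(repo_id=repo_id, folder_path="/content/train_dir/default_experiment")
```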
[2023-02-25 20:05:13,354][14226] Loading legacy config file train_dir/doom_health_gathering_supreme_2222/cfg.json instead of train_dir/doom_health_gathering_supreme_2222/config.json
[2023-02-25 20:05:13,356][14226] Loading existing experiment configuration from train_dir/doom_health_gathering_supreme_2222/config.json
[2023-02-25 20:05:13,360][14226] Overriding arg 'experiment' with value 'doom_health_gathering_supreme_2222' passed from command line
[2023-02-25 20:05:13,364][14226] Overriding arg 'train_dir' with value 'train_dir' passed from command line
[2023-02-25 20:05:13,366][14226] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 20:05:13,367][14226] Adding new argument 'lr_adaptive_min'=1e-06 that is not in the saved config file!
[2023-02-25 20:05:13,369][14226] Adding new argument 'lr_adaptive_max'=0.01 that is not in the saved config file!
[2023-02-25 20:05:13,370][14226] Adding new argument 'env_gpu_observations'=True that is not in the saved config file!
[2023-02-25 20:05:13,372][14226] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 20:05:13,373][14226] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 20:05:13,374][14226] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 20:05:13,376][14226] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 20:05:13,378][14226] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 20:05:13,380][14226] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 20:05:13,382][14226] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-25 20:05:13,384][14226] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-25 20:05:13,386][14226] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 20:05:13,399][14226] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 20:05:13,401][14226] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 20:05:13,403][14226] Adding new argument 'enjoy_script'=None that is not in the saved config file!
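The override/add messages above come from merging the saved config with command-line values: known keys are overridden, unknown keys are appended with a warning. A minimal sketch of that merge (the function name is illustrative):

```python
# Illustrative config merge matching the log messages above: CLI values replace
# saved ones, and arguments missing from the saved config are added with a note.
def merge_config(saved_cfg: dict, cli_args: dict) -> dict:
    cfg = dict(saved_cfg)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg
```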
[2023-02-25 20:05:13,405][14226] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 20:05:13,428][14226] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 20:05:13,430][14226] RunningMeanStd input shape: (1,)
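`RunningMeanStd` keeps running statistics for observation tensors of shape (3, 72, 128) and for scalar returns of shape (1,), and normalizes inputs with them. A simplified NumPy sketch using the standard parallel mean/variance update (the logged version updates tensors in place on the GPU):

```python
import numpy as np

class RunningMeanStd:
    """Running mean/variance tracker; a simplified sketch of the normalizer
    named in the log, using the parallel-variance (Chan et al.) update."""

    def __init__(self, shape, eps: float = 1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = eps

    def update(self, batch: np.ndarray) -> None:
        batch_mean, batch_var = batch.mean(axis=0), batch.var(axis=0)
        batch_count = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        self.mean += delta * batch_count / total
        m2 = (self.var * self.count + batch_var * batch_count
              + delta**2 * self.count * batch_count / total)
        self.var = m2 / total
        self.count = total

    def normalize(self, x: np.ndarray) -> np.ndarray:
        return (x - self.mean) / np.sqrt(self.var + 1e-8)
```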
[2023-02-25 20:05:13,444][14226] ConvEncoder: input_channels=3
[2023-02-25 20:05:13,489][14226] Conv encoder output size: 512
[2023-02-25 20:05:13,491][14226] Policy head output size: 512
[2023-02-25 20:05:13,516][14226] Loading state from checkpoint train_dir/doom_health_gathering_supreme_2222/checkpoint_p0/checkpoint_000539850_4422451200.pth...
[2023-02-25 20:05:14,002][14226] Num frames 100...
[2023-02-25 20:05:14,120][14226] Num frames 200...
[2023-02-25 20:05:14,237][14226] Num frames 300...
[2023-02-25 20:05:14,360][14226] Num frames 400...
[2023-02-25 20:05:14,483][14226] Num frames 500...
[2023-02-25 20:05:14,610][14226] Num frames 600...
[2023-02-25 20:05:14,728][14226] Num frames 700...
[2023-02-25 20:05:14,848][14226] Num frames 800...
[2023-02-25 20:05:14,974][14226] Num frames 900...
[2023-02-25 20:05:15,099][14226] Num frames 1000...
[2023-02-25 20:05:15,218][14226] Num frames 1100...
[2023-02-25 20:05:15,341][14226] Num frames 1200...
[2023-02-25 20:05:15,470][14226] Num frames 1300...
[2023-02-25 20:05:15,598][14226] Num frames 1400...
[2023-02-25 20:05:15,717][14226] Num frames 1500...
[2023-02-25 20:05:15,838][14226] Num frames 1600...
[2023-02-25 20:05:15,962][14226] Num frames 1700...
[2023-02-25 20:05:16,088][14226] Num frames 1800...
[2023-02-25 20:05:16,205][14226] Num frames 1900...
[2023-02-25 20:05:16,330][14226] Avg episode rewards: #0: 62.599, true rewards: #0: 19.600
[2023-02-25 20:05:16,331][14226] Avg episode reward: 62.599, avg true_objective: 19.600
[2023-02-25 20:05:16,389][14226] Num frames 2000...
[2023-02-25 20:05:16,520][14226] Num frames 2100...
[2023-02-25 20:05:16,643][14226] Num frames 2200...
[2023-02-25 20:05:16,762][14226] Num frames 2300...
[2023-02-25 20:05:16,884][14226] Num frames 2400...
[2023-02-25 20:05:17,009][14226] Num frames 2500...
[2023-02-25 20:05:17,124][14226] Num frames 2600...
[2023-02-25 20:05:17,241][14226] Num frames 2700...
[2023-02-25 20:05:17,363][14226] Num frames 2800...
[2023-02-25 20:05:17,476][14226] Num frames 2900...
[2023-02-25 20:05:17,601][14226] Num frames 3000...
[2023-02-25 20:05:17,719][14226] Num frames 3100...
[2023-02-25 20:05:17,835][14226] Num frames 3200...
[2023-02-25 20:05:17,957][14226] Num frames 3300...
[2023-02-25 20:05:18,076][14226] Num frames 3400...
[2023-02-25 20:05:18,197][14226] Num frames 3500...
[2023-02-25 20:05:18,317][14226] Num frames 3600...
[2023-02-25 20:05:18,438][14226] Num frames 3700...
[2023-02-25 20:05:18,555][14226] Num frames 3800...
[2023-02-25 20:05:18,688][14226] Num frames 3900...
[2023-02-25 20:05:18,813][14226] Num frames 4000...
[2023-02-25 20:05:18,939][14226] Avg episode rewards: #0: 61.799, true rewards: #0: 20.300
[2023-02-25 20:05:18,941][14226] Avg episode reward: 61.799, avg true_objective: 20.300
[2023-02-25 20:05:18,994][14226] Num frames 4100...
[2023-02-25 20:05:19,107][14226] Num frames 4200...
[2023-02-25 20:05:19,223][14226] Num frames 4300...
[2023-02-25 20:05:19,336][14226] Num frames 4400...
[2023-02-25 20:05:19,468][14226] Num frames 4500...
[2023-02-25 20:05:19,593][14226] Num frames 4600...
[2023-02-25 20:05:19,717][14226] Num frames 4700...
[2023-02-25 20:05:19,841][14226] Num frames 4800...
[2023-02-25 20:05:19,956][14226] Num frames 4900...
[2023-02-25 20:05:20,078][14226] Num frames 5000...
[2023-02-25 20:05:20,193][14226] Num frames 5100...
[2023-02-25 20:05:20,310][14226] Num frames 5200...
[2023-02-25 20:05:20,439][14226] Num frames 5300...
[2023-02-25 20:05:20,559][14226] Num frames 5400...
[2023-02-25 20:05:20,686][14226] Num frames 5500...
[2023-02-25 20:05:20,803][14226] Num frames 5600...
[2023-02-25 20:05:20,926][14226] Num frames 5700...
[2023-02-25 20:05:21,043][14226] Num frames 5800...
[2023-02-25 20:05:21,160][14226] Num frames 5900...
[2023-02-25 20:05:21,276][14226] Num frames 6000...
[2023-02-25 20:05:21,397][14226] Num frames 6100...
[2023-02-25 20:05:21,521][14226] Avg episode rewards: #0: 59.865, true rewards: #0: 20.533
[2023-02-25 20:05:21,523][14226] Avg episode reward: 59.865, avg true_objective: 20.533
[2023-02-25 20:05:21,576][14226] Num frames 6200...
[2023-02-25 20:05:21,701][14226] Num frames 6300...
[2023-02-25 20:05:21,847][14226] Num frames 6400...
[2023-02-25 20:05:22,018][14226] Num frames 6500...
[2023-02-25 20:05:22,185][14226] Num frames 6600...
[2023-02-25 20:05:22,349][14226] Num frames 6700...
[2023-02-25 20:05:22,510][14226] Num frames 6800...
[2023-02-25 20:05:22,670][14226] Num frames 6900...
[2023-02-25 20:05:22,846][14226] Num frames 7000...
[2023-02-25 20:05:23,009][14226] Num frames 7100...
[2023-02-25 20:05:23,170][14226] Num frames 7200...
[2023-02-25 20:05:23,332][14226] Num frames 7300...
[2023-02-25 20:05:23,493][14226] Num frames 7400...
[2023-02-25 20:05:23,658][14226] Num frames 7500...
[2023-02-25 20:05:23,828][14226] Num frames 7600...
[2023-02-25 20:05:23,994][14226] Num frames 7700...
[2023-02-25 20:05:24,161][14226] Num frames 7800...
[2023-02-25 20:05:24,326][14226] Num frames 7900...
[2023-02-25 20:05:24,499][14226] Num frames 8000...
[2023-02-25 20:05:24,674][14226] Num frames 8100...
[2023-02-25 20:05:24,852][14226] Num frames 8200...
[2023-02-25 20:05:25,039][14226] Avg episode rewards: #0: 58.899, true rewards: #0: 20.650
[2023-02-25 20:05:25,042][14226] Avg episode reward: 58.899, avg true_objective: 20.650
[2023-02-25 20:05:25,124][14226] Num frames 8300...
[2023-02-25 20:05:25,300][14226] Num frames 8400...
[2023-02-25 20:05:25,468][14226] Num frames 8500...
[2023-02-25 20:05:25,594][14226] Num frames 8600...
[2023-02-25 20:05:25,707][14226] Num frames 8700...
[2023-02-25 20:05:25,830][14226] Num frames 8800...
[2023-02-25 20:05:25,947][14226] Num frames 8900...
[2023-02-25 20:05:26,059][14226] Num frames 9000...
[2023-02-25 20:05:26,176][14226] Num frames 9100...
[2023-02-25 20:05:26,298][14226] Num frames 9200...
[2023-02-25 20:05:26,412][14226] Num frames 9300...
[2023-02-25 20:05:26,526][14226] Num frames 9400...
[2023-02-25 20:05:26,649][14226] Num frames 9500...
[2023-02-25 20:05:26,769][14226] Num frames 9600...
[2023-02-25 20:05:26,895][14226] Num frames 9700...
[2023-02-25 20:05:27,018][14226] Num frames 9800...
[2023-02-25 20:05:27,134][14226] Num frames 9900...
[2023-02-25 20:05:27,256][14226] Num frames 10000...
[2023-02-25 20:05:27,373][14226] Num frames 10100...
[2023-02-25 20:05:27,490][14226] Num frames 10200...
[2023-02-25 20:05:27,614][14226] Num frames 10300...
[2023-02-25 20:05:27,741][14226] Avg episode rewards: #0: 58.919, true rewards: #0: 20.720
[2023-02-25 20:05:27,744][14226] Avg episode reward: 58.919, avg true_objective: 20.720
[2023-02-25 20:05:27,800][14226] Num frames 10400...
[2023-02-25 20:05:27,937][14226] Num frames 10500...
[2023-02-25 20:05:28,052][14226] Num frames 10600...
[2023-02-25 20:05:28,168][14226] Num frames 10700...
[2023-02-25 20:05:28,287][14226] Num frames 10800...
[2023-02-25 20:05:28,406][14226] Num frames 10900...
[2023-02-25 20:05:28,524][14226] Num frames 11000...
[2023-02-25 20:05:28,642][14226] Num frames 11100...
[2023-02-25 20:05:28,765][14226] Num frames 11200...
[2023-02-25 20:05:28,888][14226] Num frames 11300...
[2023-02-25 20:05:29,006][14226] Num frames 11400...
[2023-02-25 20:05:29,128][14226] Num frames 11500...
[2023-02-25 20:05:29,252][14226] Num frames 11600...
[2023-02-25 20:05:29,372][14226] Num frames 11700...
[2023-02-25 20:05:29,494][14226] Num frames 11800...
[2023-02-25 20:05:29,617][14226] Num frames 11900...
[2023-02-25 20:05:29,736][14226] Num frames 12000...
[2023-02-25 20:05:29,862][14226] Num frames 12100...
[2023-02-25 20:05:29,999][14226] Num frames 12200...
[2023-02-25 20:05:30,126][14226] Num frames 12300...
[2023-02-25 20:05:30,247][14226] Num frames 12400...
[2023-02-25 20:05:30,373][14226] Avg episode rewards: #0: 59.265, true rewards: #0: 20.767
[2023-02-25 20:05:30,380][14226] Avg episode reward: 59.265, avg true_objective: 20.767
[2023-02-25 20:05:30,435][14226] Num frames 12500...
[2023-02-25 20:05:30,559][14226] Num frames 12600...
[2023-02-25 20:05:30,676][14226] Num frames 12700...
[2023-02-25 20:05:30,791][14226] Num frames 12800...
[2023-02-25 20:05:30,921][14226] Num frames 12900...
[2023-02-25 20:05:31,035][14226] Num frames 13000...
[2023-02-25 20:05:31,159][14226] Num frames 13100...
[2023-02-25 20:05:31,272][14226] Num frames 13200...
[2023-02-25 20:05:31,387][14226] Num frames 13300...
[2023-02-25 20:05:31,510][14226] Num frames 13400...
[2023-02-25 20:05:31,628][14226] Num frames 13500...
[2023-02-25 20:05:31,753][14226] Num frames 13600...
[2023-02-25 20:05:31,880][14226] Num frames 13700...
[2023-02-25 20:05:32,004][14226] Num frames 13800...
[2023-02-25 20:05:32,127][14226] Num frames 13900...
[2023-02-25 20:05:32,248][14226] Num frames 14000...
[2023-02-25 20:05:32,368][14226] Num frames 14100...
[2023-02-25 20:05:32,494][14226] Num frames 14200...
[2023-02-25 20:05:32,616][14226] Num frames 14300...
[2023-02-25 20:05:32,732][14226] Num frames 14400...
[2023-02-25 20:05:32,851][14226] Num frames 14500...
[2023-02-25 20:05:32,976][14226] Avg episode rewards: #0: 60.799, true rewards: #0: 20.800
[2023-02-25 20:05:32,979][14226] Avg episode reward: 60.799, avg true_objective: 20.800
[2023-02-25 20:05:33,032][14226] Num frames 14600...
[2023-02-25 20:05:33,167][14226] Num frames 14700...
[2023-02-25 20:05:33,295][14226] Num frames 14800...
[2023-02-25 20:05:33,419][14226] Num frames 14900...
[2023-02-25 20:05:33,554][14226] Num frames 15000...
[2023-02-25 20:05:33,676][14226] Num frames 15100...
[2023-02-25 20:05:33,791][14226] Num frames 15200...
[2023-02-25 20:05:33,907][14226] Num frames 15300...
[2023-02-25 20:05:34,040][14226] Num frames 15400...
[2023-02-25 20:05:34,162][14226] Num frames 15500...
[2023-02-25 20:05:34,280][14226] Num frames 15600...
[2023-02-25 20:05:34,405][14226] Num frames 15700...
[2023-02-25 20:05:34,525][14226] Num frames 15800...
[2023-02-25 20:05:34,645][14226] Num frames 15900...
[2023-02-25 20:05:34,772][14226] Num frames 16000...
[2023-02-25 20:05:34,892][14226] Num frames 16100...
[2023-02-25 20:05:35,014][14226] Num frames 16200...
[2023-02-25 20:05:35,132][14226] Num frames 16300...
[2023-02-25 20:05:35,249][14226] Num frames 16400...
[2023-02-25 20:05:35,375][14226] Num frames 16500...
[2023-02-25 20:05:35,495][14226] Num frames 16600...
[2023-02-25 20:05:35,658][14226] Avg episode rewards: #0: 61.199, true rewards: #0: 20.825
[2023-02-25 20:05:35,661][14226] Avg episode reward: 61.199, avg true_objective: 20.825
[2023-02-25 20:05:35,735][14226] Num frames 16700...
[2023-02-25 20:05:35,902][14226] Num frames 16800...
[2023-02-25 20:05:36,072][14226] Num frames 16900...
[2023-02-25 20:05:36,236][14226] Num frames 17000...
[2023-02-25 20:05:36,397][14226] Num frames 17100...
[2023-02-25 20:05:36,571][14226] Num frames 17200...
[2023-02-25 20:05:36,741][14226] Num frames 17300...
[2023-02-25 20:05:36,913][14226] Num frames 17400...
[2023-02-25 20:05:37,081][14226] Num frames 17500...
[2023-02-25 20:05:37,241][14226] Num frames 17600...
[2023-02-25 20:05:37,416][14226] Num frames 17700...
[2023-02-25 20:05:37,578][14226] Num frames 17800...
[2023-02-25 20:05:37,738][14226] Num frames 17900...
[2023-02-25 20:05:37,912][14226] Num frames 18000...
[2023-02-25 20:05:38,084][14226] Num frames 18100...
[2023-02-25 20:05:38,255][14226] Num frames 18200...
[2023-02-25 20:05:38,426][14226] Num frames 18300...
[2023-02-25 20:05:38,601][14226] Num frames 18400...
[2023-02-25 20:05:38,768][14226] Num frames 18500...
[2023-02-25 20:05:38,931][14226] Num frames 18600...
[2023-02-25 20:05:39,099][14226] Num frames 18700...
[2023-02-25 20:05:39,231][14226] Avg episode rewards: #0: 61.510, true rewards: #0: 20.844
[2023-02-25 20:05:39,233][14226] Avg episode reward: 61.510, avg true_objective: 20.844
[2023-02-25 20:05:39,289][14226] Num frames 18800...
[2023-02-25 20:05:39,412][14226] Num frames 18900...
[2023-02-25 20:05:39,546][14226] Num frames 19000...
[2023-02-25 20:05:39,675][14226] Num frames 19100...
[2023-02-25 20:05:39,796][14226] Num frames 19200...
[2023-02-25 20:05:39,926][14226] Num frames 19300...
[2023-02-25 20:05:40,045][14226] Num frames 19400...
[2023-02-25 20:05:40,178][14226] Num frames 19500...
[2023-02-25 20:05:40,295][14226] Num frames 19600...
[2023-02-25 20:05:40,420][14226] Num frames 19700...
[2023-02-25 20:05:40,565][14226] Num frames 19800...
[2023-02-25 20:05:40,686][14226] Num frames 19900...
[2023-02-25 20:05:40,814][14226] Num frames 20000...
[2023-02-25 20:05:40,932][14226] Num frames 20100...
[2023-02-25 20:05:41,057][14226] Num frames 20200...
[2023-02-25 20:05:41,189][14226] Num frames 20300...
[2023-02-25 20:05:41,309][14226] Num frames 20400...
[2023-02-25 20:05:41,433][14226] Num frames 20500...
[2023-02-25 20:05:41,552][14226] Num frames 20600...
[2023-02-25 20:05:41,675][14226] Num frames 20700...
[2023-02-25 20:05:41,799][14226] Num frames 20800...
[2023-02-25 20:05:41,925][14226] Avg episode rewards: #0: 62.359, true rewards: #0: 20.860
[2023-02-25 20:05:41,926][14226] Avg episode reward: 62.359, avg true_objective: 20.860
[2023-02-25 20:07:59,930][14226] Replay video saved to train_dir/doom_health_gathering_supreme_2222/replay.mp4!
[2023-02-25 20:13:16,207][36780] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-25 20:13:16,213][36780] Rollout worker 0 uses device cpu
[2023-02-25 20:13:16,216][36780] Rollout worker 1 uses device cpu
[2023-02-25 20:13:16,218][36780] Rollout worker 2 uses device cpu
[2023-02-25 20:13:16,219][36780] Rollout worker 3 uses device cpu
[2023-02-25 20:13:16,221][36780] Rollout worker 4 uses device cpu
[2023-02-25 20:13:16,223][36780] Rollout worker 5 uses device cpu
[2023-02-25 20:13:16,225][36780] Rollout worker 6 uses device cpu
[2023-02-25 20:13:16,228][36780] Rollout worker 7 uses device cpu
[2023-02-25 20:13:16,416][36780] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 20:13:16,422][36780] InferenceWorker_p0-w0: min num requests: 2
[2023-02-25 20:13:16,467][36780] Starting all processes...
[2023-02-25 20:13:16,476][36780] Starting process learner_proc0
[2023-02-25 20:13:16,554][36780] Starting all processes...
[2023-02-25 20:13:16,569][36780] Starting process inference_proc0-0
[2023-02-25 20:13:16,570][36780] Starting process rollout_proc0
[2023-02-25 20:13:16,572][36780] Starting process rollout_proc1
[2023-02-25 20:13:16,572][36780] Starting process rollout_proc2
[2023-02-25 20:13:16,573][36780] Starting process rollout_proc3
[2023-02-25 20:13:16,573][36780] Starting process rollout_proc4
[2023-02-25 20:13:16,573][36780] Starting process rollout_proc5
[2023-02-25 20:13:16,573][36780] Starting process rollout_proc6
[2023-02-25 20:13:16,573][36780] Starting process rollout_proc7
[2023-02-25 20:13:28,474][36994] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 20:13:28,475][36994] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-25 20:13:28,780][37011] Worker 2 uses CPU cores [0]
[2023-02-25 20:13:28,830][37015] Worker 6 uses CPU cores [0]
[2023-02-25 20:13:29,012][37007] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 20:13:29,012][37007] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-25 20:13:29,026][37009] Worker 1 uses CPU cores [1]
[2023-02-25 20:13:29,126][37014] Worker 5 uses CPU cores [1]
[2023-02-25 20:13:29,214][37013] Worker 4 uses CPU cores [0]
[2023-02-25 20:13:29,294][37012] Worker 3 uses CPU cores [1]
[2023-02-25 20:13:29,303][37016] Worker 7 uses CPU cores [1]
[2023-02-25 20:13:29,451][37010] Worker 0 uses CPU cores [0]
[2023-02-25 20:13:29,567][37007] Num visible devices: 1
[2023-02-25 20:13:29,580][36994] Num visible devices: 1
[2023-02-25 20:13:29,629][36994] Starting seed is not provided
[2023-02-25 20:13:29,630][36994] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 20:13:29,631][36994] Initializing actor-critic model on device cuda:0
[2023-02-25 20:13:29,633][36994] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 20:13:29,635][36994] RunningMeanStd input shape: (1,)
[2023-02-25 20:13:29,734][36994] ConvEncoder: input_channels=3
[2023-02-25 20:13:30,009][36994] Conv encoder output size: 512
[2023-02-25 20:13:30,010][36994] Policy head output size: 512
[2023-02-25 20:13:30,051][36994] Created Actor Critic model with architecture:
[2023-02-25 20:13:30,052][36994] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
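The printed module tree maps onto plain PyTorch almost line for line; a hedged sketch in which the conv kernel sizes, strides, and channel counts are assumptions chosen to reproduce the logged 512-dim encoder output for (3, 72, 128) inputs:

```python
import torch
from torch import nn

class ActorCriticSketch(nn.Module):
    """Plain-PyTorch approximation of the architecture dump above.
    Conv hyperparameters are assumptions; output sizes match the log."""

    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=1), nn.ELU(),
        )
        with torch.no_grad():  # infer flattened conv size for (3, 72, 128) inputs
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, 512), nn.ELU())
        self.core = nn.GRU(512, 512)                     # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)           # value head
        self.distribution_linear = nn.Linear(512, num_actions)  # action logits

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```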
[2023-02-25 20:13:36,392][36994] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-25 20:13:36,394][36994] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 20:13:36,406][36780] Heartbeat connected on Batcher_0
[2023-02-25 20:13:36,417][36780] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-25 20:13:36,434][36780] Heartbeat connected on RolloutWorker_w0
[2023-02-25 20:13:36,439][36780] Heartbeat connected on RolloutWorker_w1
[2023-02-25 20:13:36,443][36780] Heartbeat connected on RolloutWorker_w2
[2023-02-25 20:13:36,448][36780] Heartbeat connected on RolloutWorker_w3
[2023-02-25 20:13:36,456][36780] Heartbeat connected on RolloutWorker_w4
[2023-02-25 20:13:36,460][36780] Heartbeat connected on RolloutWorker_w5
[2023-02-25 20:13:36,470][36994] Loading model from checkpoint
[2023-02-25 20:13:36,471][36780] Heartbeat connected on RolloutWorker_w6
[2023-02-25 20:13:36,480][36780] Heartbeat connected on RolloutWorker_w7
[2023-02-25 20:13:36,481][36994] Loaded experiment state at self.train_step=978, self.env_steps=4005888
[2023-02-25 20:13:36,485][36994] Initialized policy 0 weights for model version 978
[2023-02-25 20:13:36,492][36994] LearnerWorker_p0 finished initialization!
[2023-02-25 20:13:36,493][36994] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 20:13:36,506][36780] Heartbeat connected on LearnerWorker_p0
[2023-02-25 20:13:36,677][37007] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 20:13:36,678][37007] RunningMeanStd input shape: (1,)
[2023-02-25 20:13:36,690][37007] ConvEncoder: input_channels=3
[2023-02-25 20:13:36,795][37007] Conv encoder output size: 512
[2023-02-25 20:13:36,795][37007] Policy head output size: 512
[2023-02-25 20:13:37,642][36780] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
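Each `Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)` line reports throughput over three sliding windows, which is why this first report shows `nan`: there is no earlier sample to difference against. A minimal sketch of such windowed accounting (illustrative, not Sample Factory's implementation):

```python
import time
from collections import deque

class FpsTracker:
    """Sliding-window FPS over 10/60/300 seconds, as in the log lines."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_frames) pairs

    def report(self, total_frames: int) -> dict:
        now = time.time()
        self.samples.append((now, total_frames))
        while now - self.samples[0][0] > max(self.windows):
            self.samples.popleft()  # drop samples older than the largest window
        fps = {}
        for w in self.windows:
            # oldest sample still inside this window
            old = next(((t, f) for t, f in self.samples if now - t <= w), None)
            if old is None or now == old[0]:
                fps[w] = float("nan")  # first report: no history yet
            else:
                fps[w] = (total_frames - old[1]) / (now - old[0])
        return fps
```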
[2023-02-25 20:13:39,040][36780] Inference worker 0-0 is ready!
[2023-02-25 20:13:39,042][36780] All inference workers are ready! Signal rollout workers to start!
[2023-02-25 20:13:39,168][37012] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,173][37014] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,178][37009] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,184][37016] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,218][37015] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,219][37010] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,220][37011] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:13:39,217][37013] Doom resolution: 160x120, resize resolution: (128, 72)
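Each worker renders Doom at 160x120 and resizes frames to (128, 72), matching the channels-first (3, 72, 128) shape the encoder expects. A minimal preprocessing sketch with OpenCV (illustrative):

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Resize a 120x160x3 Doom frame to the encoder's 72x128 input and reorder
    to channels-first (3, 72, 128), matching the logged shapes."""
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return np.transpose(resized, (2, 0, 1))
```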
[2023-02-25 20:13:40,028][37013] Decorrelating experience for 0 frames...
[2023-02-25 20:13:40,033][37011] Decorrelating experience for 0 frames...
[2023-02-25 20:13:40,336][37014] Decorrelating experience for 0 frames...
[2023-02-25 20:13:40,342][37016] Decorrelating experience for 0 frames...
[2023-02-25 20:13:40,347][37012] Decorrelating experience for 0 frames...
[2023-02-25 20:13:41,223][37013] Decorrelating experience for 32 frames...
[2023-02-25 20:13:41,371][37016] Decorrelating experience for 32 frames...
[2023-02-25 20:13:41,374][37014] Decorrelating experience for 32 frames...
[2023-02-25 20:13:41,377][37012] Decorrelating experience for 32 frames...
[2023-02-25 20:13:41,382][37015] Decorrelating experience for 0 frames...
[2023-02-25 20:13:41,389][37010] Decorrelating experience for 0 frames...
[2023-02-25 20:13:41,692][37011] Decorrelating experience for 32 frames...
[2023-02-25 20:13:42,477][37015] Decorrelating experience for 32 frames...
[2023-02-25 20:13:42,480][37010] Decorrelating experience for 32 frames...
[2023-02-25 20:13:42,621][37016] Decorrelating experience for 64 frames...
[2023-02-25 20:13:42,642][36780] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 20:13:42,645][37014] Decorrelating experience for 64 frames...
[2023-02-25 20:13:42,904][37009] Decorrelating experience for 0 frames...
[2023-02-25 20:13:42,993][37011] Decorrelating experience for 64 frames...
[2023-02-25 20:13:43,820][37012] Decorrelating experience for 64 frames...
[2023-02-25 20:13:43,843][37013] Decorrelating experience for 64 frames...
[2023-02-25 20:13:43,909][37016] Decorrelating experience for 96 frames...
[2023-02-25 20:13:43,936][37014] Decorrelating experience for 96 frames...
[2023-02-25 20:13:44,038][37015] Decorrelating experience for 64 frames...
[2023-02-25 20:13:44,387][37010] Decorrelating experience for 64 frames...
[2023-02-25 20:13:44,603][37011] Decorrelating experience for 96 frames...
[2023-02-25 20:13:45,201][37009] Decorrelating experience for 32 frames...
[2023-02-25 20:13:45,587][37012] Decorrelating experience for 96 frames...
[2023-02-25 20:13:46,209][37013] Decorrelating experience for 96 frames...
[2023-02-25 20:13:46,376][37015] Decorrelating experience for 96 frames...
[2023-02-25 20:13:47,070][37009] Decorrelating experience for 64 frames...
[2023-02-25 20:13:47,162][37010] Decorrelating experience for 96 frames...
[2023-02-25 20:13:47,642][36780] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 2.0. Samples: 20. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 20:13:47,645][36780] Avg episode reward: [(0, '0.800')]
[2023-02-25 20:13:51,284][37009] Decorrelating experience for 96 frames...
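The decorrelation stages warm each worker up with throwaway frames (0, 32, 64, 96 in the log) so the parallel rollouts do not start in lockstep. A hedged sketch of the idea, assuming random-action warmup (Sample Factory's actual schedule also varies per worker and env):

```python
def decorrelate_experience(env, num_stages: int = 4, frames_per_stage: int = 32):
    """Warm the env up with random actions in 32-frame stages (0, 32, 64, 96
    frames, as logged) before real collection begins. Sketch only."""
    env.reset()
    for stage in range(num_stages):
        print(f"Decorrelating experience for {stage * frames_per_stage} frames...")
        for _ in range(frames_per_stage):
            obs, reward, done, info = env.step(env.action_space.sample())
            if done:
                env.reset()
```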
[2023-02-25 20:13:51,868][36994] Signal inference workers to stop experience collection...
[2023-02-25 20:13:51,877][37007] InferenceWorker_p0-w0: stopping experience collection
[2023-02-25 20:13:52,642][36780] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 145.7. Samples: 2186. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 20:13:52,648][36780] Avg episode reward: [(0, '2.498')]
[2023-02-25 20:13:53,636][36994] Signal inference workers to resume experience collection...
[2023-02-25 20:13:53,637][37007] InferenceWorker_p0-w0: resuming experience collection
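The stop/resume pair above is cross-process signaling: the learner pauses collection while it finishes initializing from the first rollouts, then lets the inference worker continue. A minimal sketch with a `multiprocessing.Event` (illustrative; the real mechanism is Sample Factory's own signal slots):

```python
import multiprocessing as mp

collect = mp.Event()
collect.set()  # workers collect while the flag is set

def learner_pause_resume(finish_initialization):
    collect.clear()  # "Signal inference workers to stop experience collection..."
    finish_initialization()
    collect.set()    # "Signal inference workers to resume experience collection..."

def worker_loop(step_once):
    while True:
        collect.wait()  # block while collection is paused
        step_once()
```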
[2023-02-25 20:13:57,642][36780] Fps is (10 sec: 1638.4, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 4022272. Throughput: 0: 229.3. Samples: 4586. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0)
[2023-02-25 20:13:57,654][36780] Avg episode reward: [(0, '4.360')]
[2023-02-25 20:14:02,642][36780] Fps is (10 sec: 3686.4, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 4042752. Throughput: 0: 310.0. Samples: 7750. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:14:02,644][36780] Avg episode reward: [(0, '10.463')]
[2023-02-25 20:14:02,929][37007] Updated weights for policy 0, policy_version 988 (0.0012)
[2023-02-25 20:14:07,642][36780] Fps is (10 sec: 4096.0, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 4063232. Throughput: 0: 466.8. Samples: 14004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:14:07,647][36780] Avg episode reward: [(0, '17.300')]
[2023-02-25 20:14:12,642][36780] Fps is (10 sec: 3276.7, 60 sec: 1989.5, 300 sec: 1989.5). Total num frames: 4075520. Throughput: 0: 520.1. Samples: 18202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:14:12,647][36780] Avg episode reward: [(0, '20.322')]
[2023-02-25 20:14:16,173][37007] Updated weights for policy 0, policy_version 998 (0.0016)
[2023-02-25 20:14:17,642][36780] Fps is (10 sec: 2867.2, 60 sec: 2150.4, 300 sec: 2150.4). Total num frames: 4091904. Throughput: 0: 505.2. Samples: 20206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:14:17,645][36780] Avg episode reward: [(0, '20.621')]
[2023-02-25 20:14:22,642][36780] Fps is (10 sec: 3686.5, 60 sec: 2366.6, 300 sec: 2366.6). Total num frames: 4112384. Throughput: 0: 579.5. Samples: 26076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:14:22,645][36780] Avg episode reward: [(0, '21.303')]
[2023-02-25 20:14:25,898][37007] Updated weights for policy 0, policy_version 1008 (0.0020)
[2023-02-25 20:14:27,643][36780] Fps is (10 sec: 4095.4, 60 sec: 2539.4, 300 sec: 2539.4). Total num frames: 4132864. Throughput: 0: 714.9. Samples: 32172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:14:27,648][36780] Avg episode reward: [(0, '24.253')]
[2023-02-25 20:14:32,642][36780] Fps is (10 sec: 3276.8, 60 sec: 2532.1, 300 sec: 2532.1). Total num frames: 4145152. Throughput: 0: 759.6. Samples: 34202. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:14:32,647][36780] Avg episode reward: [(0, '24.321')]
[2023-02-25 20:14:37,642][36780] Fps is (10 sec: 2867.7, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 4161536. Throughput: 0: 802.0. Samples: 38276. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:14:37,645][36780] Avg episode reward: [(0, '24.009')]
[2023-02-25 20:14:39,627][37007] Updated weights for policy 0, policy_version 1018 (0.0031)
[2023-02-25 20:14:42,642][36780] Fps is (10 sec: 3686.4, 60 sec: 2935.5, 300 sec: 2709.7). Total num frames: 4182016. Throughput: 0: 884.2. Samples: 44374. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 20:14:42,645][36780] Avg episode reward: [(0, '23.108')]
[2023-02-25 20:14:47,644][36780] Fps is (10 sec: 4095.3, 60 sec: 3276.7, 300 sec: 2808.6). Total num frames: 4202496. Throughput: 0: 885.2. Samples: 47584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:14:47,648][36780] Avg episode reward: [(0, '23.678')]
[2023-02-25 20:14:50,119][37007] Updated weights for policy 0, policy_version 1028 (0.0016)
[2023-02-25 20:14:52,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 2785.3). Total num frames: 4214784. Throughput: 0: 844.8. Samples: 52018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:14:52,648][36780] Avg episode reward: [(0, '23.693')]
[2023-02-25 20:14:57,642][36780] Fps is (10 sec: 2458.0, 60 sec: 3413.3, 300 sec: 2764.8). Total num frames: 4227072. Throughput: 0: 837.5. Samples: 55888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 20:14:57,645][36780] Avg episode reward: [(0, '23.846')]
[2023-02-25 20:15:02,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2843.1). Total num frames: 4247552. Throughput: 0: 860.8. Samples: 58940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:15:02,652][36780] Avg episode reward: [(0, '24.171')]
[2023-02-25 20:15:02,950][37007] Updated weights for policy 0, policy_version 1038 (0.0020)
[2023-02-25 20:15:07,642][36780] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 2958.2). Total num frames: 4272128. Throughput: 0: 875.7. Samples: 65482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:15:07,648][36780] Avg episode reward: [(0, '24.212')]
[2023-02-25 20:15:12,645][36780] Fps is (10 sec: 3685.4, 60 sec: 3481.5, 300 sec: 2931.8). Total num frames: 4284416. Throughput: 0: 844.1. Samples: 70156. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:15:12,648][36780] Avg episode reward: [(0, '24.322')]
[2023-02-25 20:15:12,664][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001046_4284416.pth...
[2023-02-25 20:15:12,803][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000880_3604480.pth
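Saving a new checkpoint and then removing the oldest keeps a bounded number on disk. A minimal keep-last-k rotation sketch, borrowing the `checkpoint_<train_step>_<env_steps>.pth` naming from the log (the `keep_last` default is an assumption):

```python
import glob
import os

import torch

def save_with_rotation(model_state: dict, checkpoint_dir: str, train_step: int,
                       env_steps: int, keep_last: int = 2) -> None:
    """Write checkpoint_<train_step>_<env_steps>.pth, then delete the oldest
    files beyond keep_last -- mirroring the Saving/Removing pairs in the log."""
    path = os.path.join(checkpoint_dir, f"checkpoint_{train_step:09d}_{env_steps}.pth")
    torch.save(model_state, path)
    # zero-padded train step makes lexicographic order == chronological order
    checkpoints = sorted(glob.glob(os.path.join(checkpoint_dir, "checkpoint_*.pth")))
    for old in checkpoints[:-keep_last]:
        os.remove(old)
```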
[2023-02-25 20:15:15,046][37007] Updated weights for policy 0, policy_version 1048 (0.0012)
[2023-02-25 20:15:17,642][36780] Fps is (10 sec: 2457.5, 60 sec: 3413.3, 300 sec: 2908.2). Total num frames: 4296704. Throughput: 0: 843.1. Samples: 72142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:15:17,644][36780] Avg episode reward: [(0, '22.939')]
[2023-02-25 20:15:22,642][36780] Fps is (10 sec: 3277.7, 60 sec: 3413.3, 300 sec: 2964.7). Total num frames: 4317184. Throughput: 0: 870.6. Samples: 77454. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:15:22,645][36780] Avg episode reward: [(0, '23.019')]
[2023-02-25 20:15:25,549][37007] Updated weights for policy 0, policy_version 1058 (0.0020)
[2023-02-25 20:15:27,642][36780] Fps is (10 sec: 4505.7, 60 sec: 3481.7, 300 sec: 3053.4). Total num frames: 4341760. Throughput: 0: 880.8. Samples: 84008. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:15:27,645][36780] Avg episode reward: [(0, '24.488')]
[2023-02-25 20:15:32,645][36780] Fps is (10 sec: 3685.1, 60 sec: 3481.4, 300 sec: 3027.4). Total num frames: 4354048. Throughput: 0: 862.5. Samples: 86396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:15:32,648][36780] Avg episode reward: [(0, '25.865')]
[2023-02-25 20:15:37,642][36780] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3003.7). Total num frames: 4366336. Throughput: 0: 854.3. Samples: 90460. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:15:37,647][36780] Avg episode reward: [(0, '25.818')]
[2023-02-25 20:15:39,190][37007] Updated weights for policy 0, policy_version 1068 (0.0016)
[2023-02-25 20:15:42,642][36780] Fps is (10 sec: 3278.0, 60 sec: 3413.3, 300 sec: 3047.4). Total num frames: 4386816. Throughput: 0: 887.8. Samples: 95838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:15:42,645][36780] Avg episode reward: [(0, '25.888')]
[2023-02-25 20:15:47,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3087.8). Total num frames: 4407296. Throughput: 0: 890.6. Samples: 99016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:15:47,651][36780] Avg episode reward: [(0, '26.303')]
[2023-02-25 20:15:48,672][37007] Updated weights for policy 0, policy_version 1078 (0.0014)
[2023-02-25 20:15:52,644][36780] Fps is (10 sec: 3685.8, 60 sec: 3481.5, 300 sec: 3094.7). Total num frames: 4423680. Throughput: 0: 869.7. Samples: 104620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:15:52,650][36780] Avg episode reward: [(0, '26.379')]
[2023-02-25 20:15:57,643][36780] Fps is (10 sec: 2866.9, 60 sec: 3481.5, 300 sec: 3072.0). Total num frames: 4435968. Throughput: 0: 855.7. Samples: 108662. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:15:57,649][36780] Avg episode reward: [(0, '24.535')]
[2023-02-25 20:16:02,270][37007] Updated weights for policy 0, policy_version 1088 (0.0016)
[2023-02-25 20:16:02,642][36780] Fps is (10 sec: 3277.3, 60 sec: 3481.6, 300 sec: 3107.3). Total num frames: 4456448. Throughput: 0: 859.5. Samples: 110818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:16:02,645][36780] Avg episode reward: [(0, '24.583')]
[2023-02-25 20:16:07,642][36780] Fps is (10 sec: 4096.4, 60 sec: 3413.3, 300 sec: 3140.3). Total num frames: 4476928. Throughput: 0: 889.4. Samples: 117476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:16:07,645][36780] Avg episode reward: [(0, '23.040')]
[2023-02-25 20:16:12,477][37007] Updated weights for policy 0, policy_version 1098 (0.0025)
[2023-02-25 20:16:12,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3171.1). Total num frames: 4497408. Throughput: 0: 871.1. Samples: 123208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:16:12,645][36780] Avg episode reward: [(0, '22.578')]
[2023-02-25 20:16:17,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3148.8). Total num frames: 4509696. Throughput: 0: 864.1. Samples: 125278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:16:17,647][36780] Avg episode reward: [(0, '23.180')]
[2023-02-25 20:16:22,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3152.7). Total num frames: 4526080. Throughput: 0: 871.0. Samples: 129656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:16:22,649][36780] Avg episode reward: [(0, '24.289')]
[2023-02-25 20:16:24,708][37007] Updated weights for policy 0, policy_version 1108 (0.0025)
[2023-02-25 20:16:27,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3204.5). Total num frames: 4550656. Throughput: 0: 896.8. Samples: 136194. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:16:27,644][36780] Avg episode reward: [(0, '25.080')]
[2023-02-25 20:16:32,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3550.1, 300 sec: 3206.6). Total num frames: 4567040. Throughput: 0: 897.8. Samples: 139416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:16:32,648][36780] Avg episode reward: [(0, '24.626')]
[2023-02-25 20:16:37,616][37007] Updated weights for policy 0, policy_version 1118 (0.0039)
[2023-02-25 20:16:37,644][36780] Fps is (10 sec: 2866.5, 60 sec: 3549.7, 300 sec: 3185.7). Total num frames: 4579328. Throughput: 0: 853.1. Samples: 143010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:16:37,647][36780] Avg episode reward: [(0, '24.583')]
[2023-02-25 20:16:42,642][36780] Fps is (10 sec: 2048.0, 60 sec: 3345.1, 300 sec: 3144.0). Total num frames: 4587520. Throughput: 0: 836.7. Samples: 146312. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:16:42,652][36780] Avg episode reward: [(0, '24.644')]
[2023-02-25 20:16:47,642][36780] Fps is (10 sec: 2048.5, 60 sec: 3208.5, 300 sec: 3125.9). Total num frames: 4599808. Throughput: 0: 828.2. Samples: 148086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:16:47,644][36780] Avg episode reward: [(0, '25.830')]
[2023-02-25 20:16:51,336][37007] Updated weights for policy 0, policy_version 1128 (0.0016)
[2023-02-25 20:16:52,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3345.2, 300 sec: 3171.8). Total num frames: 4624384. Throughput: 0: 809.0. Samples: 153880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:16:52,650][36780] Avg episode reward: [(0, '25.715')]
[2023-02-25 20:16:57,642][36780] Fps is (10 sec: 4505.6, 60 sec: 3481.7, 300 sec: 3194.9). Total num frames: 4644864. Throughput: 0: 820.9. Samples: 160150. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:16:57,647][36780] Avg episode reward: [(0, '26.000')]
[2023-02-25 20:17:02,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3176.9). Total num frames: 4657152. Throughput: 0: 820.2. Samples: 162188. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:17:02,649][36780] Avg episode reward: [(0, '26.506')]
[2023-02-25 20:17:03,058][37007] Updated weights for policy 0, policy_version 1138 (0.0035)
[2023-02-25 20:17:07,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3179.3). Total num frames: 4673536. Throughput: 0: 816.8. Samples: 166412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:07,650][36780] Avg episode reward: [(0, '25.539')]
[2023-02-25 20:17:12,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3200.6). Total num frames: 4694016. Throughput: 0: 802.5. Samples: 172308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:12,650][36780] Avg episode reward: [(0, '28.332')]
[2023-02-25 20:17:12,663][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001146_4694016.pth...
[2023-02-25 20:17:12,822][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth
[2023-02-25 20:17:12,833][36994] Saving new best policy, reward=28.332!
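Separately from the rolling checkpoints, a "best policy" snapshot is written only when the average reward improves on the previous best. A minimal sketch (illustrative):

```python
best_reward = float("-inf")

def maybe_save_best(avg_reward: float, save_fn) -> None:
    """Mirror the 'Saving new best policy, reward=X!' lines: persist a separate
    best-policy snapshot only when the running average improves."""
    global best_reward
    if avg_reward > best_reward:
        best_reward = avg_reward
        print(f"Saving new best policy, reward={avg_reward:.3f}!")
        save_fn()
```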
[2023-02-25 20:17:14,422][37007] Updated weights for policy 0, policy_version 1148 (0.0028)
[2023-02-25 20:17:17,643][36780] Fps is (10 sec: 4095.7, 60 sec: 3413.3, 300 sec: 3220.9). Total num frames: 4714496. Throughput: 0: 798.6. Samples: 175354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:17,645][36780] Avg episode reward: [(0, '30.397')]
[2023-02-25 20:17:17,651][36994] Saving new best policy, reward=30.397!
[2023-02-25 20:17:22,644][36780] Fps is (10 sec: 3276.3, 60 sec: 3345.0, 300 sec: 3204.0). Total num frames: 4726784. Throughput: 0: 833.5. Samples: 180518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:17:22,649][36780] Avg episode reward: [(0, '30.386')]
[2023-02-25 20:17:27,554][37007] Updated weights for policy 0, policy_version 1158 (0.0014)
[2023-02-25 20:17:27,642][36780] Fps is (10 sec: 2867.4, 60 sec: 3208.5, 300 sec: 3205.6). Total num frames: 4743168. Throughput: 0: 848.9. Samples: 184512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:27,644][36780] Avg episode reward: [(0, '30.981')]
[2023-02-25 20:17:27,652][36994] Saving new best policy, reward=30.981!
[2023-02-25 20:17:32,642][36780] Fps is (10 sec: 3277.3, 60 sec: 3208.5, 300 sec: 3207.1). Total num frames: 4759552. Throughput: 0: 868.4. Samples: 187164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:32,645][36780] Avg episode reward: [(0, '31.256')]
[2023-02-25 20:17:32,664][36994] Saving new best policy, reward=31.256!
[2023-02-25 20:17:37,420][37007] Updated weights for policy 0, policy_version 1168 (0.0020)
[2023-02-25 20:17:37,642][36780] Fps is (10 sec: 4096.1, 60 sec: 3413.5, 300 sec: 3242.7). Total num frames: 4784128. Throughput: 0: 883.2. Samples: 193624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:17:37,645][36780] Avg episode reward: [(0, '30.355')]
[2023-02-25 20:17:42,644][36780] Fps is (10 sec: 3685.4, 60 sec: 3481.4, 300 sec: 3226.6). Total num frames: 4796416. Throughput: 0: 857.1. Samples: 198720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:17:42,650][36780] Avg episode reward: [(0, '29.875')]
[2023-02-25 20:17:47,643][36780] Fps is (10 sec: 2866.8, 60 sec: 3549.8, 300 sec: 3227.6). Total num frames: 4812800. Throughput: 0: 855.9. Samples: 200706. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:47,649][36780] Avg episode reward: [(0, '29.212')]
[2023-02-25 20:17:50,775][37007] Updated weights for policy 0, policy_version 1178 (0.0017)
[2023-02-25 20:17:52,642][36780] Fps is (10 sec: 3277.5, 60 sec: 3413.3, 300 sec: 3228.6). Total num frames: 4829184. Throughput: 0: 871.8. Samples: 205644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:17:52,650][36780] Avg episode reward: [(0, '28.389')]
[2023-02-25 20:17:57,642][36780] Fps is (10 sec: 4096.5, 60 sec: 3481.6, 300 sec: 3261.0). Total num frames: 4853760. Throughput: 0: 884.4. Samples: 212104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:17:57,647][36780] Avg episode reward: [(0, '27.861')]
[2023-02-25 20:18:01,006][37007] Updated weights for policy 0, policy_version 1188 (0.0017)
[2023-02-25 20:18:02,642][36780] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3245.9). Total num frames: 4866048. Throughput: 0: 878.6. Samples: 214892. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:18:02,649][36780] Avg episode reward: [(0, '26.830')]
[2023-02-25 20:18:07,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3246.5). Total num frames: 4882432. Throughput: 0: 857.2. Samples: 219090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:18:07,650][36780] Avg episode reward: [(0, '27.270')]
[2023-02-25 20:18:12,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3247.0). Total num frames: 4898816. Throughput: 0: 878.2. Samples: 224030. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 20:18:12,648][36780] Avg episode reward: [(0, '27.520')]
[2023-02-25 20:18:13,930][37007] Updated weights for policy 0, policy_version 1198 (0.0020)
[2023-02-25 20:18:17,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 4923392. Throughput: 0: 888.8. Samples: 227160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 20:18:17,644][36780] Avg episode reward: [(0, '27.217')]
[2023-02-25 20:18:22,647][36780] Fps is (10 sec: 4094.1, 60 sec: 3549.7, 300 sec: 3276.7). Total num frames: 4939776. Throughput: 0: 880.8. Samples: 233266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:18:22,653][36780] Avg episode reward: [(0, '26.442')]
[2023-02-25 20:18:25,255][37007] Updated weights for policy 0, policy_version 1208 (0.0014)
[2023-02-25 20:18:27,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3262.7). Total num frames: 4952064. Throughput: 0: 859.2. Samples: 237380. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:18:27,648][36780] Avg episode reward: [(0, '24.585')]
[2023-02-25 20:18:32,642][36780] Fps is (10 sec: 2868.5, 60 sec: 3481.6, 300 sec: 3262.9). Total num frames: 4968448. Throughput: 0: 861.4. Samples: 239468. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:18:32,645][36780] Avg episode reward: [(0, '25.731')]
[2023-02-25 20:18:36,645][37007] Updated weights for policy 0, policy_version 1218 (0.0021)
[2023-02-25 20:18:37,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3346.2). Total num frames: 4993024. Throughput: 0: 889.2. Samples: 245660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:18:37,645][36780] Avg episode reward: [(0, '26.719')]
[2023-02-25 20:18:42,642][36780] Fps is (10 sec: 4096.1, 60 sec: 3550.0, 300 sec: 3401.8). Total num frames: 5009408. Throughput: 0: 880.6. Samples: 251732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:18:42,650][36780] Avg episode reward: [(0, '26.710')]
[2023-02-25 20:18:47,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.7, 300 sec: 3443.4). Total num frames: 5021696. Throughput: 0: 863.8. Samples: 253762. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:18:47,653][36780] Avg episode reward: [(0, '25.818')]
[2023-02-25 20:18:49,265][37007] Updated weights for policy 0, policy_version 1228 (0.0012)
[2023-02-25 20:18:52,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5038080. Throughput: 0: 861.8. Samples: 257872. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 20:18:52,648][36780] Avg episode reward: [(0, '24.994')]
[2023-02-25 20:18:57,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 5058560. Throughput: 0: 888.8. Samples: 264026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:18:57,648][36780] Avg episode reward: [(0, '25.185')]
[2023-02-25 20:18:59,577][37007] Updated weights for policy 0, policy_version 1238 (0.0019)
[2023-02-25 20:19:02,642][36780] Fps is (10 sec: 4095.9, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 5079040. Throughput: 0: 890.8. Samples: 267244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 20:19:02,650][36780] Avg episode reward: [(0, '25.028')]
[2023-02-25 20:19:07,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 5095424. Throughput: 0: 859.7. Samples: 271948. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:19:07,647][36780] Avg episode reward: [(0, '23.659')]
[2023-02-25 20:19:12,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5107712. Throughput: 0: 859.9. Samples: 276076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:19:12,648][36780] Avg episode reward: [(0, '22.873')]
[2023-02-25 20:19:12,661][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001247_5107712.pth...
[2023-02-25 20:19:12,788][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001046_4284416.pth
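
> Note: the `Saving …` / `Removing …` pairs show checkpoint rotation: a new `checkpoint_<policy_version>_<env_frames>.pth` is written periodically and the oldest is deleted, so only the most recent few (two, in this run) are kept, consistent with Sample Factory's `--keep_checkpoints` option. A minimal sketch of such a rotation (not the library's actual implementation):

```python
from pathlib import Path

def save_and_rotate(ckpt_dir: Path, state_bytes: bytes, version: int, frames: int, keep: int = 2) -> None:
    """Write checkpoint_<version>_<frames>.pth, then delete all but the `keep` newest."""
    path = ckpt_dir / f"checkpoint_{version:09d}_{frames}.pth"
    path.write_bytes(state_bytes)  # stand-in for torch.save(checkpoint_dict, path)
    # zero-padded version numbers make lexicographic order match training order
    for stale in sorted(ckpt_dir.glob("checkpoint_*.pth"))[:-keep]:
        stale.unlink()
```
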
[2023-02-25 20:19:13,164][37007] Updated weights for policy 0, policy_version 1248 (0.0031)
[2023-02-25 20:19:17,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 5128192. Throughput: 0: 881.0. Samples: 279114. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:19:17,652][36780] Avg episode reward: [(0, '22.641')]
[2023-02-25 20:19:22,491][37007] Updated weights for policy 0, policy_version 1258 (0.0018)
[2023-02-25 20:19:22,645][36780] Fps is (10 sec: 4504.4, 60 sec: 3550.0, 300 sec: 3457.3). Total num frames: 5152768. Throughput: 0: 891.1. Samples: 285762. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:19:22,648][36780] Avg episode reward: [(0, '23.764')]
[2023-02-25 20:19:27,642][36780] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 5165056. Throughput: 0: 858.8. Samples: 290376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:19:27,648][36780] Avg episode reward: [(0, '25.544')]
[2023-02-25 20:19:32,642][36780] Fps is (10 sec: 2458.3, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5177344. Throughput: 0: 859.4. Samples: 292434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:19:32,645][36780] Avg episode reward: [(0, '24.751')]
[2023-02-25 20:19:35,881][37007] Updated weights for policy 0, policy_version 1268 (0.0021)
[2023-02-25 20:19:37,642][36780] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 5197824. Throughput: 0: 886.7. Samples: 297774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:19:37,644][36780] Avg episode reward: [(0, '24.980')]
[2023-02-25 20:19:42,642][36780] Fps is (10 sec: 4505.7, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 5222400. Throughput: 0: 893.0. Samples: 304210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:19:42,648][36780] Avg episode reward: [(0, '27.353')]
[2023-02-25 20:19:46,690][37007] Updated weights for policy 0, policy_version 1278 (0.0026)
[2023-02-25 20:19:47,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 5234688. Throughput: 0: 878.1. Samples: 306758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:19:47,648][36780] Avg episode reward: [(0, '27.667')]
[2023-02-25 20:19:52,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 5251072. Throughput: 0: 864.0. Samples: 310826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:19:52,647][36780] Avg episode reward: [(0, '27.888')]
[2023-02-25 20:19:57,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5267456. Throughput: 0: 887.2. Samples: 316000. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:19:57,648][36780] Avg episode reward: [(0, '27.351')]
[2023-02-25 20:19:58,933][37007] Updated weights for policy 0, policy_version 1288 (0.0017)
[2023-02-25 20:20:02,647][36780] Fps is (10 sec: 4093.8, 60 sec: 3549.6, 300 sec: 3457.2). Total num frames: 5292032. Throughput: 0: 891.0. Samples: 319214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:02,656][36780] Avg episode reward: [(0, '28.102')]
[2023-02-25 20:20:07,644][36780] Fps is (10 sec: 4095.2, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 5308416. Throughput: 0: 876.1. Samples: 325184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:07,648][36780] Avg episode reward: [(0, '27.974')]
[2023-02-25 20:20:10,581][37007] Updated weights for policy 0, policy_version 1298 (0.0028)
[2023-02-25 20:20:12,642][36780] Fps is (10 sec: 2868.7, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 5320704. Throughput: 0: 862.7. Samples: 329196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:12,651][36780] Avg episode reward: [(0, '27.162')]
[2023-02-25 20:20:17,642][36780] Fps is (10 sec: 2867.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5337088. Throughput: 0: 864.2. Samples: 331324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:17,644][36780] Avg episode reward: [(0, '27.150')]
[2023-02-25 20:20:21,712][37007] Updated weights for policy 0, policy_version 1308 (0.0020)
[2023-02-25 20:20:22,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3481.8, 300 sec: 3457.3). Total num frames: 5361664. Throughput: 0: 891.4. Samples: 337888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:20:22,644][36780] Avg episode reward: [(0, '26.277')]
[2023-02-25 20:20:27,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 5378048. Throughput: 0: 872.8. Samples: 343486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:20:27,648][36780] Avg episode reward: [(0, '27.977')]
[2023-02-25 20:20:32,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 5390336. Throughput: 0: 862.5. Samples: 345570. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:20:32,653][36780] Avg episode reward: [(0, '27.962')]
[2023-02-25 20:20:34,509][37007] Updated weights for policy 0, policy_version 1318 (0.0012)
[2023-02-25 20:20:37,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5406720. Throughput: 0: 867.9. Samples: 349882. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:37,649][36780] Avg episode reward: [(0, '28.336')]
[2023-02-25 20:20:42,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3429.5). Total num frames: 5419008. Throughput: 0: 842.5. Samples: 353912. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 20:20:42,646][36780] Avg episode reward: [(0, '28.040')]
[2023-02-25 20:20:47,642][36780] Fps is (10 sec: 2457.5, 60 sec: 3276.8, 300 sec: 3415.7). Total num frames: 5431296. Throughput: 0: 817.4. Samples: 355994. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:20:47,645][36780] Avg episode reward: [(0, '28.838')]
[2023-02-25 20:20:49,364][37007] Updated weights for policy 0, policy_version 1328 (0.0029)
[2023-02-25 20:20:52,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3429.5). Total num frames: 5447680. Throughput: 0: 774.5. Samples: 360034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:20:52,649][36780] Avg episode reward: [(0, '28.336')]
[2023-02-25 20:20:57,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 5459968. Throughput: 0: 776.0. Samples: 364116. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 20:20:57,645][36780] Avg episode reward: [(0, '27.955')]
[2023-02-25 20:21:01,530][37007] Updated weights for policy 0, policy_version 1338 (0.0028)
[2023-02-25 20:21:02,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3208.8, 300 sec: 3415.6). Total num frames: 5484544. Throughput: 0: 802.2. Samples: 367424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:21:02,646][36780] Avg episode reward: [(0, '28.915')]
[2023-02-25 20:21:07,646][36780] Fps is (10 sec: 4504.1, 60 sec: 3276.7, 300 sec: 3415.6). Total num frames: 5505024. Throughput: 0: 805.1. Samples: 374120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:07,649][36780] Avg episode reward: [(0, '26.195')]
[2023-02-25 20:21:12,643][36780] Fps is (10 sec: 3276.5, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 5517312. Throughput: 0: 779.6. Samples: 378568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:12,649][36780] Avg episode reward: [(0, '26.725')]
[2023-02-25 20:21:12,661][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001347_5517312.pth...
[2023-02-25 20:21:12,796][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001146_4694016.pth
[2023-02-25 20:21:13,377][37007] Updated weights for policy 0, policy_version 1348 (0.0023)
[2023-02-25 20:21:17,642][36780] Fps is (10 sec: 2458.4, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 5529600. Throughput: 0: 778.7. Samples: 380612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:17,649][36780] Avg episode reward: [(0, '26.737')]
[2023-02-25 20:21:22,642][36780] Fps is (10 sec: 3277.1, 60 sec: 3140.3, 300 sec: 3387.9). Total num frames: 5550080. Throughput: 0: 802.0. Samples: 385972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:21:22,645][36780] Avg episode reward: [(0, '25.824')]
[2023-02-25 20:21:24,566][37007] Updated weights for policy 0, policy_version 1358 (0.0016)
[2023-02-25 20:21:27,642][36780] Fps is (10 sec: 4505.9, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 5574656. Throughput: 0: 856.4. Samples: 392452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:27,645][36780] Avg episode reward: [(0, '25.973')]
[2023-02-25 20:21:32,644][36780] Fps is (10 sec: 3685.7, 60 sec: 3276.7, 300 sec: 3415.7). Total num frames: 5586944. Throughput: 0: 863.3. Samples: 394844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:32,651][36780] Avg episode reward: [(0, '26.090')]
[2023-02-25 20:21:37,438][37007] Updated weights for policy 0, policy_version 1368 (0.0030)
[2023-02-25 20:21:37,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 5603328. Throughput: 0: 863.2. Samples: 398878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:37,648][36780] Avg episode reward: [(0, '27.051')]
[2023-02-25 20:21:42,642][36780] Fps is (10 sec: 3277.2, 60 sec: 3345.0, 300 sec: 3457.3). Total num frames: 5619712. Throughput: 0: 890.1. Samples: 404172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:21:42,648][36780] Avg episode reward: [(0, '28.320')]
[2023-02-25 20:21:47,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5640192. Throughput: 0: 889.1. Samples: 407434. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:21:47,644][36780] Avg episode reward: [(0, '27.490')]
[2023-02-25 20:21:47,675][37007] Updated weights for policy 0, policy_version 1378 (0.0023)
[2023-02-25 20:21:52,643][36780] Fps is (10 sec: 3686.1, 60 sec: 3481.5, 300 sec: 3429.5). Total num frames: 5656576. Throughput: 0: 865.8. Samples: 413080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:21:52,647][36780] Avg episode reward: [(0, '27.297')]
[2023-02-25 20:21:57,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 5668864. Throughput: 0: 852.7. Samples: 416940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:21:57,648][36780] Avg episode reward: [(0, '27.868')]
[2023-02-25 20:22:01,327][37007] Updated weights for policy 0, policy_version 1388 (0.0027)
[2023-02-25 20:22:02,642][36780] Fps is (10 sec: 3277.3, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 5689344. Throughput: 0: 856.5. Samples: 419156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:22:02,644][36780] Avg episode reward: [(0, '28.201')]
[2023-02-25 20:22:07,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3413.5, 300 sec: 3443.4). Total num frames: 5709824. Throughput: 0: 878.8. Samples: 425518. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:07,649][36780] Avg episode reward: [(0, '29.350')]
[2023-02-25 20:22:11,719][37007] Updated weights for policy 0, policy_version 1398 (0.0012)
[2023-02-25 20:22:12,645][36780] Fps is (10 sec: 3685.4, 60 sec: 3481.5, 300 sec: 3429.5). Total num frames: 5726208. Throughput: 0: 853.7. Samples: 430870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:22:12,647][36780] Avg episode reward: [(0, '27.573')]
[2023-02-25 20:22:17,647][36780] Fps is (10 sec: 3275.2, 60 sec: 3549.6, 300 sec: 3443.4). Total num frames: 5742592. Throughput: 0: 847.0. Samples: 432964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:17,653][36780] Avg episode reward: [(0, '27.818')]
[2023-02-25 20:22:22,642][36780] Fps is (10 sec: 3277.7, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5758976. Throughput: 0: 856.4. Samples: 437416. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 20:22:22,649][36780] Avg episode reward: [(0, '28.267')]
[2023-02-25 20:22:24,316][37007] Updated weights for policy 0, policy_version 1408 (0.0022)
[2023-02-25 20:22:27,642][36780] Fps is (10 sec: 3688.3, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 5779456. Throughput: 0: 881.9. Samples: 443858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:27,649][36780] Avg episode reward: [(0, '27.919')]
[2023-02-25 20:22:32,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3481.7, 300 sec: 3429.5). Total num frames: 5795840. Throughput: 0: 880.5. Samples: 447058. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:32,645][36780] Avg episode reward: [(0, '27.537')]
[2023-02-25 20:22:35,808][37007] Updated weights for policy 0, policy_version 1418 (0.0013)
[2023-02-25 20:22:37,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5812224. Throughput: 0: 848.4. Samples: 451258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:37,646][36780] Avg episode reward: [(0, '26.920')]
[2023-02-25 20:22:42,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5828608. Throughput: 0: 866.8. Samples: 455944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 20:22:42,650][36780] Avg episode reward: [(0, '26.853')]
[2023-02-25 20:22:47,028][37007] Updated weights for policy 0, policy_version 1428 (0.0015)
[2023-02-25 20:22:47,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5849088. Throughput: 0: 891.8. Samples: 459288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:22:47,647][36780] Avg episode reward: [(0, '26.227')]
[2023-02-25 20:22:52,642][36780] Fps is (10 sec: 4096.1, 60 sec: 3550.0, 300 sec: 3443.4). Total num frames: 5869568. Throughput: 0: 897.9. Samples: 465924. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 20:22:52,644][36780] Avg episode reward: [(0, '27.641')]
[2023-02-25 20:22:57,644][36780] Fps is (10 sec: 3276.1, 60 sec: 3549.8, 300 sec: 3443.4). Total num frames: 5881856. Throughput: 0: 871.2. Samples: 470074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:22:57,656][36780] Avg episode reward: [(0, '27.899')]
[2023-02-25 20:22:59,309][37007] Updated weights for policy 0, policy_version 1438 (0.0013)
[2023-02-25 20:23:02,642][36780] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5898240. Throughput: 0: 870.6. Samples: 472136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 20:23:02,649][36780] Avg episode reward: [(0, '27.259')]
[2023-02-25 20:23:07,642][36780] Fps is (10 sec: 3687.1, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5918720. Throughput: 0: 900.9. Samples: 477956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:23:07,645][36780] Avg episode reward: [(0, '27.029')]
[2023-02-25 20:23:09,920][37007] Updated weights for policy 0, policy_version 1448 (0.0014)
[2023-02-25 20:23:12,642][36780] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3443.4). Total num frames: 5939200. Throughput: 0: 898.5. Samples: 484292. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 20:23:12,647][36780] Avg episode reward: [(0, '26.202')]
[2023-02-25 20:23:12,659][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001450_5939200.pth...
[2023-02-25 20:23:12,827][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001247_5107712.pth
[2023-02-25 20:23:17,642][36780] Fps is (10 sec: 3276.8, 60 sec: 3481.9, 300 sec: 3429.6). Total num frames: 5951488. Throughput: 0: 871.7. Samples: 486284. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 20:23:17,647][36780] Avg episode reward: [(0, '27.058')]
[2023-02-25 20:23:22,642][36780] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 5967872. Throughput: 0: 866.4. Samples: 490246. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 20:23:22,645][36780] Avg episode reward: [(0, '26.485')]
[2023-02-25 20:23:23,432][37007] Updated weights for policy 0, policy_version 1458 (0.0028)
[2023-02-25 20:23:27,642][36780] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 5988352. Throughput: 0: 894.0. Samples: 496172. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-02-25 20:23:27,645][36780] Avg episode reward: [(0, '25.915')]
[2023-02-25 20:23:31,005][36994] Stopping Batcher_0...
[2023-02-25 20:23:31,007][36994] Loop batcher_evt_loop terminating...
[2023-02-25 20:23:31,005][36780] Component Batcher_0 stopped!
[2023-02-25 20:23:31,009][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
[2023-02-25 20:23:31,056][37007] Weights refcount: 2 0
[2023-02-25 20:23:31,076][36780] Component InferenceWorker_p0-w0 stopped!
[2023-02-25 20:23:31,076][37007] Stopping InferenceWorker_p0-w0...
[2023-02-25 20:23:31,087][37007] Loop inference_proc0-0_evt_loop terminating...
[2023-02-25 20:23:31,092][36780] Component RolloutWorker_w0 stopped!
[2023-02-25 20:23:31,095][36780] Component RolloutWorker_w2 stopped!
[2023-02-25 20:23:31,098][37011] Stopping RolloutWorker_w2...
[2023-02-25 20:23:31,098][37011] Loop rollout_proc2_evt_loop terminating...
[2023-02-25 20:23:31,100][37016] Stopping RolloutWorker_w7...
[2023-02-25 20:23:31,100][37016] Loop rollout_proc7_evt_loop terminating...
[2023-02-25 20:23:31,100][36780] Component RolloutWorker_w7 stopped!
[2023-02-25 20:23:31,108][36780] Component RolloutWorker_w4 stopped!
[2023-02-25 20:23:31,108][37013] Stopping RolloutWorker_w4...
[2023-02-25 20:23:31,099][37010] Stopping RolloutWorker_w0...
[2023-02-25 20:23:31,115][37015] Stopping RolloutWorker_w6...
[2023-02-25 20:23:31,114][36780] Component RolloutWorker_w6 stopped!
[2023-02-25 20:23:31,119][37010] Loop rollout_proc0_evt_loop terminating...
[2023-02-25 20:23:31,111][37013] Loop rollout_proc4_evt_loop terminating...
[2023-02-25 20:23:31,123][37015] Loop rollout_proc6_evt_loop terminating...
[2023-02-25 20:23:31,129][36780] Component RolloutWorker_w1 stopped!
[2023-02-25 20:23:31,137][37014] Stopping RolloutWorker_w5...
[2023-02-25 20:23:31,137][36780] Component RolloutWorker_w5 stopped!
[2023-02-25 20:23:31,129][37009] Stopping RolloutWorker_w1...
[2023-02-25 20:23:31,150][37012] Stopping RolloutWorker_w3...
[2023-02-25 20:23:31,150][36780] Component RolloutWorker_w3 stopped!
[2023-02-25 20:23:31,138][37014] Loop rollout_proc5_evt_loop terminating...
[2023-02-25 20:23:31,142][37009] Loop rollout_proc1_evt_loop terminating...
[2023-02-25 20:23:31,151][37012] Loop rollout_proc3_evt_loop terminating...
[2023-02-25 20:23:31,175][36994] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001347_5517312.pth
[2023-02-25 20:23:31,185][36994] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
[2023-02-25 20:23:31,392][36780] Component LearnerWorker_p0 stopped!
[2023-02-25 20:23:31,394][36780] Waiting for process learner_proc0 to stop...
[2023-02-25 20:23:31,403][36994] Stopping LearnerWorker_p0...
[2023-02-25 20:23:31,404][36994] Loop learner_proc0_evt_loop terminating...
[2023-02-25 20:23:33,398][36780] Waiting for process inference_proc0-0 to join...
[2023-02-25 20:23:33,904][36780] Waiting for process rollout_proc0 to join...
[2023-02-25 20:23:35,031][36780] Waiting for process rollout_proc1 to join...
[2023-02-25 20:23:35,037][36780] Waiting for process rollout_proc2 to join...
[2023-02-25 20:23:35,045][36780] Waiting for process rollout_proc3 to join...
[2023-02-25 20:23:35,049][36780] Waiting for process rollout_proc4 to join...
[2023-02-25 20:23:35,050][36780] Waiting for process rollout_proc5 to join...
[2023-02-25 20:23:35,053][36780] Waiting for process rollout_proc6 to join...
[2023-02-25 20:23:35,054][36780] Waiting for process rollout_proc7 to join...
[2023-02-25 20:23:35,056][36780] Batcher 0 profile tree view:
batching: 13.9036, releasing_batches: 0.0135
[2023-02-25 20:23:35,057][36780] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 282.2654
update_model: 3.9689
  weight_update: 0.0013
one_step: 0.0023
  handle_policy_step: 281.5777
    deserialize: 8.0065, stack: 1.5816, obs_to_device_normalize: 60.4016, forward: 138.2783, send_messages: 14.1621
    prepare_outputs: 44.6963
      to_cpu: 27.5721
[2023-02-25 20:23:35,060][36780] Learner 0 profile tree view:
misc: 0.0026, prepare_batch: 12.2238
train: 41.1266
  epoch_init: 0.0076, minibatch_init: 0.0054, losses_postprocess: 0.3181, kl_divergence: 0.3252, after_optimizer: 1.9096
  calculate_losses: 13.5311
    losses_init: 0.0018, forward_head: 1.0308, bptt_initial: 8.6948, tail: 0.5480, advantages_returns: 0.1486, losses: 1.7926
    bptt: 1.1595
      bptt_forward_core: 1.1141
  update: 24.6911
    clip: 0.7190
[2023-02-25 20:23:35,062][36780] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.1817, enqueue_policy_requests: 78.4674, env_step: 438.6337, overhead: 12.2173, complete_rollouts: 4.2548
save_policy_outputs: 11.4993
  split_output_tensors: 5.4428
[2023-02-25 20:23:35,063][36780] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.1953, enqueue_policy_requests: 80.0619, env_step: 438.7543, overhead: 11.5066, complete_rollouts: 3.6980
save_policy_outputs: 11.7847
  split_output_tensors: 5.5995
[2023-02-25 20:23:35,067][36780] Loop Runner_EvtLoop terminating...
[2023-02-25 20:23:35,069][36780] Runner profile tree view:
main_loop: 618.6015
[2023-02-25 20:23:35,070][36780] Collected {0: 6004736}, FPS: 3231.2
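
> Note: the runner summary ties the profiles back to wall-clock time. The `FPS: 3231.2` figure is frames collected *in this session* divided by `main_loop: 618.6015` seconds; since this run resumed from an earlier checkpoint (the section above already starts near frame 4.9M), roughly 2M new frames were gathered here. The rollout-worker profiles also show where the time went: `env_step` alone is ~439 s of the ~619 s loop. A quick sanity check of that arithmetic, using only numbers from the log:

```python
main_loop_s = 618.6015    # Runner profile above
reported_fps = 3231.2     # from "Collected {0: 6004736}, FPS: 3231.2"

session_frames = reported_fps * main_loop_s
print(f"{session_frames:,.0f}")  # ~1,998,825 -> run resumed near frame 4,005,900
print(f"env_step share of wall time: {438.6337 / main_loop_s:.0%}")  # ~71%
```
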
[2023-02-25 20:23:35,214][36780] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 20:23:35,217][36780] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 20:23:35,219][36780] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 20:23:35,221][36780] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 20:23:35,222][36780] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 20:23:35,227][36780] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 20:23:35,228][36780] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 20:23:35,229][36780] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 20:23:35,230][36780] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-25 20:23:35,231][36780] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-25 20:23:35,233][36780] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 20:23:35,235][36780] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 20:23:35,238][36780] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 20:23:35,239][36780] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 20:23:35,241][36780] Using frameskip 1 and render_action_repeat=4 for evaluation
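
> Note: the block above is the start of the evaluation ("enjoy") pass: the training config is reloaded from `config.json` and evaluation-only arguments (`no_render`, `save_video`, `max_num_episodes=10`, …) are layered on top, hence the "Adding new argument … not in the saved config file!" warnings. In the Deep RL course notebook this run is launched roughly as follows, where `parse_vizdoom_cfg` is the notebook's helper around Sample Factory's argument parsing (helper name assumed from the course material):

```python
from sample_factory.enjoy import enjoy

# parse_vizdoom_cfg is the course-notebook helper that registers the Doom envs
# and parses Sample Factory args; evaluation=True enables enjoy-mode flags.
env = "doom_health_gathering_supreme"
cfg = parse_vizdoom_cfg(
    argv=[f"--env={env}", "--num_workers=1", "--save_video",
          "--no_render", "--max_num_episodes=10"],
    evaluation=True,
)
status = enjoy(cfg)  # rolls out 10 episodes and records replay.mp4
```
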
[2023-02-25 20:23:35,285][36780] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 20:23:35,290][36780] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 20:23:35,295][36780] RunningMeanStd input shape: (1,)
[2023-02-25 20:23:35,325][36780] ConvEncoder: input_channels=3
[2023-02-25 20:23:36,110][36780] Conv encoder output size: 512
[2023-02-25 20:23:36,116][36780] Policy head output size: 512
[2023-02-25 20:23:38,730][36780] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
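
> Note: the evaluator restores the latest checkpoint before rolling out episodes. The `.pth` file is an ordinary PyTorch pickle; to poke at it yourself, something like the following works (key names are not guaranteed, so inspect rather than assume):

```python
import torch

ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")
print(sorted(ckpt.keys()))  # typically model/optimizer state plus step counters
```
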
[2023-02-25 20:23:39,998][36780] Num frames 100...
[2023-02-25 20:23:40,128][36780] Num frames 200...
[2023-02-25 20:23:40,238][36780] Num frames 300...
[2023-02-25 20:23:40,350][36780] Num frames 400...
[2023-02-25 20:23:40,461][36780] Num frames 500...
[2023-02-25 20:23:40,575][36780] Num frames 600...
[2023-02-25 20:23:40,691][36780] Num frames 700...
[2023-02-25 20:23:40,811][36780] Num frames 800...
[2023-02-25 20:23:40,924][36780] Num frames 900...
[2023-02-25 20:23:41,038][36780] Num frames 1000...
[2023-02-25 20:23:41,170][36780] Num frames 1100...
[2023-02-25 20:23:41,294][36780] Num frames 1200...
[2023-02-25 20:23:41,410][36780] Num frames 1300...
[2023-02-25 20:23:41,528][36780] Num frames 1400...
[2023-02-25 20:23:41,652][36780] Num frames 1500...
[2023-02-25 20:23:41,767][36780] Num frames 1600...
[2023-02-25 20:23:41,883][36780] Num frames 1700...
[2023-02-25 20:23:42,016][36780] Avg episode rewards: #0: 47.639, true rewards: #0: 17.640
[2023-02-25 20:23:42,018][36780] Avg episode reward: 47.639, avg true_objective: 17.640
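
> Note: `Avg episode rewards` is a running mean over the episodes finished so far, printed once per completed episode, while `true rewards` / `true_objective` is the unshaped per-episode objective reported by the environment. The running mean is easy to verify from consecutive lines; e.g. the 17.640 after episode 1 and 11.525 after episode 2 imply episode 2's true reward was 5.410:

```python
def running_means(episode_rewards):
    """Mean over episodes completed so far, as printed after each episode."""
    total = 0.0
    for n, r in enumerate(episode_rewards, start=1):
        total += r
        yield total / n

print(list(running_means([17.640, 5.410])))  # [17.64, 11.525]
```
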
[2023-02-25 20:23:42,072][36780] Num frames 1800...
[2023-02-25 20:23:42,196][36780] Num frames 1900...
[2023-02-25 20:23:42,312][36780] Num frames 2000...
[2023-02-25 20:23:42,433][36780] Num frames 2100...
[2023-02-25 20:23:42,547][36780] Num frames 2200...
[2023-02-25 20:23:42,660][36780] Num frames 2300...
[2023-02-25 20:23:42,723][36780] Avg episode rewards: #0: 28.525, true rewards: #0: 11.525
[2023-02-25 20:23:42,726][36780] Avg episode reward: 28.525, avg true_objective: 11.525
[2023-02-25 20:23:42,847][36780] Num frames 2400...
[2023-02-25 20:23:42,960][36780] Num frames 2500...
[2023-02-25 20:23:43,077][36780] Num frames 2600...
[2023-02-25 20:23:43,198][36780] Num frames 2700...
[2023-02-25 20:23:43,325][36780] Num frames 2800...
[2023-02-25 20:23:43,444][36780] Num frames 2900...
[2023-02-25 20:23:43,563][36780] Num frames 3000...
[2023-02-25 20:23:43,688][36780] Num frames 3100...
[2023-02-25 20:23:43,805][36780] Num frames 3200...
[2023-02-25 20:23:43,918][36780] Num frames 3300...
[2023-02-25 20:23:44,030][36780] Num frames 3400...
[2023-02-25 20:23:44,159][36780] Num frames 3500...
[2023-02-25 20:23:44,273][36780] Num frames 3600...
[2023-02-25 20:23:44,351][36780] Avg episode rewards: #0: 29.723, true rewards: #0: 12.057
[2023-02-25 20:23:44,352][36780] Avg episode reward: 29.723, avg true_objective: 12.057
[2023-02-25 20:23:44,449][36780] Num frames 3700...
[2023-02-25 20:23:44,562][36780] Num frames 3800...
[2023-02-25 20:23:44,679][36780] Num frames 3900...
[2023-02-25 20:23:44,792][36780] Num frames 4000...
[2023-02-25 20:23:44,905][36780] Num frames 4100...
[2023-02-25 20:23:45,024][36780] Num frames 4200...
[2023-02-25 20:23:45,138][36780] Num frames 4300...
[2023-02-25 20:23:45,296][36780] Avg episode rewards: #0: 26.212, true rewards: #0: 10.962
[2023-02-25 20:23:45,298][36780] Avg episode reward: 26.212, avg true_objective: 10.962
[2023-02-25 20:23:45,319][36780] Num frames 4400...
[2023-02-25 20:23:45,440][36780] Num frames 4500...
[2023-02-25 20:23:45,551][36780] Num frames 4600...
[2023-02-25 20:23:45,662][36780] Num frames 4700...
[2023-02-25 20:23:45,779][36780] Num frames 4800...
[2023-02-25 20:23:45,895][36780] Num frames 4900...
[2023-02-25 20:23:46,010][36780] Num frames 5000...
[2023-02-25 20:23:46,124][36780] Num frames 5100...
[2023-02-25 20:23:46,244][36780] Num frames 5200...
[2023-02-25 20:23:46,364][36780] Num frames 5300...
[2023-02-25 20:23:46,479][36780] Num frames 5400...
[2023-02-25 20:23:46,583][36780] Avg episode rewards: #0: 25.682, true rewards: #0: 10.882
[2023-02-25 20:23:46,585][36780] Avg episode reward: 25.682, avg true_objective: 10.882
[2023-02-25 20:23:46,665][36780] Num frames 5500...
[2023-02-25 20:23:46,781][36780] Num frames 5600...
[2023-02-25 20:23:46,892][36780] Num frames 5700...
[2023-02-25 20:23:47,006][36780] Num frames 5800...
[2023-02-25 20:23:47,124][36780] Num frames 5900...
[2023-02-25 20:23:47,239][36780] Num frames 6000...
[2023-02-25 20:23:47,404][36780] Num frames 6100...
[2023-02-25 20:23:47,567][36780] Num frames 6200...
[2023-02-25 20:23:47,645][36780] Avg episode rewards: #0: 23.682, true rewards: #0: 10.348
[2023-02-25 20:23:47,648][36780] Avg episode reward: 23.682, avg true_objective: 10.348
[2023-02-25 20:23:47,795][36780] Num frames 6300...
[2023-02-25 20:23:47,958][36780] Num frames 6400...
[2023-02-25 20:23:48,117][36780] Num frames 6500...
[2023-02-25 20:23:48,285][36780] Num frames 6600...
[2023-02-25 20:23:48,446][36780] Num frames 6700...
[2023-02-25 20:23:48,609][36780] Num frames 6800...
[2023-02-25 20:23:48,771][36780] Num frames 6900...
[2023-02-25 20:23:48,945][36780] Num frames 7000...
[2023-02-25 20:23:49,107][36780] Num frames 7100...
[2023-02-25 20:23:49,270][36780] Num frames 7200...
[2023-02-25 20:23:49,437][36780] Num frames 7300...
[2023-02-25 20:23:49,604][36780] Num frames 7400...
[2023-02-25 20:23:49,777][36780] Num frames 7500...
[2023-02-25 20:23:49,946][36780] Num frames 7600...
[2023-02-25 20:23:50,116][36780] Num frames 7700...
[2023-02-25 20:23:50,283][36780] Num frames 7800...
[2023-02-25 20:23:50,455][36780] Num frames 7900...
[2023-02-25 20:23:50,627][36780] Num frames 8000...
[2023-02-25 20:23:50,741][36780] Avg episode rewards: #0: 27.904, true rewards: #0: 11.476
[2023-02-25 20:23:50,744][36780] Avg episode reward: 27.904, avg true_objective: 11.476
[2023-02-25 20:23:50,856][36780] Num frames 8100...
[2023-02-25 20:23:50,984][36780] Num frames 8200...
[2023-02-25 20:23:51,099][36780] Num frames 8300...
[2023-02-25 20:23:51,225][36780] Num frames 8400...
[2023-02-25 20:23:51,343][36780] Num frames 8500...
[2023-02-25 20:23:51,463][36780] Num frames 8600...
[2023-02-25 20:23:51,590][36780] Num frames 8700...
[2023-02-25 20:23:51,703][36780] Num frames 8800...
[2023-02-25 20:23:51,834][36780] Num frames 8900...
[2023-02-25 20:23:51,948][36780] Num frames 9000...
[2023-02-25 20:23:52,064][36780] Num frames 9100...
[2023-02-25 20:23:52,185][36780] Num frames 9200...
[2023-02-25 20:23:52,323][36780] Num frames 9300...
[2023-02-25 20:23:52,443][36780] Num frames 9400...
[2023-02-25 20:23:52,555][36780] Num frames 9500...
[2023-02-25 20:23:52,673][36780] Num frames 9600...
[2023-02-25 20:23:52,786][36780] Num frames 9700...
[2023-02-25 20:23:52,875][36780] Avg episode rewards: #0: 30.411, true rewards: #0: 12.161
[2023-02-25 20:23:52,876][36780] Avg episode reward: 30.411, avg true_objective: 12.161
[2023-02-25 20:23:52,965][36780] Num frames 9800...
[2023-02-25 20:23:53,089][36780] Num frames 9900...
[2023-02-25 20:23:53,212][36780] Num frames 10000...
[2023-02-25 20:23:53,339][36780] Num frames 10100...
[2023-02-25 20:23:53,466][36780] Num frames 10200...
[2023-02-25 20:23:53,591][36780] Num frames 10300...
[2023-02-25 20:23:53,659][36780] Avg episode rewards: #0: 28.117, true rewards: #0: 11.450
[2023-02-25 20:23:53,661][36780] Avg episode reward: 28.117, avg true_objective: 11.450
[2023-02-25 20:23:53,771][36780] Num frames 10400...
[2023-02-25 20:23:53,885][36780] Num frames 10500...
[2023-02-25 20:23:54,007][36780] Num frames 10600...
[2023-02-25 20:23:54,126][36780] Num frames 10700...
[2023-02-25 20:23:54,250][36780] Num frames 10800...
[2023-02-25 20:23:54,364][36780] Num frames 10900...
[2023-02-25 20:23:54,486][36780] Num frames 11000...
[2023-02-25 20:23:54,613][36780] Num frames 11100...
[2023-02-25 20:23:54,727][36780] Num frames 11200...
[2023-02-25 20:23:54,840][36780] Num frames 11300...
[2023-02-25 20:23:54,956][36780] Num frames 11400...
[2023-02-25 20:23:55,068][36780] Num frames 11500...
[2023-02-25 20:23:55,193][36780] Avg episode rewards: #0: 28.561, true rewards: #0: 11.561
[2023-02-25 20:23:55,195][36780] Avg episode reward: 28.561, avg true_objective: 11.561
[2023-02-25 20:25:11,633][36780] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 20:25:12,374][36780] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 20:25:12,376][36780] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 20:25:12,378][36780] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 20:25:12,380][36780] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 20:25:12,382][36780] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 20:25:12,383][36780] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 20:25:12,385][36780] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 20:25:12,386][36780] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 20:25:12,387][36780] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 20:25:12,388][36780] Adding new argument 'hf_repository'='SergejSchweizer/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 20:25:12,389][36780] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 20:25:12,390][36780] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 20:25:12,391][36780] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 20:25:12,392][36780] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 20:25:12,393][36780] Using frameskip 1 and render_action_repeat=4 for evaluation
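
> Note: this second evaluation pass differs from the first only in the added `push_to_hub=True`, `hf_repository=…`, and finite `max_num_frames` arguments: it re-records a replay and then uploads the whole experiment directory to the Hub. In the course notebook this is the same `enjoy` call with the extra flags (again via the notebook's assumed `parse_vizdoom_cfg` helper):

```python
from sample_factory.enjoy import enjoy

hf_username = "SergejSchweizer"
cfg = parse_vizdoom_cfg(
    argv=[f"--env={env}", "--num_workers=1", "--save_video", "--no_render",
          "--max_num_episodes=10", "--max_num_frames=100000",
          "--push_to_hub",
          f"--hf_repository={hf_username}/rl_course_vizdoom_health_gathering_supreme"],
    evaluation=True,
)
status = enjoy(cfg)  # evaluates, saves replay.mp4, then pushes to the Hub
```
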
[2023-02-25 20:25:12,418][36780] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 20:25:12,420][36780] RunningMeanStd input shape: (1,)
[2023-02-25 20:25:12,437][36780] ConvEncoder: input_channels=3
[2023-02-25 20:25:12,501][36780] Conv encoder output size: 512
[2023-02-25 20:25:12,503][36780] Policy head output size: 512
[2023-02-25 20:25:12,533][36780] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
[2023-02-25 20:25:13,263][36780] Num frames 100...
[2023-02-25 20:25:13,424][36780] Num frames 200...
[2023-02-25 20:25:13,577][36780] Num frames 300...
[2023-02-25 20:25:13,734][36780] Num frames 400...
[2023-02-25 20:25:13,885][36780] Num frames 500...
[2023-02-25 20:25:14,062][36780] Num frames 600...
[2023-02-25 20:25:14,213][36780] Num frames 700...
[2023-02-25 20:25:14,373][36780] Num frames 800...
[2023-02-25 20:25:14,552][36780] Num frames 900...
[2023-02-25 20:25:14,733][36780] Num frames 1000...
[2023-02-25 20:25:14,906][36780] Num frames 1100...
[2023-02-25 20:25:15,082][36780] Num frames 1200...
[2023-02-25 20:25:15,246][36780] Num frames 1300...
[2023-02-25 20:25:15,428][36780] Num frames 1400...
[2023-02-25 20:25:15,599][36780] Num frames 1500...
[2023-02-25 20:25:15,762][36780] Num frames 1600...
[2023-02-25 20:25:15,916][36780] Num frames 1700...
[2023-02-25 20:25:16,122][36780] Num frames 1800...
[2023-02-25 20:25:16,298][36780] Num frames 1900...
[2023-02-25 20:25:16,491][36780] Num frames 2000...
[2023-02-25 20:25:16,697][36780] Num frames 2100...
[2023-02-25 20:25:16,754][36780] Avg episode rewards: #0: 57.999, true rewards: #0: 21.000
[2023-02-25 20:25:16,757][36780] Avg episode reward: 57.999, avg true_objective: 21.000
[2023-02-25 20:25:16,963][36780] Num frames 2200...
[2023-02-25 20:25:17,133][36780] Num frames 2300...
[2023-02-25 20:25:17,299][36780] Num frames 2400...
[2023-02-25 20:25:17,491][36780] Num frames 2500...
[2023-02-25 20:25:17,692][36780] Num frames 2600...
[2023-02-25 20:25:17,885][36780] Num frames 2700...
[2023-02-25 20:25:18,087][36780] Num frames 2800...
[2023-02-25 20:25:18,284][36780] Num frames 2900...
[2023-02-25 20:25:18,483][36780] Num frames 3000...
[2023-02-25 20:25:18,667][36780] Num frames 3100...
[2023-02-25 20:25:18,855][36780] Num frames 3200...
[2023-02-25 20:25:19,033][36780] Num frames 3300...
[2023-02-25 20:25:19,224][36780] Num frames 3400...
[2023-02-25 20:25:19,409][36780] Num frames 3500...
[2023-02-25 20:25:19,599][36780] Num frames 3600...
[2023-02-25 20:25:19,800][36780] Num frames 3700...
[2023-02-25 20:25:19,988][36780] Num frames 3800...
[2023-02-25 20:25:20,211][36780] Num frames 3900...
[2023-02-25 20:25:20,410][36780] Num frames 4000...
[2023-02-25 20:25:20,618][36780] Num frames 4100...
[2023-02-25 20:25:20,816][36780] Num frames 4200...
[2023-02-25 20:25:20,869][36780] Avg episode rewards: #0: 58.499, true rewards: #0: 21.000
[2023-02-25 20:25:20,870][36780] Avg episode reward: 58.499, avg true_objective: 21.000
[2023-02-25 20:25:21,027][36780] Num frames 4300...
[2023-02-25 20:25:21,197][36780] Num frames 4400...
[2023-02-25 20:25:21,351][36780] Num frames 4500...
[2023-02-25 20:25:21,507][36780] Num frames 4600...
[2023-02-25 20:25:21,647][36780] Num frames 4700...
[2023-02-25 20:25:21,777][36780] Num frames 4800...
[2023-02-25 20:25:21,933][36780] Avg episode rewards: #0: 44.286, true rewards: #0: 16.287
[2023-02-25 20:25:21,934][36780] Avg episode reward: 44.286, avg true_objective: 16.287
[2023-02-25 20:25:21,956][36780] Num frames 4900...
[2023-02-25 20:25:22,079][36780] Num frames 5000...
[2023-02-25 20:25:22,216][36780] Num frames 5100...
[2023-02-25 20:25:22,329][36780] Num frames 5200...
[2023-02-25 20:25:22,442][36780] Num frames 5300...
[2023-02-25 20:25:22,561][36780] Num frames 5400...
[2023-02-25 20:25:22,682][36780] Num frames 5500...
[2023-02-25 20:25:22,803][36780] Num frames 5600...
[2023-02-25 20:25:22,916][36780] Num frames 5700...
[2023-02-25 20:25:23,034][36780] Num frames 5800...
[2023-02-25 20:25:23,150][36780] Avg episode rewards: #0: 37.864, true rewards: #0: 14.615
[2023-02-25 20:25:23,152][36780] Avg episode reward: 37.864, avg true_objective: 14.615
[2023-02-25 20:25:23,223][36780] Num frames 5900...
[2023-02-25 20:25:23,348][36780] Num frames 6000...
[2023-02-25 20:25:23,479][36780] Num frames 6100...
[2023-02-25 20:25:23,616][36780] Num frames 6200...
[2023-02-25 20:25:23,732][36780] Num frames 6300...
[2023-02-25 20:25:23,850][36780] Num frames 6400...
[2023-02-25 20:25:23,964][36780] Num frames 6500...
[2023-02-25 20:25:24,078][36780] Num frames 6600...
[2023-02-25 20:25:24,198][36780] Num frames 6700...
[2023-02-25 20:25:24,312][36780] Num frames 6800...
[2023-02-25 20:25:24,429][36780] Num frames 6900...
[2023-02-25 20:25:24,552][36780] Num frames 7000...
[2023-02-25 20:25:24,667][36780] Num frames 7100...
[2023-02-25 20:25:24,790][36780] Avg episode rewards: #0: 36.112, true rewards: #0: 14.312
[2023-02-25 20:25:24,793][36780] Avg episode reward: 36.112, avg true_objective: 14.312
[2023-02-25 20:25:24,851][36780] Num frames 7200...
[2023-02-25 20:25:24,967][36780] Num frames 7300...
[2023-02-25 20:25:25,088][36780] Num frames 7400...
[2023-02-25 20:25:25,208][36780] Num frames 7500...
[2023-02-25 20:25:25,324][36780] Num frames 7600...
[2023-02-25 20:25:25,438][36780] Num frames 7700...
[2023-02-25 20:25:25,556][36780] Num frames 7800...
[2023-02-25 20:25:25,676][36780] Num frames 7900...
[2023-02-25 20:25:25,794][36780] Num frames 8000...
[2023-02-25 20:25:25,904][36780] Num frames 8100...
[2023-02-25 20:25:26,020][36780] Num frames 8200...
[2023-02-25 20:25:26,090][36780] Avg episode rewards: #0: 34.686, true rewards: #0: 13.687
[2023-02-25 20:25:26,092][36780] Avg episode reward: 34.686, avg true_objective: 13.687
[2023-02-25 20:25:26,202][36780] Num frames 8300...
[2023-02-25 20:25:26,321][36780] Num frames 8400...
[2023-02-25 20:25:26,443][36780] Num frames 8500...
[2023-02-25 20:25:26,565][36780] Num frames 8600...
[2023-02-25 20:25:26,677][36780] Num frames 8700...
[2023-02-25 20:25:26,795][36780] Num frames 8800...
[2023-02-25 20:25:26,910][36780] Num frames 8900...
[2023-02-25 20:25:27,019][36780] Avg episode rewards: #0: 31.640, true rewards: #0: 12.783
[2023-02-25 20:25:27,023][36780] Avg episode reward: 31.640, avg true_objective: 12.783
[2023-02-25 20:25:27,087][36780] Num frames 9000...
[2023-02-25 20:25:27,221][36780] Num frames 9100...
[2023-02-25 20:25:27,360][36780] Num frames 9200...
[2023-02-25 20:25:27,484][36780] Num frames 9300...
[2023-02-25 20:25:27,606][36780] Num frames 9400...
[2023-02-25 20:25:27,723][36780] Num frames 9500...
[2023-02-25 20:25:27,844][36780] Num frames 9600...
[2023-02-25 20:25:27,958][36780] Avg episode rewards: #0: 29.440, true rewards: #0: 12.065
[2023-02-25 20:25:27,960][36780] Avg episode reward: 29.440, avg true_objective: 12.065
[2023-02-25 20:25:28,025][36780] Num frames 9700...
[2023-02-25 20:25:28,152][36780] Num frames 9800...
[2023-02-25 20:25:28,281][36780] Num frames 9900...
[2023-02-25 20:25:28,398][36780] Num frames 10000...
[2023-02-25 20:25:28,517][36780] Num frames 10100...
[2023-02-25 20:25:28,632][36780] Avg episode rewards: #0: 27.047, true rewards: #0: 11.269
[2023-02-25 20:25:28,633][36780] Avg episode reward: 27.047, avg true_objective: 11.269
[2023-02-25 20:25:28,706][36780] Num frames 10200...
[2023-02-25 20:25:28,821][36780] Num frames 10300...
[2023-02-25 20:25:28,934][36780] Num frames 10400...
[2023-02-25 20:25:29,049][36780] Num frames 10500...
[2023-02-25 20:25:29,169][36780] Num frames 10600...
[2023-02-25 20:25:29,284][36780] Num frames 10700...
[2023-02-25 20:25:29,398][36780] Num frames 10800...
[2023-02-25 20:25:29,517][36780] Num frames 10900...
[2023-02-25 20:25:29,640][36780] Num frames 11000...
[2023-02-25 20:25:29,763][36780] Num frames 11100...
[2023-02-25 20:25:29,890][36780] Num frames 11200...
[2023-02-25 20:25:30,010][36780] Num frames 11300...
[2023-02-25 20:25:30,129][36780] Num frames 11400...
[2023-02-25 20:25:30,255][36780] Num frames 11500...
[2023-02-25 20:25:30,375][36780] Num frames 11600...
[2023-02-25 20:25:30,496][36780] Num frames 11700...
[2023-02-25 20:25:30,622][36780] Num frames 11800...
[2023-02-25 20:25:30,742][36780] Num frames 11900...
[2023-02-25 20:25:30,865][36780] Num frames 12000...
[2023-02-25 20:25:31,032][36780] Num frames 12100...
[2023-02-25 20:25:31,199][36780] Num frames 12200...
[2023-02-25 20:25:31,328][36780] Avg episode rewards: #0: 30.142, true rewards: #0: 12.242
[2023-02-25 20:25:31,331][36780] Avg episode reward: 30.142, avg true_objective: 12.242
[2023-02-25 20:26:53,706][36780] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
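
> Note: the final line confirms the replay was rendered before the upload; after the push, the repo `SergejSchweizer/rl_course_vizdoom_health_gathering_supreme` contains the config, checkpoints, and `replay.mp4`. To pull the experiment back down elsewhere, the plain `huggingface_hub` client is enough (Sample Factory also ships a `load_from_hub` helper for the same purpose):

```python
from huggingface_hub import snapshot_download

# Fetch the uploaded experiment (config.json, checkpoint_p0/*.pth, replay.mp4, ...)
local_path = snapshot_download(
    repo_id="SergejSchweizer/rl_course_vizdoom_health_gathering_supreme",
)
print(local_path)
```
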