[2023-02-25 19:18:17,413][07057] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-25 19:18:17,421][07057] Rollout worker 0 uses device cpu
[2023-02-25 19:18:17,422][07057] Rollout worker 1 uses device cpu
[2023-02-25 19:18:17,424][07057] Rollout worker 2 uses device cpu
[2023-02-25 19:18:17,429][07057] Rollout worker 3 uses device cpu
[2023-02-25 19:18:17,431][07057] Rollout worker 4 uses device cpu
[2023-02-25 19:18:17,433][07057] Rollout worker 5 uses device cpu
[2023-02-25 19:18:17,434][07057] Rollout worker 6 uses device cpu
[2023-02-25 19:18:17,436][07057] Rollout worker 7 uses device cpu
[2023-02-25 19:18:17,730][07057] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:17,732][07057] InferenceWorker_p0-w0: min num requests: 2
[2023-02-25 19:18:17,781][07057] Starting all processes...
[2023-02-25 19:18:17,785][07057] Starting process learner_proc0
[2023-02-25 19:18:17,862][07057] Starting all processes...
[2023-02-25 19:18:17,961][07057] Starting process inference_proc0-0
[2023-02-25 19:18:17,962][07057] Starting process rollout_proc0
[2023-02-25 19:18:17,963][07057] Starting process rollout_proc1
[2023-02-25 19:18:17,963][07057] Starting process rollout_proc2
[2023-02-25 19:18:17,963][07057] Starting process rollout_proc3
[2023-02-25 19:18:17,964][07057] Starting process rollout_proc4
[2023-02-25 19:18:17,964][07057] Starting process rollout_proc5
[2023-02-25 19:18:17,964][07057] Starting process rollout_proc6
[2023-02-25 19:18:17,965][07057] Starting process rollout_proc7
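The startup lines above show Sample Factory's asynchronous layout: one GPU learner process, one GPU inference worker, and eight CPU rollout workers. A minimal sketch of launching such a layout with Python's multiprocessing follows; the worker bodies are placeholders, not Sample Factory's actual entry points.

    import multiprocessing as mp

    def learner(rank):
        pass  # GPU training loop would live here

    def inference_worker(rank):
        pass  # batched GPU action inference would live here

    def rollout_worker(rank):
        pass  # CPU environment stepping would live here

    if __name__ == "__main__":
        procs = [mp.Process(target=learner, args=(0,), name="learner_proc0"),
                 mp.Process(target=inference_worker, args=(0,), name="inference_proc0-0")]
        procs += [mp.Process(target=rollout_worker, args=(i,), name=f"rollout_proc{i}")
                  for i in range(8)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()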
[2023-02-25 19:18:30,230][12632] Worker 3 uses CPU cores [1]
[2023-02-25 19:18:30,332][12613] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:30,334][12613] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-25 19:18:30,442][12631] Worker 4 uses CPU cores [0]
[2023-02-25 19:18:30,591][12634] Worker 6 uses CPU cores [0]
[2023-02-25 19:18:30,650][12635] Worker 7 uses CPU cores [1]
[2023-02-25 19:18:30,769][12627] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:30,769][12627] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-25 19:18:30,973][12628] Worker 0 uses CPU cores [0]
[2023-02-25 19:18:31,019][12630] Worker 2 uses CPU cores [0]
[2023-02-25 19:18:31,062][12629] Worker 1 uses CPU cores [1]
[2023-02-25 19:18:31,069][12633] Worker 5 uses CPU cores [1]
[2023-02-25 19:18:31,420][12613] Num visible devices: 1
[2023-02-25 19:18:31,421][12627] Num visible devices: 1
[2023-02-25 19:18:31,440][12613] Starting seed is not provided
[2023-02-25 19:18:31,440][12613] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:31,441][12613] Initializing actor-critic model on device cuda:0
[2023-02-25 19:18:31,442][12613] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:31,444][12613] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:31,465][12613] ConvEncoder: input_channels=3
[2023-02-25 19:18:31,849][12613] Conv encoder output size: 512
[2023-02-25 19:18:31,851][12613] Policy head output size: 512
[2023-02-25 19:18:31,937][12613] Created Actor Critic model with architecture:
[2023-02-25 19:18:31,938][12613] ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): VizdoomEncoder(
(basic_encoder): ConvEncoder(
(enc): RecursiveScriptModule(
original_name=ConvEncoderImpl
(conv_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ELU)
)
(mlp_layers): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(512, 512)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=512, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=512, out_features=5, bias=True)
)
)
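The printout above pins down the high-level structure (3x72x128 input, three Conv2d+ELU stages, a 512-d encoder output, a GRU(512, 512) core, a 1-d critic head, and 5 action logits) but omits the Conv2d hyperparameters. A PyTorch sketch that reproduces the printed shapes; the kernel sizes, strides, and channel counts are assumptions, and the RunningMeanStd observation/returns normalizers are omitted for brevity.

    import torch
    from torch import nn

    class ConvEncoder(nn.Module):
        # Conv hyperparameters below are assumptions; only the 512-d output
        # size and the (3, 72, 128) input shape appear in the log.
        def __init__(self, input_channels=3, obs_shape=(3, 72, 128), encoder_out=512):
            super().__init__()
            self.conv_head = nn.Sequential(
                nn.Conv2d(input_channels, 32, 8, stride=4), nn.ELU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ELU(),
                nn.Conv2d(64, 128, 3, stride=2), nn.ELU(),
            )
            with torch.no_grad():
                n = self.conv_head(torch.zeros(1, *obs_shape)).numel()
            self.mlp_layers = nn.Sequential(nn.Linear(n, encoder_out), nn.ELU())

        def forward(self, obs):
            return self.mlp_layers(self.conv_head(obs).flatten(1))

    class ActorCriticSharedWeights(nn.Module):
        def __init__(self, num_actions=5, core_size=512):
            super().__init__()
            self.encoder = ConvEncoder()
            self.core = nn.GRU(512, core_size)                    # ModelCoreRNN
            self.critic_linear = nn.Linear(core_size, 1)
            self.distribution_linear = nn.Linear(core_size, num_actions)

        def forward(self, obs, rnn_state):
            x = self.encoder(obs).unsqueeze(0)                    # (seq=1, batch, 512)
            core_out, new_state = self.core(x, rnn_state)
            core_out = core_out.squeeze(0)
            return (self.distribution_linear(core_out),
                    self.critic_linear(core_out), new_state)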
[2023-02-25 19:18:37,720][07057] Heartbeat connected on Batcher_0
[2023-02-25 19:18:37,731][07057] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-25 19:18:37,751][07057] Heartbeat connected on RolloutWorker_w0
[2023-02-25 19:18:37,756][07057] Heartbeat connected on RolloutWorker_w1
[2023-02-25 19:18:37,759][07057] Heartbeat connected on RolloutWorker_w2
[2023-02-25 19:18:37,761][07057] Heartbeat connected on RolloutWorker_w3
[2023-02-25 19:18:37,766][07057] Heartbeat connected on RolloutWorker_w4
[2023-02-25 19:18:37,773][07057] Heartbeat connected on RolloutWorker_w5
[2023-02-25 19:18:37,775][07057] Heartbeat connected on RolloutWorker_w6
[2023-02-25 19:18:37,781][07057] Heartbeat connected on RolloutWorker_w7
[2023-02-25 19:18:40,676][12613] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-02-25 19:18:40,677][12613] No checkpoints found
[2023-02-25 19:18:40,677][12613] Did not load from checkpoint, starting from scratch!
[2023-02-25 19:18:40,677][12613] Initialized policy 0 weights for model version 0
[2023-02-25 19:18:40,683][12613] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-25 19:18:40,690][12613] LearnerWorker_p0 finished initialization!
[2023-02-25 19:18:40,698][07057] Heartbeat connected on LearnerWorker_p0
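Initialization resumes from the newest checkpoint when one exists and otherwise starts from scratch, as logged above. A minimal sketch of that logic; the checkpoint dict keys are assumptions.

    import glob, os, torch

    def load_latest_checkpoint(model, optimizer, checkpoint_dir):
        # Zero-padded filenames make lexicographic sort chronological.
        paths = sorted(glob.glob(os.path.join(checkpoint_dir, "checkpoint_*.pth")))
        if not paths:
            print("No checkpoints found")
            print("Did not load from checkpoint, starting from scratch!")
            return 0
        state = torch.load(paths[-1], map_location="cpu")
        model.load_state_dict(state["model"])            # key names assumed
        optimizer.load_state_dict(state["optimizer"])
        return state.get("policy_version", 0)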
[2023-02-25 19:18:40,891][12627] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:18:40,893][12627] RunningMeanStd input shape: (1,)
[2023-02-25 19:18:40,904][12627] ConvEncoder: input_channels=3
[2023-02-25 19:18:41,003][12627] Conv encoder output size: 512
[2023-02-25 19:18:41,003][12627] Policy head output size: 512
[2023-02-25 19:18:42,888][07057] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
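The recurring "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" lines report throughput averaged over trailing 10/60/300-second windows, reading NaN (as above) until a window holds at least two samples. A small sketch of such a tracker:

    import time
    from collections import deque

    class FpsTracker:
        def __init__(self, windows=(10, 60, 300)):
            self.windows = windows
            self.samples = deque()  # (timestamp, total_frames) pairs

        def record(self, total_frames):
            now = time.time()
            self.samples.append((now, total_frames))
            while now - self.samples[0][0] > max(self.windows):
                self.samples.popleft()

        def fps(self, window):
            # Frames per second over the trailing `window` seconds.
            now = time.time()
            pts = [(t, f) for t, f in self.samples if now - t <= window]
            if len(pts) < 2 or pts[-1][0] == pts[0][0]:
                return float("nan")
            (t0, f0), (t1, f1) = pts[0], pts[-1]
            return (f1 - f0) / (t1 - t0)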
[2023-02-25 19:18:43,240][07057] Inference worker 0-0 is ready!
[2023-02-25 19:18:43,242][07057] All inference workers are ready! Signal rollout workers to start!
[2023-02-25 19:18:43,341][12632] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,348][12635] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,384][12633] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,389][12629] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,420][12631] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,422][12628] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,430][12630] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:18:43,442][12634] Doom resolution: 160x120, resize resolution: (128, 72)
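Each worker renders VizDoom at its native 160x120 and downscales observations to 128x72 to match the encoder's (3, 72, 128) input. A sketch of that preprocessing step, assuming an HWC RGB frame; the interpolation mode is an assumption.

    import cv2  # opencv-python
    import numpy as np

    def resize_doom_frame(frame: np.ndarray) -> np.ndarray:
        # frame: (120, 160, 3) uint8 -> (3, 72, 128) for the encoder.
        resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
        return resized.transpose(2, 0, 1)  # HWC -> CHW for PyTorch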
[2023-02-25 19:18:44,277][12634] Decorrelating experience for 0 frames...
[2023-02-25 19:18:44,279][12628] Decorrelating experience for 0 frames...
[2023-02-25 19:18:44,631][12628] Decorrelating experience for 32 frames...
[2023-02-25 19:18:44,809][12635] Decorrelating experience for 0 frames...
[2023-02-25 19:18:44,814][12629] Decorrelating experience for 0 frames...
[2023-02-25 19:18:44,817][12633] Decorrelating experience for 0 frames...
[2023-02-25 19:18:44,820][12632] Decorrelating experience for 0 frames...
[2023-02-25 19:18:45,315][12630] Decorrelating experience for 0 frames...
[2023-02-25 19:18:45,419][12628] Decorrelating experience for 64 frames...
[2023-02-25 19:18:45,850][12631] Decorrelating experience for 0 frames...
[2023-02-25 19:18:46,138][12629] Decorrelating experience for 32 frames...
[2023-02-25 19:18:46,140][12635] Decorrelating experience for 32 frames...
[2023-02-25 19:18:46,145][12633] Decorrelating experience for 32 frames...
[2023-02-25 19:18:46,150][12632] Decorrelating experience for 32 frames...
[2023-02-25 19:18:46,848][12631] Decorrelating experience for 32 frames...
[2023-02-25 19:18:46,988][12628] Decorrelating experience for 96 frames...
[2023-02-25 19:18:47,200][12632] Decorrelating experience for 64 frames...
[2023-02-25 19:18:47,202][12633] Decorrelating experience for 64 frames...
[2023-02-25 19:18:47,503][12634] Decorrelating experience for 32 frames...
[2023-02-25 19:18:47,529][12630] Decorrelating experience for 32 frames...
[2023-02-25 19:18:47,888][07057] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:48,196][12633] Decorrelating experience for 96 frames...
[2023-02-25 19:18:48,271][12631] Decorrelating experience for 64 frames...
[2023-02-25 19:18:48,407][12629] Decorrelating experience for 64 frames...
[2023-02-25 19:18:48,755][12632] Decorrelating experience for 96 frames...
[2023-02-25 19:18:49,072][12630] Decorrelating experience for 64 frames...
[2023-02-25 19:18:49,459][12631] Decorrelating experience for 96 frames...
[2023-02-25 19:18:49,461][12634] Decorrelating experience for 64 frames...
[2023-02-25 19:18:49,944][12629] Decorrelating experience for 96 frames...
[2023-02-25 19:18:50,149][12630] Decorrelating experience for 96 frames...
[2023-02-25 19:18:50,383][12635] Decorrelating experience for 64 frames...
[2023-02-25 19:18:50,548][12634] Decorrelating experience for 96 frames...
[2023-02-25 19:18:50,768][12635] Decorrelating experience for 96 frames...
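Before regular collection starts, each worker steps its environments for a staggered number of frames (0, 32, 64, 96 above) so that episodes across workers fall out of lockstep. A plausible reading of that warm-up, assuming a Gym-style env and four env splits per worker; the exact scheme is Sample Factory internals.

    def decorrelate_experience(env, num_splits=4, frames_per_split=32):
        # Split k skips k * frames_per_split frames with random actions
        # before rollouts begin, matching the 0/32/64/96 progression above.
        for split in range(num_splits):
            frames = split * frames_per_split
            print(f"Decorrelating experience for {frames} frames...")
            for _ in range(frames):
                _, _, done, *_ = env.step(env.action_space.sample())
                if done:
                    env.reset()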
[2023-02-25 19:18:52,888][07057] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 57.2. Samples: 572. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:52,895][07057] Avg episode reward: [(0, '0.897')]
[2023-02-25 19:18:55,713][12613] Signal inference workers to stop experience collection...
[2023-02-25 19:18:55,742][12627] InferenceWorker_p0-w0: stopping experience collection
[2023-02-25 19:18:57,888][07057] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 137.3. Samples: 2060. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-25 19:18:57,891][07057] Avg episode reward: [(0, '2.066')]
[2023-02-25 19:18:58,312][12613] Signal inference workers to resume experience collection...
[2023-02-25 19:18:58,315][12627] InferenceWorker_p0-w0: resuming experience collection
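The learner briefly pauses collection while the first batches are consumed, then resumes it. A sketch of such a pause/resume flag shared across processes; the Event-based flag is illustrative, not Sample Factory's actual signaling mechanism.

    import multiprocessing as mp

    collect = mp.Event()
    collect.set()  # experience collection enabled by default

    def signal_stop():
        # "Signal inference workers to stop experience collection..."
        collect.clear()

    def signal_resume():
        # "Signal inference workers to resume experience collection..."
        collect.set()

    def worker_step_loop(step_once):
        while True:
            collect.wait()  # blocks whenever collection is paused
            step_once()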
[2023-02-25 19:19:02,888][07057] Fps is (10 sec: 2457.7, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 24576. Throughput: 0: 240.0. Samples: 4800. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:02,890][07057] Avg episode reward: [(0, '3.727')]
[2023-02-25 19:19:06,229][12627] Updated weights for policy 0, policy_version 10 (0.0402)
[2023-02-25 19:19:07,888][07057] Fps is (10 sec: 4505.6, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 481.1. Samples: 12028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:19:07,893][07057] Avg episode reward: [(0, '4.390')]
[2023-02-25 19:19:12,892][07057] Fps is (10 sec: 4094.2, 60 sec: 2184.2, 300 sec: 2184.2). Total num frames: 65536. Throughput: 0: 497.7. Samples: 14932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:12,895][07057] Avg episode reward: [(0, '4.473')]
[2023-02-25 19:19:17,888][07057] Fps is (10 sec: 3276.8, 60 sec: 2223.6, 300 sec: 2223.6). Total num frames: 77824. Throughput: 0: 556.5. Samples: 19476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:17,896][07057] Avg episode reward: [(0, '4.448')]
[2023-02-25 19:19:17,998][12627] Updated weights for policy 0, policy_version 20 (0.0015)
[2023-02-25 19:19:22,889][07057] Fps is (10 sec: 3687.6, 60 sec: 2559.9, 300 sec: 2559.9). Total num frames: 102400. Throughput: 0: 648.9. Samples: 25956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:22,891][07057] Avg episode reward: [(0, '4.379')]
[2023-02-25 19:19:22,899][12613] Saving new best policy, reward=4.379!
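A new best policy is checkpointed whenever the average episode reward exceeds the best seen so far, as in the line above. A sketch of that bookkeeping; `save_fn` is a hypothetical callback.

    def maybe_save_best(avg_reward, best_reward, save_fn):
        # Returns the updated best reward, saving on improvement.
        if best_reward is None or avg_reward > best_reward:
            print(f"Saving new best policy, reward={avg_reward:.3f}!")
            save_fn()
            return avg_reward
        return best_reward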
[2023-02-25 19:19:26,692][12627] Updated weights for policy 0, policy_version 30 (0.0021)
[2023-02-25 19:19:27,888][07057] Fps is (10 sec: 4915.2, 60 sec: 2821.7, 300 sec: 2821.7). Total num frames: 126976. Throughput: 0: 655.4. Samples: 29492. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-25 19:19:27,891][07057] Avg episode reward: [(0, '4.443')]
[2023-02-25 19:19:27,895][12613] Saving new best policy, reward=4.443!
[2023-02-25 19:19:32,888][07057] Fps is (10 sec: 3686.8, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 139264. Throughput: 0: 779.7. Samples: 35088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:32,891][07057] Avg episode reward: [(0, '4.419')]
[2023-02-25 19:19:37,888][07057] Fps is (10 sec: 2867.2, 60 sec: 2830.0, 300 sec: 2830.0). Total num frames: 155648. Throughput: 0: 861.6. Samples: 39342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:19:37,891][07057] Avg episode reward: [(0, '4.421')]
[2023-02-25 19:19:39,169][12627] Updated weights for policy 0, policy_version 40 (0.0021)
[2023-02-25 19:19:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 180224. Throughput: 0: 903.9. Samples: 42736. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:19:42,896][07057] Avg episode reward: [(0, '4.380')]
[2023-02-25 19:19:47,707][12627] Updated weights for policy 0, policy_version 50 (0.0026)
[2023-02-25 19:19:47,888][07057] Fps is (10 sec: 4915.2, 60 sec: 3413.3, 300 sec: 3150.8). Total num frames: 204800. Throughput: 0: 1005.7. Samples: 50058. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:47,891][07057] Avg episode reward: [(0, '4.405')]
[2023-02-25 19:19:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3101.3). Total num frames: 217088. Throughput: 0: 959.1. Samples: 55186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:19:52,895][07057] Avg episode reward: [(0, '4.481')]
[2023-02-25 19:19:52,909][12613] Saving new best policy, reward=4.481!
[2023-02-25 19:19:57,891][07057] Fps is (10 sec: 3275.8, 60 sec: 3959.3, 300 sec: 3167.4). Total num frames: 237568. Throughput: 0: 944.1. Samples: 57414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:19:57,894][07057] Avg episode reward: [(0, '4.594')]
[2023-02-25 19:19:57,900][12613] Saving new best policy, reward=4.594!
[2023-02-25 19:19:59,686][12627] Updated weights for policy 0, policy_version 60 (0.0022)
[2023-02-25 19:20:02,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3225.6). Total num frames: 258048. Throughput: 0: 986.5. Samples: 63870. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:20:02,891][07057] Avg episode reward: [(0, '4.528')]
[2023-02-25 19:20:07,891][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.3, 300 sec: 3324.9). Total num frames: 282624. Throughput: 0: 1002.5. Samples: 71072. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:20:07,896][07057] Avg episode reward: [(0, '4.448')]
[2023-02-25 19:20:08,816][12627] Updated weights for policy 0, policy_version 70 (0.0013)
[2023-02-25 19:20:12,890][07057] Fps is (10 sec: 4095.1, 60 sec: 3891.3, 300 sec: 3322.2). Total num frames: 299008. Throughput: 0: 974.8. Samples: 73360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:12,895][07057] Avg episode reward: [(0, '4.532')]
[2023-02-25 19:20:12,907][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth...
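Periodic checkpoints are written under train_dir with names like checkpoint_000000073_299008.pth: a zero-padded policy version followed by the total environment frame count. A sketch of that naming scheme:

    def checkpoint_name(policy_version: int, env_frames: int) -> str:
        # e.g. checkpoint_name(73, 299008) -> "checkpoint_000000073_299008.pth"
        return f"checkpoint_{policy_version:09d}_{env_frames}.pth"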
[2023-02-25 19:20:17,888][07057] Fps is (10 sec: 3277.9, 60 sec: 3959.5, 300 sec: 3319.9). Total num frames: 315392. Throughput: 0: 955.6. Samples: 78088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:20:17,890][07057] Avg episode reward: [(0, '4.506')]
[2023-02-25 19:20:20,102][12627] Updated weights for policy 0, policy_version 80 (0.0013)
[2023-02-25 19:20:22,888][07057] Fps is (10 sec: 4096.8, 60 sec: 3959.5, 300 sec: 3399.7). Total num frames: 339968. Throughput: 0: 1014.7. Samples: 85006. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:20:22,891][07057] Avg episode reward: [(0, '4.520')]
[2023-02-25 19:20:27,890][07057] Fps is (10 sec: 4504.6, 60 sec: 3891.1, 300 sec: 3432.8). Total num frames: 360448. Throughput: 0: 1018.5. Samples: 88572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:20:27,898][07057] Avg episode reward: [(0, '4.523')]
[2023-02-25 19:20:29,611][12627] Updated weights for policy 0, policy_version 90 (0.0032)
[2023-02-25 19:20:32,889][07057] Fps is (10 sec: 3686.1, 60 sec: 3959.4, 300 sec: 3425.7). Total num frames: 376832. Throughput: 0: 977.4. Samples: 94044. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:20:32,894][07057] Avg episode reward: [(0, '4.448')]
[2023-02-25 19:20:37,888][07057] Fps is (10 sec: 3277.5, 60 sec: 3959.5, 300 sec: 3419.3). Total num frames: 393216. Throughput: 0: 972.0. Samples: 98926. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:20:37,890][07057] Avg episode reward: [(0, '4.567')]
[2023-02-25 19:20:40,548][12627] Updated weights for policy 0, policy_version 100 (0.0014)
[2023-02-25 19:20:42,888][07057] Fps is (10 sec: 4096.5, 60 sec: 3959.5, 300 sec: 3481.6). Total num frames: 417792. Throughput: 0: 1004.5. Samples: 102612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:20:42,893][07057] Avg episode reward: [(0, '4.681')]
[2023-02-25 19:20:42,902][12613] Saving new best policy, reward=4.681!
[2023-02-25 19:20:47,893][07057] Fps is (10 sec: 4912.6, 60 sec: 3959.1, 300 sec: 3538.8). Total num frames: 442368. Throughput: 0: 1020.3. Samples: 109790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:20:47,896][07057] Avg episode reward: [(0, '4.501')]
[2023-02-25 19:20:50,356][12627] Updated weights for policy 0, policy_version 110 (0.0016)
[2023-02-25 19:20:52,893][07057] Fps is (10 sec: 3684.6, 60 sec: 3959.1, 300 sec: 3497.2). Total num frames: 454656. Throughput: 0: 967.6. Samples: 114614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:20:52,895][07057] Avg episode reward: [(0, '4.589')]
[2023-02-25 19:20:57,888][07057] Fps is (10 sec: 2458.9, 60 sec: 3823.1, 300 sec: 3458.8). Total num frames: 466944. Throughput: 0: 956.9. Samples: 116416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:20:57,890][07057] Avg episode reward: [(0, '4.707')]
[2023-02-25 19:20:57,893][12613] Saving new best policy, reward=4.707!
[2023-02-25 19:21:02,888][07057] Fps is (10 sec: 2868.6, 60 sec: 3754.7, 300 sec: 3452.3). Total num frames: 483328. Throughput: 0: 939.2. Samples: 120350. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:02,897][07057] Avg episode reward: [(0, '4.612')]
[2023-02-25 19:21:04,735][12627] Updated weights for policy 0, policy_version 120 (0.0042)
[2023-02-25 19:21:07,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3474.5). Total num frames: 503808. Throughput: 0: 920.6. Samples: 126432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:07,895][07057] Avg episode reward: [(0, '4.440')]
[2023-02-25 19:21:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3467.9). Total num frames: 520192. Throughput: 0: 902.4. Samples: 129180. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:21:12,895][07057] Avg episode reward: [(0, '4.440')]
[2023-02-25 19:21:16,111][12627] Updated weights for policy 0, policy_version 130 (0.0016)
[2023-02-25 19:21:17,892][07057] Fps is (10 sec: 3275.5, 60 sec: 3686.2, 300 sec: 3461.7). Total num frames: 536576. Throughput: 0: 882.8. Samples: 133772. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:21:17,894][07057] Avg episode reward: [(0, '4.393')]
[2023-02-25 19:21:22,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3507.2). Total num frames: 561152. Throughput: 0: 922.4. Samples: 140434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:21:22,894][07057] Avg episode reward: [(0, '4.500')]
[2023-02-25 19:21:25,162][12627] Updated weights for policy 0, policy_version 140 (0.0013)
[2023-02-25 19:21:27,888][07057] Fps is (10 sec: 4917.1, 60 sec: 3754.8, 300 sec: 3549.9). Total num frames: 585728. Throughput: 0: 918.5. Samples: 143946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:21:27,897][07057] Avg episode reward: [(0, '4.407')]
[2023-02-25 19:21:32,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3517.7). Total num frames: 598016. Throughput: 0: 887.5. Samples: 149724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:21:32,890][07057] Avg episode reward: [(0, '4.326')]
[2023-02-25 19:21:36,888][12627] Updated weights for policy 0, policy_version 150 (0.0033)
[2023-02-25 19:21:37,888][07057] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3510.9). Total num frames: 614400. Throughput: 0: 885.8. Samples: 154470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:21:37,890][07057] Avg episode reward: [(0, '4.328')]
[2023-02-25 19:21:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3549.9). Total num frames: 638976. Throughput: 0: 925.3. Samples: 158056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:21:42,889][07057] Avg episode reward: [(0, '4.579')]
[2023-02-25 19:21:45,480][12627] Updated weights for policy 0, policy_version 160 (0.0012)
[2023-02-25 19:21:47,888][07057] Fps is (10 sec: 4915.2, 60 sec: 3686.7, 300 sec: 3586.8). Total num frames: 663552. Throughput: 0: 1001.6. Samples: 165420. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:21:47,891][07057] Avg episode reward: [(0, '4.483')]
[2023-02-25 19:21:52,889][07057] Fps is (10 sec: 4095.5, 60 sec: 3754.9, 300 sec: 3578.6). Total num frames: 679936. Throughput: 0: 982.8. Samples: 170658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:21:52,894][07057] Avg episode reward: [(0, '4.514')]
[2023-02-25 19:21:57,382][12627] Updated weights for policy 0, policy_version 170 (0.0028)
[2023-02-25 19:21:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3570.9). Total num frames: 696320. Throughput: 0: 972.0. Samples: 172920. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:21:57,891][07057] Avg episode reward: [(0, '4.609')]
[2023-02-25 19:22:02,888][07057] Fps is (10 sec: 4096.5, 60 sec: 3959.5, 300 sec: 3604.5). Total num frames: 720896. Throughput: 0: 1014.6. Samples: 179426. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:02,891][07057] Avg episode reward: [(0, '4.603')]
[2023-02-25 19:22:05,801][12627] Updated weights for policy 0, policy_version 180 (0.0015)
[2023-02-25 19:22:07,888][07057] Fps is (10 sec: 4915.2, 60 sec: 4027.7, 300 sec: 3636.5). Total num frames: 745472. Throughput: 0: 1028.6. Samples: 186720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:22:07,894][07057] Avg episode reward: [(0, '4.690')]
[2023-02-25 19:22:12,889][07057] Fps is (10 sec: 4095.5, 60 sec: 4027.6, 300 sec: 3627.9). Total num frames: 761856. Throughput: 0: 1003.3. Samples: 189094. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:12,893][07057] Avg episode reward: [(0, '4.658')]
[2023-02-25 19:22:12,904][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000186_761856.pth...
[2023-02-25 19:22:17,855][12627] Updated weights for policy 0, policy_version 190 (0.0020)
[2023-02-25 19:22:17,888][07057] Fps is (10 sec: 3276.8, 60 sec: 4028.0, 300 sec: 3619.7). Total num frames: 778240. Throughput: 0: 977.2. Samples: 193700. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:22:17,892][07057] Avg episode reward: [(0, '4.693')]
[2023-02-25 19:22:22,888][07057] Fps is (10 sec: 3686.8, 60 sec: 3959.5, 300 sec: 3630.5). Total num frames: 798720. Throughput: 0: 1024.1. Samples: 200554. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:22:22,895][07057] Avg episode reward: [(0, '4.746')]
[2023-02-25 19:22:22,965][12613] Saving new best policy, reward=4.746!
[2023-02-25 19:22:26,354][12627] Updated weights for policy 0, policy_version 200 (0.0015)
[2023-02-25 19:22:27,888][07057] Fps is (10 sec: 4505.3, 60 sec: 3959.4, 300 sec: 3659.1). Total num frames: 823296. Throughput: 0: 1023.6. Samples: 204118. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:22:27,895][07057] Avg episode reward: [(0, '4.731')]
[2023-02-25 19:22:32,889][07057] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3650.8). Total num frames: 839680. Throughput: 0: 976.8. Samples: 209376. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:22:32,895][07057] Avg episode reward: [(0, '4.670')]
[2023-02-25 19:22:37,888][07057] Fps is (10 sec: 3277.0, 60 sec: 4027.7, 300 sec: 3642.8). Total num frames: 856064. Throughput: 0: 969.1. Samples: 214266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:37,891][07057] Avg episode reward: [(0, '4.725')]
[2023-02-25 19:22:38,387][12627] Updated weights for policy 0, policy_version 210 (0.0012)
[2023-02-25 19:22:42,888][07057] Fps is (10 sec: 4096.3, 60 sec: 4027.7, 300 sec: 3669.3). Total num frames: 880640. Throughput: 0: 998.6. Samples: 217856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:22:42,891][07057] Avg episode reward: [(0, '4.599')]
[2023-02-25 19:22:46,704][12627] Updated weights for policy 0, policy_version 220 (0.0011)
[2023-02-25 19:22:47,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3678.0). Total num frames: 901120. Throughput: 0: 1017.4. Samples: 225208. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:22:47,899][07057] Avg episode reward: [(0, '4.852')]
[2023-02-25 19:22:47,900][12613] Saving new best policy, reward=4.852!
[2023-02-25 19:22:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3670.0). Total num frames: 917504. Throughput: 0: 959.7. Samples: 229906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:22:52,894][07057] Avg episode reward: [(0, '4.630')]
[2023-02-25 19:22:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3662.3). Total num frames: 933888. Throughput: 0: 958.4. Samples: 232220. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:22:57,890][07057] Avg episode reward: [(0, '4.533')]
[2023-02-25 19:22:58,898][12627] Updated weights for policy 0, policy_version 230 (0.0028)
[2023-02-25 19:23:02,888][07057] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3686.4). Total num frames: 958464. Throughput: 0: 1006.9. Samples: 239012. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:02,896][07057] Avg episode reward: [(0, '4.535')]
[2023-02-25 19:23:07,822][12627] Updated weights for policy 0, policy_version 240 (0.0013)
[2023-02-25 19:23:07,888][07057] Fps is (10 sec: 4915.1, 60 sec: 3959.5, 300 sec: 3709.6). Total num frames: 983040. Throughput: 0: 1008.8. Samples: 245950. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:07,895][07057] Avg episode reward: [(0, '4.608')]
[2023-02-25 19:23:12,888][07057] Fps is (10 sec: 3686.5, 60 sec: 3891.3, 300 sec: 3686.4). Total num frames: 995328. Throughput: 0: 979.9. Samples: 248214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:12,891][07057] Avg episode reward: [(0, '4.536')]
[2023-02-25 19:23:17,888][07057] Fps is (10 sec: 3276.9, 60 sec: 3959.5, 300 sec: 3693.8). Total num frames: 1015808. Throughput: 0: 967.4. Samples: 252906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:23:17,894][07057] Avg episode reward: [(0, '4.510')]
[2023-02-25 19:23:19,172][12627] Updated weights for policy 0, policy_version 250 (0.0027)
[2023-02-25 19:23:22,888][07057] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3715.7). Total num frames: 1040384. Throughput: 0: 1021.3. Samples: 260226. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:23:22,894][07057] Avg episode reward: [(0, '4.667')]
[2023-02-25 19:23:27,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3722.3). Total num frames: 1060864. Throughput: 0: 1021.3. Samples: 263814. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:23:27,891][07057] Avg episode reward: [(0, '4.533')]
[2023-02-25 19:23:28,454][12627] Updated weights for policy 0, policy_version 260 (0.0011)
[2023-02-25 19:23:32,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3714.6). Total num frames: 1077248. Throughput: 0: 968.8. Samples: 268802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:23:32,890][07057] Avg episode reward: [(0, '4.608')]
[2023-02-25 19:23:37,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3707.2). Total num frames: 1093632. Throughput: 0: 979.7. Samples: 273994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:23:37,890][07057] Avg episode reward: [(0, '4.808')]
[2023-02-25 19:23:39,705][12627] Updated weights for policy 0, policy_version 270 (0.0012)
[2023-02-25 19:23:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 1118208. Throughput: 0: 1009.5. Samples: 277646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:23:42,895][07057] Avg episode reward: [(0, '4.798')]
[2023-02-25 19:23:47,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 1138688. Throughput: 0: 1015.7. Samples: 284718. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:23:47,893][07057] Avg episode reward: [(0, '4.750')]
[2023-02-25 19:23:49,502][12627] Updated weights for policy 0, policy_version 280 (0.0018)
[2023-02-25 19:23:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1155072. Throughput: 0: 962.8. Samples: 289274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:23:52,903][07057] Avg episode reward: [(0, '4.641')]
[2023-02-25 19:23:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1171456. Throughput: 0: 963.2. Samples: 291558. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:23:57,894][07057] Avg episode reward: [(0, '4.627')]
[2023-02-25 19:24:00,296][12627] Updated weights for policy 0, policy_version 290 (0.0011)
[2023-02-25 19:24:02,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1196032. Throughput: 0: 1016.8. Samples: 298660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:02,894][07057] Avg episode reward: [(0, '4.722')]
[2023-02-25 19:24:07,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3901.7). Total num frames: 1216512. Throughput: 0: 999.4. Samples: 305200. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:24:07,893][07057] Avg episode reward: [(0, '4.913')]
[2023-02-25 19:24:07,971][12613] Saving new best policy, reward=4.913!
[2023-02-25 19:24:10,637][12627] Updated weights for policy 0, policy_version 300 (0.0011)
[2023-02-25 19:24:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1232896. Throughput: 0: 970.1. Samples: 307470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:24:12,891][07057] Avg episode reward: [(0, '4.844')]
[2023-02-25 19:24:12,908][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000301_1232896.pth...
[2023-02-25 19:24:13,059][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000073_299008.pth
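Periodic checkpoints are rotated: after each save the oldest one is removed, so only the most recent few stay on disk (about two here, plus the separate best-policy file). A sketch of that rotation:

    import os

    def rotate_checkpoints(checkpoint_paths, keep=2):
        # Zero-padded names make lexicographic order chronological, so the
        # front of the sorted list is always the oldest checkpoint.
        checkpoint_paths = sorted(checkpoint_paths)
        while len(checkpoint_paths) > keep:
            oldest = checkpoint_paths.pop(0)
            print(f"Removing {oldest}")
            os.remove(oldest)
        return checkpoint_paths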
[2023-02-25 19:24:17,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1253376. Throughput: 0: 970.4. Samples: 312468. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:17,894][07057] Avg episode reward: [(0, '4.839')]
[2023-02-25 19:24:20,839][12627] Updated weights for policy 0, policy_version 310 (0.0019)
[2023-02-25 19:24:22,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1277952. Throughput: 0: 1015.6. Samples: 319696. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:24:22,897][07057] Avg episode reward: [(0, '5.089')]
[2023-02-25 19:24:22,907][12613] Saving new best policy, reward=5.089!
[2023-02-25 19:24:27,895][07057] Fps is (10 sec: 4502.2, 60 sec: 3959.0, 300 sec: 3929.3). Total num frames: 1298432. Throughput: 0: 1012.6. Samples: 323222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:24:27,903][07057] Avg episode reward: [(0, '4.929')]
[2023-02-25 19:24:31,522][12627] Updated weights for policy 0, policy_version 320 (0.0020)
[2023-02-25 19:24:32,888][07057] Fps is (10 sec: 3276.9, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 1310720. Throughput: 0: 963.6. Samples: 328082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:32,890][07057] Avg episode reward: [(0, '5.091')]
[2023-02-25 19:24:32,958][12613] Saving new best policy, reward=5.091!
[2023-02-25 19:24:37,888][07057] Fps is (10 sec: 3279.2, 60 sec: 3959.4, 300 sec: 3901.6). Total num frames: 1331200. Throughput: 0: 980.8. Samples: 333412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:37,891][07057] Avg episode reward: [(0, '5.269')]
[2023-02-25 19:24:37,893][12613] Saving new best policy, reward=5.269!
[2023-02-25 19:24:41,495][12627] Updated weights for policy 0, policy_version 330 (0.0012)
[2023-02-25 19:24:42,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1355776. Throughput: 0: 1008.8. Samples: 336952. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:42,890][07057] Avg episode reward: [(0, '5.418')]
[2023-02-25 19:24:42,902][12613] Saving new best policy, reward=5.418!
[2023-02-25 19:24:47,888][07057] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 1376256. Throughput: 0: 1003.9. Samples: 343834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:47,897][07057] Avg episode reward: [(0, '5.398')]
[2023-02-25 19:24:52,500][12627] Updated weights for policy 0, policy_version 340 (0.0012)
[2023-02-25 19:24:52,888][07057] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1392640. Throughput: 0: 963.3. Samples: 348548. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:24:52,894][07057] Avg episode reward: [(0, '5.478')]
[2023-02-25 19:24:52,906][12613] Saving new best policy, reward=5.478!
[2023-02-25 19:24:57,888][07057] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 1413120. Throughput: 0: 961.9. Samples: 350754. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:24:57,892][07057] Avg episode reward: [(0, '5.494')]
[2023-02-25 19:24:57,896][12613] Saving new best policy, reward=5.494!
[2023-02-25 19:25:02,096][12627] Updated weights for policy 0, policy_version 350 (0.0021)
[2023-02-25 19:25:02,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.7). Total num frames: 1433600. Throughput: 0: 1011.2. Samples: 357972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:02,892][07057] Avg episode reward: [(0, '5.509')]
[2023-02-25 19:25:02,963][12613] Saving new best policy, reward=5.509!
[2023-02-25 19:25:07,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1454080. Throughput: 0: 989.8. Samples: 364238. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:07,893][07057] Avg episode reward: [(0, '5.381')]
[2023-02-25 19:25:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1470464. Throughput: 0: 962.5. Samples: 366526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:12,894][07057] Avg episode reward: [(0, '5.267')]
[2023-02-25 19:25:13,749][12627] Updated weights for policy 0, policy_version 360 (0.0013)
[2023-02-25 19:25:17,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1490944. Throughput: 0: 970.2. Samples: 371742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:17,895][07057] Avg episode reward: [(0, '5.608')]
[2023-02-25 19:25:17,900][12613] Saving new best policy, reward=5.608!
[2023-02-25 19:25:22,688][12627] Updated weights for policy 0, policy_version 370 (0.0013)
[2023-02-25 19:25:22,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1515520. Throughput: 0: 1011.1. Samples: 378910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:22,895][07057] Avg episode reward: [(0, '5.915')]
[2023-02-25 19:25:22,906][12613] Saving new best policy, reward=5.915!
[2023-02-25 19:25:27,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3891.7, 300 sec: 3915.5). Total num frames: 1531904. Throughput: 0: 1007.2. Samples: 382274. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:25:27,894][07057] Avg episode reward: [(0, '5.781')]
[2023-02-25 19:25:32,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1548288. Throughput: 0: 956.6. Samples: 386880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:25:32,891][07057] Avg episode reward: [(0, '5.848')]
[2023-02-25 19:25:34,849][12627] Updated weights for policy 0, policy_version 380 (0.0023)
[2023-02-25 19:25:37,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1568768. Throughput: 0: 978.8. Samples: 392594. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:25:37,895][07057] Avg episode reward: [(0, '5.545')]
[2023-02-25 19:25:42,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3901.7). Total num frames: 1593344. Throughput: 0: 1011.2. Samples: 396256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:42,895][07057] Avg episode reward: [(0, '5.668')]
[2023-02-25 19:25:43,354][12627] Updated weights for policy 0, policy_version 390 (0.0015)
[2023-02-25 19:25:47,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 1613824. Throughput: 0: 999.4. Samples: 402946. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:47,894][07057] Avg episode reward: [(0, '5.508')]
[2023-02-25 19:25:52,890][07057] Fps is (10 sec: 3276.2, 60 sec: 3891.1, 300 sec: 3929.4). Total num frames: 1626112. Throughput: 0: 950.2. Samples: 407000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:25:52,892][07057] Avg episode reward: [(0, '5.861')]
[2023-02-25 19:25:57,460][12627] Updated weights for policy 0, policy_version 400 (0.0024)
[2023-02-25 19:25:57,888][07057] Fps is (10 sec: 2457.6, 60 sec: 3754.7, 300 sec: 3915.5). Total num frames: 1638400. Throughput: 0: 940.3. Samples: 408838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:25:57,893][07057] Avg episode reward: [(0, '5.597')]
[2023-02-25 19:26:02,888][07057] Fps is (10 sec: 2867.7, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1654784. Throughput: 0: 925.0. Samples: 413366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:02,893][07057] Avg episode reward: [(0, '5.948')]
[2023-02-25 19:26:02,914][12613] Saving new best policy, reward=5.948!
[2023-02-25 19:26:07,075][12627] Updated weights for policy 0, policy_version 410 (0.0031)
[2023-02-25 19:26:07,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 1679360. Throughput: 0: 924.8. Samples: 420524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:07,890][07057] Avg episode reward: [(0, '5.782')]
[2023-02-25 19:26:12,891][07057] Fps is (10 sec: 4094.8, 60 sec: 3754.5, 300 sec: 3929.4). Total num frames: 1695744. Throughput: 0: 903.0. Samples: 422910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:12,899][07057] Avg episode reward: [(0, '5.705')]
[2023-02-25 19:26:12,912][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000414_1695744.pth...
[2023-02-25 19:26:13,050][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000186_761856.pth
[2023-02-25 19:26:17,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1712128. Throughput: 0: 902.4. Samples: 427488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:17,890][07057] Avg episode reward: [(0, '6.101')]
[2023-02-25 19:26:17,896][12613] Saving new best policy, reward=6.101!
[2023-02-25 19:26:19,228][12627] Updated weights for policy 0, policy_version 420 (0.0025)
[2023-02-25 19:26:22,888][07057] Fps is (10 sec: 4097.1, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1736704. Throughput: 0: 930.5. Samples: 434466. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:26:22,890][07057] Avg episode reward: [(0, '6.630')]
[2023-02-25 19:26:22,906][12613] Saving new best policy, reward=6.630!
[2023-02-25 19:26:27,891][07057] Fps is (10 sec: 4504.1, 60 sec: 3754.5, 300 sec: 3929.3). Total num frames: 1757184. Throughput: 0: 924.9. Samples: 437878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:27,898][07057] Avg episode reward: [(0, '6.758')]
[2023-02-25 19:26:27,901][12613] Saving new best policy, reward=6.758!
[2023-02-25 19:26:28,341][12627] Updated weights for policy 0, policy_version 430 (0.0016)
[2023-02-25 19:26:32,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 1773568. Throughput: 0: 893.8. Samples: 443166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:32,890][07057] Avg episode reward: [(0, '7.116')]
[2023-02-25 19:26:32,898][12613] Saving new best policy, reward=7.116!
[2023-02-25 19:26:37,888][07057] Fps is (10 sec: 3277.9, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1789952. Throughput: 0: 910.3. Samples: 447962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:26:37,891][07057] Avg episode reward: [(0, '7.108')]
[2023-02-25 19:26:39,857][12627] Updated weights for policy 0, policy_version 440 (0.0025)
[2023-02-25 19:26:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1814528. Throughput: 0: 949.4. Samples: 451560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:26:42,891][07057] Avg episode reward: [(0, '7.620')]
[2023-02-25 19:26:42,903][12613] Saving new best policy, reward=7.620!
[2023-02-25 19:26:47,888][07057] Fps is (10 sec: 4915.2, 60 sec: 3754.7, 300 sec: 3929.4). Total num frames: 1839104. Throughput: 0: 1010.8. Samples: 458850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:26:47,894][07057] Avg episode reward: [(0, '7.795')]
[2023-02-25 19:26:47,905][12613] Saving new best policy, reward=7.795!
[2023-02-25 19:26:49,230][12627] Updated weights for policy 0, policy_version 450 (0.0021)
[2023-02-25 19:26:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3915.5). Total num frames: 1851392. Throughput: 0: 956.4. Samples: 463564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:26:52,890][07057] Avg episode reward: [(0, '7.654')]
[2023-02-25 19:26:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1871872. Throughput: 0: 954.5. Samples: 465858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:26:57,891][07057] Avg episode reward: [(0, '7.258')]
[2023-02-25 19:27:00,412][12627] Updated weights for policy 0, policy_version 460 (0.0020)
[2023-02-25 19:27:02,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1892352. Throughput: 0: 1004.7. Samples: 472698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:02,891][07057] Avg episode reward: [(0, '6.879')]
[2023-02-25 19:27:07,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 1916928. Throughput: 0: 1002.5. Samples: 479578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:07,893][07057] Avg episode reward: [(0, '6.727')]
[2023-02-25 19:27:10,261][12627] Updated weights for policy 0, policy_version 470 (0.0014)
[2023-02-25 19:27:12,915][07057] Fps is (10 sec: 3676.3, 60 sec: 3889.6, 300 sec: 3901.3). Total num frames: 1929216. Throughput: 0: 977.5. Samples: 481890. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:12,922][07057] Avg episode reward: [(0, '6.646')]
[2023-02-25 19:27:17,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1949696. Throughput: 0: 964.4. Samples: 486564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:17,895][07057] Avg episode reward: [(0, '6.860')]
[2023-02-25 19:27:20,805][12627] Updated weights for policy 0, policy_version 480 (0.0022)
[2023-02-25 19:27:22,888][07057] Fps is (10 sec: 4518.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1974272. Throughput: 0: 1020.6. Samples: 493888. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:22,893][07057] Avg episode reward: [(0, '6.405')]
[2023-02-25 19:27:27,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.7, 300 sec: 3915.5). Total num frames: 1994752. Throughput: 0: 1020.2. Samples: 497470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:27:27,894][07057] Avg episode reward: [(0, '6.615')]
[2023-02-25 19:27:30,857][12627] Updated weights for policy 0, policy_version 490 (0.0017)
[2023-02-25 19:27:32,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2011136. Throughput: 0: 971.4. Samples: 502564. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:32,895][07057] Avg episode reward: [(0, '7.081')]
[2023-02-25 19:27:37,888][07057] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 2031616. Throughput: 0: 982.1. Samples: 507760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:37,892][07057] Avg episode reward: [(0, '6.868')]
[2023-02-25 19:27:41,230][12627] Updated weights for policy 0, policy_version 500 (0.0021)
[2023-02-25 19:27:42,888][07057] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2056192. Throughput: 0: 1013.0. Samples: 511444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:42,896][07057] Avg episode reward: [(0, '6.693')]
[2023-02-25 19:27:47,890][07057] Fps is (10 sec: 4504.4, 60 sec: 3959.3, 300 sec: 3929.3). Total num frames: 2076672. Throughput: 0: 1019.6. Samples: 518584. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:27:47,898][07057] Avg episode reward: [(0, '6.774')]
[2023-02-25 19:27:51,815][12627] Updated weights for policy 0, policy_version 510 (0.0012)
[2023-02-25 19:27:52,888][07057] Fps is (10 sec: 3276.6, 60 sec: 3959.4, 300 sec: 3915.5). Total num frames: 2088960. Throughput: 0: 970.2. Samples: 523236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:27:52,891][07057] Avg episode reward: [(0, '7.206')]
[2023-02-25 19:27:57,888][07057] Fps is (10 sec: 3277.7, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2109440. Throughput: 0: 972.7. Samples: 525636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:27:57,891][07057] Avg episode reward: [(0, '6.683')]
[2023-02-25 19:28:01,399][12627] Updated weights for policy 0, policy_version 520 (0.0023)
[2023-02-25 19:28:02,888][07057] Fps is (10 sec: 4505.9, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 2134016. Throughput: 0: 1026.3. Samples: 532746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:02,891][07057] Avg episode reward: [(0, '6.500')]
[2023-02-25 19:28:07,893][07057] Fps is (10 sec: 4503.2, 60 sec: 3959.1, 300 sec: 3929.3). Total num frames: 2154496. Throughput: 0: 1009.4. Samples: 539316. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:28:07,901][07057] Avg episode reward: [(0, '6.786')]
[2023-02-25 19:28:12,429][12627] Updated weights for policy 0, policy_version 530 (0.0022)
[2023-02-25 19:28:12,892][07057] Fps is (10 sec: 3684.8, 60 sec: 4029.3, 300 sec: 3915.4). Total num frames: 2170880. Throughput: 0: 981.9. Samples: 541660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:12,896][07057] Avg episode reward: [(0, '7.245')]
[2023-02-25 19:28:12,913][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000530_2170880.pth...
[2023-02-25 19:28:13,042][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000301_1232896.pth
[2023-02-25 19:28:17,888][07057] Fps is (10 sec: 3688.4, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 2191360. Throughput: 0: 978.3. Samples: 546586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:28:17,891][07057] Avg episode reward: [(0, '7.825')]
[2023-02-25 19:28:17,893][12613] Saving new best policy, reward=7.825!
[2023-02-25 19:28:21,986][12627] Updated weights for policy 0, policy_version 540 (0.0020)
[2023-02-25 19:28:22,888][07057] Fps is (10 sec: 4507.5, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2215936. Throughput: 0: 1023.3. Samples: 553810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:22,891][07057] Avg episode reward: [(0, '8.202')]
[2023-02-25 19:28:22,899][12613] Saving new best policy, reward=8.202!
[2023-02-25 19:28:27,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2232320. Throughput: 0: 1021.3. Samples: 557402. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-25 19:28:27,898][07057] Avg episode reward: [(0, '8.708')]
[2023-02-25 19:28:27,900][12613] Saving new best policy, reward=8.708!
[2023-02-25 19:28:32,888][07057] Fps is (10 sec: 3276.9, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2248704. Throughput: 0: 968.9. Samples: 562182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:32,890][07057] Avg episode reward: [(0, '8.717')]
[2023-02-25 19:28:32,905][12613] Saving new best policy, reward=8.717!
[2023-02-25 19:28:33,570][12627] Updated weights for policy 0, policy_version 550 (0.0013)
[2023-02-25 19:28:37,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2269184. Throughput: 0: 983.1. Samples: 567476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:28:37,893][07057] Avg episode reward: [(0, '9.851')]
[2023-02-25 19:28:37,897][12613] Saving new best policy, reward=9.851!
[2023-02-25 19:28:42,636][12627] Updated weights for policy 0, policy_version 560 (0.0028)
[2023-02-25 19:28:42,890][07057] Fps is (10 sec: 4504.4, 60 sec: 3959.3, 300 sec: 3915.5). Total num frames: 2293760. Throughput: 0: 1009.1. Samples: 571048. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:28:42,895][07057] Avg episode reward: [(0, '9.684')]
[2023-02-25 19:28:47,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3959.6, 300 sec: 3929.4). Total num frames: 2314240. Throughput: 0: 1005.8. Samples: 578008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:28:47,891][07057] Avg episode reward: [(0, '10.298')]
[2023-02-25 19:28:47,900][12613] Saving new best policy, reward=10.298!
[2023-02-25 19:28:52,888][07057] Fps is (10 sec: 3277.6, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2326528. Throughput: 0: 958.2. Samples: 582428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:28:52,895][07057] Avg episode reward: [(0, '9.841')]
[2023-02-25 19:28:54,348][12627] Updated weights for policy 0, policy_version 570 (0.0024)
[2023-02-25 19:28:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2347008. Throughput: 0: 960.9. Samples: 584896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:28:57,890][07057] Avg episode reward: [(0, '10.332')]
[2023-02-25 19:28:57,893][12613] Saving new best policy, reward=10.332!
[2023-02-25 19:29:02,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2371584. Throughput: 0: 1013.6. Samples: 592196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:29:02,894][07057] Avg episode reward: [(0, '10.398')]
[2023-02-25 19:29:02,902][12613] Saving new best policy, reward=10.398!
[2023-02-25 19:29:03,240][12627] Updated weights for policy 0, policy_version 580 (0.0016)
[2023-02-25 19:29:07,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3959.8, 300 sec: 3929.4). Total num frames: 2392064. Throughput: 0: 987.6. Samples: 598252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:07,893][07057] Avg episode reward: [(0, '10.794')]
[2023-02-25 19:29:07,901][12613] Saving new best policy, reward=10.794!
[2023-02-25 19:29:12,889][07057] Fps is (10 sec: 3276.3, 60 sec: 3891.4, 300 sec: 3901.6). Total num frames: 2404352. Throughput: 0: 956.7. Samples: 600456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:29:12,897][07057] Avg episode reward: [(0, '11.683')]
[2023-02-25 19:29:12,983][12613] Saving new best policy, reward=11.683!
[2023-02-25 19:29:15,402][12627] Updated weights for policy 0, policy_version 590 (0.0028)
[2023-02-25 19:29:17,888][07057] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2428928. Throughput: 0: 972.0. Samples: 605920. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:29:17,890][07057] Avg episode reward: [(0, '12.155')]
[2023-02-25 19:29:17,898][12613] Saving new best policy, reward=12.155!
[2023-02-25 19:29:22,888][07057] Fps is (10 sec: 4506.3, 60 sec: 3891.2, 300 sec: 3901.7). Total num frames: 2449408. Throughput: 0: 1017.3. Samples: 613254. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:22,894][07057] Avg episode reward: [(0, '13.520')]
[2023-02-25 19:29:22,903][12613] Saving new best policy, reward=13.520!
[2023-02-25 19:29:23,801][12627] Updated weights for policy 0, policy_version 600 (0.0022)
[2023-02-25 19:29:27,889][07057] Fps is (10 sec: 4095.7, 60 sec: 3959.4, 300 sec: 3929.4). Total num frames: 2469888. Throughput: 0: 1011.8. Samples: 616578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:29:27,897][07057] Avg episode reward: [(0, '15.127')]
[2023-02-25 19:29:27,899][12613] Saving new best policy, reward=15.127!
[2023-02-25 19:29:32,889][07057] Fps is (10 sec: 3686.0, 60 sec: 3959.4, 300 sec: 3915.5). Total num frames: 2486272. Throughput: 0: 959.8. Samples: 621202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:29:32,892][07057] Avg episode reward: [(0, '14.893')]
[2023-02-25 19:29:35,557][12627] Updated weights for policy 0, policy_version 610 (0.0013)
[2023-02-25 19:29:37,888][07057] Fps is (10 sec: 3686.7, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2506752. Throughput: 0: 994.7. Samples: 627190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:37,890][07057] Avg episode reward: [(0, '14.933')]
[2023-02-25 19:29:42,888][07057] Fps is (10 sec: 4506.1, 60 sec: 3959.6, 300 sec: 3915.5). Total num frames: 2531328. Throughput: 0: 1021.9. Samples: 630880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:29:42,895][07057] Avg episode reward: [(0, '14.290')]
[2023-02-25 19:29:43,960][12627] Updated weights for policy 0, policy_version 620 (0.0012)
[2023-02-25 19:29:47,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 2551808. Throughput: 0: 1005.2. Samples: 637432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:29:47,895][07057] Avg episode reward: [(0, '14.960')]
[2023-02-25 19:29:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2568192. Throughput: 0: 975.6. Samples: 642152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:29:52,896][07057] Avg episode reward: [(0, '15.410')]
[2023-02-25 19:29:52,913][12613] Saving new best policy, reward=15.410!
[2023-02-25 19:29:55,895][12627] Updated weights for policy 0, policy_version 630 (0.0016)
[2023-02-25 19:29:57,888][07057] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2588672. Throughput: 0: 987.2. Samples: 644878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:29:57,891][07057] Avg episode reward: [(0, '15.503')]
[2023-02-25 19:29:57,897][12613] Saving new best policy, reward=15.503!
[2023-02-25 19:30:02,888][07057] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2613248. Throughput: 0: 1026.4. Samples: 652106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:02,890][07057] Avg episode reward: [(0, '15.775')]
[2023-02-25 19:30:02,911][12613] Saving new best policy, reward=15.775!
[2023-02-25 19:30:04,615][12627] Updated weights for policy 0, policy_version 640 (0.0013)
[2023-02-25 19:30:07,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 2629632. Throughput: 0: 993.0. Samples: 657938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:07,891][07057] Avg episode reward: [(0, '16.904')]
[2023-02-25 19:30:07,895][12613] Saving new best policy, reward=16.904!
[2023-02-25 19:30:12,889][07057] Fps is (10 sec: 3276.6, 60 sec: 4027.8, 300 sec: 3915.5). Total num frames: 2646016. Throughput: 0: 969.8. Samples: 660220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:12,896][07057] Avg episode reward: [(0, '17.156')]
[2023-02-25 19:30:12,913][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000646_2646016.pth...
[2023-02-25 19:30:13,075][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000414_1695744.pth
[2023-02-25 19:30:13,087][12613] Saving new best policy, reward=17.156!
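The checkpoint filenames above appear to encode the policy version and the total environment frames at save time: 646 and 2,646,016 here, and 2,646,016 / 646 = 4096, consistent with one 4096-frame batch per policy version throughout this run (the removed checkpoint obeys the same ratio: 414 x 4096 = 1,695,744). A minimal inspection sketch, assuming only that filename convention; the contents of the saved dict are Sample Factory's and are printed rather than assumed:

import re
import torch

path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000646_2646016.pth"
version, frames = map(int, re.match(r".*checkpoint_(\d+)_(\d+)\.pth$", path).groups())
print(version, frames, frames // version)     # 646 2646016 4096
state = torch.load(path, map_location="cpu")  # keys depend on Sample Factory's format
print(sorted(state))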
[2023-02-25 19:30:16,454][12627] Updated weights for policy 0, policy_version 650 (0.0021)
[2023-02-25 19:30:17,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2666496. Throughput: 0: 994.0. Samples: 665932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:17,895][07057] Avg episode reward: [(0, '17.431')]
[2023-02-25 19:30:17,899][12613] Saving new best policy, reward=17.431!
[2023-02-25 19:30:22,888][07057] Fps is (10 sec: 4505.9, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2691072. Throughput: 0: 1022.3. Samples: 673194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:22,890][07057] Avg episode reward: [(0, '16.706')]
[2023-02-25 19:30:25,447][12627] Updated weights for policy 0, policy_version 660 (0.0012)
[2023-02-25 19:30:27,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 2707456. Throughput: 0: 1006.6. Samples: 676176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:27,894][07057] Avg episode reward: [(0, '15.609')]
[2023-02-25 19:30:32,888][07057] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 2723840. Throughput: 0: 962.5. Samples: 680744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:32,895][07057] Avg episode reward: [(0, '14.238')]
[2023-02-25 19:30:36,955][12627] Updated weights for policy 0, policy_version 670 (0.0027)
[2023-02-25 19:30:37,888][07057] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 2748416. Throughput: 0: 999.5. Samples: 687128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:30:37,892][07057] Avg episode reward: [(0, '14.518')]
[2023-02-25 19:30:42,888][07057] Fps is (10 sec: 4915.4, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 2772992. Throughput: 0: 1020.0. Samples: 690776. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:30:42,891][07057] Avg episode reward: [(0, '14.858')]
[2023-02-25 19:30:47,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 2781184. Throughput: 0: 968.0. Samples: 695666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:30:47,890][07057] Avg episode reward: [(0, '15.033')]
[2023-02-25 19:30:48,441][12627] Updated weights for policy 0, policy_version 680 (0.0018)
[2023-02-25 19:30:52,888][07057] Fps is (10 sec: 2048.0, 60 sec: 3754.7, 300 sec: 3915.5). Total num frames: 2793472. Throughput: 0: 918.5. Samples: 699270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:30:52,890][07057] Avg episode reward: [(0, '15.481')]
[2023-02-25 19:30:57,888][07057] Fps is (10 sec: 2457.5, 60 sec: 3618.1, 300 sec: 3901.6). Total num frames: 2805760. Throughput: 0: 907.0. Samples: 701036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:30:57,896][07057] Avg episode reward: [(0, '16.363')]
[2023-02-25 19:31:02,888][07057] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3873.8). Total num frames: 2822144. Throughput: 0: 890.2. Samples: 705992. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:31:02,890][07057] Avg episode reward: [(0, '17.071')]
[2023-02-25 19:31:03,366][12627] Updated weights for policy 0, policy_version 690 (0.0019)
[2023-02-25 19:31:07,895][07057] Fps is (10 sec: 3683.9, 60 sec: 3549.4, 300 sec: 3887.7). Total num frames: 2842624. Throughput: 0: 851.5. Samples: 711518. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:07,901][07057] Avg episode reward: [(0, '17.941')]
[2023-02-25 19:31:07,909][12613] Saving new best policy, reward=17.941!
[2023-02-25 19:31:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3887.7). Total num frames: 2859008. Throughput: 0: 836.2. Samples: 713806. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:12,891][07057] Avg episode reward: [(0, '17.213')]
[2023-02-25 19:31:15,031][12627] Updated weights for policy 0, policy_version 700 (0.0016)
[2023-02-25 19:31:17,888][07057] Fps is (10 sec: 3279.2, 60 sec: 3481.6, 300 sec: 3860.0). Total num frames: 2875392. Throughput: 0: 834.6. Samples: 718302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:31:17,890][07057] Avg episode reward: [(0, '17.841')]
[2023-02-25 19:31:22,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3860.0). Total num frames: 2895872. Throughput: 0: 837.4. Samples: 724812. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:22,890][07057] Avg episode reward: [(0, '17.344')]
[2023-02-25 19:31:24,791][12627] Updated weights for policy 0, policy_version 710 (0.0020)
[2023-02-25 19:31:27,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3549.9, 300 sec: 3887.7). Total num frames: 2920448. Throughput: 0: 834.1. Samples: 728312. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:27,897][07057] Avg episode reward: [(0, '17.449')]
[2023-02-25 19:31:32,888][07057] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3873.8). Total num frames: 2932736. Throughput: 0: 839.9. Samples: 733462. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:32,892][07057] Avg episode reward: [(0, '17.622')]
[2023-02-25 19:31:37,342][12627] Updated weights for policy 0, policy_version 720 (0.0036)
[2023-02-25 19:31:37,889][07057] Fps is (10 sec: 2866.9, 60 sec: 3345.0, 300 sec: 3846.1). Total num frames: 2949120. Throughput: 0: 858.1. Samples: 737886. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:37,891][07057] Avg episode reward: [(0, '19.554')]
[2023-02-25 19:31:37,894][12613] Saving new best policy, reward=19.554!
[2023-02-25 19:31:42,888][07057] Fps is (10 sec: 4096.1, 60 sec: 3345.1, 300 sec: 3846.1). Total num frames: 2973696. Throughput: 0: 894.5. Samples: 741286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:42,895][07057] Avg episode reward: [(0, '19.847')]
[2023-02-25 19:31:42,909][12613] Saving new best policy, reward=19.847!
[2023-02-25 19:31:46,309][12627] Updated weights for policy 0, policy_version 730 (0.0012)
[2023-02-25 19:31:47,888][07057] Fps is (10 sec: 4506.1, 60 sec: 3549.9, 300 sec: 3873.8). Total num frames: 2994176. Throughput: 0: 937.1. Samples: 748162. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:31:47,891][07057] Avg episode reward: [(0, '20.310')]
[2023-02-25 19:31:47,893][12613] Saving new best policy, reward=20.310!
[2023-02-25 19:31:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3860.0). Total num frames: 3010560. Throughput: 0: 923.5. Samples: 753068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:31:52,895][07057] Avg episode reward: [(0, '21.571')]
[2023-02-25 19:31:52,907][12613] Saving new best policy, reward=21.571!
[2023-02-25 19:31:57,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 3026944. Throughput: 0: 920.9. Samples: 755248. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:31:57,895][07057] Avg episode reward: [(0, '20.919')]
[2023-02-25 19:31:58,346][12627] Updated weights for policy 0, policy_version 740 (0.0034)
[2023-02-25 19:32:02,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3051520. Throughput: 0: 968.2. Samples: 761872. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:02,895][07057] Avg episode reward: [(0, '21.391')]
[2023-02-25 19:32:06,686][12627] Updated weights for policy 0, policy_version 750 (0.0018)
[2023-02-25 19:32:07,888][07057] Fps is (10 sec: 4915.2, 60 sec: 3891.7, 300 sec: 3888.1). Total num frames: 3076096. Throughput: 0: 984.9. Samples: 769132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:32:07,895][07057] Avg episode reward: [(0, '21.486')]
[2023-02-25 19:32:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3088384. Throughput: 0: 958.5. Samples: 771444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:32:12,892][07057] Avg episode reward: [(0, '22.434')]
[2023-02-25 19:32:12,907][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000754_3088384.pth...
[2023-02-25 19:32:13,030][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000530_2170880.pth
[2023-02-25 19:32:13,045][12613] Saving new best policy, reward=22.434!
[2023-02-25 19:32:17,888][07057] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3104768. Throughput: 0: 943.3. Samples: 775910. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:17,897][07057] Avg episode reward: [(0, '22.179')]
[2023-02-25 19:32:18,936][12627] Updated weights for policy 0, policy_version 760 (0.0018)
[2023-02-25 19:32:22,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3129344. Throughput: 0: 999.6. Samples: 782866. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:32:22,896][07057] Avg episode reward: [(0, '23.293')]
[2023-02-25 19:32:22,916][12613] Saving new best policy, reward=23.293!
[2023-02-25 19:32:27,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3149824. Throughput: 0: 999.5. Samples: 786264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:27,896][07057] Avg episode reward: [(0, '23.363')]
[2023-02-25 19:32:27,907][12613] Saving new best policy, reward=23.363!
[2023-02-25 19:32:28,251][12627] Updated weights for policy 0, policy_version 770 (0.0023)
[2023-02-25 19:32:32,893][07057] Fps is (10 sec: 3684.5, 60 sec: 3890.9, 300 sec: 3846.0). Total num frames: 3166208. Throughput: 0: 958.4. Samples: 791296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:32,900][07057] Avg episode reward: [(0, '23.189')]
[2023-02-25 19:32:37,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3818.3). Total num frames: 3182592. Throughput: 0: 956.6. Samples: 796116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:37,891][07057] Avg episode reward: [(0, '22.232')]
[2023-02-25 19:32:39,908][12627] Updated weights for policy 0, policy_version 780 (0.0025)
[2023-02-25 19:32:42,888][07057] Fps is (10 sec: 4098.2, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3207168. Throughput: 0: 990.4. Samples: 799816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:32:42,895][07057] Avg episode reward: [(0, '20.164')]
[2023-02-25 19:32:47,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3227648. Throughput: 0: 1003.1. Samples: 807010. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:47,890][07057] Avg episode reward: [(0, '18.289')]
[2023-02-25 19:32:49,442][12627] Updated weights for policy 0, policy_version 790 (0.0025)
[2023-02-25 19:32:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3244032. Throughput: 0: 948.4. Samples: 811810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:32:52,899][07057] Avg episode reward: [(0, '18.675')]
[2023-02-25 19:32:57,888][07057] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3264512. Throughput: 0: 948.3. Samples: 814116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:32:57,892][07057] Avg episode reward: [(0, '18.769')]
[2023-02-25 19:33:00,356][12627] Updated weights for policy 0, policy_version 800 (0.0015)
[2023-02-25 19:33:02,888][07057] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3289088. Throughput: 0: 1001.3. Samples: 820968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:02,893][07057] Avg episode reward: [(0, '19.439')]
[2023-02-25 19:33:07,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3309568. Throughput: 0: 996.3. Samples: 827700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:07,890][07057] Avg episode reward: [(0, '20.850')]
[2023-02-25 19:33:10,067][12627] Updated weights for policy 0, policy_version 810 (0.0018)
[2023-02-25 19:33:12,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3325952. Throughput: 0: 971.9. Samples: 830000. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:33:12,893][07057] Avg episode reward: [(0, '21.880')]
[2023-02-25 19:33:17,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3342336. Throughput: 0: 963.7. Samples: 834656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:17,890][07057] Avg episode reward: [(0, '22.119')]
[2023-02-25 19:33:20,956][12627] Updated weights for policy 0, policy_version 820 (0.0011)
[2023-02-25 19:33:22,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3366912. Throughput: 0: 1017.3. Samples: 841896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:22,896][07057] Avg episode reward: [(0, '22.626')]
[2023-02-25 19:33:27,888][07057] Fps is (10 sec: 4505.2, 60 sec: 3959.4, 300 sec: 3860.0). Total num frames: 3387392. Throughput: 0: 1015.5. Samples: 845514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:27,890][07057] Avg episode reward: [(0, '23.715')]
[2023-02-25 19:33:27,898][12613] Saving new best policy, reward=23.715!
[2023-02-25 19:33:31,142][12627] Updated weights for policy 0, policy_version 830 (0.0018)
[2023-02-25 19:33:32,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3846.1). Total num frames: 3403776. Throughput: 0: 968.8. Samples: 850606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:32,890][07057] Avg episode reward: [(0, '24.473')]
[2023-02-25 19:33:32,905][12613] Saving new best policy, reward=24.473!
[2023-02-25 19:33:37,888][07057] Fps is (10 sec: 3277.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3420160. Throughput: 0: 973.5. Samples: 855618. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:37,894][07057] Avg episode reward: [(0, '24.187')]
[2023-02-25 19:33:41,525][12627] Updated weights for policy 0, policy_version 840 (0.0043)
[2023-02-25 19:33:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3444736. Throughput: 0: 1003.1. Samples: 859256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:33:42,892][07057] Avg episode reward: [(0, '23.762')]
[2023-02-25 19:33:47,888][07057] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3465216. Throughput: 0: 1012.8. Samples: 866546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:33:47,891][07057] Avg episode reward: [(0, '23.924')]
[2023-02-25 19:33:52,103][12627] Updated weights for policy 0, policy_version 850 (0.0021)
[2023-02-25 19:33:52,889][07057] Fps is (10 sec: 3685.9, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 3481600. Throughput: 0: 963.4. Samples: 871056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:52,900][07057] Avg episode reward: [(0, '24.225')]
[2023-02-25 19:33:57,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3502080. Throughput: 0: 963.7. Samples: 873366. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:33:57,890][07057] Avg episode reward: [(0, '23.152')]
[2023-02-25 19:34:02,053][12627] Updated weights for policy 0, policy_version 860 (0.0020)
[2023-02-25 19:34:02,888][07057] Fps is (10 sec: 4506.2, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3526656. Throughput: 0: 1015.7. Samples: 880364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:02,890][07057] Avg episode reward: [(0, '22.283')]
[2023-02-25 19:34:07,889][07057] Fps is (10 sec: 4505.1, 60 sec: 3959.4, 300 sec: 3873.9). Total num frames: 3547136. Throughput: 0: 1004.3. Samples: 887090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:07,893][07057] Avg episode reward: [(0, '22.936')]
[2023-02-25 19:34:12,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3559424. Throughput: 0: 974.2. Samples: 889354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:12,896][07057] Avg episode reward: [(0, '22.498')]
[2023-02-25 19:34:12,911][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000869_3559424.pth...
[2023-02-25 19:34:13,054][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000646_2646016.pth
[2023-02-25 19:34:13,344][12627] Updated weights for policy 0, policy_version 870 (0.0012)
[2023-02-25 19:34:17,888][07057] Fps is (10 sec: 3277.2, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3579904. Throughput: 0: 964.7. Samples: 894016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:17,890][07057] Avg episode reward: [(0, '22.164')]
[2023-02-25 19:34:22,564][12627] Updated weights for policy 0, policy_version 880 (0.0016)
[2023-02-25 19:34:22,889][07057] Fps is (10 sec: 4505.1, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 3604480. Throughput: 0: 1016.3. Samples: 901352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:22,899][07057] Avg episode reward: [(0, '22.808')]
[2023-02-25 19:34:27,894][07057] Fps is (10 sec: 4502.7, 60 sec: 3959.1, 300 sec: 3859.9). Total num frames: 3624960. Throughput: 0: 1013.9. Samples: 904890. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:27,899][07057] Avg episode reward: [(0, '23.269')]
[2023-02-25 19:34:32,888][07057] Fps is (10 sec: 3686.9, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3641344. Throughput: 0: 962.2. Samples: 909846. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:32,892][07057] Avg episode reward: [(0, '23.660')]
[2023-02-25 19:34:34,045][12627] Updated weights for policy 0, policy_version 890 (0.0023)
[2023-02-25 19:34:37,888][07057] Fps is (10 sec: 3278.8, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3657728. Throughput: 0: 978.5. Samples: 915088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:37,896][07057] Avg episode reward: [(0, '23.948')]
[2023-02-25 19:34:42,888][07057] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3682304. Throughput: 0: 1010.6. Samples: 918844. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-25 19:34:42,890][07057] Avg episode reward: [(0, '23.857')]
[2023-02-25 19:34:43,000][12627] Updated weights for policy 0, policy_version 900 (0.0018)
[2023-02-25 19:34:47,888][07057] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3702784. Throughput: 0: 1010.4. Samples: 925832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:47,891][07057] Avg episode reward: [(0, '22.408')]
[2023-02-25 19:34:52,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3832.2). Total num frames: 3719168. Throughput: 0: 962.3. Samples: 930394. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:34:52,890][07057] Avg episode reward: [(0, '21.720')]
[2023-02-25 19:34:54,892][12627] Updated weights for policy 0, policy_version 910 (0.0028)
[2023-02-25 19:34:57,888][07057] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3739648. Throughput: 0: 964.0. Samples: 932736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:34:57,890][07057] Avg episode reward: [(0, '22.441')]
[2023-02-25 19:35:02,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3764224. Throughput: 0: 1021.1. Samples: 939964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:35:02,897][07057] Avg episode reward: [(0, '23.966')]
[2023-02-25 19:35:03,495][12627] Updated weights for policy 0, policy_version 920 (0.0018)
[2023-02-25 19:35:07,892][07057] Fps is (10 sec: 4503.7, 60 sec: 3959.3, 300 sec: 3859.9). Total num frames: 3784704. Throughput: 0: 998.5. Samples: 946286. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:07,895][07057] Avg episode reward: [(0, '24.112')]
[2023-02-25 19:35:12,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 3796992. Throughput: 0: 971.5. Samples: 948600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-25 19:35:12,890][07057] Avg episode reward: [(0, '25.161')]
[2023-02-25 19:35:12,901][12613] Saving new best policy, reward=25.161!
[2023-02-25 19:35:15,502][12627] Updated weights for policy 0, policy_version 930 (0.0016)
[2023-02-25 19:35:17,888][07057] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3817472. Throughput: 0: 976.3. Samples: 953780. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:35:17,890][07057] Avg episode reward: [(0, '25.998')]
[2023-02-25 19:35:17,895][12613] Saving new best policy, reward=25.998!
[2023-02-25 19:35:22,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3846.1). Total num frames: 3842048. Throughput: 0: 1017.4. Samples: 960870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:35:22,891][07057] Avg episode reward: [(0, '26.935')]
[2023-02-25 19:35:22,901][12613] Saving new best policy, reward=26.935!
[2023-02-25 19:35:24,168][12627] Updated weights for policy 0, policy_version 940 (0.0015)
[2023-02-25 19:35:27,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3959.9, 300 sec: 3860.0). Total num frames: 3862528. Throughput: 0: 1010.2. Samples: 964302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-25 19:35:27,890][07057] Avg episode reward: [(0, '26.361')]
[2023-02-25 19:35:32,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3874816. Throughput: 0: 958.0. Samples: 968940. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-02-25 19:35:32,894][07057] Avg episode reward: [(0, '25.570')]
[2023-02-25 19:35:36,216][12627] Updated weights for policy 0, policy_version 950 (0.0016)
[2023-02-25 19:35:37,888][07057] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3804.4). Total num frames: 3895296. Throughput: 0: 985.7. Samples: 974750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-25 19:35:37,890][07057] Avg episode reward: [(0, '24.571')]
[2023-02-25 19:35:42,891][07057] Fps is (10 sec: 4094.7, 60 sec: 3891.0, 300 sec: 3846.0). Total num frames: 3915776. Throughput: 0: 1005.4. Samples: 977984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:42,893][07057] Avg episode reward: [(0, '24.157')]
[2023-02-25 19:35:47,889][07057] Fps is (10 sec: 3276.3, 60 sec: 3754.6, 300 sec: 3846.1). Total num frames: 3928064. Throughput: 0: 941.4. Samples: 982330. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:47,892][07057] Avg episode reward: [(0, '23.994')]
[2023-02-25 19:35:48,699][12627] Updated weights for policy 0, policy_version 960 (0.0025)
[2023-02-25 19:35:52,888][07057] Fps is (10 sec: 2458.4, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 3940352. Throughput: 0: 886.4. Samples: 986170. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-25 19:35:52,897][07057] Avg episode reward: [(0, '24.253')]
[2023-02-25 19:35:57,888][07057] Fps is (10 sec: 3277.3, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 3960832. Throughput: 0: 885.3. Samples: 988440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-25 19:35:57,891][07057] Avg episode reward: [(0, '24.771')]
[2023-02-25 19:36:00,156][12627] Updated weights for policy 0, policy_version 970 (0.0021)
[2023-02-25 19:36:02,888][07057] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3873.9). Total num frames: 3985408. Throughput: 0: 920.3. Samples: 995194. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-25 19:36:02,890][07057] Avg episode reward: [(0, '25.325')]
[2023-02-25 19:36:06,907][12613] Stopping Batcher_0...
[2023-02-25 19:36:06,908][12613] Loop batcher_evt_loop terminating...
[2023-02-25 19:36:06,909][07057] Component Batcher_0 stopped!
[2023-02-25 19:36:06,938][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:36:06,986][07057] Component RolloutWorker_w7 stopped!
[2023-02-25 19:36:06,988][07057] Component RolloutWorker_w6 stopped!
[2023-02-25 19:36:06,988][12635] Stopping RolloutWorker_w7...
[2023-02-25 19:36:06,995][12635] Loop rollout_proc7_evt_loop terminating...
[2023-02-25 19:36:07,006][07057] Component RolloutWorker_w3 stopped!
[2023-02-25 19:36:07,009][12632] Stopping RolloutWorker_w3...
[2023-02-25 19:36:07,009][12632] Loop rollout_proc3_evt_loop terminating...
[2023-02-25 19:36:06,988][12634] Stopping RolloutWorker_w6...
[2023-02-25 19:36:07,020][07057] Component RolloutWorker_w1 stopped!
[2023-02-25 19:36:07,023][12629] Stopping RolloutWorker_w1...
[2023-02-25 19:36:07,025][12629] Loop rollout_proc1_evt_loop terminating...
[2023-02-25 19:36:07,038][07057] Component RolloutWorker_w0 stopped!
[2023-02-25 19:36:07,016][12634] Loop rollout_proc6_evt_loop terminating...
[2023-02-25 19:36:07,048][12627] Weights refcount: 2 0
[2023-02-25 19:36:07,038][12628] Stopping RolloutWorker_w0...
[2023-02-25 19:36:07,062][07057] Component InferenceWorker_p0-w0 stopped!
[2023-02-25 19:36:07,063][12631] Stopping RolloutWorker_w4...
[2023-02-25 19:36:07,067][07057] Component RolloutWorker_w4 stopped!
[2023-02-25 19:36:07,063][12630] Stopping RolloutWorker_w2...
[2023-02-25 19:36:07,070][07057] Component RolloutWorker_w2 stopped!
[2023-02-25 19:36:07,073][12633] Stopping RolloutWorker_w5...
[2023-02-25 19:36:07,073][12633] Loop rollout_proc5_evt_loop terminating...
[2023-02-25 19:36:07,074][12630] Loop rollout_proc2_evt_loop terminating...
[2023-02-25 19:36:07,072][07057] Component RolloutWorker_w5 stopped!
[2023-02-25 19:36:07,077][12627] Stopping InferenceWorker_p0-w0...
[2023-02-25 19:36:07,078][12627] Loop inference_proc0-0_evt_loop terminating...
[2023-02-25 19:36:07,061][12628] Loop rollout_proc0_evt_loop terminating...
[2023-02-25 19:36:07,068][12631] Loop rollout_proc4_evt_loop terminating...
[2023-02-25 19:36:07,172][12613] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000754_3088384.pth
[2023-02-25 19:36:07,188][12613] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:36:07,489][07057] Component LearnerWorker_p0 stopped!
[2023-02-25 19:36:07,494][07057] Waiting for process learner_proc0 to stop...
[2023-02-25 19:36:07,501][12613] Stopping LearnerWorker_p0...
[2023-02-25 19:36:07,501][12613] Loop learner_proc0_evt_loop terminating...
[2023-02-25 19:36:09,942][07057] Waiting for process inference_proc0-0 to join...
[2023-02-25 19:36:10,639][07057] Waiting for process rollout_proc0 to join...
[2023-02-25 19:36:11,285][07057] Waiting for process rollout_proc1 to join...
[2023-02-25 19:36:11,288][07057] Waiting for process rollout_proc2 to join...
[2023-02-25 19:36:11,296][07057] Waiting for process rollout_proc3 to join...
[2023-02-25 19:36:11,298][07057] Waiting for process rollout_proc4 to join...
[2023-02-25 19:36:11,305][07057] Waiting for process rollout_proc5 to join...
[2023-02-25 19:36:11,308][07057] Waiting for process rollout_proc6 to join...
[2023-02-25 19:36:11,310][07057] Waiting for process rollout_proc7 to join...
[2023-02-25 19:36:11,312][07057] Batcher 0 profile tree view:
batching: 25.5810, releasing_batches: 0.0229
[2023-02-25 19:36:11,313][07057] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0001
wait_policy_total: 508.7138
update_model: 7.7479
weight_update: 0.0023
one_step: 0.0153
handle_policy_step: 487.1928
deserialize: 14.3036, stack: 2.8631, obs_to_device_normalize: 110.9765, forward: 228.9025, send_messages: 24.9682
prepare_outputs: 80.4461
to_cpu: 50.8596
[2023-02-25 19:36:11,315][07057] Learner 0 profile tree view:
misc: 0.0071, prepare_batch: 16.7536
train: 75.8742
epoch_init: 0.0196, minibatch_init: 0.0057, losses_postprocess: 0.6612, kl_divergence: 0.6512, after_optimizer: 33.0590
calculate_losses: 26.7872
losses_init: 0.0314, forward_head: 1.7740, bptt_initial: 17.7822, tail: 1.0367, advantages_returns: 0.2674, losses: 3.3651
bptt: 2.1974
bptt_forward_core: 2.1349
update: 14.1087
clip: 1.4058
[2023-02-25 19:36:11,317][07057] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.2478, enqueue_policy_requests: 135.2224, env_step: 786.4474, overhead: 18.4560, complete_rollouts: 6.8270
save_policy_outputs: 18.6202
split_output_tensors: 8.6285
[2023-02-25 19:36:11,318][07057] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3770, enqueue_policy_requests: 135.6595, env_step: 787.4087, overhead: 19.2839, complete_rollouts: 7.0271
save_policy_outputs: 18.7823
split_output_tensors: 9.1122
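Reading the profiles above: the rollout workers spend most of their wall time in env_step, with enqueue_policy_requests a distant second, while the inference worker's time is dominated by the forward pass. Because the batcher, learner, inference worker, and rollout workers run in separate processes (the shutdown sequence above waits on each one in turn), these per-component totals overlap rather than sum to the run's wall clock. A quick tally of worker 0's entries, values copied from the lines above:

w0 = {"wait_for_trajectories": 0.2478, "enqueue_policy_requests": 135.2224,
      "env_step": 786.4474, "overhead": 18.4560, "complete_rollouts": 6.8270}
print(max(w0, key=w0.get), round(sum(w0.values()), 1))  # env_step, 947.2 (of a ~1073 s run)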
[2023-02-25 19:36:11,322][07057] Loop Runner_EvtLoop terminating...
[2023-02-25 19:36:11,324][07057] Runner profile tree view:
main_loop: 1073.5426
[2023-02-25 19:36:11,325][07057] Collected {0: 4005888}, FPS: 3731.5
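The closing summary is internally consistent: 4,005,888 frames over the 1,073.5-second main loop gives the reported 3,731.5 FPS. A sketch for pulling the training-reward curve back out of this log; the filename train.log is a placeholder for wherever this text is saved, and the regex matches the report format used throughout:

import re

reward_re = re.compile(r"Avg episode reward: \[\(0, '([\d.]+)'\)\]")
with open("train.log") as f:  # placeholder path for this log file
    rewards = [float(m.group(1)) for line in f if (m := reward_re.search(line))]
print(len(rewards), rewards[-1] if rewards else None)
print(4005888 / 1073.5426)  # ~3731.5, the FPS in the summary line above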
[2023-02-25 19:36:11,488][07057] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:36:11,490][07057] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:36:11,494][07057] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:36:11,497][07057] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:36:11,501][07057] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:36:11,504][07057] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:36:11,505][07057] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:36:11,508][07057] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:36:11,509][07057] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-25 19:36:11,511][07057] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-25 19:36:11,513][07057] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:36:11,515][07057] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:36:11,518][07057] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:36:11,519][07057] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:36:11,521][07057] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-02-25 19:36:11,565][07057] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-25 19:36:11,570][07057] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:36:11,578][07057] RunningMeanStd input shape: (1,)
[2023-02-25 19:36:11,601][07057] ConvEncoder: input_channels=3
[2023-02-25 19:36:12,315][07057] Conv encoder output size: 512
[2023-02-25 19:36:12,318][07057] Policy head output size: 512
[2023-02-25 19:36:14,698][07057] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:36:15,953][07057] Num frames 100...
[2023-02-25 19:36:16,062][07057] Num frames 200...
[2023-02-25 19:36:16,170][07057] Num frames 300...
[2023-02-25 19:36:16,281][07057] Num frames 400...
[2023-02-25 19:36:16,407][07057] Num frames 500...
[2023-02-25 19:36:16,529][07057] Num frames 600...
[2023-02-25 19:36:16,640][07057] Num frames 700...
[2023-02-25 19:36:16,754][07057] Num frames 800...
[2023-02-25 19:36:16,865][07057] Num frames 900...
[2023-02-25 19:36:17,028][07057] Avg episode rewards: #0: 21.970, true rewards: #0: 9.970
[2023-02-25 19:36:17,030][07057] Avg episode reward: 21.970, avg true_objective: 9.970
[2023-02-25 19:36:17,037][07057] Num frames 1000...
[2023-02-25 19:36:17,146][07057] Num frames 1100...
[2023-02-25 19:36:17,256][07057] Num frames 1200...
[2023-02-25 19:36:17,376][07057] Num frames 1300...
[2023-02-25 19:36:17,493][07057] Num frames 1400...
[2023-02-25 19:36:17,608][07057] Num frames 1500...
[2023-02-25 19:36:17,719][07057] Num frames 1600...
[2023-02-25 19:36:17,844][07057] Num frames 1700...
[2023-02-25 19:36:17,957][07057] Num frames 1800...
[2023-02-25 19:36:18,069][07057] Num frames 1900...
[2023-02-25 19:36:18,180][07057] Num frames 2000...
[2023-02-25 19:36:18,294][07057] Num frames 2100...
[2023-02-25 19:36:18,413][07057] Num frames 2200...
[2023-02-25 19:36:18,526][07057] Num frames 2300...
[2023-02-25 19:36:18,640][07057] Num frames 2400...
[2023-02-25 19:36:18,754][07057] Num frames 2500...
[2023-02-25 19:36:18,866][07057] Num frames 2600...
[2023-02-25 19:36:18,986][07057] Num frames 2700...
[2023-02-25 19:36:19,097][07057] Num frames 2800...
[2023-02-25 19:36:19,212][07057] Num frames 2900...
[2023-02-25 19:36:19,324][07057] Num frames 3000...
[2023-02-25 19:36:19,496][07057] Avg episode rewards: #0: 38.984, true rewards: #0: 15.485
[2023-02-25 19:36:19,497][07057] Avg episode reward: 38.984, avg true_objective: 15.485
[2023-02-25 19:36:19,504][07057] Num frames 3100...
[2023-02-25 19:36:19,618][07057] Num frames 3200...
[2023-02-25 19:36:19,736][07057] Num frames 3300...
[2023-02-25 19:36:19,850][07057] Num frames 3400...
[2023-02-25 19:36:19,964][07057] Num frames 3500...
[2023-02-25 19:36:20,078][07057] Num frames 3600...
[2023-02-25 19:36:20,188][07057] Num frames 3700...
[2023-02-25 19:36:20,305][07057] Num frames 3800...
[2023-02-25 19:36:20,426][07057] Num frames 3900...
[2023-02-25 19:36:20,540][07057] Num frames 4000...
[2023-02-25 19:36:20,662][07057] Avg episode rewards: #0: 33.533, true rewards: #0: 13.533
[2023-02-25 19:36:20,665][07057] Avg episode reward: 33.533, avg true_objective: 13.533
[2023-02-25 19:36:20,712][07057] Num frames 4100...
[2023-02-25 19:36:20,825][07057] Num frames 4200...
[2023-02-25 19:36:20,935][07057] Num frames 4300...
[2023-02-25 19:36:21,053][07057] Num frames 4400...
[2023-02-25 19:36:21,162][07057] Num frames 4500...
[2023-02-25 19:36:21,274][07057] Num frames 4600...
[2023-02-25 19:36:21,421][07057] Avg episode rewards: #0: 28.690, true rewards: #0: 11.690
[2023-02-25 19:36:21,423][07057] Avg episode reward: 28.690, avg true_objective: 11.690
[2023-02-25 19:36:21,453][07057] Num frames 4700...
[2023-02-25 19:36:21,568][07057] Num frames 4800...
[2023-02-25 19:36:21,680][07057] Num frames 4900...
[2023-02-25 19:36:21,796][07057] Num frames 5000...
[2023-02-25 19:36:21,917][07057] Num frames 5100...
[2023-02-25 19:36:22,029][07057] Num frames 5200...
[2023-02-25 19:36:22,141][07057] Num frames 5300...
[2023-02-25 19:36:22,286][07057] Num frames 5400...
[2023-02-25 19:36:22,443][07057] Num frames 5500...
[2023-02-25 19:36:22,605][07057] Num frames 5600...
[2023-02-25 19:36:22,758][07057] Num frames 5700...
[2023-02-25 19:36:22,909][07057] Avg episode rewards: #0: 27.928, true rewards: #0: 11.528
[2023-02-25 19:36:22,911][07057] Avg episode reward: 27.928, avg true_objective: 11.528
[2023-02-25 19:36:22,971][07057] Num frames 5800...
[2023-02-25 19:36:23,124][07057] Num frames 5900...
[2023-02-25 19:36:23,305][07057] Num frames 6000...
[2023-02-25 19:36:23,464][07057] Num frames 6100...
[2023-02-25 19:36:23,619][07057] Num frames 6200...
[2023-02-25 19:36:23,775][07057] Num frames 6300...
[2023-02-25 19:36:23,930][07057] Num frames 6400...
[2023-02-25 19:36:24,087][07057] Num frames 6500...
[2023-02-25 19:36:24,244][07057] Num frames 6600...
[2023-02-25 19:36:24,403][07057] Num frames 6700...
[2023-02-25 19:36:24,570][07057] Num frames 6800...
[2023-02-25 19:36:24,732][07057] Num frames 6900...
[2023-02-25 19:36:24,895][07057] Num frames 7000...
[2023-02-25 19:36:25,053][07057] Num frames 7100...
[2023-02-25 19:36:25,195][07057] Avg episode rewards: #0: 28.922, true rewards: #0: 11.922
[2023-02-25 19:36:25,198][07057] Avg episode reward: 28.922, avg true_objective: 11.922
[2023-02-25 19:36:25,273][07057] Num frames 7200...
[2023-02-25 19:36:25,430][07057] Num frames 7300...
[2023-02-25 19:36:25,587][07057] Num frames 7400...
[2023-02-25 19:36:25,748][07057] Num frames 7500...
[2023-02-25 19:36:25,830][07057] Avg episode rewards: #0: 25.604, true rewards: #0: 10.747
[2023-02-25 19:36:25,832][07057] Avg episode reward: 25.604, avg true_objective: 10.747
[2023-02-25 19:36:25,917][07057] Num frames 7600...
[2023-02-25 19:36:26,029][07057] Num frames 7700...
[2023-02-25 19:36:26,137][07057] Num frames 7800...
[2023-02-25 19:36:26,251][07057] Num frames 7900...
[2023-02-25 19:36:26,360][07057] Num frames 8000...
[2023-02-25 19:36:26,473][07057] Num frames 8100...
[2023-02-25 19:36:26,592][07057] Num frames 8200...
[2023-02-25 19:36:26,715][07057] Num frames 8300...
[2023-02-25 19:36:26,829][07057] Num frames 8400...
[2023-02-25 19:36:26,938][07057] Num frames 8500...
[2023-02-25 19:36:27,050][07057] Num frames 8600...
[2023-02-25 19:36:27,160][07057] Avg episode rewards: #0: 25.309, true rewards: #0: 10.809
[2023-02-25 19:36:27,162][07057] Avg episode reward: 25.309, avg true_objective: 10.809
[2023-02-25 19:36:27,230][07057] Num frames 8700...
[2023-02-25 19:36:27,342][07057] Num frames 8800...
[2023-02-25 19:36:27,452][07057] Num frames 8900...
[2023-02-25 19:36:27,574][07057] Num frames 9000...
[2023-02-25 19:36:27,683][07057] Num frames 9100...
[2023-02-25 19:36:27,792][07057] Num frames 9200...
[2023-02-25 19:36:27,908][07057] Avg episode rewards: #0: 23.950, true rewards: #0: 10.283
[2023-02-25 19:36:27,911][07057] Avg episode reward: 23.950, avg true_objective: 10.283
[2023-02-25 19:36:27,962][07057] Num frames 9300...
[2023-02-25 19:36:28,079][07057] Num frames 9400...
[2023-02-25 19:36:28,188][07057] Num frames 9500...
[2023-02-25 19:36:28,298][07057] Num frames 9600...
[2023-02-25 19:36:28,413][07057] Num frames 9700...
[2023-02-25 19:36:28,529][07057] Num frames 9800...
[2023-02-25 19:36:28,648][07057] Num frames 9900...
[2023-02-25 19:36:28,761][07057] Num frames 10000...
[2023-02-25 19:36:28,873][07057] Num frames 10100...
[2023-02-25 19:36:28,984][07057] Num frames 10200...
[2023-02-25 19:36:29,093][07057] Avg episode rewards: #0: 23.647, true rewards: #0: 10.247
[2023-02-25 19:36:29,100][07057] Avg episode reward: 23.647, avg true_objective: 10.247
[2023-02-25 19:37:29,208][07057] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-02-25 19:39:44,676][07057] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-25 19:39:44,678][07057] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-25 19:39:44,680][07057] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-25 19:39:44,684][07057] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-25 19:39:44,687][07057] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-25 19:39:44,690][07057] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-25 19:39:44,695][07057] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-02-25 19:39:44,697][07057] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-25 19:39:44,700][07057] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-02-25 19:39:44,701][07057] Adding new argument 'hf_repository'='khaled5321/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-02-25 19:39:44,704][07057] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-25 19:39:44,707][07057] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-25 19:39:44,711][07057] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-25 19:39:44,713][07057] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-02-25 19:39:44,715][07057] Using frameskip 1 and render_action_repeat=4 for evaluation
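Compared with the first evaluation run, this configuration differs only in its overrides: max_num_frames drops from 1000000000.0 to 100000, push_to_hub flips to True, and hf_repository is set, so this pass uploads the replay to the Hub. A small sketch making the diff explicit, with values copied from the two override blocks in this log:

run1 = {"max_num_frames": 1000000000.0, "push_to_hub": False, "hf_repository": None}
run2 = {"max_num_frames": 100000, "push_to_hub": True,
        "hf_repository": "khaled5321/rl_course_vizdoom_health_gathering_supreme"}
print({k: (run1[k], run2[k]) for k in run1 if run1[k] != run2[k]})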
[2023-02-25 19:39:44,740][07057] RunningMeanStd input shape: (3, 72, 128)
[2023-02-25 19:39:44,742][07057] RunningMeanStd input shape: (1,)
[2023-02-25 19:39:44,757][07057] ConvEncoder: input_channels=3
[2023-02-25 19:39:44,797][07057] Conv encoder output size: 512
[2023-02-25 19:39:44,798][07057] Policy head output size: 512
[2023-02-25 19:39:44,818][07057] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-02-25 19:39:45,274][07057] Num frames 100...
[2023-02-25 19:39:45,396][07057] Num frames 200...
[2023-02-25 19:39:45,520][07057] Avg episode rewards: #0: 2.560, true rewards: #0: 2.560
[2023-02-25 19:39:45,522][07057] Avg episode reward: 2.560, avg true_objective: 2.560
[2023-02-25 19:39:45,574][07057] Num frames 300...
[2023-02-25 19:39:45,696][07057] Num frames 400...
[2023-02-25 19:39:45,808][07057] Num frames 500...
[2023-02-25 19:39:45,927][07057] Avg episode rewards: #0: 2.790, true rewards: #0: 2.790
[2023-02-25 19:39:45,929][07057] Avg episode reward: 2.790, avg true_objective: 2.790
[2023-02-25 19:39:45,986][07057] Num frames 600...
[2023-02-25 19:39:46,110][07057] Num frames 700...
[2023-02-25 19:39:46,225][07057] Num frames 800...
[2023-02-25 19:39:46,343][07057] Num frames 900...
[2023-02-25 19:39:46,472][07057] Num frames 1000...
[2023-02-25 19:39:46,589][07057] Num frames 1100...
[2023-02-25 19:39:46,702][07057] Num frames 1200...
[2023-02-25 19:39:46,816][07057] Num frames 1300...
[2023-02-25 19:39:46,971][07057] Avg episode rewards: #0: 7.300, true rewards: #0: 4.633
[2023-02-25 19:39:46,972][07057] Avg episode reward: 7.300, avg true_objective: 4.633
[2023-02-25 19:39:46,987][07057] Num frames 1400...
[2023-02-25 19:39:47,105][07057] Num frames 1500...
[2023-02-25 19:39:47,217][07057] Num frames 1600...
[2023-02-25 19:39:47,337][07057] Num frames 1700...
[2023-02-25 19:39:47,459][07057] Num frames 1800...
[2023-02-25 19:39:47,579][07057] Num frames 1900...
[2023-02-25 19:39:47,695][07057] Num frames 2000...
[2023-02-25 19:39:47,811][07057] Num frames 2100...
[2023-02-25 19:39:47,930][07057] Num frames 2200...
[2023-02-25 19:39:48,063][07057] Num frames 2300...
[2023-02-25 19:39:48,180][07057] Num frames 2400...
[2023-02-25 19:39:48,296][07057] Avg episode rewards: #0: 11.365, true rewards: #0: 6.115
[2023-02-25 19:39:48,298][07057] Avg episode reward: 11.365, avg true_objective: 6.115
[2023-02-25 19:39:48,367][07057] Num frames 2500...
[2023-02-25 19:39:48,493][07057] Num frames 2600...
[2023-02-25 19:39:48,644][07057] Num frames 2700...
[2023-02-25 19:39:48,812][07057] Num frames 2800...
[2023-02-25 19:39:48,977][07057] Num frames 2900...
[2023-02-25 19:39:49,141][07057] Num frames 3000...
[2023-02-25 19:39:49,296][07057] Num frames 3100...
[2023-02-25 19:39:49,454][07057] Num frames 3200...
[2023-02-25 19:39:49,616][07057] Num frames 3300...
[2023-02-25 19:39:49,774][07057] Num frames 3400...
[2023-02-25 19:39:49,843][07057] Avg episode rewards: #0: 12.812, true rewards: #0: 6.812
[2023-02-25 19:39:49,848][07057] Avg episode reward: 12.812, avg true_objective: 6.812
[2023-02-25 19:39:50,000][07057] Num frames 3500...
[2023-02-25 19:39:50,169][07057] Num frames 3600...
[2023-02-25 19:39:50,328][07057] Num frames 3700...
[2023-02-25 19:39:50,521][07057] Num frames 3800...
[2023-02-25 19:39:50,709][07057] Num frames 3900...
[2023-02-25 19:39:50,888][07057] Num frames 4000...
[2023-02-25 19:39:51,052][07057] Num frames 4100...
[2023-02-25 19:39:51,222][07057] Num frames 4200...
[2023-02-25 19:39:51,392][07057] Num frames 4300...
[2023-02-25 19:39:51,612][07057] Avg episode rewards: #0: 14.663, true rewards: #0: 7.330
[2023-02-25 19:39:51,614][07057] Avg episode reward: 14.663, avg true_objective: 7.330
[2023-02-25 19:39:51,621][07057] Num frames 4400...
[2023-02-25 19:39:51,790][07057] Num frames 4500...
[2023-02-25 19:39:51,954][07057] Num frames 4600...
[2023-02-25 19:39:52,113][07057] Num frames 4700...
[2023-02-25 19:39:52,238][07057] Num frames 4800...
[2023-02-25 19:39:52,353][07057] Num frames 4900...
[2023-02-25 19:39:52,469][07057] Num frames 5000...
[2023-02-25 19:39:52,582][07057] Num frames 5100...
[2023-02-25 19:39:52,703][07057] Num frames 5200...
[2023-02-25 19:39:52,817][07057] Num frames 5300...
[2023-02-25 19:39:52,934][07057] Num frames 5400...
[2023-02-25 19:39:53,059][07057] Num frames 5500...
[2023-02-25 19:39:53,172][07057] Num frames 5600...
[2023-02-25 19:39:53,300][07057] Num frames 5700...
[2023-02-25 19:39:53,416][07057] Num frames 5800...
[2023-02-25 19:39:53,536][07057] Num frames 5900...
[2023-02-25 19:39:53,661][07057] Num frames 6000...
[2023-02-25 19:39:53,776][07057] Num frames 6100...
[2023-02-25 19:39:53,890][07057] Num frames 6200...
[2023-02-25 19:39:54,006][07057] Num frames 6300...
[2023-02-25 19:39:54,151][07057] Num frames 6400...
[2023-02-25 19:39:54,341][07057] Avg episode rewards: #0: 20.997, true rewards: #0: 9.283
[2023-02-25 19:39:54,344][07057] Avg episode reward: 20.997, avg true_objective: 9.283
[2023-02-25 19:39:54,349][07057] Num frames 6500...
[2023-02-25 19:39:54,464][07057] Num frames 6600...
[2023-02-25 19:39:54,581][07057] Num frames 6700...
[2023-02-25 19:39:54,696][07057] Num frames 6800...
[2023-02-25 19:39:54,809][07057] Num frames 6900...
[2023-02-25 19:39:54,919][07057] Num frames 7000...
[2023-02-25 19:39:55,034][07057] Num frames 7100...
[2023-02-25 19:39:55,145][07057] Num frames 7200...
[2023-02-25 19:39:55,270][07057] Num frames 7300...
[2023-02-25 19:39:55,400][07057] Avg episode rewards: #0: 20.702, true rewards: #0: 9.202
[2023-02-25 19:39:55,402][07057] Avg episode reward: 20.702, avg true_objective: 9.202
[2023-02-25 19:39:55,447][07057] Num frames 7400...
[2023-02-25 19:39:55,563][07057] Num frames 7500...
[2023-02-25 19:39:55,674][07057] Num frames 7600...
[2023-02-25 19:39:55,791][07057] Num frames 7700...
[2023-02-25 19:39:55,903][07057] Num frames 7800...
[2023-02-25 19:39:56,021][07057] Num frames 7900...
[2023-02-25 19:39:56,133][07057] Num frames 8000...
[2023-02-25 19:39:56,257][07057] Num frames 8100...
[2023-02-25 19:39:56,380][07057] Num frames 8200...
[2023-02-25 19:39:56,494][07057] Num frames 8300...
[2023-02-25 19:39:56,613][07057] Num frames 8400...
[2023-02-25 19:39:56,723][07057] Num frames 8500...
[2023-02-25 19:39:56,833][07057] Num frames 8600...
[2023-02-25 19:39:56,955][07057] Num frames 8700...
[2023-02-25 19:39:57,066][07057] Num frames 8800...
[2023-02-25 19:39:57,192][07057] Num frames 8900...
[2023-02-25 19:39:57,320][07057] Num frames 9000...
[2023-02-25 19:39:57,442][07057] Num frames 9100...
[2023-02-25 19:39:57,553][07057] Num frames 9200...
[2023-02-25 19:39:57,677][07057] Avg episode rewards: #0: 23.732, true rewards: #0: 10.288
[2023-02-25 19:39:57,678][07057] Avg episode reward: 23.732, avg true_objective: 10.288
[2023-02-25 19:39:57,729][07057] Num frames 9300...
[2023-02-25 19:39:57,842][07057] Num frames 9400...
[2023-02-25 19:39:57,956][07057] Num frames 9500...
[2023-02-25 19:39:58,074][07057] Num frames 9600...
[2023-02-25 19:39:58,184][07057] Num frames 9700...
[2023-02-25 19:39:58,307][07057] Num frames 9800...
[2023-02-25 19:39:58,431][07057] Num frames 9900...
[2023-02-25 19:39:58,548][07057] Num frames 10000...
[2023-02-25 19:39:58,668][07057] Num frames 10100...
[2023-02-25 19:39:58,820][07057] Avg episode rewards: #0: 23.287, true rewards: #0: 10.187
[2023-02-25 19:39:58,822][07057] Avg episode reward: 23.287, avg true_objective: 10.187
[2023-02-25 19:41:00,043][07057] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
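The evaluation reward lines are running means over the episodes played so far, so the individual episode returns can be recovered as r_k = k*mean_k - (k-1)*mean_{k-1}. A worked sketch for the ten-episode push-to-hub run above, with the means copied from the log:

means = [2.560, 2.790, 7.300, 11.365, 12.812, 14.663, 20.997, 20.702, 23.732, 23.287]
prev = [0.0] + means[:-1]
episode_rewards = [k * m - (k - 1) * p for k, (p, m) in enumerate(zip(prev, means), start=1)]
print([round(r, 2) for r in episode_rewards])  # e.g. episode 2 scored 3.02, episode 3 scored 16.32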