diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,1120 @@
+[2023-02-26 12:42:36,040][00201] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-26 12:42:36,043][00201] Rollout worker 0 uses device cpu
+[2023-02-26 12:42:36,045][00201] Rollout worker 1 uses device cpu
+[2023-02-26 12:42:36,046][00201] Rollout worker 2 uses device cpu
+[2023-02-26 12:42:36,047][00201] Rollout worker 3 uses device cpu
+[2023-02-26 12:42:36,049][00201] Rollout worker 4 uses device cpu
+[2023-02-26 12:42:36,050][00201] Rollout worker 5 uses device cpu
+[2023-02-26 12:42:36,051][00201] Rollout worker 6 uses device cpu
+[2023-02-26 12:42:36,053][00201] Rollout worker 7 uses device cpu
+[2023-02-26 12:42:36,267][00201] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 12:42:36,270][00201] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-26 12:42:36,309][00201] Starting all processes...
+[2023-02-26 12:42:36,312][00201] Starting process learner_proc0
+[2023-02-26 12:42:36,390][00201] Starting all processes...
+[2023-02-26 12:42:36,404][00201] Starting process inference_proc0-0
+[2023-02-26 12:42:36,420][00201] Starting process rollout_proc0
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc1
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc2
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc3
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc4
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc5
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc6
+[2023-02-26 12:42:36,422][00201] Starting process rollout_proc7
+[2023-02-26 12:42:46,149][24343] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 12:42:46,150][24343] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-26 12:42:46,153][24356] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 12:42:46,153][24356] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-26 12:42:46,475][24359] Worker 1 uses CPU cores [1]
+[2023-02-26 12:42:46,490][24358] Worker 0 uses CPU cores [0]
+[2023-02-26 12:42:46,513][24365] Worker 4 uses CPU cores [0]
+[2023-02-26 12:42:46,561][24364] Worker 2 uses CPU cores [0]
+[2023-02-26 12:42:46,609][24368] Worker 6 uses CPU cores [0]
+[2023-02-26 12:42:46,760][24367] Worker 3 uses CPU cores [1]
+[2023-02-26 12:42:46,763][24366] Worker 5 uses CPU cores [1]
+[2023-02-26 12:42:46,766][24369] Worker 7 uses CPU cores [1]
+[2023-02-26 12:42:47,076][24343] Num visible devices: 1
+[2023-02-26 12:42:47,079][24343] Starting seed is not provided
+[2023-02-26 12:42:47,079][24343] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 12:42:47,079][24343] Initializing actor-critic model on device cuda:0
+[2023-02-26 12:42:47,080][24343] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 12:42:47,081][24356] Num visible devices: 1
+[2023-02-26 12:42:47,084][24343] RunningMeanStd input shape: (1,)
+[2023-02-26 12:42:47,103][24343] ConvEncoder: input_channels=3
+[2023-02-26 12:42:47,434][24343] Conv encoder output size: 512
+[2023-02-26 12:42:47,434][24343] Policy head output size: 512
+[2023-02-26 12:42:47,499][24343] Created Actor Critic model with architecture:
+[2023-02-26 12:42:47,499][24343] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-26 12:42:54,661][24343] Using optimizer
+[2023-02-26 12:42:54,663][24343] No checkpoints found
+[2023-02-26 12:42:54,663][24343] Did not load from checkpoint, starting from scratch!
+[2023-02-26 12:42:54,663][24343] Initialized policy 0 weights for model version 0
+[2023-02-26 12:42:54,668][24343] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-26 12:42:54,675][24343] LearnerWorker_p0 finished initialization!
+[2023-02-26 12:42:54,927][24356] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 12:42:54,928][24356] RunningMeanStd input shape: (1,)
+[2023-02-26 12:42:54,941][24356] ConvEncoder: input_channels=3
+[2023-02-26 12:42:55,041][24356] Conv encoder output size: 512
+[2023-02-26 12:42:55,041][24356] Policy head output size: 512
+[2023-02-26 12:42:56,090][00201] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 12:42:56,257][00201] Heartbeat connected on Batcher_0
+[2023-02-26 12:42:56,264][00201] Heartbeat connected on LearnerWorker_p0
+[2023-02-26 12:42:56,277][00201] Heartbeat connected on RolloutWorker_w0
+[2023-02-26 12:42:56,282][00201] Heartbeat connected on RolloutWorker_w1
+[2023-02-26 12:42:56,288][00201] Heartbeat connected on RolloutWorker_w2
+[2023-02-26 12:42:56,292][00201] Heartbeat connected on RolloutWorker_w3
+[2023-02-26 12:42:56,294][00201] Heartbeat connected on RolloutWorker_w4
+[2023-02-26 12:42:56,303][00201] Heartbeat connected on RolloutWorker_w5
+[2023-02-26 12:42:56,304][00201] Heartbeat connected on RolloutWorker_w6
+[2023-02-26 12:42:56,308][00201] Heartbeat connected on RolloutWorker_w7
+[2023-02-26 12:42:57,361][00201] Inference worker 0-0 is ready!
+[2023-02-26 12:42:57,362][00201] All inference workers are ready! Signal rollout workers to start!
+[2023-02-26 12:42:57,369][00201] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-26 12:42:57,475][24365] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,483][24368] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,486][24364] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,525][24358] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,529][24366] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,537][24369] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,540][24367] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:57,540][24359] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 12:42:58,071][24366] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:58,459][24369] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:58,804][24369] Decorrelating experience for 32 frames...
+[2023-02-26 12:42:58,946][24368] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:58,951][24364] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:58,954][24365] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:58,968][24358] Decorrelating experience for 0 frames...
+[2023-02-26 12:42:59,571][24367] Decorrelating experience for 0 frames...
+[2023-02-26 12:43:00,776][24367] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:01,092][00201] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 12:43:01,380][24365] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:01,386][24364] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:01,532][24358] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:02,205][24369] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:02,438][24368] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:03,097][24364] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:03,100][24358] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:03,629][24359] Decorrelating experience for 0 frames...
+[2023-02-26 12:43:03,722][24366] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:04,160][24369] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:04,703][24359] Decorrelating experience for 32 frames...
+[2023-02-26 12:43:05,000][24366] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:05,460][24364] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:05,848][24366] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:06,090][00201] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 12:43:06,296][24359] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:06,296][24365] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:06,405][24358] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:07,039][24359] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:07,603][24368] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:08,424][24365] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:08,444][24367] Decorrelating experience for 64 frames...
+[2023-02-26 12:43:08,839][24368] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:09,898][24367] Decorrelating experience for 96 frames...
+[2023-02-26 12:43:11,090][00201] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 17.9. Samples: 268. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-26 12:43:11,092][00201] Avg episode reward: [(0, '1.683')]
+[2023-02-26 12:43:11,843][24343] Signal inference workers to stop experience collection...
+[2023-02-26 12:43:11,866][24356] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-26 12:43:14,387][24343] Signal inference workers to resume experience collection...
+[2023-02-26 12:43:14,387][24356] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-26 12:43:16,090][00201] Fps is (10 sec: 409.6, 60 sec: 204.8, 300 sec: 204.8). Total num frames: 4096. Throughput: 0: 145.8. Samples: 2916. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-02-26 12:43:16,093][00201] Avg episode reward: [(0, '2.607')]
+[2023-02-26 12:43:21,090][00201] Fps is (10 sec: 2867.2, 60 sec: 1146.9, 300 sec: 1146.9). Total num frames: 28672. Throughput: 0: 245.0. Samples: 6126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:43:21,095][00201] Avg episode reward: [(0, '3.799')]
+[2023-02-26 12:43:24,206][24356] Updated weights for policy 0, policy_version 10 (0.0387)
+[2023-02-26 12:43:26,090][00201] Fps is (10 sec: 4096.0, 60 sec: 1501.9, 300 sec: 1501.9). Total num frames: 45056. Throughput: 0: 360.3. Samples: 10810. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:43:26,095][00201] Avg episode reward: [(0, '4.245')]
+[2023-02-26 12:43:31,090][00201] Fps is (10 sec: 3276.8, 60 sec: 1755.4, 300 sec: 1755.4). Total num frames: 61440. Throughput: 0: 453.4. Samples: 15868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:43:31,092][00201] Avg episode reward: [(0, '4.428')]
+[2023-02-26 12:43:35,181][24356] Updated weights for policy 0, policy_version 20 (0.0017)
+[2023-02-26 12:43:36,098][00201] Fps is (10 sec: 4092.6, 60 sec: 2150.0, 300 sec: 2150.0). Total num frames: 86016. Throughput: 0: 479.2. Samples: 19172. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:43:36,104][00201] Avg episode reward: [(0, '4.365')]
+[2023-02-26 12:43:41,095][00201] Fps is (10 sec: 4503.3, 60 sec: 2366.3, 300 sec: 2366.3). Total num frames: 106496. Throughput: 0: 580.4. Samples: 26120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:43:41,102][00201] Avg episode reward: [(0, '4.322')]
+[2023-02-26 12:43:41,111][24343] Saving new best policy, reward=4.322!
+[2023-02-26 12:43:46,091][00201] Fps is (10 sec: 3279.1, 60 sec: 2375.6, 300 sec: 2375.6). Total num frames: 118784. Throughput: 0: 669.3. Samples: 30118. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:43:46,094][00201] Avg episode reward: [(0, '4.508')]
+[2023-02-26 12:43:46,096][24343] Saving new best policy, reward=4.508!
+[2023-02-26 12:43:46,885][24356] Updated weights for policy 0, policy_version 30 (0.0012)
+[2023-02-26 12:43:51,090][00201] Fps is (10 sec: 2868.7, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 135168. Throughput: 0: 718.3. Samples: 32324. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:43:51,099][00201] Avg episode reward: [(0, '4.685')]
+[2023-02-26 12:43:51,112][24343] Saving new best policy, reward=4.685!
+[2023-02-26 12:43:56,090][00201] Fps is (10 sec: 4096.5, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 159744. Throughput: 0: 867.4. Samples: 39300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:43:56,098][00201] Avg episode reward: [(0, '4.444')]
+[2023-02-26 12:43:56,431][24356] Updated weights for policy 0, policy_version 40 (0.0026)
+[2023-02-26 12:44:01,091][00201] Fps is (10 sec: 4505.1, 60 sec: 3003.8, 300 sec: 2772.6). Total num frames: 180224. Throughput: 0: 951.1. Samples: 45718. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-26 12:44:01,096][00201] Avg episode reward: [(0, '4.292')]
+[2023-02-26 12:44:06,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 2808.7). Total num frames: 196608. Throughput: 0: 928.7. Samples: 47916. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:44:06,095][00201] Avg episode reward: [(0, '4.379')]
+[2023-02-26 12:44:08,431][24356] Updated weights for policy 0, policy_version 50 (0.0019)
+[2023-02-26 12:44:11,090][00201] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 2839.9). Total num frames: 212992. Throughput: 0: 928.8. Samples: 52604. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:44:11,095][00201] Avg episode reward: [(0, '4.443')]
+[2023-02-26 12:44:16,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 2918.4). Total num frames: 233472. Throughput: 0: 958.7. Samples: 59008. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:44:16,094][00201] Avg episode reward: [(0, '4.272')]
+[2023-02-26 12:44:18,114][24356] Updated weights for policy 0, policy_version 60 (0.0018)
+[2023-02-26 12:44:21,096][00201] Fps is (10 sec: 4093.4, 60 sec: 3754.3, 300 sec: 2987.4). Total num frames: 253952. Throughput: 0: 961.6. Samples: 62440. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:44:21,099][00201] Avg episode reward: [(0, '4.366')]
+[2023-02-26 12:44:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 2958.2). Total num frames: 266240. Throughput: 0: 908.9. Samples: 67014. Policy #0 lag: (min: 0.0, avg: 0.8, max: 1.0)
+[2023-02-26 12:44:26,093][00201] Avg episode reward: [(0, '4.503')]
+[2023-02-26 12:44:31,019][24356] Updated weights for policy 0, policy_version 70 (0.0012)
+[2023-02-26 12:44:31,090][00201] Fps is (10 sec: 3278.9, 60 sec: 3754.7, 300 sec: 3018.1). Total num frames: 286720. Throughput: 0: 923.8. Samples: 71690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:44:31,093][00201] Avg episode reward: [(0, '4.614')]
+[2023-02-26 12:44:31,103][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth...
+[2023-02-26 12:44:36,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3686.9, 300 sec: 3072.0). Total num frames: 307200. Throughput: 0: 949.2. Samples: 75036. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:44:36,092][00201] Avg episode reward: [(0, '4.606')]
+[2023-02-26 12:44:40,560][24356] Updated weights for policy 0, policy_version 80 (0.0015)
+[2023-02-26 12:44:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3686.7, 300 sec: 3120.8). Total num frames: 327680. Throughput: 0: 945.2. Samples: 81834. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:44:41,095][00201] Avg episode reward: [(0, '4.403')]
+[2023-02-26 12:44:46,093][00201] Fps is (10 sec: 3275.7, 60 sec: 3686.3, 300 sec: 3090.5). Total num frames: 339968. Throughput: 0: 896.4. Samples: 86058. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:44:46,098][00201] Avg episode reward: [(0, '4.325')]
+[2023-02-26 12:44:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3134.3). Total num frames: 360448. Throughput: 0: 896.0. Samples: 88236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:44:51,093][00201] Avg episode reward: [(0, '4.315')]
+[2023-02-26 12:44:52,553][24356] Updated weights for policy 0, policy_version 90 (0.0024)
+[2023-02-26 12:44:56,090][00201] Fps is (10 sec: 4507.1, 60 sec: 3754.7, 300 sec: 3208.5). Total num frames: 385024. Throughput: 0: 942.7. Samples: 95026. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:44:56,092][00201] Avg episode reward: [(0, '4.617')]
+[2023-02-26 12:45:01,092][00201] Fps is (10 sec: 4504.6, 60 sec: 3754.6, 300 sec: 3244.0). Total num frames: 405504. Throughput: 0: 942.6. Samples: 101428. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:45:01,097][00201] Avg episode reward: [(0, '4.889')]
+[2023-02-26 12:45:01,111][24343] Saving new best policy, reward=4.889!
+[2023-02-26 12:45:02,623][24356] Updated weights for policy 0, policy_version 100 (0.0013)
+[2023-02-26 12:45:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3213.8). Total num frames: 417792. Throughput: 0: 912.8. Samples: 103508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:45:06,093][00201] Avg episode reward: [(0, '4.942')]
+[2023-02-26 12:45:06,099][24343] Saving new best policy, reward=4.942!
+[2023-02-26 12:45:11,090][00201] Fps is (10 sec: 2867.8, 60 sec: 3686.4, 300 sec: 3216.1). Total num frames: 434176. Throughput: 0: 910.7. Samples: 107994. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:45:11,096][00201] Avg episode reward: [(0, '4.482')]
+[2023-02-26 12:45:13,809][24356] Updated weights for policy 0, policy_version 110 (0.0033)
+[2023-02-26 12:45:16,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 458752. Throughput: 0: 964.9. Samples: 115112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:45:16,093][00201] Avg episode reward: [(0, '4.462')]
+[2023-02-26 12:45:21,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3755.1, 300 sec: 3305.0). Total num frames: 479232. Throughput: 0: 965.5. Samples: 118482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:45:21,094][00201] Avg episode reward: [(0, '4.668')]
+[2023-02-26 12:45:24,637][24356] Updated weights for policy 0, policy_version 120 (0.0021)
+[2023-02-26 12:45:26,093][00201] Fps is (10 sec: 3685.1, 60 sec: 3822.7, 300 sec: 3304.0). Total num frames: 495616. Throughput: 0: 922.4. Samples: 123344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:45:26,096][00201] Avg episode reward: [(0, '4.616')]
+[2023-02-26 12:45:31,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3303.2). Total num frames: 512000. Throughput: 0: 942.2. Samples: 128452. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:45:31,093][00201] Avg episode reward: [(0, '4.554')]
+[2023-02-26 12:45:34,790][24356] Updated weights for policy 0, policy_version 130 (0.0023)
+[2023-02-26 12:45:36,090][00201] Fps is (10 sec: 4097.4, 60 sec: 3822.9, 300 sec: 3353.6). Total num frames: 536576. Throughput: 0: 972.0. Samples: 131978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:45:36,095][00201] Avg episode reward: [(0, '4.781')]
+[2023-02-26 12:45:41,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3376.1). Total num frames: 557056. Throughput: 0: 972.0. Samples: 138766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:45:41,097][00201] Avg episode reward: [(0, '4.886')]
+[2023-02-26 12:45:46,090][00201] Fps is (10 sec: 3276.7, 60 sec: 3823.1, 300 sec: 3349.1). Total num frames: 569344. Throughput: 0: 928.3. Samples: 143200. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:45:46,095][00201] Avg episode reward: [(0, '4.879')]
+[2023-02-26 12:45:46,286][24356] Updated weights for policy 0, policy_version 140 (0.0024)
+[2023-02-26 12:45:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3370.4). Total num frames: 589824. Throughput: 0: 933.7. Samples: 145526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:45:51,097][00201] Avg episode reward: [(0, '4.749')]
+[2023-02-26 12:45:55,891][24356] Updated weights for policy 0, policy_version 150 (0.0026)
+[2023-02-26 12:45:56,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3413.3). Total num frames: 614400. Throughput: 0: 987.1. Samples: 152412. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:45:56,096][00201] Avg episode reward: [(0, '4.501')]
+[2023-02-26 12:46:01,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3823.1, 300 sec: 3431.8). Total num frames: 634880. Throughput: 0: 970.9. Samples: 158802. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:46:01,092][00201] Avg episode reward: [(0, '4.435')]
+[2023-02-26 12:46:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3406.1). Total num frames: 647168. Throughput: 0: 946.5. Samples: 161074. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:46:06,095][00201] Avg episode reward: [(0, '4.499')]
+[2023-02-26 12:46:08,001][24356] Updated weights for policy 0, policy_version 160 (0.0018)
+[2023-02-26 12:46:11,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3423.8). Total num frames: 667648. Throughput: 0: 942.7. Samples: 165762. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:46:11,096][00201] Avg episode reward: [(0, '4.593')]
+[2023-02-26 12:46:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3461.1). Total num frames: 692224. Throughput: 0: 987.8. Samples: 172904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:46:16,097][00201] Avg episode reward: [(0, '4.481')]
+[2023-02-26 12:46:16,889][24356] Updated weights for policy 0, policy_version 170 (0.0013)
+[2023-02-26 12:46:21,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3456.6). Total num frames: 708608. Throughput: 0: 986.9. Samples: 176390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:46:21,097][00201] Avg episode reward: [(0, '4.433')]
+[2023-02-26 12:46:26,091][00201] Fps is (10 sec: 3276.4, 60 sec: 3823.1, 300 sec: 3452.3). Total num frames: 724992. Throughput: 0: 938.9. Samples: 181016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:46:26,094][00201] Avg episode reward: [(0, '4.556')]
+[2023-02-26 12:46:29,418][24356] Updated weights for policy 0, policy_version 180 (0.0030)
+[2023-02-26 12:46:31,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3448.3). Total num frames: 741376. Throughput: 0: 955.3. Samples: 186190. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-26 12:46:31,092][00201] Avg episode reward: [(0, '4.680')]
+[2023-02-26 12:46:31,111][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000181_741376.pth...
+[2023-02-26 12:46:36,090][00201] Fps is (10 sec: 4096.5, 60 sec: 3822.9, 300 sec: 3481.6). Total num frames: 765952. Throughput: 0: 979.9. Samples: 189620. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-26 12:46:36,097][00201] Avg episode reward: [(0, '4.786')]
+[2023-02-26 12:46:38,030][24356] Updated weights for policy 0, policy_version 190 (0.0019)
+[2023-02-26 12:46:41,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3495.3). Total num frames: 786432. Throughput: 0: 979.8. Samples: 196502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:46:41,097][00201] Avg episode reward: [(0, '4.930')]
+[2023-02-26 12:46:46,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3490.5). Total num frames: 802816. Throughput: 0: 938.2. Samples: 201020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:46:46,097][00201] Avg episode reward: [(0, '4.912')]
+[2023-02-26 12:46:50,363][24356] Updated weights for policy 0, policy_version 200 (0.0016)
+[2023-02-26 12:46:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3486.0). Total num frames: 819200. Throughput: 0: 937.3. Samples: 203252. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:46:51,097][00201] Avg episode reward: [(0, '4.917')]
+[2023-02-26 12:46:56,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3515.7). Total num frames: 843776. Throughput: 0: 989.8. Samples: 210302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:46:56,092][00201] Avg episode reward: [(0, '5.270')]
+[2023-02-26 12:46:56,096][24343] Saving new best policy, reward=5.270!
+[2023-02-26 12:46:58,915][24356] Updated weights for policy 0, policy_version 210 (0.0015)
+[2023-02-26 12:47:01,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3527.6). Total num frames: 864256. Throughput: 0: 970.4. Samples: 216574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:47:01,097][00201] Avg episode reward: [(0, '5.405')]
+[2023-02-26 12:47:01,108][24343] Saving new best policy, reward=5.405!
+[2023-02-26 12:47:06,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3522.6). Total num frames: 880640. Throughput: 0: 941.7. Samples: 218768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:47:06,096][00201] Avg episode reward: [(0, '5.338')]
+[2023-02-26 12:47:11,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3517.7). Total num frames: 897024. Throughput: 0: 948.5. Samples: 223696. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:47:11,092][00201] Avg episode reward: [(0, '5.205')]
+[2023-02-26 12:47:11,225][24356] Updated weights for policy 0, policy_version 220 (0.0023)
+[2023-02-26 12:47:16,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3544.6). Total num frames: 921600. Throughput: 0: 993.4. Samples: 230894. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:47:16,093][00201] Avg episode reward: [(0, '5.172')]
+[2023-02-26 12:47:20,360][24356] Updated weights for policy 0, policy_version 230 (0.0015)
+[2023-02-26 12:47:21,095][00201] Fps is (10 sec: 4503.3, 60 sec: 3890.9, 300 sec: 3555.0). Total num frames: 942080. Throughput: 0: 995.7. Samples: 234430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:47:21,098][00201] Avg episode reward: [(0, '5.058')]
+[2023-02-26 12:47:26,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3549.9). Total num frames: 958464. Throughput: 0: 946.4. Samples: 239090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:47:26,093][00201] Avg episode reward: [(0, '5.054')]
+[2023-02-26 12:47:31,090][00201] Fps is (10 sec: 3688.3, 60 sec: 3959.5, 300 sec: 3559.8). Total num frames: 978944. Throughput: 0: 966.5. Samples: 244514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:47:31,093][00201] Avg episode reward: [(0, '5.275')]
+[2023-02-26 12:47:31,864][24356] Updated weights for policy 0, policy_version 240 (0.0029)
+[2023-02-26 12:47:36,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3569.4). Total num frames: 999424. Throughput: 0: 998.1. Samples: 248166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:47:36,092][00201] Avg episode reward: [(0, '5.703')]
+[2023-02-26 12:47:36,100][24343] Saving new best policy, reward=5.703!
+[2023-02-26 12:47:41,090][00201] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3578.6). Total num frames: 1019904. Throughput: 0: 991.1. Samples: 254900. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-26 12:47:41,094][00201] Avg episode reward: [(0, '5.648')]
+[2023-02-26 12:47:41,755][24356] Updated weights for policy 0, policy_version 250 (0.0027)
+[2023-02-26 12:47:46,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3573.4). Total num frames: 1036288. Throughput: 0: 952.9. Samples: 259456. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:47:46,094][00201] Avg episode reward: [(0, '5.524')]
+[2023-02-26 12:47:51,090][00201] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3582.3). Total num frames: 1056768. Throughput: 0: 956.1. Samples: 261794. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:47:51,093][00201] Avg episode reward: [(0, '5.480')]
+[2023-02-26 12:47:52,637][24356] Updated weights for policy 0, policy_version 260 (0.0024)
+[2023-02-26 12:47:56,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3665.6). Total num frames: 1081344. Throughput: 0: 1003.7. Samples: 268862. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:47:56,092][00201] Avg episode reward: [(0, '5.586')]
+[2023-02-26 12:48:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 1097728. Throughput: 0: 980.8. Samples: 275030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:48:01,095][00201] Avg episode reward: [(0, '5.659')]
+[2023-02-26 12:48:03,089][24356] Updated weights for policy 0, policy_version 270 (0.0018)
+[2023-02-26 12:48:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 1114112. Throughput: 0: 951.7. Samples: 277252. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-26 12:48:06,097][00201] Avg episode reward: [(0, '5.454')]
+[2023-02-26 12:48:11,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1130496. Throughput: 0: 957.8. Samples: 282190. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:48:11,093][00201] Avg episode reward: [(0, '5.589')]
+[2023-02-26 12:48:13,939][24356] Updated weights for policy 0, policy_version 280 (0.0012)
+[2023-02-26 12:48:16,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1155072. Throughput: 0: 990.4. Samples: 289084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:48:16,093][00201] Avg episode reward: [(0, '5.466')]
+[2023-02-26 12:48:21,091][00201] Fps is (10 sec: 4505.1, 60 sec: 3891.5, 300 sec: 3832.2). Total num frames: 1175552. Throughput: 0: 983.9. Samples: 292442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:48:21,094][00201] Avg episode reward: [(0, '5.554')]
+[2023-02-26 12:48:25,109][24356] Updated weights for policy 0, policy_version 290 (0.0011)
+[2023-02-26 12:48:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1187840. Throughput: 0: 932.9. Samples: 296880. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:48:26,098][00201] Avg episode reward: [(0, '6.001')]
+[2023-02-26 12:48:26,103][24343] Saving new best policy, reward=6.001!
+[2023-02-26 12:48:31,090][00201] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3804.5). Total num frames: 1208320. Throughput: 0: 945.6. Samples: 302008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:48:31,096][00201] Avg episode reward: [(0, '6.429')]
+[2023-02-26 12:48:31,106][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000295_1208320.pth...
+[2023-02-26 12:48:31,245][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth
+[2023-02-26 12:48:31,256][24343] Saving new best policy, reward=6.429!
+[2023-02-26 12:48:35,443][24356] Updated weights for policy 0, policy_version 300 (0.0018)
+[2023-02-26 12:48:36,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3804.5). Total num frames: 1228800. Throughput: 0: 967.2. Samples: 305318. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:48:36,092][00201] Avg episode reward: [(0, '6.914')]
+[2023-02-26 12:48:36,097][24343] Saving new best policy, reward=6.914!
+[2023-02-26 12:48:41,090][00201] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1245184. Throughput: 0: 949.8. Samples: 311602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:48:41,093][00201] Avg episode reward: [(0, '7.290')]
+[2023-02-26 12:48:41,116][24343] Saving new best policy, reward=7.290!
+[2023-02-26 12:48:46,091][00201] Fps is (10 sec: 3276.5, 60 sec: 3754.6, 300 sec: 3818.3). Total num frames: 1261568. Throughput: 0: 903.1. Samples: 315670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:48:46,094][00201] Avg episode reward: [(0, '7.449')]
+[2023-02-26 12:48:46,099][24343] Saving new best policy, reward=7.449!
+[2023-02-26 12:48:48,533][24356] Updated weights for policy 0, policy_version 310 (0.0025)
+[2023-02-26 12:48:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3790.5). Total num frames: 1277952. Throughput: 0: 904.2. Samples: 317940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:48:51,098][00201] Avg episode reward: [(0, '7.467')]
+[2023-02-26 12:48:51,198][24343] Saving new best policy, reward=7.467!
+[2023-02-26 12:48:56,090][00201] Fps is (10 sec: 4096.3, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1302528. Throughput: 0: 946.4. Samples: 324776. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:48:56,098][00201] Avg episode reward: [(0, '7.215')]
+[2023-02-26 12:48:57,421][24356] Updated weights for policy 0, policy_version 320 (0.0018)
+[2023-02-26 12:49:01,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1323008. Throughput: 0: 927.0. Samples: 330800. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:49:01,095][00201] Avg episode reward: [(0, '7.066')]
+[2023-02-26 12:49:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1335296. Throughput: 0: 901.2. Samples: 332996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:49:06,099][00201] Avg episode reward: [(0, '7.539')]
+[2023-02-26 12:49:06,101][24343] Saving new best policy, reward=7.539!
+[2023-02-26 12:49:09,919][24356] Updated weights for policy 0, policy_version 330 (0.0013)
+[2023-02-26 12:49:11,090][00201] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 1355776. Throughput: 0: 906.7. Samples: 337682. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:49:11,093][00201] Avg episode reward: [(0, '8.068')]
+[2023-02-26 12:49:11,106][24343] Saving new best policy, reward=8.068!
+[2023-02-26 12:49:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3818.4). Total num frames: 1380352. Throughput: 0: 948.4. Samples: 344684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:49:16,097][00201] Avg episode reward: [(0, '8.916')]
+[2023-02-26 12:49:16,100][24343] Saving new best policy, reward=8.916!
+[2023-02-26 12:49:18,954][24356] Updated weights for policy 0, policy_version 340 (0.0019)
+[2023-02-26 12:49:21,091][00201] Fps is (10 sec: 4095.7, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 1396736. Throughput: 0: 954.2. Samples: 348256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:49:21,097][00201] Avg episode reward: [(0, '9.606')]
+[2023-02-26 12:49:21,109][24343] Saving new best policy, reward=9.606!
+[2023-02-26 12:49:26,090][00201] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1409024. Throughput: 0: 909.5. Samples: 352530. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:49:26,096][00201] Avg episode reward: [(0, '9.575')]
+[2023-02-26 12:49:31,090][00201] Fps is (10 sec: 3277.0, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 1429504. Throughput: 0: 938.0. Samples: 357880. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:49:31,093][00201] Avg episode reward: [(0, '8.935')]
+[2023-02-26 12:49:31,220][24356] Updated weights for policy 0, policy_version 350 (0.0026)
+[2023-02-26 12:49:36,091][00201] Fps is (10 sec: 4505.3, 60 sec: 3754.6, 300 sec: 3818.3). Total num frames: 1454080. Throughput: 0: 966.7. Samples: 361444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:49:36,093][00201] Avg episode reward: [(0, '8.511')]
+[2023-02-26 12:49:40,838][24356] Updated weights for policy 0, policy_version 360 (0.0021)
+[2023-02-26 12:49:41,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1474560. Throughput: 0: 962.1. Samples: 368072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:49:41,096][00201] Avg episode reward: [(0, '8.473')]
+[2023-02-26 12:49:46,090][00201] Fps is (10 sec: 3277.0, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 1486848. Throughput: 0: 928.5. Samples: 372584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:49:46,096][00201] Avg episode reward: [(0, '8.981')]
+[2023-02-26 12:49:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 1507328. Throughput: 0: 933.8. Samples: 375018.
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 12:49:51,096][00201] Avg episode reward: [(0, '9.633')] +[2023-02-26 12:49:51,109][24343] Saving new best policy, reward=9.633! +[2023-02-26 12:49:52,157][24356] Updated weights for policy 0, policy_version 370 (0.0019) +[2023-02-26 12:49:56,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1531904. Throughput: 0: 988.4. Samples: 382160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:49:56,092][00201] Avg episode reward: [(0, '10.397')] +[2023-02-26 12:49:56,098][24343] Saving new best policy, reward=10.397! +[2023-02-26 12:50:01,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1552384. Throughput: 0: 971.2. Samples: 388388. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:50:01,093][00201] Avg episode reward: [(0, '10.414')] +[2023-02-26 12:50:01,109][24343] Saving new best policy, reward=10.414! +[2023-02-26 12:50:02,164][24356] Updated weights for policy 0, policy_version 380 (0.0018) +[2023-02-26 12:50:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1564672. Throughput: 0: 940.8. Samples: 390590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:50:06,099][00201] Avg episode reward: [(0, '10.471')] +[2023-02-26 12:50:06,114][24343] Saving new best policy, reward=10.471! +[2023-02-26 12:50:11,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 1585152. Throughput: 0: 958.7. Samples: 395670. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:50:11,093][00201] Avg episode reward: [(0, '11.439')] +[2023-02-26 12:50:11,147][24343] Saving new best policy, reward=11.439! +[2023-02-26 12:50:12,856][24356] Updated weights for policy 0, policy_version 390 (0.0026) +[2023-02-26 12:50:16,090][00201] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1609728. Throughput: 0: 1001.7. 
Samples: 402956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:50:16,098][00201] Avg episode reward: [(0, '12.225')] +[2023-02-26 12:50:16,102][24343] Saving new best policy, reward=12.225! +[2023-02-26 12:50:21,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.3, 300 sec: 3846.1). Total num frames: 1630208. Throughput: 0: 996.5. Samples: 406286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:50:21,093][00201] Avg episode reward: [(0, '12.169')] +[2023-02-26 12:50:23,493][24356] Updated weights for policy 0, policy_version 400 (0.0012) +[2023-02-26 12:50:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1642496. Throughput: 0: 947.1. Samples: 410692. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:50:26,094][00201] Avg episode reward: [(0, '11.625')] +[2023-02-26 12:50:31,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1662976. Throughput: 0: 973.0. Samples: 416368. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:50:31,095][00201] Avg episode reward: [(0, '11.066')] +[2023-02-26 12:50:31,114][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000407_1667072.pth... +[2023-02-26 12:50:31,230][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000181_741376.pth +[2023-02-26 12:50:33,710][24356] Updated weights for policy 0, policy_version 410 (0.0032) +[2023-02-26 12:50:36,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1687552. Throughput: 0: 999.2. Samples: 419982. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:50:36,096][00201] Avg episode reward: [(0, '12.225')] +[2023-02-26 12:50:41,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1708032. Throughput: 0: 986.0. Samples: 426532. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:50:41,093][00201] Avg episode reward: [(0, '12.354')] +[2023-02-26 12:50:41,110][24343] Saving new best policy, reward=12.354! +[2023-02-26 12:50:45,114][24356] Updated weights for policy 0, policy_version 420 (0.0019) +[2023-02-26 12:50:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1720320. Throughput: 0: 946.7. Samples: 430990. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-26 12:50:46,096][00201] Avg episode reward: [(0, '12.616')] +[2023-02-26 12:50:46,099][24343] Saving new best policy, reward=12.616! +[2023-02-26 12:50:51,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1740800. Throughput: 0: 949.8. Samples: 433330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 12:50:51,093][00201] Avg episode reward: [(0, '12.641')] +[2023-02-26 12:50:51,107][24343] Saving new best policy, reward=12.641! +[2023-02-26 12:50:54,829][24356] Updated weights for policy 0, policy_version 430 (0.0028) +[2023-02-26 12:50:56,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1765376. Throughput: 0: 994.2. Samples: 440410. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 12:50:56,092][00201] Avg episode reward: [(0, '12.588')] +[2023-02-26 12:51:01,093][00201] Fps is (10 sec: 4094.7, 60 sec: 3822.7, 300 sec: 3846.0). Total num frames: 1781760. Throughput: 0: 967.3. Samples: 446486. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-26 12:51:01,096][00201] Avg episode reward: [(0, '13.159')] +[2023-02-26 12:51:01,103][24343] Saving new best policy, reward=13.159! +[2023-02-26 12:51:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1798144. Throughput: 0: 942.6. Samples: 448704. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:51:06,098][00201] Avg episode reward: [(0, '13.050')] +[2023-02-26 12:51:06,343][24356] Updated weights for policy 0, policy_version 440 (0.0018) +[2023-02-26 12:51:11,090][00201] Fps is (10 sec: 3687.5, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 1818624. Throughput: 0: 958.0. Samples: 453804. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:51:11,093][00201] Avg episode reward: [(0, '14.424')] +[2023-02-26 12:51:11,105][24343] Saving new best policy, reward=14.424! +[2023-02-26 12:51:15,951][24356] Updated weights for policy 0, policy_version 450 (0.0021) +[2023-02-26 12:51:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1843200. Throughput: 0: 986.1. Samples: 460742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:51:16,092][00201] Avg episode reward: [(0, '14.883')] +[2023-02-26 12:51:16,095][24343] Saving new best policy, reward=14.883! +[2023-02-26 12:51:21,091][00201] Fps is (10 sec: 4095.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1859584. Throughput: 0: 980.6. Samples: 464112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:51:21,095][00201] Avg episode reward: [(0, '14.976')] +[2023-02-26 12:51:21,109][24343] Saving new best policy, reward=14.976! +[2023-02-26 12:51:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1875968. Throughput: 0: 932.0. Samples: 468472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:51:26,095][00201] Avg episode reward: [(0, '14.139')] +[2023-02-26 12:51:28,197][24356] Updated weights for policy 0, policy_version 460 (0.0030) +[2023-02-26 12:51:31,090][00201] Fps is (10 sec: 3686.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1896448. Throughput: 0: 954.6. Samples: 473946. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 12:51:31,094][00201] Avg episode reward: [(0, '14.255')] +[2023-02-26 12:51:36,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 1916928. Throughput: 0: 980.6. Samples: 477458. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-26 12:51:36,093][00201] Avg episode reward: [(0, '12.958')] +[2023-02-26 12:51:37,019][24356] Updated weights for policy 0, policy_version 470 (0.0017) +[2023-02-26 12:51:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1937408. Throughput: 0: 967.0. Samples: 483924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 12:51:41,094][00201] Avg episode reward: [(0, '13.097')] +[2023-02-26 12:51:46,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1953792. Throughput: 0: 930.8. Samples: 488368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:51:46,099][00201] Avg episode reward: [(0, '14.494')] +[2023-02-26 12:51:49,288][24356] Updated weights for policy 0, policy_version 480 (0.0012) +[2023-02-26 12:51:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1974272. Throughput: 0: 936.8. Samples: 490858. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:51:51,094][00201] Avg episode reward: [(0, '14.923')] +[2023-02-26 12:51:56,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1998848. Throughput: 0: 983.9. Samples: 498078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:51:56,092][00201] Avg episode reward: [(0, '16.928')] +[2023-02-26 12:51:56,099][24343] Saving new best policy, reward=16.928! +[2023-02-26 12:51:57,816][24356] Updated weights for policy 0, policy_version 490 (0.0027) +[2023-02-26 12:52:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3846.1). Total num frames: 2015232. Throughput: 0: 964.1. Samples: 504126. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:52:01,099][00201] Avg episode reward: [(0, '16.755')] +[2023-02-26 12:52:06,091][00201] Fps is (10 sec: 3276.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2031616. Throughput: 0: 938.7. Samples: 506352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 12:52:06,096][00201] Avg episode reward: [(0, '18.289')] +[2023-02-26 12:52:06,101][24343] Saving new best policy, reward=18.289! +[2023-02-26 12:52:10,230][24356] Updated weights for policy 0, policy_version 500 (0.0012) +[2023-02-26 12:52:11,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2052096. Throughput: 0: 955.7. Samples: 511478. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:52:11,096][00201] Avg episode reward: [(0, '18.716')] +[2023-02-26 12:52:11,110][24343] Saving new best policy, reward=18.716! +[2023-02-26 12:52:16,090][00201] Fps is (10 sec: 4096.3, 60 sec: 3822.9, 300 sec: 3832.3). Total num frames: 2072576. Throughput: 0: 993.0. Samples: 518630. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:52:16,097][00201] Avg episode reward: [(0, '19.366')] +[2023-02-26 12:52:16,100][24343] Saving new best policy, reward=19.366! +[2023-02-26 12:52:19,758][24356] Updated weights for policy 0, policy_version 510 (0.0023) +[2023-02-26 12:52:21,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 2088960. Throughput: 0: 987.0. Samples: 521874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 12:52:21,096][00201] Avg episode reward: [(0, '19.635')] +[2023-02-26 12:52:21,121][24343] Saving new best policy, reward=19.635! +[2023-02-26 12:52:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2105344. Throughput: 0: 941.6. Samples: 526298. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:52:26,093][00201] Avg episode reward: [(0, '19.916')] +[2023-02-26 12:52:26,099][24343] Saving new best policy, reward=19.916! +[2023-02-26 12:52:31,084][24356] Updated weights for policy 0, policy_version 520 (0.0019) +[2023-02-26 12:52:31,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2129920. Throughput: 0: 970.4. Samples: 532036. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:52:31,093][00201] Avg episode reward: [(0, '19.406')] +[2023-02-26 12:52:31,112][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000520_2129920.pth... +[2023-02-26 12:52:31,229][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000295_1208320.pth +[2023-02-26 12:52:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2150400. Throughput: 0: 994.1. Samples: 535592. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:52:36,093][00201] Avg episode reward: [(0, '18.601')] +[2023-02-26 12:52:40,758][24356] Updated weights for policy 0, policy_version 530 (0.0014) +[2023-02-26 12:52:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2170880. Throughput: 0: 977.3. Samples: 542058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:52:41,093][00201] Avg episode reward: [(0, '19.673')] +[2023-02-26 12:52:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2183168. Throughput: 0: 945.1. Samples: 546656. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:52:46,099][00201] Avg episode reward: [(0, '17.813')] +[2023-02-26 12:52:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2207744. Throughput: 0: 955.3. Samples: 549342. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:52:51,099][00201] Avg episode reward: [(0, '18.421')] +[2023-02-26 12:52:51,873][24356] Updated weights for policy 0, policy_version 540 (0.0015) +[2023-02-26 12:52:56,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2228224. Throughput: 0: 1002.8. Samples: 556606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:52:56,095][00201] Avg episode reward: [(0, '19.192')] +[2023-02-26 12:53:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2248704. Throughput: 0: 975.4. Samples: 562524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:01,105][00201] Avg episode reward: [(0, '19.668')] +[2023-02-26 12:53:02,082][24356] Updated weights for policy 0, policy_version 550 (0.0037) +[2023-02-26 12:53:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 2260992. Throughput: 0: 952.8. Samples: 564750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:06,093][00201] Avg episode reward: [(0, '18.881')] +[2023-02-26 12:53:11,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2285568. Throughput: 0: 977.6. Samples: 570292. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:11,092][00201] Avg episode reward: [(0, '19.470')] +[2023-02-26 12:53:12,432][24356] Updated weights for policy 0, policy_version 560 (0.0022) +[2023-02-26 12:53:16,090][00201] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 2310144. Throughput: 0: 1011.2. Samples: 577540. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:16,093][00201] Avg episode reward: [(0, '19.530')] +[2023-02-26 12:53:21,097][00201] Fps is (10 sec: 4093.1, 60 sec: 3959.0, 300 sec: 3859.9). Total num frames: 2326528. Throughput: 0: 1002.2. Samples: 580700. 
Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-26 12:53:21,100][00201] Avg episode reward: [(0, '19.674')] +[2023-02-26 12:53:23,026][24356] Updated weights for policy 0, policy_version 570 (0.0018) +[2023-02-26 12:53:26,092][00201] Fps is (10 sec: 3276.0, 60 sec: 3959.3, 300 sec: 3846.0). Total num frames: 2342912. Throughput: 0: 957.8. Samples: 585162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:53:26,094][00201] Avg episode reward: [(0, '20.179')] +[2023-02-26 12:53:26,102][24343] Saving new best policy, reward=20.179! +[2023-02-26 12:53:31,090][00201] Fps is (10 sec: 3689.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2363392. Throughput: 0: 987.4. Samples: 591090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-26 12:53:31,092][00201] Avg episode reward: [(0, '19.700')] +[2023-02-26 12:53:33,289][24356] Updated weights for policy 0, policy_version 580 (0.0036) +[2023-02-26 12:53:36,090][00201] Fps is (10 sec: 4506.7, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2387968. Throughput: 0: 1006.8. Samples: 594650. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:36,095][00201] Avg episode reward: [(0, '18.617')] +[2023-02-26 12:53:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2404352. Throughput: 0: 985.7. Samples: 600964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:53:41,092][00201] Avg episode reward: [(0, '16.933')] +[2023-02-26 12:53:44,404][24356] Updated weights for policy 0, policy_version 590 (0.0029) +[2023-02-26 12:53:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2420736. Throughput: 0: 956.4. Samples: 605560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:53:46,093][00201] Avg episode reward: [(0, '16.049')] +[2023-02-26 12:53:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2441216. Throughput: 0: 969.8. Samples: 608392. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:53:51,092][00201] Avg episode reward: [(0, '17.074')] +[2023-02-26 12:53:53,735][24356] Updated weights for policy 0, policy_version 600 (0.0023) +[2023-02-26 12:53:56,090][00201] Fps is (10 sec: 4505.5, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2465792. Throughput: 0: 1006.8. Samples: 615596. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-26 12:53:56,096][00201] Avg episode reward: [(0, '17.708')] +[2023-02-26 12:54:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2482176. Throughput: 0: 976.1. Samples: 621464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:54:01,092][00201] Avg episode reward: [(0, '18.436')] +[2023-02-26 12:54:05,400][24356] Updated weights for policy 0, policy_version 610 (0.0012) +[2023-02-26 12:54:06,092][00201] Fps is (10 sec: 3276.1, 60 sec: 3959.3, 300 sec: 3873.8). Total num frames: 2498560. Throughput: 0: 955.8. Samples: 623706. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 12:54:06,095][00201] Avg episode reward: [(0, '20.683')] +[2023-02-26 12:54:06,102][24343] Saving new best policy, reward=20.683! +[2023-02-26 12:54:11,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2519040. Throughput: 0: 979.7. Samples: 629244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:54:11,097][00201] Avg episode reward: [(0, '20.564')] +[2023-02-26 12:54:14,575][24356] Updated weights for policy 0, policy_version 620 (0.0024) +[2023-02-26 12:54:16,090][00201] Fps is (10 sec: 4506.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2543616. Throughput: 0: 1009.4. Samples: 636514. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:54:16,092][00201] Avg episode reward: [(0, '20.656')] +[2023-02-26 12:54:21,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.7, 300 sec: 3901.6). Total num frames: 2560000. Throughput: 0: 996.7. Samples: 639500. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-26 12:54:21,097][00201] Avg episode reward: [(0, '21.335')] +[2023-02-26 12:54:21,216][24343] Saving new best policy, reward=21.335! +[2023-02-26 12:54:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.4, 300 sec: 3887.7). Total num frames: 2576384. Throughput: 0: 956.9. Samples: 644024. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:54:26,094][00201] Avg episode reward: [(0, '21.444')] +[2023-02-26 12:54:26,099][24343] Saving new best policy, reward=21.444! +[2023-02-26 12:54:26,764][24356] Updated weights for policy 0, policy_version 630 (0.0019) +[2023-02-26 12:54:31,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2600960. Throughput: 0: 987.7. Samples: 650006. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) +[2023-02-26 12:54:31,097][00201] Avg episode reward: [(0, '20.449')] +[2023-02-26 12:54:31,109][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000635_2600960.pth... +[2023-02-26 12:54:31,259][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000407_1667072.pth +[2023-02-26 12:54:35,147][24356] Updated weights for policy 0, policy_version 640 (0.0012) +[2023-02-26 12:54:36,090][00201] Fps is (10 sec: 4915.1, 60 sec: 3959.4, 300 sec: 3901.6). Total num frames: 2625536. Throughput: 0: 1004.5. Samples: 653594. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:54:36,093][00201] Avg episode reward: [(0, '20.775')] +[2023-02-26 12:54:41,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2637824. Throughput: 0: 978.3. Samples: 659618. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:54:41,097][00201] Avg episode reward: [(0, '21.087')] +[2023-02-26 12:54:46,090][00201] Fps is (10 sec: 2867.3, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2654208. Throughput: 0: 944.7. Samples: 663976. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-26 12:54:46,095][00201] Avg episode reward: [(0, '20.544')] +[2023-02-26 12:54:47,505][24356] Updated weights for policy 0, policy_version 650 (0.0025) +[2023-02-26 12:54:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2674688. Throughput: 0: 961.4. Samples: 666966. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) +[2023-02-26 12:54:51,094][00201] Avg episode reward: [(0, '21.828')] +[2023-02-26 12:54:51,138][24343] Saving new best policy, reward=21.828! +[2023-02-26 12:54:56,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 2691072. Throughput: 0: 969.2. Samples: 672860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:54:56,093][00201] Avg episode reward: [(0, '22.482')] +[2023-02-26 12:54:56,100][24343] Saving new best policy, reward=22.482! +[2023-02-26 12:54:59,323][24356] Updated weights for policy 0, policy_version 660 (0.0014) +[2023-02-26 12:55:01,094][00201] Fps is (10 sec: 3275.4, 60 sec: 3754.4, 300 sec: 3873.8). Total num frames: 2707456. Throughput: 0: 901.8. Samples: 677098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:55:01,097][00201] Avg episode reward: [(0, '22.642')] +[2023-02-26 12:55:01,107][24343] Saving new best policy, reward=22.642! +[2023-02-26 12:55:06,090][00201] Fps is (10 sec: 2867.2, 60 sec: 3686.5, 300 sec: 3846.1). Total num frames: 2719744. Throughput: 0: 882.0. Samples: 679188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:55:06,093][00201] Avg episode reward: [(0, '23.060')] +[2023-02-26 12:55:06,098][24343] Saving new best policy, reward=23.060! +[2023-02-26 12:55:10,807][24356] Updated weights for policy 0, policy_version 670 (0.0022) +[2023-02-26 12:55:11,090][00201] Fps is (10 sec: 3688.0, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 2744320. Throughput: 0: 906.5. Samples: 684818. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-26 12:55:11,096][00201] Avg episode reward: [(0, '23.654')] +[2023-02-26 12:55:11,107][24343] Saving new best policy, reward=23.654! +[2023-02-26 12:55:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 2764800. Throughput: 0: 929.5. Samples: 691832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:55:16,094][00201] Avg episode reward: [(0, '22.784')] +[2023-02-26 12:55:21,092][00201] Fps is (10 sec: 3685.6, 60 sec: 3686.3, 300 sec: 3859.9). Total num frames: 2781184. Throughput: 0: 908.5. Samples: 694476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:55:21,096][00201] Avg episode reward: [(0, '23.529')] +[2023-02-26 12:55:21,707][24356] Updated weights for policy 0, policy_version 680 (0.0014) +[2023-02-26 12:55:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 2797568. Throughput: 0: 873.5. Samples: 698924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-26 12:55:26,096][00201] Avg episode reward: [(0, '23.601')] +[2023-02-26 12:55:31,090][00201] Fps is (10 sec: 3687.2, 60 sec: 3618.1, 300 sec: 3832.2). Total num frames: 2818048. Throughput: 0: 915.8. Samples: 705186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:55:31,097][00201] Avg episode reward: [(0, '22.183')] +[2023-02-26 12:55:31,989][24356] Updated weights for policy 0, policy_version 690 (0.0035) +[2023-02-26 12:55:36,090][00201] Fps is (10 sec: 4505.5, 60 sec: 3618.1, 300 sec: 3846.1). Total num frames: 2842624. Throughput: 0: 929.8. Samples: 708806. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-26 12:55:36,098][00201] Avg episode reward: [(0, '21.750')] +[2023-02-26 12:55:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 2859008. Throughput: 0: 924.4. Samples: 714460. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:55:41,095][00201] Avg episode reward: [(0, '21.934')]
+[2023-02-26 12:55:43,228][24356] Updated weights for policy 0, policy_version 700 (0.0011)
+[2023-02-26 12:55:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 2875392. Throughput: 0: 927.7. Samples: 718840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:55:46,093][00201] Avg episode reward: [(0, '21.782')]
+[2023-02-26 12:55:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 2895872. Throughput: 0: 950.4. Samples: 721954. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:55:51,098][00201] Avg episode reward: [(0, '23.101')]
+[2023-02-26 12:55:53,316][24356] Updated weights for policy 0, policy_version 710 (0.0020)
+[2023-02-26 12:55:56,090][00201] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 2920448. Throughput: 0: 981.2. Samples: 728972. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:55:56,096][00201] Avg episode reward: [(0, '23.554')]
+[2023-02-26 12:56:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3823.2, 300 sec: 3860.0). Total num frames: 2936832. Throughput: 0: 940.5. Samples: 734156. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:56:01,096][00201] Avg episode reward: [(0, '23.397')]
+[2023-02-26 12:56:05,335][24356] Updated weights for policy 0, policy_version 720 (0.0036)
+[2023-02-26 12:56:06,093][00201] Fps is (10 sec: 2866.3, 60 sec: 3822.7, 300 sec: 3832.2). Total num frames: 2949120. Throughput: 0: 930.9. Samples: 736366. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:56:06,101][00201] Avg episode reward: [(0, '23.008')]
+[2023-02-26 12:56:11,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 2973696. Throughput: 0: 959.4. Samples: 742098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:56:11,097][00201] Avg episode reward: [(0, '22.732')]
+[2023-02-26 12:56:14,582][24356] Updated weights for policy 0, policy_version 730 (0.0011)
+[2023-02-26 12:56:16,090][00201] Fps is (10 sec: 4507.1, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 2994176. Throughput: 0: 976.2. Samples: 749114. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:56:16,096][00201] Avg episode reward: [(0, '21.432')]
+[2023-02-26 12:56:21,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3846.1). Total num frames: 3010560. Throughput: 0: 950.7. Samples: 751586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:56:21,098][00201] Avg episode reward: [(0, '20.654')]
+[2023-02-26 12:56:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3026944. Throughput: 0: 925.0. Samples: 756086. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:56:26,092][00201] Avg episode reward: [(0, '20.490')]
+[2023-02-26 12:56:26,864][24356] Updated weights for policy 0, policy_version 740 (0.0044)
+[2023-02-26 12:56:31,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3047424. Throughput: 0: 968.4. Samples: 762418. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:56:31,095][00201] Avg episode reward: [(0, '19.970')]
+[2023-02-26 12:56:31,109][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000744_3047424.pth...
+[2023-02-26 12:56:31,215][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000520_2129920.pth
+[2023-02-26 12:56:35,454][24356] Updated weights for policy 0, policy_version 750 (0.0017)
+[2023-02-26 12:56:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3072000. Throughput: 0: 979.4. Samples: 766026. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:56:36,097][00201] Avg episode reward: [(0, '21.496')]
+[2023-02-26 12:56:41,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3088384. Throughput: 0: 948.5. Samples: 771656. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:56:41,094][00201] Avg episode reward: [(0, '22.927')]
+[2023-02-26 12:56:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3104768. Throughput: 0: 933.3. Samples: 776156. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:56:46,098][00201] Avg episode reward: [(0, '22.987')]
+[2023-02-26 12:56:47,722][24356] Updated weights for policy 0, policy_version 760 (0.0025)
+[2023-02-26 12:56:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3125248. Throughput: 0: 959.6. Samples: 779546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:56:51,093][00201] Avg episode reward: [(0, '22.671')]
+[2023-02-26 12:56:56,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3149824. Throughput: 0: 993.3. Samples: 786798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:56:56,093][00201] Avg episode reward: [(0, '21.809')]
+[2023-02-26 12:56:56,155][24356] Updated weights for policy 0, policy_version 770 (0.0026)
+[2023-02-26 12:57:01,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3166208. Throughput: 0: 954.4. Samples: 792060. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:57:01,093][00201] Avg episode reward: [(0, '21.724')]
+[2023-02-26 12:57:06,090][00201] Fps is (10 sec: 3276.7, 60 sec: 3891.4, 300 sec: 3832.2). Total num frames: 3182592. Throughput: 0: 947.9. Samples: 794244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:57:06,096][00201] Avg episode reward: [(0, '21.732')]
+[2023-02-26 12:57:08,473][24356] Updated weights for policy 0, policy_version 780 (0.0017)
+[2023-02-26 12:57:11,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3207168. Throughput: 0: 986.4. Samples: 800474. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:57:11,097][00201] Avg episode reward: [(0, '22.952')]
+[2023-02-26 12:57:16,090][00201] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3227648. Throughput: 0: 1007.7. Samples: 807764. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 12:57:16,097][00201] Avg episode reward: [(0, '24.102')]
+[2023-02-26 12:57:16,105][24343] Saving new best policy, reward=24.102!
+[2023-02-26 12:57:17,705][24356] Updated weights for policy 0, policy_version 790 (0.0012)
+[2023-02-26 12:57:21,091][00201] Fps is (10 sec: 3686.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3244032. Throughput: 0: 978.1. Samples: 810040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:57:21,102][00201] Avg episode reward: [(0, '23.626')]
+[2023-02-26 12:57:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3260416. Throughput: 0: 953.0. Samples: 814540. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:57:26,092][00201] Avg episode reward: [(0, '24.717')]
+[2023-02-26 12:57:26,098][24343] Saving new best policy, reward=24.717!
+[2023-02-26 12:57:29,141][24356] Updated weights for policy 0, policy_version 800 (0.0023)
+[2023-02-26 12:57:31,090][00201] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3284992. Throughput: 0: 1000.8. Samples: 821194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:57:31,097][00201] Avg episode reward: [(0, '24.079')]
+[2023-02-26 12:57:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3305472. Throughput: 0: 1005.4. Samples: 824788. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:57:36,096][00201] Avg episode reward: [(0, '21.985')]
+[2023-02-26 12:57:39,157][24356] Updated weights for policy 0, policy_version 810 (0.0015)
+[2023-02-26 12:57:41,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3321856. Throughput: 0: 964.4. Samples: 830194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:57:41,100][00201] Avg episode reward: [(0, '21.491')]
+[2023-02-26 12:57:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3338240. Throughput: 0: 948.5. Samples: 834742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:57:46,093][00201] Avg episode reward: [(0, '21.632')]
+[2023-02-26 12:57:50,103][24356] Updated weights for policy 0, policy_version 820 (0.0021)
+[2023-02-26 12:57:51,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3362816. Throughput: 0: 977.5. Samples: 838232. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:57:51,092][00201] Avg episode reward: [(0, '21.250')]
+[2023-02-26 12:57:56,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3383296. Throughput: 0: 999.3. Samples: 845444. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-26 12:57:56,097][00201] Avg episode reward: [(0, '20.710')]
+[2023-02-26 12:58:00,306][24356] Updated weights for policy 0, policy_version 830 (0.0011)
+[2023-02-26 12:58:01,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3399680. Throughput: 0: 950.3. Samples: 850526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:58:01,093][00201] Avg episode reward: [(0, '21.057')]
+[2023-02-26 12:58:06,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3416064. Throughput: 0: 949.3. Samples: 852760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:58:06,097][00201] Avg episode reward: [(0, '21.560')]
+[2023-02-26 12:58:10,906][24356] Updated weights for policy 0, policy_version 840 (0.0018)
+[2023-02-26 12:58:11,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3440640. Throughput: 0: 987.1. Samples: 858958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:58:11,096][00201] Avg episode reward: [(0, '23.454')]
+[2023-02-26 12:58:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.2). Total num frames: 3461120. Throughput: 0: 995.8. Samples: 866006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:58:16,094][00201] Avg episode reward: [(0, '23.667')]
+[2023-02-26 12:58:21,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3477504. Throughput: 0: 967.9. Samples: 868344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:58:21,093][00201] Avg episode reward: [(0, '23.362')]
+[2023-02-26 12:58:21,840][24356] Updated weights for policy 0, policy_version 850 (0.0012)
+[2023-02-26 12:58:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3493888. Throughput: 0: 951.2. Samples: 872998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:58:26,096][00201] Avg episode reward: [(0, '23.943')]
+[2023-02-26 12:58:31,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3518464. Throughput: 0: 996.4. Samples: 879582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:58:31,100][00201] Avg episode reward: [(0, '24.518')]
+[2023-02-26 12:58:31,114][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000859_3518464.pth...
+[2023-02-26 12:58:31,257][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000635_2600960.pth
+[2023-02-26 12:58:31,799][24356] Updated weights for policy 0, policy_version 860 (0.0012)
+[2023-02-26 12:58:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3538944. Throughput: 0: 994.0. Samples: 882960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:58:36,092][00201] Avg episode reward: [(0, '24.322')]
+[2023-02-26 12:58:41,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3555328. Throughput: 0: 958.5. Samples: 888578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:58:41,097][00201] Avg episode reward: [(0, '23.655')]
+[2023-02-26 12:58:43,156][24356] Updated weights for policy 0, policy_version 870 (0.0012)
+[2023-02-26 12:58:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3571712. Throughput: 0: 949.9. Samples: 893270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:58:46,097][00201] Avg episode reward: [(0, '24.362')]
+[2023-02-26 12:58:51,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3596288. Throughput: 0: 976.4. Samples: 896698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:58:51,094][00201] Avg episode reward: [(0, '24.144')]
+[2023-02-26 12:58:52,573][24356] Updated weights for policy 0, policy_version 880 (0.0026)
+[2023-02-26 12:58:56,091][00201] Fps is (10 sec: 4914.7, 60 sec: 3959.4, 300 sec: 3859.9). Total num frames: 3620864. Throughput: 0: 996.5. Samples: 903802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:58:56,096][00201] Avg episode reward: [(0, '23.228')]
+[2023-02-26 12:59:01,092][00201] Fps is (10 sec: 3685.7, 60 sec: 3891.1, 300 sec: 3846.1). Total num frames: 3633152. Throughput: 0: 954.7. Samples: 908970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:59:01,095][00201] Avg episode reward: [(0, '24.407')]
+[2023-02-26 12:59:04,281][24356] Updated weights for policy 0, policy_version 890 (0.0016)
+[2023-02-26 12:59:06,090][00201] Fps is (10 sec: 2867.5, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3649536. Throughput: 0: 951.8. Samples: 911176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:59:06,093][00201] Avg episode reward: [(0, '25.137')]
+[2023-02-26 12:59:06,098][24343] Saving new best policy, reward=25.137!
+[2023-02-26 12:59:11,090][00201] Fps is (10 sec: 4096.9, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3674112. Throughput: 0: 982.2. Samples: 917198. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:59:11,098][00201] Avg episode reward: [(0, '26.314')]
+[2023-02-26 12:59:11,111][24343] Saving new best policy, reward=26.314!
+[2023-02-26 12:59:13,665][24356] Updated weights for policy 0, policy_version 900 (0.0015)
+[2023-02-26 12:59:16,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3694592. Throughput: 0: 995.6. Samples: 924386. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:59:16,095][00201] Avg episode reward: [(0, '27.354')]
+[2023-02-26 12:59:16,100][24343] Saving new best policy, reward=27.354!
+[2023-02-26 12:59:21,090][00201] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3710976. Throughput: 0: 973.3. Samples: 926760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:59:21,093][00201] Avg episode reward: [(0, '26.237')]
+[2023-02-26 12:59:25,906][24356] Updated weights for policy 0, policy_version 910 (0.0016)
+[2023-02-26 12:59:26,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3727360. Throughput: 0: 946.8. Samples: 931186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-26 12:59:26,092][00201] Avg episode reward: [(0, '27.305')]
+[2023-02-26 12:59:31,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3751936. Throughput: 0: 991.4. Samples: 937882. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 12:59:31,093][00201] Avg episode reward: [(0, '25.943')]
+[2023-02-26 12:59:34,297][24356] Updated weights for policy 0, policy_version 920 (0.0021)
+[2023-02-26 12:59:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3772416. Throughput: 0: 997.0. Samples: 941564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:59:36,093][00201] Avg episode reward: [(0, '24.866')]
+[2023-02-26 12:59:41,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3788800. Throughput: 0: 959.7. Samples: 946988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-26 12:59:41,095][00201] Avg episode reward: [(0, '25.248')]
+[2023-02-26 12:59:46,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3805184. Throughput: 0: 945.8. Samples: 951530. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 12:59:46,098][00201] Avg episode reward: [(0, '24.044')]
+[2023-02-26 12:59:46,572][24356] Updated weights for policy 0, policy_version 930 (0.0011)
+[2023-02-26 12:59:51,090][00201] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3825664. Throughput: 0: 973.1. Samples: 954964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:59:51,093][00201] Avg episode reward: [(0, '25.434')]
+[2023-02-26 12:59:56,093][00201] Fps is (10 sec: 4094.7, 60 sec: 3754.5, 300 sec: 3860.0). Total num frames: 3846144. Throughput: 0: 962.4. Samples: 960510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 12:59:56,097][00201] Avg episode reward: [(0, '24.575')]
+[2023-02-26 12:59:56,997][24356] Updated weights for policy 0, policy_version 940 (0.0029)
+[2023-02-26 13:00:01,090][00201] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3860.0). Total num frames: 3858432. Throughput: 0: 914.0. Samples: 965514. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 13:00:01,097][00201] Avg episode reward: [(0, '23.720')]
+[2023-02-26 13:00:06,090][00201] Fps is (10 sec: 2868.1, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3874816. Throughput: 0: 911.9. Samples: 967796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-26 13:00:06,093][00201] Avg episode reward: [(0, '23.433')]
+[2023-02-26 13:00:08,855][24356] Updated weights for policy 0, policy_version 950 (0.0012)
+[2023-02-26 13:00:11,090][00201] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3899392. Throughput: 0: 951.7. Samples: 974012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 13:00:11,092][00201] Avg episode reward: [(0, '23.275')]
+[2023-02-26 13:00:16,099][00201] Fps is (10 sec: 4910.9, 60 sec: 3822.4, 300 sec: 3873.8). Total num frames: 3923968. Throughput: 0: 957.5. Samples: 980980. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 13:00:16,105][00201] Avg episode reward: [(0, '22.782')]
+[2023-02-26 13:00:18,629][24356] Updated weights for policy 0, policy_version 960 (0.0018)
+[2023-02-26 13:00:21,094][00201] Fps is (10 sec: 3684.8, 60 sec: 3754.4, 300 sec: 3859.9). Total num frames: 3936256. Throughput: 0: 928.2. Samples: 983336. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-26 13:00:21,098][00201] Avg episode reward: [(0, '22.803')]
+[2023-02-26 13:00:26,090][00201] Fps is (10 sec: 2869.7, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3952640. Throughput: 0: 908.6. Samples: 987876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 13:00:26,099][00201] Avg episode reward: [(0, '22.692')]
+[2023-02-26 13:00:29,774][24356] Updated weights for policy 0, policy_version 970 (0.0032)
+[2023-02-26 13:00:31,090][00201] Fps is (10 sec: 4097.7, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3977216. Throughput: 0: 958.0. Samples: 994638. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-26 13:00:31,099][00201] Avg episode reward: [(0, '24.753')]
+[2023-02-26 13:00:31,111][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000971_3977216.pth...
+[2023-02-26 13:00:31,217][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000744_3047424.pth
+[2023-02-26 13:00:36,090][00201] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3997696. Throughput: 0: 956.0. Samples: 997984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-26 13:00:36,095][00201] Avg episode reward: [(0, '25.799')]
+[2023-02-26 13:00:37,839][24343] Stopping Batcher_0...
+[2023-02-26 13:00:37,839][24343] Loop batcher_evt_loop terminating...
+[2023-02-26 13:00:37,840][00201] Component Batcher_0 stopped!
+[2023-02-26 13:00:37,854][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 13:00:37,907][24356] Weights refcount: 2 0
+[2023-02-26 13:00:37,920][00201] Component InferenceWorker_p0-w0 stopped!
+[2023-02-26 13:00:37,927][24356] Stopping InferenceWorker_p0-w0...
+[2023-02-26 13:00:37,927][24356] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-26 13:00:37,943][00201] Component RolloutWorker_w0 stopped!
+[2023-02-26 13:00:37,947][24358] Stopping RolloutWorker_w0...
+[2023-02-26 13:00:37,951][24358] Loop rollout_proc0_evt_loop terminating...
+[2023-02-26 13:00:37,984][00201] Component RolloutWorker_w6 stopped!
+[2023-02-26 13:00:37,990][24368] Stopping RolloutWorker_w6...
+[2023-02-26 13:00:38,000][00201] Component RolloutWorker_w2 stopped!
+[2023-02-26 13:00:38,002][24364] Stopping RolloutWorker_w2...
+[2023-02-26 13:00:37,991][24368] Loop rollout_proc6_evt_loop terminating...
+[2023-02-26 13:00:38,023][24364] Loop rollout_proc2_evt_loop terminating...
+[2023-02-26 13:00:38,025][00201] Component RolloutWorker_w4 stopped!
+[2023-02-26 13:00:38,029][24365] Stopping RolloutWorker_w4...
+[2023-02-26 13:00:38,032][24367] Stopping RolloutWorker_w3...
+[2023-02-26 13:00:38,033][24367] Loop rollout_proc3_evt_loop terminating...
+[2023-02-26 13:00:38,033][00201] Component RolloutWorker_w3 stopped!
+[2023-02-26 13:00:38,030][24365] Loop rollout_proc4_evt_loop terminating...
+[2023-02-26 13:00:38,042][24366] Stopping RolloutWorker_w5...
+[2023-02-26 13:00:38,045][24366] Loop rollout_proc5_evt_loop terminating...
+[2023-02-26 13:00:38,045][00201] Component RolloutWorker_w5 stopped!
+[2023-02-26 13:00:38,050][24359] Stopping RolloutWorker_w1...
+[2023-02-26 13:00:38,051][24359] Loop rollout_proc1_evt_loop terminating...
+[2023-02-26 13:00:38,054][00201] Component RolloutWorker_w1 stopped!
+[2023-02-26 13:00:38,057][24369] Stopping RolloutWorker_w7...
+[2023-02-26 13:00:38,057][00201] Component RolloutWorker_w7 stopped!
+[2023-02-26 13:00:38,058][24369] Loop rollout_proc7_evt_loop terminating...
+[2023-02-26 13:00:38,169][24343] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000859_3518464.pth
+[2023-02-26 13:00:38,210][24343] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 13:00:38,454][00201] Component LearnerWorker_p0 stopped!
+[2023-02-26 13:00:38,457][00201] Waiting for process learner_proc0 to stop...
+[2023-02-26 13:00:38,459][24343] Stopping LearnerWorker_p0...
+[2023-02-26 13:00:38,460][24343] Loop learner_proc0_evt_loop terminating...
+[2023-02-26 13:00:40,761][00201] Waiting for process inference_proc0-0 to join...
+[2023-02-26 13:00:41,282][00201] Waiting for process rollout_proc0 to join...
+[2023-02-26 13:00:41,987][00201] Waiting for process rollout_proc1 to join...
+[2023-02-26 13:00:41,988][00201] Waiting for process rollout_proc2 to join...
+[2023-02-26 13:00:41,991][00201] Waiting for process rollout_proc3 to join...
+[2023-02-26 13:00:41,993][00201] Waiting for process rollout_proc4 to join...
+[2023-02-26 13:00:41,994][00201] Waiting for process rollout_proc5 to join...
+[2023-02-26 13:00:41,995][00201] Waiting for process rollout_proc6 to join...
+[2023-02-26 13:00:42,001][00201] Waiting for process rollout_proc7 to join...
+[2023-02-26 13:00:42,002][00201] Batcher 0 profile tree view:
+batching: 25.4944, releasing_batches: 0.0242
+[2023-02-26 13:00:42,003][00201] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0048
+  wait_policy_total: 527.8576
+update_model: 7.7096
+  weight_update: 0.0043
+one_step: 0.0061
+  handle_policy_step: 485.1233
+    deserialize: 14.2860, stack: 2.7680, obs_to_device_normalize: 109.2433, forward: 230.3708, send_messages: 25.2463
+    prepare_outputs: 78.9958
+      to_cpu: 49.1439
+[2023-02-26 13:00:42,004][00201] Learner 0 profile tree view:
+misc: 0.0057, prepare_batch: 15.4617
+train: 75.0181
+  epoch_init: 0.0085, minibatch_init: 0.0147, losses_postprocess: 0.5780, kl_divergence: 0.5759, after_optimizer: 33.1267
+  calculate_losses: 26.5268
+    losses_init: 0.0034, forward_head: 1.6359, bptt_initial: 17.5452, tail: 1.0583, advantages_returns: 0.3159, losses: 3.6151
+    bptt: 2.0804
+      bptt_forward_core: 2.0033
+  update: 13.6086
+    clip: 1.3862
+[2023-02-26 13:00:42,006][00201] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3035, enqueue_policy_requests: 139.9593, env_step: 795.9803, overhead: 19.2942, complete_rollouts: 6.8312
+save_policy_outputs: 18.6656
+  split_output_tensors: 8.9430
+[2023-02-26 13:00:42,007][00201] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3258, enqueue_policy_requests: 141.9544, env_step: 794.4403, overhead: 19.1456, complete_rollouts: 6.6645
+save_policy_outputs: 18.6443
+  split_output_tensors: 8.9801
+[2023-02-26 13:00:42,009][00201] Loop Runner_EvtLoop terminating...
+[2023-02-26 13:00:42,010][00201] Runner profile tree view:
+main_loop: 1085.7017
+[2023-02-26 13:00:42,012][00201] Collected {0: 4005888}, FPS: 3689.7
+[2023-02-26 13:08:53,857][00201] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 13:08:53,860][00201] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 13:08:53,862][00201] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 13:08:53,866][00201] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 13:08:53,868][00201] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 13:08:53,869][00201] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 13:08:53,872][00201] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 13:08:53,874][00201] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 13:08:53,876][00201] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-26 13:08:53,877][00201] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-26 13:08:53,881][00201] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-26 13:08:53,882][00201] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-26 13:08:53,883][00201] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-26 13:08:53,885][00201] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-26 13:08:53,887][00201] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-26 13:08:53,935][00201] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-26 13:08:53,938][00201] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-26 13:08:53,943][00201] RunningMeanStd input shape: (1,)
+[2023-02-26 13:08:53,969][00201] ConvEncoder: input_channels=3
+[2023-02-26 13:08:54,737][00201] Conv encoder output size: 512
+[2023-02-26 13:08:54,739][00201] Policy head output size: 512
+[2023-02-26 13:08:57,565][00201] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-26 13:08:58,797][00201] Num frames 100...
+[2023-02-26 13:08:58,910][00201] Num frames 200...
+[2023-02-26 13:08:58,999][00201] Avg episode rewards: #0: 3.300, true rewards: #0: 2.300
+[2023-02-26 13:08:59,000][00201] Avg episode reward: 3.300, avg true_objective: 2.300
+[2023-02-26 13:08:59,080][00201] Num frames 300...
+[2023-02-26 13:08:59,199][00201] Num frames 400...
+[2023-02-26 13:08:59,312][00201] Num frames 500...
+[2023-02-26 13:08:59,430][00201] Num frames 600...
+[2023-02-26 13:08:59,543][00201] Num frames 700...
+[2023-02-26 13:08:59,662][00201] Num frames 800...
+[2023-02-26 13:08:59,774][00201] Num frames 900...
+[2023-02-26 13:08:59,898][00201] Num frames 1000...
+[2023-02-26 13:09:00,012][00201] Num frames 1100...
+[2023-02-26 13:09:00,131][00201] Num frames 1200...
+[2023-02-26 13:09:00,242][00201] Num frames 1300...
+[2023-02-26 13:09:00,367][00201] Num frames 1400...
+[2023-02-26 13:09:00,483][00201] Num frames 1500...
+[2023-02-26 13:09:00,599][00201] Num frames 1600...
+[2023-02-26 13:09:00,731][00201] Num frames 1700...
+[2023-02-26 13:09:00,799][00201] Avg episode rewards: #0: 23.550, true rewards: #0: 8.550
+[2023-02-26 13:09:00,801][00201] Avg episode reward: 23.550, avg true_objective: 8.550
+[2023-02-26 13:09:00,916][00201] Num frames 1800...
+[2023-02-26 13:09:01,025][00201] Num frames 1900...
+[2023-02-26 13:09:01,141][00201] Num frames 2000...
+[2023-02-26 13:09:01,259][00201] Num frames 2100...
+[2023-02-26 13:09:01,369][00201] Num frames 2200...
+[2023-02-26 13:09:01,483][00201] Num frames 2300...
+[2023-02-26 13:09:01,597][00201] Num frames 2400...
+[2023-02-26 13:09:01,714][00201] Num frames 2500...
+[2023-02-26 13:09:01,831][00201] Num frames 2600...
+[2023-02-26 13:09:01,943][00201] Num frames 2700...
+[2023-02-26 13:09:02,066][00201] Num frames 2800...
+[2023-02-26 13:09:02,196][00201] Num frames 2900...
+[2023-02-26 13:09:02,318][00201] Num frames 3000...
+[2023-02-26 13:09:02,443][00201] Num frames 3100...
+[2023-02-26 13:09:02,572][00201] Num frames 3200...
+[2023-02-26 13:09:02,692][00201] Num frames 3300...
+[2023-02-26 13:09:02,821][00201] Num frames 3400...
+[2023-02-26 13:09:02,968][00201] Avg episode rewards: #0: 29.610, true rewards: #0: 11.610
+[2023-02-26 13:09:02,971][00201] Avg episode reward: 29.610, avg true_objective: 11.610
+[2023-02-26 13:09:02,996][00201] Num frames 3500...
+[2023-02-26 13:09:03,106][00201] Num frames 3600...
+[2023-02-26 13:09:03,219][00201] Num frames 3700...
+[2023-02-26 13:09:03,326][00201] Avg episode rewards: #0: 23.115, true rewards: #0: 9.365
+[2023-02-26 13:09:03,327][00201] Avg episode reward: 23.115, avg true_objective: 9.365
+[2023-02-26 13:09:03,394][00201] Num frames 3800...
+[2023-02-26 13:09:03,504][00201] Num frames 3900...
+[2023-02-26 13:09:03,629][00201] Num frames 4000...
+[2023-02-26 13:09:03,753][00201] Num frames 4100...
+[2023-02-26 13:09:03,875][00201] Num frames 4200...
+[2023-02-26 13:09:03,990][00201] Num frames 4300...
+[2023-02-26 13:09:04,109][00201] Num frames 4400...
+[2023-02-26 13:09:04,220][00201] Num frames 4500...
+[2023-02-26 13:09:04,338][00201] Num frames 4600...
+[2023-02-26 13:09:04,449][00201] Num frames 4700...
+[2023-02-26 13:09:04,563][00201] Num frames 4800...
+[2023-02-26 13:09:04,685][00201] Num frames 4900...
+[2023-02-26 13:09:04,810][00201] Num frames 5000...
+[2023-02-26 13:09:04,922][00201] Num frames 5100...
+[2023-02-26 13:09:05,043][00201] Num frames 5200...
+[2023-02-26 13:09:05,155][00201] Num frames 5300...
+[2023-02-26 13:09:05,276][00201] Num frames 5400...
+[2023-02-26 13:09:05,385][00201] Num frames 5500...
+[2023-02-26 13:09:05,506][00201] Num frames 5600...
+[2023-02-26 13:09:05,619][00201] Num frames 5700...
+[2023-02-26 13:09:05,749][00201] Num frames 5800...
+[2023-02-26 13:09:05,813][00201] Avg episode rewards: #0: 29.010, true rewards: #0: 11.610
+[2023-02-26 13:09:05,816][00201] Avg episode reward: 29.010, avg true_objective: 11.610
+[2023-02-26 13:09:05,923][00201] Num frames 5900...
+[2023-02-26 13:09:06,034][00201] Num frames 6000...
+[2023-02-26 13:09:06,152][00201] Num frames 6100...
+[2023-02-26 13:09:06,270][00201] Num frames 6200...
+[2023-02-26 13:09:06,384][00201] Num frames 6300...
+[2023-02-26 13:09:06,497][00201] Num frames 6400...
+[2023-02-26 13:09:06,615][00201] Num frames 6500...
+[2023-02-26 13:09:06,755][00201] Num frames 6600...
+[2023-02-26 13:09:06,921][00201] Num frames 6700...
+[2023-02-26 13:09:07,012][00201] Avg episode rewards: #0: 27.201, true rewards: #0: 11.202
+[2023-02-26 13:09:07,014][00201] Avg episode reward: 27.201, avg true_objective: 11.202
+[2023-02-26 13:09:07,141][00201] Num frames 6800...
+[2023-02-26 13:09:07,304][00201] Num frames 6900...
+[2023-02-26 13:09:07,457][00201] Num frames 7000...
+[2023-02-26 13:09:07,628][00201] Num frames 7100...
+[2023-02-26 13:09:07,783][00201] Num frames 7200...
+[2023-02-26 13:09:07,941][00201] Num frames 7300...
+[2023-02-26 13:09:08,094][00201] Num frames 7400...
+[2023-02-26 13:09:08,248][00201] Num frames 7500...
+[2023-02-26 13:09:08,404][00201] Num frames 7600...
+[2023-02-26 13:09:08,565][00201] Num frames 7700...
+[2023-02-26 13:09:08,733][00201] Num frames 7800...
+[2023-02-26 13:09:08,912][00201] Num frames 7900...
+[2023-02-26 13:09:09,076][00201] Num frames 8000...
+[2023-02-26 13:09:09,238][00201] Num frames 8100...
+[2023-02-26 13:09:09,411][00201] Num frames 8200...
+[2023-02-26 13:09:09,546][00201] Avg episode rewards: #0: 29.493, true rewards: #0: 11.779
+[2023-02-26 13:09:09,548][00201] Avg episode reward: 29.493, avg true_objective: 11.779
+[2023-02-26 13:09:09,641][00201] Num frames 8300...
+[2023-02-26 13:09:09,799][00201] Num frames 8400...
+[2023-02-26 13:09:09,970][00201] Num frames 8500...
+[2023-02-26 13:09:10,135][00201] Num frames 8600...
+[2023-02-26 13:09:10,301][00201] Num frames 8700...
+[2023-02-26 13:09:10,441][00201] Num frames 8800...
+[2023-02-26 13:09:10,572][00201] Num frames 8900...
+[2023-02-26 13:09:10,688][00201] Num frames 9000...
+[2023-02-26 13:09:10,804][00201] Num frames 9100...
+[2023-02-26 13:09:10,916][00201] Num frames 9200...
+[2023-02-26 13:09:11,037][00201] Num frames 9300...
+[2023-02-26 13:09:11,166][00201] Avg episode rewards: #0: 29.206, true rewards: #0: 11.706
+[2023-02-26 13:09:11,167][00201] Avg episode reward: 29.206, avg true_objective: 11.706
+[2023-02-26 13:09:11,218][00201] Num frames 9400...
+[2023-02-26 13:09:11,327][00201] Num frames 9500...
+[2023-02-26 13:09:11,446][00201] Num frames 9600...
+[2023-02-26 13:09:11,562][00201] Num frames 9700...
+[2023-02-26 13:09:11,683][00201] Num frames 9800...
+[2023-02-26 13:09:11,803][00201] Num frames 9900...
+[2023-02-26 13:09:11,918][00201] Num frames 10000...
+[2023-02-26 13:09:11,981][00201] Avg episode rewards: #0: 27.561, true rewards: #0: 11.117
+[2023-02-26 13:09:11,982][00201] Avg episode reward: 27.561, avg true_objective: 11.117
+[2023-02-26 13:09:12,093][00201] Num frames 10100...
+[2023-02-26 13:09:12,203][00201] Num frames 10200...
+[2023-02-26 13:09:12,316][00201] Num frames 10300...
+[2023-02-26 13:09:12,431][00201] Num frames 10400...
+[2023-02-26 13:09:12,544][00201] Num frames 10500...
+[2023-02-26 13:09:12,672][00201] Num frames 10600...
+[2023-02-26 13:09:12,785][00201] Num frames 10700...
+[2023-02-26 13:09:12,892][00201] Avg episode rewards: #0: 26.447, true rewards: #0: 10.747
+[2023-02-26 13:09:12,894][00201] Avg episode reward: 26.447, avg true_objective: 10.747
+[2023-02-26 13:10:17,592][00201] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-26 13:22:04,879][00201] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-26 13:22:04,882][00201] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-26 13:22:04,885][00201] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-26 13:22:04,888][00201] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-26 13:22:04,891][00201] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-26 13:22:04,893][00201] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-26 13:22:04,895][00201] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-26 13:22:04,897][00201] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-26 13:22:04,899][00201] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-26 13:22:04,901][00201] Adding new argument 'hf_repository'='oscarb92/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! +[2023-02-26 13:22:04,902][00201] Adding new argument 'policy_index'=0 that is not in the saved config file! +[2023-02-26 13:22:04,904][00201] Adding new argument 'eval_deterministic'=False that is not in the saved config file! +[2023-02-26 13:22:04,908][00201] Adding new argument 'train_script'=None that is not in the saved config file! +[2023-02-26 13:22:04,909][00201] Adding new argument 'enjoy_script'=None that is not in the saved config file! +[2023-02-26 13:22:04,911][00201] Using frameskip 1 and render_action_repeat=4 for evaluation +[2023-02-26 13:22:04,946][00201] RunningMeanStd input shape: (3, 72, 128) +[2023-02-26 13:22:04,949][00201] RunningMeanStd input shape: (1,) +[2023-02-26 13:22:04,967][00201] ConvEncoder: input_channels=3 +[2023-02-26 13:22:05,025][00201] Conv encoder output size: 512 +[2023-02-26 13:22:05,028][00201] Policy head output size: 512 +[2023-02-26 13:22:05,060][00201] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... +[2023-02-26 13:22:05,719][00201] Num frames 100... +[2023-02-26 13:22:05,881][00201] Num frames 200... +[2023-02-26 13:22:06,045][00201] Num frames 300... +[2023-02-26 13:22:06,199][00201] Num frames 400... +[2023-02-26 13:22:06,315][00201] Num frames 500... +[2023-02-26 13:22:06,433][00201] Num frames 600... +[2023-02-26 13:22:06,558][00201] Num frames 700... +[2023-02-26 13:22:06,675][00201] Num frames 800... +[2023-02-26 13:22:06,729][00201] Avg episode rewards: #0: 15.000, true rewards: #0: 8.000 +[2023-02-26 13:22:06,730][00201] Avg episode reward: 15.000, avg true_objective: 8.000 +[2023-02-26 13:22:06,859][00201] Num frames 900... +[2023-02-26 13:22:06,983][00201] Num frames 1000... +[2023-02-26 13:22:07,103][00201] Num frames 1100... +[2023-02-26 13:22:07,226][00201] Num frames 1200... 
+[2023-02-26 13:22:07,355][00201] Num frames 1300... +[2023-02-26 13:22:07,472][00201] Num frames 1400... +[2023-02-26 13:22:07,591][00201] Num frames 1500... +[2023-02-26 13:22:07,721][00201] Num frames 1600... +[2023-02-26 13:22:07,812][00201] Avg episode rewards: #0: 17.160, true rewards: #0: 8.160 +[2023-02-26 13:22:07,814][00201] Avg episode reward: 17.160, avg true_objective: 8.160 +[2023-02-26 13:22:07,905][00201] Num frames 1700... +[2023-02-26 13:22:08,024][00201] Num frames 1800... +[2023-02-26 13:22:08,143][00201] Num frames 1900... +[2023-02-26 13:22:08,256][00201] Num frames 2000... +[2023-02-26 13:22:08,374][00201] Num frames 2100... +[2023-02-26 13:22:08,484][00201] Num frames 2200... +[2023-02-26 13:22:08,598][00201] Num frames 2300... +[2023-02-26 13:22:08,712][00201] Num frames 2400... +[2023-02-26 13:22:08,827][00201] Num frames 2500... +[2023-02-26 13:22:08,947][00201] Num frames 2600... +[2023-02-26 13:22:09,057][00201] Num frames 2700... +[2023-02-26 13:22:09,175][00201] Num frames 2800... +[2023-02-26 13:22:09,286][00201] Num frames 2900... +[2023-02-26 13:22:09,407][00201] Num frames 3000... +[2023-02-26 13:22:09,546][00201] Avg episode rewards: #0: 23.907, true rewards: #0: 10.240 +[2023-02-26 13:22:09,547][00201] Avg episode reward: 23.907, avg true_objective: 10.240 +[2023-02-26 13:22:09,581][00201] Num frames 3100... +[2023-02-26 13:22:09,716][00201] Num frames 3200... +[2023-02-26 13:22:09,829][00201] Num frames 3300... +[2023-02-26 13:22:09,951][00201] Num frames 3400... +[2023-02-26 13:22:10,070][00201] Num frames 3500... +[2023-02-26 13:22:10,193][00201] Num frames 3600... +[2023-02-26 13:22:10,301][00201] Avg episode rewards: #0: 21.620, true rewards: #0: 9.120 +[2023-02-26 13:22:10,303][00201] Avg episode reward: 21.620, avg true_objective: 9.120 +[2023-02-26 13:22:10,366][00201] Num frames 3700... +[2023-02-26 13:22:10,480][00201] Num frames 3800... +[2023-02-26 13:22:10,603][00201] Num frames 3900... 
+[2023-02-26 13:22:10,719][00201] Num frames 4000... +[2023-02-26 13:22:10,830][00201] Num frames 4100... +[2023-02-26 13:22:10,951][00201] Num frames 4200... +[2023-02-26 13:22:11,073][00201] Num frames 4300... +[2023-02-26 13:22:11,183][00201] Num frames 4400... +[2023-02-26 13:22:11,299][00201] Num frames 4500... +[2023-02-26 13:22:11,416][00201] Num frames 4600... +[2023-02-26 13:22:11,533][00201] Num frames 4700... +[2023-02-26 13:22:11,644][00201] Num frames 4800... +[2023-02-26 13:22:11,760][00201] Num frames 4900... +[2023-02-26 13:22:11,879][00201] Num frames 5000... +[2023-02-26 13:22:11,997][00201] Num frames 5100... +[2023-02-26 13:22:12,076][00201] Avg episode rewards: #0: 24.440, true rewards: #0: 10.240 +[2023-02-26 13:22:12,078][00201] Avg episode reward: 24.440, avg true_objective: 10.240 +[2023-02-26 13:22:12,165][00201] Num frames 5200... +[2023-02-26 13:22:12,279][00201] Num frames 5300... +[2023-02-26 13:22:12,389][00201] Num frames 5400... +[2023-02-26 13:22:12,502][00201] Num frames 5500... +[2023-02-26 13:22:12,612][00201] Num frames 5600... +[2023-02-26 13:22:12,732][00201] Num frames 5700... +[2023-02-26 13:22:12,842][00201] Num frames 5800... +[2023-02-26 13:22:12,969][00201] Num frames 5900... +[2023-02-26 13:22:13,080][00201] Num frames 6000... +[2023-02-26 13:22:13,205][00201] Num frames 6100... +[2023-02-26 13:22:13,318][00201] Num frames 6200... +[2023-02-26 13:22:13,431][00201] Num frames 6300... +[2023-02-26 13:22:13,560][00201] Avg episode rewards: #0: 24.780, true rewards: #0: 10.613 +[2023-02-26 13:22:13,563][00201] Avg episode reward: 24.780, avg true_objective: 10.613 +[2023-02-26 13:22:13,602][00201] Num frames 6400... +[2023-02-26 13:22:13,723][00201] Num frames 6500... +[2023-02-26 13:22:13,836][00201] Num frames 6600... +[2023-02-26 13:22:13,949][00201] Num frames 6700... +[2023-02-26 13:22:14,065][00201] Num frames 6800... +[2023-02-26 13:22:14,175][00201] Num frames 6900... 
+[2023-02-26 13:22:14,293][00201] Num frames 7000... +[2023-02-26 13:22:14,417][00201] Num frames 7100... +[2023-02-26 13:22:14,535][00201] Num frames 7200... +[2023-02-26 13:22:14,651][00201] Num frames 7300... +[2023-02-26 13:22:14,781][00201] Num frames 7400... +[2023-02-26 13:22:14,902][00201] Num frames 7500... +[2023-02-26 13:22:15,020][00201] Num frames 7600... +[2023-02-26 13:22:15,132][00201] Num frames 7700... +[2023-02-26 13:22:15,252][00201] Num frames 7800... +[2023-02-26 13:22:15,369][00201] Num frames 7900... +[2023-02-26 13:22:15,501][00201] Num frames 8000... +[2023-02-26 13:22:15,622][00201] Num frames 8100... +[2023-02-26 13:22:15,746][00201] Num frames 8200... +[2023-02-26 13:22:15,864][00201] Num frames 8300... +[2023-02-26 13:22:15,982][00201] Num frames 8400... +[2023-02-26 13:22:16,119][00201] Avg episode rewards: #0: 28.954, true rewards: #0: 12.097 +[2023-02-26 13:22:16,121][00201] Avg episode reward: 28.954, avg true_objective: 12.097 +[2023-02-26 13:22:16,160][00201] Num frames 8500... +[2023-02-26 13:22:16,333][00201] Num frames 8600... +[2023-02-26 13:22:16,493][00201] Num frames 8700... +[2023-02-26 13:22:16,657][00201] Num frames 8800... +[2023-02-26 13:22:16,814][00201] Num frames 8900... +[2023-02-26 13:22:16,876][00201] Avg episode rewards: #0: 26.251, true rewards: #0: 11.126 +[2023-02-26 13:22:16,881][00201] Avg episode reward: 26.251, avg true_objective: 11.126 +[2023-02-26 13:22:17,062][00201] Num frames 9000... +[2023-02-26 13:22:17,219][00201] Num frames 9100... +[2023-02-26 13:22:17,373][00201] Num frames 9200... +[2023-02-26 13:22:17,522][00201] Num frames 9300... +[2023-02-26 13:22:17,679][00201] Num frames 9400... +[2023-02-26 13:22:17,834][00201] Num frames 9500... +[2023-02-26 13:22:17,989][00201] Num frames 9600... +[2023-02-26 13:22:18,149][00201] Num frames 9700... 
+[2023-02-26 13:22:18,361][00201] Avg episode rewards: #0: 25.774, true rewards: #0: 10.886 +[2023-02-26 13:22:18,363][00201] Avg episode reward: 25.774, avg true_objective: 10.886 +[2023-02-26 13:22:18,369][00201] Num frames 9800... +[2023-02-26 13:22:18,533][00201] Num frames 9900... +[2023-02-26 13:22:18,701][00201] Num frames 10000... +[2023-02-26 13:22:18,858][00201] Num frames 10100... +[2023-02-26 13:22:19,016][00201] Num frames 10200... +[2023-02-26 13:22:19,174][00201] Num frames 10300... +[2023-02-26 13:22:19,352][00201] Num frames 10400... +[2023-02-26 13:22:19,511][00201] Num frames 10500... +[2023-02-26 13:22:19,665][00201] Avg episode rewards: #0: 25.060, true rewards: #0: 10.560 +[2023-02-26 13:22:19,667][00201] Avg episode reward: 25.060, avg true_objective: 10.560 +[2023-02-26 13:23:22,857][00201] Replay video saved to /content/train_dir/default_experiment/replay.mp4!