[2023-02-24 13:37:48,414][00980] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-24 13:37:48,419][00980] Rollout worker 0 uses device cpu
[2023-02-24 13:37:48,421][00980] Rollout worker 1 uses device cpu
[2023-02-24 13:37:48,424][00980] Rollout worker 2 uses device cpu
[2023-02-24 13:37:48,425][00980] Rollout worker 3 uses device cpu
[2023-02-24 13:37:48,426][00980] Rollout worker 4 uses device cpu
[2023-02-24 13:37:48,427][00980] Rollout worker 5 uses device cpu
[2023-02-24 13:37:48,429][00980] Rollout worker 6 uses device cpu
[2023-02-24 13:37:48,430][00980] Rollout worker 7 uses device cpu
[2023-02-24 13:37:48,612][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 13:37:48,614][00980] InferenceWorker_p0-w0: min num requests: 2
[2023-02-24 13:37:48,648][00980] Starting all processes...
[2023-02-24 13:37:48,650][00980] Starting process learner_proc0
[2023-02-24 13:37:48,704][00980] Starting all processes...
[2023-02-24 13:37:48,713][00980] Starting process inference_proc0-0
[2023-02-24 13:37:48,713][00980] Starting process rollout_proc0
[2023-02-24 13:37:48,715][00980] Starting process rollout_proc1
[2023-02-24 13:37:48,715][00980] Starting process rollout_proc2
[2023-02-24 13:37:48,715][00980] Starting process rollout_proc3
[2023-02-24 13:37:48,715][00980] Starting process rollout_proc4
[2023-02-24 13:37:48,716][00980] Starting process rollout_proc5
[2023-02-24 13:37:48,716][00980] Starting process rollout_proc6
[2023-02-24 13:37:48,716][00980] Starting process rollout_proc7
[2023-02-24 13:38:00,073][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 13:38:00,074][11152] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-24 13:38:00,181][11169] Worker 1 uses CPU cores [1]
[2023-02-24 13:38:00,225][11171] Worker 5 uses CPU cores [1]
[2023-02-24 13:38:00,643][11170] Worker 4 uses CPU cores [0]
[2023-02-24 13:38:00,644][11166] Worker 0 uses CPU cores [0]
[2023-02-24 13:38:00,645][11168] Worker 2 uses CPU cores [0]
[2023-02-24 13:38:00,656][11174] Worker 7 uses CPU cores [1]
[2023-02-24 13:38:00,681][11173] Worker 3 uses CPU cores [1]
[2023-02-24 13:38:00,723][11172] Worker 6 uses CPU cores [0]
[2023-02-24 13:38:00,808][11167] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 13:38:00,809][11167] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-24 13:38:01,040][11152] Num visible devices: 1
[2023-02-24 13:38:01,040][11167] Num visible devices: 1
[2023-02-24 13:38:01,054][11152] Starting seed is not provided
[2023-02-24 13:38:01,054][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 13:38:01,055][11152] Initializing actor-critic model on device cuda:0
[2023-02-24 13:38:01,055][11152] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 13:38:01,057][11152] RunningMeanStd input shape: (1,)
[2023-02-24 13:38:01,069][11152] ConvEncoder: input_channels=3
[2023-02-24 13:38:01,348][11152] Conv encoder output size: 512
[2023-02-24 13:38:01,348][11152] Policy head output size: 512
[2023-02-24 13:38:01,398][11152] Created Actor Critic model with architecture:
[2023-02-24 13:38:01,398][11152] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-02-24 13:38:08,318][11152] Using optimizer
[2023-02-24 13:38:08,319][11152] No checkpoints found
[2023-02-24 13:38:08,319][11152] Did not load from checkpoint, starting from scratch!
[2023-02-24 13:38:08,319][11152] Initialized policy 0 weights for model version 0
[2023-02-24 13:38:08,324][11152] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 13:38:08,331][11152] LearnerWorker_p0 finished initialization!
[2023-02-24 13:38:08,532][11167] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 13:38:08,533][11167] RunningMeanStd input shape: (1,)
[2023-02-24 13:38:08,545][11167] ConvEncoder: input_channels=3
[2023-02-24 13:38:08,605][00980] Heartbeat connected on Batcher_0
[2023-02-24 13:38:08,612][00980] Heartbeat connected on LearnerWorker_p0
[2023-02-24 13:38:08,624][00980] Heartbeat connected on RolloutWorker_w0
[2023-02-24 13:38:08,627][00980] Heartbeat connected on RolloutWorker_w1
[2023-02-24 13:38:08,632][00980] Heartbeat connected on RolloutWorker_w2
[2023-02-24 13:38:08,635][00980] Heartbeat connected on RolloutWorker_w3
[2023-02-24 13:38:08,638][00980] Heartbeat connected on RolloutWorker_w4
[2023-02-24 13:38:08,641][00980] Heartbeat connected on RolloutWorker_w5
[2023-02-24 13:38:08,645][00980] Heartbeat connected on RolloutWorker_w6
[2023-02-24 13:38:08,648][00980] Heartbeat connected on RolloutWorker_w7
[2023-02-24 13:38:08,675][11167] Conv encoder output size: 512
[2023-02-24 13:38:08,675][11167] Policy head output size: 512
[2023-02-24 13:38:11,675][00980] Inference worker 0-0 is ready!
[2023-02-24 13:38:11,680][00980] All inference workers are ready! Signal rollout workers to start!
[2023-02-24 13:38:11,682][00980] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-24 13:38:11,815][11170] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,823][11166] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,836][11172] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,861][11168] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,939][11171] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,970][11169] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,949][11174] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:11,997][11173] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 13:38:12,941][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 13:38:13,336][11171] Decorrelating experience for 0 frames...
[2023-02-24 13:38:13,337][11173] Decorrelating experience for 0 frames...
[2023-02-24 13:38:13,337][11172] Decorrelating experience for 0 frames...
[2023-02-24 13:38:13,336][11170] Decorrelating experience for 0 frames...
[2023-02-24 13:38:13,334][11166] Decorrelating experience for 0 frames...
[2023-02-24 13:38:14,535][11173] Decorrelating experience for 32 frames...
[2023-02-24 13:38:14,540][11169] Decorrelating experience for 0 frames...
[2023-02-24 13:38:14,548][11168] Decorrelating experience for 0 frames...
[2023-02-24 13:38:14,554][11171] Decorrelating experience for 32 frames...
[2023-02-24 13:38:14,564][11170] Decorrelating experience for 32 frames...
[2023-02-24 13:38:15,085][11169] Decorrelating experience for 32 frames...
[2023-02-24 13:38:15,712][11166] Decorrelating experience for 32 frames...
[2023-02-24 13:38:15,719][11168] Decorrelating experience for 32 frames...
[2023-02-24 13:38:15,721][11172] Decorrelating experience for 32 frames...
[2023-02-24 13:38:15,962][11170] Decorrelating experience for 64 frames...
[2023-02-24 13:38:16,447][11173] Decorrelating experience for 64 frames...
[2023-02-24 13:38:16,505][11174] Decorrelating experience for 0 frames...
[2023-02-24 13:38:17,213][11168] Decorrelating experience for 64 frames...
[2023-02-24 13:38:17,219][11172] Decorrelating experience for 64 frames...
[2023-02-24 13:38:17,357][11170] Decorrelating experience for 96 frames...
[2023-02-24 13:38:17,559][11171] Decorrelating experience for 64 frames...
[2023-02-24 13:38:17,646][11174] Decorrelating experience for 32 frames...
[2023-02-24 13:38:17,740][11173] Decorrelating experience for 96 frames...
[2023-02-24 13:38:17,909][11166] Decorrelating experience for 64 frames...
[2023-02-24 13:38:17,940][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 13:38:18,591][11172] Decorrelating experience for 96 frames...
[2023-02-24 13:38:18,779][11168] Decorrelating experience for 96 frames...
[2023-02-24 13:38:19,018][11166] Decorrelating experience for 96 frames...
[2023-02-24 13:38:19,203][11171] Decorrelating experience for 96 frames...
[2023-02-24 13:38:19,352][11174] Decorrelating experience for 64 frames...
[2023-02-24 13:38:19,715][11169] Decorrelating experience for 64 frames...
[2023-02-24 13:38:20,019][11174] Decorrelating experience for 96 frames...
[2023-02-24 13:38:20,258][11169] Decorrelating experience for 96 frames...
[2023-02-24 13:38:22,940][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 4.4. Samples: 44. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 13:38:22,942][00980] Avg episode reward: [(0, '1.652')]
[2023-02-24 13:38:24,162][11152] Signal inference workers to stop experience collection...
[2023-02-24 13:38:24,169][11167] InferenceWorker_p0-w0: stopping experience collection
[2023-02-24 13:38:26,901][11152] Signal inference workers to resume experience collection...
[2023-02-24 13:38:26,901][11167] InferenceWorker_p0-w0: resuming experience collection
[2023-02-24 13:38:27,940][00980] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 173.4. Samples: 2600. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-02-24 13:38:27,943][00980] Avg episode reward: [(0, '2.516')]
[2023-02-24 13:38:32,941][00980] Fps is (10 sec: 2047.8, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 293.0. Samples: 5860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:38:32,943][00980] Avg episode reward: [(0, '3.416')]
[2023-02-24 13:38:36,828][11167] Updated weights for policy 0, policy_version 10 (0.0013)
[2023-02-24 13:38:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 1802.4, 300 sec: 1802.4). Total num frames: 45056. Throughput: 0: 363.7. Samples: 9092. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2023-02-24 13:38:37,941][00980] Avg episode reward: [(0, '4.245')]
[2023-02-24 13:38:42,940][00980] Fps is (10 sec: 4506.1, 60 sec: 2184.6, 300 sec: 2184.6). Total num frames: 65536. Throughput: 0: 537.4. Samples: 16120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:38:42,943][00980] Avg episode reward: [(0, '4.308')]
[2023-02-24 13:38:47,434][11167] Updated weights for policy 0, policy_version 20 (0.0013)
[2023-02-24 13:38:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 2340.7, 300 sec: 2340.7). Total num frames: 81920. Throughput: 0: 606.9. Samples: 21242. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:38:47,943][00980] Avg episode reward: [(0, '4.248')]
[2023-02-24 13:38:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 2355.3, 300 sec: 2355.3). Total num frames: 94208. Throughput: 0: 585.2. Samples: 23408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:38:52,948][00980] Avg episode reward: [(0, '4.218')]
[2023-02-24 13:38:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 2639.7, 300 sec: 2639.7). Total num frames: 118784. Throughput: 0: 650.8. Samples: 29286. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:38:57,942][00980] Avg episode reward: [(0, '4.411')]
[2023-02-24 13:38:57,945][11152] Saving new best policy, reward=4.411!
[2023-02-24 13:38:58,493][11167] Updated weights for policy 0, policy_version 30 (0.0020)
[2023-02-24 13:39:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 2867.3, 300 sec: 2867.3). Total num frames: 143360. Throughput: 0: 807.6. Samples: 36342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:39:02,942][00980] Avg episode reward: [(0, '4.305')]
[2023-02-24 13:39:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 2904.5, 300 sec: 2904.5). Total num frames: 159744. Throughput: 0: 864.7. Samples: 38956. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 13:39:07,947][00980] Avg episode reward: [(0, '4.309')]
[2023-02-24 13:39:09,169][11167] Updated weights for policy 0, policy_version 40 (0.0027)
[2023-02-24 13:39:12,940][00980] Fps is (10 sec: 2867.0, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 172032. Throughput: 0: 906.8. Samples: 43406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:39:12,947][00980] Avg episode reward: [(0, '4.432')]
[2023-02-24 13:39:12,961][11152] Saving new best policy, reward=4.432!
[2023-02-24 13:39:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3024.8). Total num frames: 196608. Throughput: 0: 973.5. Samples: 49666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:39:17,947][00980] Avg episode reward: [(0, '4.399')]
[2023-02-24 13:39:19,472][11167] Updated weights for policy 0, policy_version 50 (0.0025)
[2023-02-24 13:39:22,940][00980] Fps is (10 sec: 4915.6, 60 sec: 3686.4, 300 sec: 3159.8). Total num frames: 221184. Throughput: 0: 979.9. Samples: 53186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:39:22,942][00980] Avg episode reward: [(0, '4.317')]
[2023-02-24 13:39:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3167.6). Total num frames: 237568. Throughput: 0: 951.6. Samples: 58944. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:39:27,944][00980] Avg episode reward: [(0, '4.377')]
[2023-02-24 13:39:30,688][11167] Updated weights for policy 0, policy_version 60 (0.0021)
[2023-02-24 13:39:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3823.0, 300 sec: 3123.3). Total num frames: 249856. Throughput: 0: 938.6. Samples: 63480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:39:32,942][00980] Avg episode reward: [(0, '4.564')]
[2023-02-24 13:39:32,961][11152] Saving new best policy, reward=4.564!
[2023-02-24 13:39:37,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3228.7). Total num frames: 274432. Throughput: 0: 960.5. Samples: 66632. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:39:37,947][00980] Avg episode reward: [(0, '4.579')]
[2023-02-24 13:39:37,952][11152] Saving new best policy, reward=4.579!
[2023-02-24 13:39:40,521][11167] Updated weights for policy 0, policy_version 70 (0.0024)
[2023-02-24 13:39:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3276.9). Total num frames: 294912. Throughput: 0: 985.4. Samples: 73630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:39:42,943][00980] Avg episode reward: [(0, '4.581')]
[2023-02-24 13:39:42,956][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth...
[2023-02-24 13:39:43,156][11152] Saving new best policy, reward=4.581!
[2023-02-24 13:39:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3276.9). Total num frames: 311296. Throughput: 0: 941.9. Samples: 78726. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:39:47,945][00980] Avg episode reward: [(0, '4.659')]
[2023-02-24 13:39:47,949][11152] Saving new best policy, reward=4.659!
[2023-02-24 13:39:52,851][11167] Updated weights for policy 0, policy_version 80 (0.0015)
[2023-02-24 13:39:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3276.9). Total num frames: 327680. Throughput: 0: 929.0. Samples: 80760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:39:52,942][00980] Avg episode reward: [(0, '4.729')]
[2023-02-24 13:39:52,955][11152] Saving new best policy, reward=4.729!
[2023-02-24 13:39:57,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3315.9). Total num frames: 348160. Throughput: 0: 960.9. Samples: 86646. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:39:57,943][00980] Avg episode reward: [(0, '4.488')]
[2023-02-24 13:40:01,727][11167] Updated weights for policy 0, policy_version 90 (0.0021)
[2023-02-24 13:40:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3388.6). Total num frames: 372736. Throughput: 0: 983.8. Samples: 93938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:40:02,943][00980] Avg episode reward: [(0, '4.355')]
[2023-02-24 13:40:07,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3383.7). Total num frames: 389120. Throughput: 0: 964.3. Samples: 96578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:40:07,945][00980] Avg episode reward: [(0, '4.497')]
[2023-02-24 13:40:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3379.2). Total num frames: 405504. Throughput: 0: 936.4. Samples: 101084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:40:12,943][00980] Avg episode reward: [(0, '4.596')]
[2023-02-24 13:40:13,930][11167] Updated weights for policy 0, policy_version 100 (0.0047)
[2023-02-24 13:40:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3407.9). Total num frames: 425984. Throughput: 0: 977.8. Samples: 107482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:40:17,942][00980] Avg episode reward: [(0, '4.536')]
[2023-02-24 13:40:22,536][11167] Updated weights for policy 0, policy_version 110 (0.0022)
[2023-02-24 13:40:22,940][00980] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3465.9). Total num frames: 450560. Throughput: 0: 987.2. Samples: 111054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:40:22,943][00980] Avg episode reward: [(0, '4.550')]
[2023-02-24 13:40:27,942][00980] Fps is (10 sec: 4094.9, 60 sec: 3822.8, 300 sec: 3458.8). Total num frames: 466944. Throughput: 0: 956.5. Samples: 116674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:40:27,945][00980] Avg episode reward: [(0, '4.564')]
[2023-02-24 13:40:32,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3822.9, 300 sec: 3423.1). Total num frames: 479232. Throughput: 0: 943.0. Samples: 121160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:40:32,948][00980] Avg episode reward: [(0, '4.622')]
[2023-02-24 13:40:34,819][11167] Updated weights for policy 0, policy_version 120 (0.0035)
[2023-02-24 13:40:37,940][00980] Fps is (10 sec: 3687.3, 60 sec: 3822.9, 300 sec: 3474.6). Total num frames: 503808. Throughput: 0: 970.1. Samples: 124414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:40:37,952][00980] Avg episode reward: [(0, '4.742')]
[2023-02-24 13:40:37,960][11152] Saving new best policy, reward=4.742!
[2023-02-24 13:40:42,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3522.6). Total num frames: 528384. Throughput: 0: 993.2. Samples: 131340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:40:42,943][00980] Avg episode reward: [(0, '4.836')]
[2023-02-24 13:40:42,954][11152] Saving new best policy, reward=4.836!
[2023-02-24 13:40:43,915][11167] Updated weights for policy 0, policy_version 130 (0.0016)
[2023-02-24 13:40:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3488.2). Total num frames: 540672. Throughput: 0: 945.4. Samples: 136480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:40:47,942][00980] Avg episode reward: [(0, '4.832')]
[2023-02-24 13:40:52,941][00980] Fps is (10 sec: 2867.0, 60 sec: 3822.9, 300 sec: 3481.6). Total num frames: 557056. Throughput: 0: 934.8. Samples: 138646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:40:52,947][00980] Avg episode reward: [(0, '4.762')]
[2023-02-24 13:40:56,054][11167] Updated weights for policy 0, policy_version 140 (0.0025)
[2023-02-24 13:40:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3525.1). Total num frames: 581632. Throughput: 0: 970.1. Samples: 144738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:40:57,946][00980] Avg episode reward: [(0, '4.819')]
[2023-02-24 13:41:02,942][00980] Fps is (10 sec: 4914.4, 60 sec: 3891.0, 300 sec: 3565.9). Total num frames: 606208. Throughput: 0: 987.2. Samples: 151908. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:41:02,947][00980] Avg episode reward: [(0, '4.724')]
[2023-02-24 13:41:05,336][11167] Updated weights for policy 0, policy_version 150 (0.0016)
[2023-02-24 13:41:07,943][00980] Fps is (10 sec: 3685.0, 60 sec: 3822.7, 300 sec: 3534.2). Total num frames: 618496. Throughput: 0: 961.7. Samples: 154332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:41:07,949][00980] Avg episode reward: [(0, '4.639')]
[2023-02-24 13:41:12,943][00980] Fps is (10 sec: 2866.9, 60 sec: 3822.7, 300 sec: 3527.1). Total num frames: 634880. Throughput: 0: 935.8. Samples: 158786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:41:12,946][00980] Avg episode reward: [(0, '4.888')]
[2023-02-24 13:41:12,966][11152] Saving new best policy, reward=4.888!
[2023-02-24 13:41:17,053][11167] Updated weights for policy 0, policy_version 160 (0.0031)
[2023-02-24 13:41:17,940][00980] Fps is (10 sec: 4097.5, 60 sec: 3891.2, 300 sec: 3564.7). Total num frames: 659456. Throughput: 0: 979.3. Samples: 165230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:41:17,945][00980] Avg episode reward: [(0, '4.879')]
[2023-02-24 13:41:22,940][00980] Fps is (10 sec: 4507.2, 60 sec: 3822.9, 300 sec: 3578.6). Total num frames: 679936. Throughput: 0: 984.0. Samples: 168692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:41:22,942][00980] Avg episode reward: [(0, '4.645')]
[2023-02-24 13:41:27,308][11167] Updated weights for policy 0, policy_version 170 (0.0025)
[2023-02-24 13:41:27,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3822.8, 300 sec: 3570.8). Total num frames: 696320. Throughput: 0: 952.9. Samples: 174226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:41:27,948][00980] Avg episode reward: [(0, '4.523')]
[2023-02-24 13:41:32,941][00980] Fps is (10 sec: 3276.3, 60 sec: 3891.1, 300 sec: 3563.5). Total num frames: 712704. Throughput: 0: 939.6. Samples: 178764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:41:32,949][00980] Avg episode reward: [(0, '4.561')]
[2023-02-24 13:41:37,940][00980] Fps is (10 sec: 3688.3, 60 sec: 3822.9, 300 sec: 3576.5). Total num frames: 733184. Throughput: 0: 967.9. Samples: 182202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:41:37,942][00980] Avg episode reward: [(0, '4.777')]
[2023-02-24 13:41:37,966][11167] Updated weights for policy 0, policy_version 180 (0.0012)
[2023-02-24 13:41:42,940][00980] Fps is (10 sec: 4506.3, 60 sec: 3822.9, 300 sec: 3608.4). Total num frames: 757760. Throughput: 0: 991.1. Samples: 189336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:41:42,946][00980] Avg episode reward: [(0, '4.800')]
[2023-02-24 13:41:42,957][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth...
[2023-02-24 13:41:47,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3600.7). Total num frames: 774144. Throughput: 0: 944.4. Samples: 194404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:41:47,943][00980] Avg episode reward: [(0, '4.725')]
[2023-02-24 13:41:48,509][11167] Updated weights for policy 0, policy_version 190 (0.0022)
[2023-02-24 13:41:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3593.3). Total num frames: 790528. Throughput: 0: 941.7. Samples: 196704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:41:52,945][00980] Avg episode reward: [(0, '4.819')]
[2023-02-24 13:41:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3622.7). Total num frames: 815104. Throughput: 0: 978.3. Samples: 202806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:41:57,945][00980] Avg episode reward: [(0, '5.099')]
[2023-02-24 13:41:57,949][11152] Saving new best policy, reward=5.099!
[2023-02-24 13:41:58,857][11167] Updated weights for policy 0, policy_version 200 (0.0017)
[2023-02-24 13:42:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3823.1, 300 sec: 3633.0). Total num frames: 835584. Throughput: 0: 993.7. Samples: 209946. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:42:02,946][00980] Avg episode reward: [(0, '5.284')]
[2023-02-24 13:42:02,959][11152] Saving new best policy, reward=5.284!
[2023-02-24 13:42:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3625.4). Total num frames: 851968. Throughput: 0: 972.4. Samples: 212452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:42:07,947][00980] Avg episode reward: [(0, '5.709')]
[2023-02-24 13:42:07,949][11152] Saving new best policy, reward=5.709!
[2023-02-24 13:42:10,052][11167] Updated weights for policy 0, policy_version 210 (0.0013)
[2023-02-24 13:42:12,941][00980] Fps is (10 sec: 3276.4, 60 sec: 3891.3, 300 sec: 3618.1). Total num frames: 868352. Throughput: 0: 948.8. Samples: 216918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:42:12,944][00980] Avg episode reward: [(0, '5.637')]
[2023-02-24 13:42:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3627.9). Total num frames: 888832. Throughput: 0: 995.5. Samples: 223560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:42:17,953][00980] Avg episode reward: [(0, '5.676')]
[2023-02-24 13:42:19,687][11167] Updated weights for policy 0, policy_version 220 (0.0018)
[2023-02-24 13:42:22,940][00980] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3653.7). Total num frames: 913408. Throughput: 0: 998.4. Samples: 227128. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:42:22,944][00980] Avg episode reward: [(0, '5.737')]
[2023-02-24 13:42:23,034][11152] Saving new best policy, reward=5.737!
[2023-02-24 13:42:27,942][00980] Fps is (10 sec: 4095.1, 60 sec: 3891.4, 300 sec: 3646.2). Total num frames: 929792. Throughput: 0: 966.9. Samples: 232850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:42:27,952][00980] Avg episode reward: [(0, '5.743')]
[2023-02-24 13:42:27,960][11152] Saving new best policy, reward=5.743!
[2023-02-24 13:42:31,215][11167] Updated weights for policy 0, policy_version 230 (0.0014)
[2023-02-24 13:42:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3639.2). Total num frames: 946176. Throughput: 0: 952.4. Samples: 237264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:42:32,944][00980] Avg episode reward: [(0, '5.787')]
[2023-02-24 13:42:32,954][11152] Saving new best policy, reward=5.787!
[2023-02-24 13:42:37,940][00980] Fps is (10 sec: 4096.9, 60 sec: 3959.5, 300 sec: 3663.2). Total num frames: 970752. Throughput: 0: 978.3. Samples: 240728. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:42:37,942][00980] Avg episode reward: [(0, '6.137')]
[2023-02-24 13:42:37,948][11152] Saving new best policy, reward=6.137!
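For reference, the ActorCriticSharedWeights model printed at 13:38:01 above is a small recurrent convolutional actor-critic. The following is a minimal, illustrative PyTorch sketch of the same shapes, not Sample Factory's own implementation: the conv filter sizes (32x8s4, 64x4s2, 128x3s2) are an assumption based on Sample Factory's default convnet_simple encoder, while the (3, 72, 128) input, 512-dim encoder output, GRU(512, 512) core, 1-dim value head, and 5 action logits are read directly from the log. The observation/returns normalizers are omitted.

import torch
import torch.nn as nn

class TinyActorCritic(nn.Module):
    """Illustrative stand-in for the ActorCriticSharedWeights printout above."""

    def __init__(self, num_actions: int = 5):
        super().__init__()
        # Assumed filter sizes (Sample Factory default 'convnet_simple').
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # For a (3, 72, 128) observation the conv head yields (128, 3, 6) -> 2304
        # features, which the MLP projects to the 512-dim encoder output in the log.
        self.mlp_layers = nn.Sequential(nn.Linear(2304, 512), nn.ELU())
        self.core = nn.GRU(512, 512)  # ModelCoreRNN in the printout
        self.critic_linear = nn.Linear(512, 1)
        self.distribution_linear = nn.Linear(512, num_actions)

    def forward(self, obs, rnn_state=None):
        # obs: (batch, 3, 72, 128); treat the batch as a 1-step sequence for the GRU.
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = TinyActorCritic()
logits, value, state = model(torch.zeros(4, 3, 72, 128))
print(logits.shape, value.shape)  # torch.Size([4, 5]) torch.Size([4, 1])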
[2023-02-24 13:42:40,345][11167] Updated weights for policy 0, policy_version 240 (0.0017)
[2023-02-24 13:42:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3671.3). Total num frames: 991232. Throughput: 0: 999.2. Samples: 247770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:42:42,945][00980] Avg episode reward: [(0, '6.197')]
[2023-02-24 13:42:42,962][11152] Saving new best policy, reward=6.197!
[2023-02-24 13:42:47,942][00980] Fps is (10 sec: 3685.6, 60 sec: 3891.1, 300 sec: 3664.1). Total num frames: 1007616. Throughput: 0: 953.6. Samples: 252862. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:42:47,946][00980] Avg episode reward: [(0, '6.539')]
[2023-02-24 13:42:47,948][11152] Saving new best policy, reward=6.539!
[2023-02-24 13:42:52,614][11167] Updated weights for policy 0, policy_version 250 (0.0030)
[2023-02-24 13:42:52,942][00980] Fps is (10 sec: 3276.2, 60 sec: 3891.1, 300 sec: 3657.1). Total num frames: 1024000. Throughput: 0: 945.6. Samples: 255006. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:42:52,946][00980] Avg episode reward: [(0, '6.493')]
[2023-02-24 13:42:57,940][00980] Fps is (10 sec: 3687.2, 60 sec: 3822.9, 300 sec: 3664.9). Total num frames: 1044480. Throughput: 0: 984.6. Samples: 261222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:42:57,942][00980] Avg episode reward: [(0, '6.429')]
[2023-02-24 13:43:01,309][11167] Updated weights for policy 0, policy_version 260 (0.0016)
[2023-02-24 13:43:02,940][00980] Fps is (10 sec: 4506.4, 60 sec: 3891.2, 300 sec: 3686.4). Total num frames: 1069056. Throughput: 0: 998.5. Samples: 268492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:43:02,943][00980] Avg episode reward: [(0, '6.526')]
[2023-02-24 13:43:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3679.5). Total num frames: 1085440. Throughput: 0: 974.4. Samples: 270974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:43:07,948][00980] Avg episode reward: [(0, '6.938')]
[2023-02-24 13:43:08,030][11152] Saving new best policy, reward=6.938!
[2023-02-24 13:43:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3735.0). Total num frames: 1101824. Throughput: 0: 947.2. Samples: 275470. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-02-24 13:43:12,948][00980] Avg episode reward: [(0, '7.288')]
[2023-02-24 13:43:12,964][11152] Saving new best policy, reward=7.288!
[2023-02-24 13:43:13,547][11167] Updated weights for policy 0, policy_version 270 (0.0019)
[2023-02-24 13:43:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 1126400. Throughput: 0: 991.3. Samples: 281872. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:43:17,945][00980] Avg episode reward: [(0, '6.738')]
[2023-02-24 13:43:22,174][11167] Updated weights for policy 0, policy_version 280 (0.0025)
[2023-02-24 13:43:22,940][00980] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1146880. Throughput: 0: 992.3. Samples: 285380. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-02-24 13:43:22,943][00980] Avg episode reward: [(0, '6.764')]
[2023-02-24 13:43:27,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3891.3, 300 sec: 3873.9). Total num frames: 1163264. Throughput: 0: 963.4. Samples: 291122. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:43:27,950][00980] Avg episode reward: [(0, '7.105')]
[2023-02-24 13:43:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1179648. Throughput: 0: 950.9. Samples: 295652. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:43:32,946][00980] Avg episode reward: [(0, '7.147')]
[2023-02-24 13:43:34,461][11167] Updated weights for policy 0, policy_version 290 (0.0019)
[2023-02-24 13:43:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1204224. Throughput: 0: 978.1. Samples: 299018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:43:37,945][00980] Avg episode reward: [(0, '7.403')]
[2023-02-24 13:43:37,951][11152] Saving new best policy, reward=7.403!
[2023-02-24 13:43:42,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1224704. Throughput: 0: 996.7. Samples: 306074. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:43:42,944][00980] Avg episode reward: [(0, '8.056')]
[2023-02-24 13:43:42,953][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth...
[2023-02-24 13:43:43,101][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000072_294912.pth
[2023-02-24 13:43:43,110][11152] Saving new best policy, reward=8.056!
[2023-02-24 13:43:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3887.7). Total num frames: 1241088. Throughput: 0: 947.8. Samples: 311142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:43:47,945][00980] Avg episode reward: [(0, '8.824')]
[2023-02-24 13:43:47,951][11152] Saving new best policy, reward=8.824!
[2023-02-24 13:43:52,940][00980] Fps is (10 sec: 3276.6, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 1257472. Throughput: 0: 939.8. Samples: 313264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:43:52,943][00980] Avg episode reward: [(0, '9.508')]
[2023-02-24 13:43:52,971][11152] Saving new best policy, reward=9.508!
[2023-02-24 13:43:55,690][11167] Updated weights for policy 0, policy_version 310 (0.0024)
[2023-02-24 13:43:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1277952. Throughput: 0: 970.8. Samples: 319158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:43:57,945][00980] Avg episode reward: [(0, '9.624')]
[2023-02-24 13:43:57,947][11152] Saving new best policy, reward=9.624!
[2023-02-24 13:44:02,940][00980] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1302528. Throughput: 0: 985.2. Samples: 326206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:44:02,945][00980] Avg episode reward: [(0, '8.869')]
[2023-02-24 13:44:04,905][11167] Updated weights for policy 0, policy_version 320 (0.0023)
[2023-02-24 13:44:07,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1318912. Throughput: 0: 964.5. Samples: 328784. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:44:07,945][00980] Avg episode reward: [(0, '8.736')]
[2023-02-24 13:44:12,941][00980] Fps is (10 sec: 3276.5, 60 sec: 3891.1, 300 sec: 3859.9). Total num frames: 1335296. Throughput: 0: 937.4. Samples: 333304. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:44:12,946][00980] Avg episode reward: [(0, '9.513')]
[2023-02-24 13:44:16,504][11167] Updated weights for policy 0, policy_version 330 (0.0021)
[2023-02-24 13:44:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1355776. Throughput: 0: 982.5. Samples: 339864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:44:17,948][00980] Avg episode reward: [(0, '9.925')]
[2023-02-24 13:44:17,953][11152] Saving new best policy, reward=9.925!
[2023-02-24 13:44:22,940][00980] Fps is (10 sec: 4506.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1380352. Throughput: 0: 985.2. Samples: 343354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:44:22,943][00980] Avg episode reward: [(0, '10.988')]
[2023-02-24 13:44:22,954][11152] Saving new best policy, reward=10.988!
[2023-02-24 13:44:26,333][11167] Updated weights for policy 0, policy_version 340 (0.0014)
[2023-02-24 13:44:27,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1396736. Throughput: 0: 953.3. Samples: 348974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:44:27,945][00980] Avg episode reward: [(0, '10.246')]
[2023-02-24 13:44:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1409024. Throughput: 0: 943.7. Samples: 353608. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:44:32,949][00980] Avg episode reward: [(0, '10.671')]
[2023-02-24 13:44:37,313][11167] Updated weights for policy 0, policy_version 350 (0.0030)
[2023-02-24 13:44:37,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 1433600. Throughput: 0: 970.6. Samples: 356940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:44:37,942][00980] Avg episode reward: [(0, '10.539')]
[2023-02-24 13:44:42,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1458176. Throughput: 0: 998.4. Samples: 364086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:44:42,943][00980] Avg episode reward: [(0, '10.770')]
[2023-02-24 13:44:47,461][11167] Updated weights for policy 0, policy_version 360 (0.0028)
[2023-02-24 13:44:47,941][00980] Fps is (10 sec: 4095.3, 60 sec: 3891.1, 300 sec: 3887.7). Total num frames: 1474560. Throughput: 0: 959.2. Samples: 369370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:44:47,951][00980] Avg episode reward: [(0, '11.574')]
[2023-02-24 13:44:47,959][11152] Saving new best policy, reward=11.574!
[2023-02-24 13:44:52,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3823.0, 300 sec: 3860.0). Total num frames: 1486848. Throughput: 0: 949.8. Samples: 371524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:44:52,945][00980] Avg episode reward: [(0, '12.029')]
[2023-02-24 13:44:52,963][11152] Saving new best policy, reward=12.029!
[2023-02-24 13:44:57,940][00980] Fps is (10 sec: 3687.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1511424. Throughput: 0: 982.5. Samples: 377514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:44:57,946][00980] Avg episode reward: [(0, '13.549')]
[2023-02-24 13:44:57,950][11152] Saving new best policy, reward=13.549!
[2023-02-24 13:44:58,343][11167] Updated weights for policy 0, policy_version 370 (0.0021)
[2023-02-24 13:45:02,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1536000. Throughput: 0: 996.8. Samples: 384718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:02,942][00980] Avg episode reward: [(0, '13.641')]
[2023-02-24 13:45:02,955][11152] Saving new best policy, reward=13.641!
[2023-02-24 13:45:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1552384. Throughput: 0: 976.7. Samples: 387304. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-02-24 13:45:07,943][00980] Avg episode reward: [(0, '13.450')]
[2023-02-24 13:45:08,858][11167] Updated weights for policy 0, policy_version 380 (0.0019)
[2023-02-24 13:45:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 1568768. Throughput: 0: 953.3. Samples: 391874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:12,945][00980] Avg episode reward: [(0, '12.302')]
[2023-02-24 13:45:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1589248. Throughput: 0: 993.2. Samples: 398302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:17,942][00980] Avg episode reward: [(0, '11.242')]
[2023-02-24 13:45:18,981][11167] Updated weights for policy 0, policy_version 390 (0.0014)
[2023-02-24 13:45:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 1613824. Throughput: 0: 999.1. Samples: 401898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:22,942][00980] Avg episode reward: [(0, '11.683')]
[2023-02-24 13:45:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1630208. Throughput: 0: 970.1. Samples: 407740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:45:27,948][00980] Avg episode reward: [(0, '11.982')]
[2023-02-24 13:45:29,929][11167] Updated weights for policy 0, policy_version 400 (0.0024)
[2023-02-24 13:45:32,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 1646592. Throughput: 0: 951.9. Samples: 412202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:45:32,948][00980] Avg episode reward: [(0, '12.382')]
[2023-02-24 13:45:37,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1662976. Throughput: 0: 962.6. Samples: 414840. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:45:37,948][00980] Avg episode reward: [(0, '13.828')]
[2023-02-24 13:45:37,950][11152] Saving new best policy, reward=13.828!
[2023-02-24 13:45:41,821][11167] Updated weights for policy 0, policy_version 410 (0.0015)
[2023-02-24 13:45:42,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1683456. Throughput: 0: 950.4. Samples: 420280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:42,943][00980] Avg episode reward: [(0, '13.286')]
[2023-02-24 13:45:42,953][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth...
[2023-02-24 13:45:43,087][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000185_757760.pth
[2023-02-24 13:45:47,941][00980] Fps is (10 sec: 3686.1, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1699840. Throughput: 0: 908.7. Samples: 425610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:45:47,950][00980] Avg episode reward: [(0, '13.446')]
[2023-02-24 13:45:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 1712128. Throughput: 0: 900.2. Samples: 427814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:45:52,949][00980] Avg episode reward: [(0, '14.849')]
[2023-02-24 13:45:52,959][11152] Saving new best policy, reward=14.849!
[2023-02-24 13:45:54,263][11167] Updated weights for policy 0, policy_version 420 (0.0045)
[2023-02-24 13:45:57,940][00980] Fps is (10 sec: 3686.7, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 1736704. Throughput: 0: 928.5. Samples: 433658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:45:57,948][00980] Avg episode reward: [(0, '15.688')]
[2023-02-24 13:45:57,952][11152] Saving new best policy, reward=15.688!
[2023-02-24 13:46:02,793][11167] Updated weights for policy 0, policy_version 430 (0.0017)
[2023-02-24 13:46:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1761280. Throughput: 0: 943.8. Samples: 440774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:46:02,947][00980] Avg episode reward: [(0, '15.851')]
[2023-02-24 13:46:02,961][11152] Saving new best policy, reward=15.851!
[2023-02-24 13:46:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1777664. Throughput: 0: 920.5. Samples: 443322. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:46:07,947][00980] Avg episode reward: [(0, '14.958')]
[2023-02-24 13:46:12,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3832.2). Total num frames: 1789952. Throughput: 0: 892.1. Samples: 447886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:46:12,945][00980] Avg episode reward: [(0, '13.428')]
[2023-02-24 13:46:15,105][11167] Updated weights for policy 0, policy_version 440 (0.0014)
[2023-02-24 13:46:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 1814528. Throughput: 0: 934.8. Samples: 454268. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:46:17,946][00980] Avg episode reward: [(0, '13.581')]
[2023-02-24 13:46:22,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1839104. Throughput: 0: 957.1. Samples: 457908. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:46:22,945][00980] Avg episode reward: [(0, '13.883')]
[2023-02-24 13:46:23,563][11167] Updated weights for policy 0, policy_version 450 (0.0012)
[2023-02-24 13:46:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 1855488. Throughput: 0: 965.1. Samples: 463710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:46:27,944][00980] Avg episode reward: [(0, '14.481')]
[2023-02-24 13:46:32,940][00980] Fps is (10 sec: 2867.1, 60 sec: 3686.4, 300 sec: 3846.1). Total num frames: 1867776. Throughput: 0: 944.3. Samples: 468102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:46:32,943][00980] Avg episode reward: [(0, '14.688')]
[2023-02-24 13:46:35,998][11167] Updated weights for policy 0, policy_version 460 (0.0018)
[2023-02-24 13:46:37,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 1892352. Throughput: 0: 967.8. Samples: 471366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:46:37,943][00980] Avg episode reward: [(0, '14.700')]
[2023-02-24 13:46:42,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1916928. Throughput: 0: 999.8. Samples: 478648. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:46:42,942][00980] Avg episode reward: [(0, '14.915')]
[2023-02-24 13:46:44,917][11167] Updated weights for policy 0, policy_version 470 (0.0016)
[2023-02-24 13:46:47,941][00980] Fps is (10 sec: 4095.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1933312. Throughput: 0: 962.1. Samples: 484070. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:46:47,944][00980] Avg episode reward: [(0, '15.869')]
[2023-02-24 13:46:47,946][11152] Saving new best policy, reward=15.869!
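The learner keeps a rolling window of checkpoints: each "Saving .../checkpoint_*.pth" entry above is eventually paired with a "Removing" of the oldest file, and the filename encodes the policy version and frame count (e.g. checkpoint_000000411_1683456.pth is version 411 at 1,683,456 frames). These files are ordinary PyTorch pickles, so a minimal inspection sketch looks like the following; the key names checked below ('model') are an assumption about Sample Factory's checkpoint layout, so inspect ckpt.keys() on your own file first.

import torch

# Path copied from the log above; adjust to your own run.
ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth"

# Assumption: the checkpoint is a plain dict of metadata plus state dicts.
ckpt = torch.load(ckpt_path, map_location="cpu")
print(sorted(ckpt.keys()))

# If a 'model' state dict is present (an assumption), its parameter shapes
# should match the architecture printout above, e.g. a (1, 512) critic head
# and a (5, 512) action-logits head.
if "model" in ckpt:
    for name, tensor in list(ckpt["model"].items())[:5]:
        print(name, tuple(tensor.shape))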
[2023-02-24 13:46:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 1945600. Throughput: 0: 953.3. Samples: 486220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:46:52,945][00980] Avg episode reward: [(0, '16.879')]
[2023-02-24 13:46:52,977][11152] Saving new best policy, reward=16.879!
[2023-02-24 13:46:56,655][11167] Updated weights for policy 0, policy_version 480 (0.0021)
[2023-02-24 13:46:57,940][00980] Fps is (10 sec: 3686.9, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 1970176. Throughput: 0: 984.3. Samples: 492180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:46:57,942][00980] Avg episode reward: [(0, '17.186')]
[2023-02-24 13:46:57,949][11152] Saving new best policy, reward=17.186!
[2023-02-24 13:47:02,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 1994752. Throughput: 0: 1001.2. Samples: 499320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:47:02,943][00980] Avg episode reward: [(0, '17.813')]
[2023-02-24 13:47:02,953][11152] Saving new best policy, reward=17.813!
[2023-02-24 13:47:06,352][11167] Updated weights for policy 0, policy_version 490 (0.0013)
[2023-02-24 13:47:07,942][00980] Fps is (10 sec: 4095.1, 60 sec: 3891.1, 300 sec: 3873.8). Total num frames: 2011136. Throughput: 0: 976.7. Samples: 501860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:47:07,952][00980] Avg episode reward: [(0, '18.134')]
[2023-02-24 13:47:07,954][11152] Saving new best policy, reward=18.134!
[2023-02-24 13:47:12,940][00980] Fps is (10 sec: 2867.3, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2023424. Throughput: 0: 947.6. Samples: 506350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:47:12,943][00980] Avg episode reward: [(0, '17.744')]
[2023-02-24 13:47:17,484][11167] Updated weights for policy 0, policy_version 500 (0.0025)
[2023-02-24 13:47:17,940][00980] Fps is (10 sec: 3687.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2048000. Throughput: 0: 993.2. Samples: 512794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:47:17,942][00980] Avg episode reward: [(0, '16.609')]
[2023-02-24 13:47:22,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2072576. Throughput: 0: 1001.3. Samples: 516424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:47:22,942][00980] Avg episode reward: [(0, '17.144')]
[2023-02-24 13:47:27,541][11167] Updated weights for policy 0, policy_version 510 (0.0019)
[2023-02-24 13:47:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2088960. Throughput: 0: 966.0. Samples: 522118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:47:27,945][00980] Avg episode reward: [(0, '17.301')]
[2023-02-24 13:47:32,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3959.4, 300 sec: 3846.1). Total num frames: 2105344. Throughput: 0: 947.3. Samples: 526696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:47:32,946][00980] Avg episode reward: [(0, '17.404')]
[2023-02-24 13:47:37,890][11167] Updated weights for policy 0, policy_version 520 (0.0014)
[2023-02-24 13:47:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2129920. Throughput: 0: 977.2. Samples: 530192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:47:37,950][00980] Avg episode reward: [(0, '17.412')]
[2023-02-24 13:47:42,940][00980] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2150400. Throughput: 0: 1006.9. Samples: 537490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:47:42,942][00980] Avg episode reward: [(0, '17.494')]
[2023-02-24 13:47:42,957][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000525_2150400.pth...
[2023-02-24 13:47:43,094][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000299_1224704.pth
[2023-02-24 13:47:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3873.9). Total num frames: 2166784. Throughput: 0: 961.7. Samples: 542594. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:47:47,943][00980] Avg episode reward: [(0, '19.088')]
[2023-02-24 13:47:47,948][11152] Saving new best policy, reward=19.088!
[2023-02-24 13:47:48,795][11167] Updated weights for policy 0, policy_version 530 (0.0032)
[2023-02-24 13:47:52,940][00980] Fps is (10 sec: 3276.6, 60 sec: 3959.4, 300 sec: 3859.9). Total num frames: 2183168. Throughput: 0: 954.4. Samples: 544806. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:47:52,949][00980] Avg episode reward: [(0, '20.210')]
[2023-02-24 13:47:52,965][11152] Saving new best policy, reward=20.210!
[2023-02-24 13:47:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2207744. Throughput: 0: 991.3. Samples: 550958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:47:57,947][00980] Avg episode reward: [(0, '20.504')]
[2023-02-24 13:47:57,951][11152] Saving new best policy, reward=20.504!
[2023-02-24 13:47:58,747][11167] Updated weights for policy 0, policy_version 540 (0.0030)
[2023-02-24 13:48:02,940][00980] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2228224. Throughput: 0: 1009.2. Samples: 558208. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:48:02,942][00980] Avg episode reward: [(0, '21.654')]
[2023-02-24 13:48:02,953][11152] Saving new best policy, reward=21.654!
[2023-02-24 13:48:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 2244608. Throughput: 0: 981.4. Samples: 560588. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:48:07,942][00980] Avg episode reward: [(0, '21.914')]
[2023-02-24 13:48:07,952][11152] Saving new best policy, reward=21.914!
[2023-02-24 13:48:09,868][11167] Updated weights for policy 0, policy_version 550 (0.0016)
[2023-02-24 13:48:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 2260992. Throughput: 0: 955.5. Samples: 565114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-02-24 13:48:12,942][00980] Avg episode reward: [(0, '22.633')]
[2023-02-24 13:48:12,955][11152] Saving new best policy, reward=22.633!
[2023-02-24 13:48:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 2285568. Throughput: 0: 1003.4. Samples: 571850. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:48:17,942][00980] Avg episode reward: [(0, '22.016')]
[2023-02-24 13:48:19,372][11167] Updated weights for policy 0, policy_version 560 (0.0027)
[2023-02-24 13:48:22,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2310144. Throughput: 0: 1006.8. Samples: 575496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:22,942][00980] Avg episode reward: [(0, '21.889')]
[2023-02-24 13:48:27,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3890.9, 300 sec: 3873.8). Total num frames: 2322432. Throughput: 0: 968.2. Samples: 581066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:48:27,948][00980] Avg episode reward: [(0, '23.015')]
[2023-02-24 13:48:27,951][11152] Saving new best policy, reward=23.015!
[2023-02-24 13:48:31,140][11167] Updated weights for policy 0, policy_version 570 (0.0022)
[2023-02-24 13:48:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2338816. Throughput: 0: 956.6. Samples: 585642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:32,942][00980] Avg episode reward: [(0, '24.470')]
[2023-02-24 13:48:32,963][11152] Saving new best policy, reward=24.470!
[2023-02-24 13:48:37,940][00980] Fps is (10 sec: 4098.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2363392. Throughput: 0: 985.7. Samples: 589160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:37,942][00980] Avg episode reward: [(0, '23.279')]
[2023-02-24 13:48:40,147][11167] Updated weights for policy 0, policy_version 580 (0.0017)
[2023-02-24 13:48:42,940][00980] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2387968. Throughput: 0: 1008.7. Samples: 596350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:42,942][00980] Avg episode reward: [(0, '23.115')]
[2023-02-24 13:48:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 2400256. Throughput: 0: 956.3. Samples: 601242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:47,948][00980] Avg episode reward: [(0, '22.919')]
[2023-02-24 13:48:52,332][11167] Updated weights for policy 0, policy_version 590 (0.0013)
[2023-02-24 13:48:52,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2416640. Throughput: 0: 953.6. Samples: 603498. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:48:52,947][00980] Avg episode reward: [(0, '22.899')]
[2023-02-24 13:48:57,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2441216. Throughput: 0: 996.4. Samples: 609954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:48:57,947][00980] Avg episode reward: [(0, '20.811')]
[2023-02-24 13:49:00,802][11167] Updated weights for policy 0, policy_version 600 (0.0014)
[2023-02-24 13:49:02,940][00980] Fps is (10 sec: 4915.3, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2465792. Throughput: 0: 1005.1. Samples: 617078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:02,946][00980] Avg episode reward: [(0, '19.670')]
[2023-02-24 13:49:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 2482176. Throughput: 0: 975.1. Samples: 619374. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:07,947][00980] Avg episode reward: [(0, '19.588')]
[2023-02-24 13:49:12,935][11167] Updated weights for policy 0, policy_version 610 (0.0021)
[2023-02-24 13:49:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2498560. Throughput: 0: 955.4. Samples: 624052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:49:12,944][00980] Avg episode reward: [(0, '19.264')]
[2023-02-24 13:49:17,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2519040. Throughput: 0: 1006.2. Samples: 630922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:49:17,942][00980] Avg episode reward: [(0, '18.994')]
[2023-02-24 13:49:21,487][11167] Updated weights for policy 0, policy_version 620 (0.0025)
[2023-02-24 13:49:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2543616. Throughput: 0: 1008.9. Samples: 634562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:22,944][00980] Avg episode reward: [(0, '19.400')]
[2023-02-24 13:49:27,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.8, 300 sec: 3901.6). Total num frames: 2560000. Throughput: 0: 966.9. Samples: 639860. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:49:27,947][00980] Avg episode reward: [(0, '19.749')]
[2023-02-24 13:49:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2576384. Throughput: 0: 963.9. Samples: 644616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:32,942][00980] Avg episode reward: [(0, '21.772')]
[2023-02-24 13:49:33,630][11167] Updated weights for policy 0, policy_version 630 (0.0034)
[2023-02-24 13:49:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2600960. Throughput: 0: 992.8. Samples: 648176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:49:37,942][00980] Avg episode reward: [(0, '23.388')]
[2023-02-24 13:49:42,134][11167] Updated weights for policy 0, policy_version 640 (0.0037)
[2023-02-24 13:49:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2621440. Throughput: 0: 1010.4. Samples: 655422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:42,945][00980] Avg episode reward: [(0, '22.442')]
[2023-02-24 13:49:42,956][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth...
[2023-02-24 13:49:43,091][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000411_1683456.pth
[2023-02-24 13:49:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2637824. Throughput: 0: 953.8. Samples: 660000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:47,945][00980] Avg episode reward: [(0, '24.144')]
[2023-02-24 13:49:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 2654208. Throughput: 0: 951.3. Samples: 662184. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:49:52,942][00980] Avg episode reward: [(0, '23.765')]
[2023-02-24 13:49:54,499][11167] Updated weights for policy 0, policy_version 650 (0.0033)
[2023-02-24 13:49:57,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 2674688. Throughput: 0: 993.3. Samples: 668752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:49:57,942][00980] Avg episode reward: [(0, '22.009')]
[2023-02-24 13:50:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2699264. Throughput: 0: 994.2. Samples: 675660. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:50:02,944][00980] Avg episode reward: [(0, '20.942')]
[2023-02-24 13:50:03,645][11167] Updated weights for policy 0, policy_version 660 (0.0015)
[2023-02-24 13:50:07,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2715648. Throughput: 0: 963.9. Samples: 677938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:50:07,946][00980] Avg episode reward: [(0, '21.439')]
[2023-02-24 13:50:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 2732032. Throughput: 0: 949.9. Samples: 682606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:50:12,947][00980] Avg episode reward: [(0, '21.345')]
[2023-02-24 13:50:15,229][11167] Updated weights for policy 0, policy_version 670 (0.0014)
[2023-02-24 13:50:17,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3959.4, 300 sec: 3873.8). Total num frames: 2756608. Throughput: 0: 1000.8. Samples: 689654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:50:17,942][00980] Avg episode reward: [(0, '20.952')]
[2023-02-24 13:50:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2777088. Throughput: 0: 1000.1. Samples: 693182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:50:22,943][00980] Avg episode reward: [(0, '20.930')]
[2023-02-24 13:50:24,913][11167] Updated weights for policy 0, policy_version 680 (0.0015)
[2023-02-24 13:50:27,940][00980] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2793472. Throughput: 0: 949.3. Samples: 698142. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:50:27,946][00980] Avg episode reward: [(0, '22.144')]
[2023-02-24 13:50:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 2809856. Throughput: 0: 957.6. Samples: 703092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:50:32,946][00980] Avg episode reward: [(0, '22.441')]
[2023-02-24 13:50:36,209][11167] Updated weights for policy 0, policy_version 690 (0.0049)
[2023-02-24 13:50:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2834432. Throughput: 0: 988.8. Samples: 706678. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-02-24 13:50:37,942][00980] Avg episode reward: [(0, '23.206')]
[2023-02-24 13:50:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2854912. Throughput: 0: 1003.4. Samples: 713906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:50:42,950][00980] Avg episode reward: [(0, '23.539')]
[2023-02-24 13:50:46,085][11167] Updated weights for policy 0, policy_version 700 (0.0017)
[2023-02-24 13:50:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2871296. Throughput: 0: 955.2. Samples: 718644. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:50:47,942][00980] Avg episode reward: [(0, '24.095')]
[2023-02-24 13:50:52,940][00980] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2887680. Throughput: 0: 955.8. Samples: 720950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:50:52,947][00980] Avg episode reward: [(0, '26.693')]
[2023-02-24 13:50:52,965][11152] Saving new best policy, reward=26.693!
[2023-02-24 13:50:56,683][11167] Updated weights for policy 0, policy_version 710 (0.0016)
[2023-02-24 13:50:57,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 2912256. Throughput: 0: 998.4. Samples: 727534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:50:57,946][00980] Avg episode reward: [(0, '26.035')]
[2023-02-24 13:51:02,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2932736. Throughput: 0: 996.5. Samples: 734498. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:51:02,943][00980] Avg episode reward: [(0, '25.625')]
[2023-02-24 13:51:07,116][11167] Updated weights for policy 0, policy_version 720 (0.0012)
[2023-02-24 13:51:07,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2949120. Throughput: 0: 968.7. Samples: 736774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:51:07,945][00980] Avg episode reward: [(0, '24.872')]
[2023-02-24 13:51:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2965504. Throughput: 0: 961.1. Samples: 741390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:51:12,947][00980] Avg episode reward: [(0, '25.277')]
[2023-02-24 13:51:17,311][11167] Updated weights for policy 0, policy_version 730 (0.0035)
[2023-02-24 13:51:17,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 2990080. Throughput: 0: 1012.2. Samples: 748642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-02-24 13:51:17,943][00980] Avg episode reward: [(0, '24.269')]
[2023-02-24 13:51:22,940][00980] Fps is (10 sec: 4915.1, 60 sec: 3959.4, 300 sec: 3929.4). Total num frames: 3014656. Throughput: 0: 1012.3. Samples: 752230. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:51:22,944][00980] Avg episode reward: [(0, '24.348')]
[2023-02-24 13:51:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3026944. Throughput: 0: 961.3. Samples: 757164. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-02-24 13:51:27,948][00980] Avg episode reward: [(0, '25.070')]
[2023-02-24 13:51:28,432][11167] Updated weights for policy 0, policy_version 740 (0.0012)
[2023-02-24 13:51:32,940][00980] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3043328. Throughput: 0: 964.5. Samples: 762048. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:51:32,948][00980] Avg episode reward: [(0, '25.088')]
[2023-02-24 13:51:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3067904. Throughput: 0: 992.7. Samples: 765622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-02-24 13:51:37,942][00980] Avg episode reward: [(0, '25.119')]
[2023-02-24 13:51:38,118][11167] Updated weights for policy 0, policy_version 750 (0.0014)
[2023-02-24 13:51:42,944][00980] Fps is (10 sec: 4913.2, 60 sec: 3959.2, 300 sec: 3929.3). Total num frames: 3092480. Throughput: 0: 1009.5. Samples: 772966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:51:42,947][00980] Avg episode reward: [(0, '24.609')]
[2023-02-24 13:51:42,966][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth...
[2023-02-24 13:51:43,132][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000525_2150400.pth
[2023-02-24 13:51:47,940][00980] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3108864. Throughput: 0: 958.9. Samples: 777648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-02-24 13:51:47,942][00980] Avg episode reward: [(0, '24.922')]
[2023-02-24 13:51:48,996][11167] Updated weights for policy 0, policy_version 760 (0.0011)
[2023-02-24 13:51:52,940][00980] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3125248. Throughput: 0: 959.2. Samples: 779936.
Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-02-24 13:51:52,949][00980] Avg episode reward: [(0, '24.443')] [2023-02-24 13:51:57,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3149824. Throughput: 0: 1006.7. Samples: 786692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:51:57,943][00980] Avg episode reward: [(0, '23.121')] [2023-02-24 13:51:58,600][11167] Updated weights for policy 0, policy_version 770 (0.0016) [2023-02-24 13:52:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3170304. Throughput: 0: 1000.4. Samples: 793662. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-24 13:52:02,942][00980] Avg episode reward: [(0, '23.086')] [2023-02-24 13:52:07,940][00980] Fps is (10 sec: 3686.3, 60 sec: 3959.4, 300 sec: 3943.3). Total num frames: 3186688. Throughput: 0: 971.6. Samples: 795950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:52:07,950][00980] Avg episode reward: [(0, '22.290')] [2023-02-24 13:52:10,165][11167] Updated weights for policy 0, policy_version 780 (0.0025) [2023-02-24 13:52:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3203072. Throughput: 0: 963.8. Samples: 800536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:12,942][00980] Avg episode reward: [(0, '21.696')] [2023-02-24 13:52:17,940][00980] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3227648. Throughput: 0: 1018.3. Samples: 807870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:17,946][00980] Avg episode reward: [(0, '21.529')] [2023-02-24 13:52:19,091][11167] Updated weights for policy 0, policy_version 790 (0.0020) [2023-02-24 13:52:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3248128. Throughput: 0: 1019.9. Samples: 811518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:22,943][00980] Avg episode reward: [(0, '22.744')] [2023-02-24 13:52:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3264512. Throughput: 0: 967.6. Samples: 816506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:27,942][00980] Avg episode reward: [(0, '23.093')] [2023-02-24 13:52:30,941][11167] Updated weights for policy 0, policy_version 800 (0.0033) [2023-02-24 13:52:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3915.5). Total num frames: 3284992. Throughput: 0: 976.4. Samples: 821586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:52:32,947][00980] Avg episode reward: [(0, '24.660')] [2023-02-24 13:52:37,940][00980] Fps is (10 sec: 4505.5, 60 sec: 4027.7, 300 sec: 3929.4). Total num frames: 3309568. Throughput: 0: 1006.2. Samples: 825214. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) [2023-02-24 13:52:37,947][00980] Avg episode reward: [(0, '25.534')] [2023-02-24 13:52:39,715][11167] Updated weights for policy 0, policy_version 810 (0.0028) [2023-02-24 13:52:42,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.7, 300 sec: 3943.3). Total num frames: 3330048. Throughput: 0: 1014.2. Samples: 832332. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:42,943][00980] Avg episode reward: [(0, '25.907')] [2023-02-24 13:52:47,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3342336. Throughput: 0: 960.6. Samples: 836890. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-24 13:52:47,944][00980] Avg episode reward: [(0, '26.039')] [2023-02-24 13:52:51,855][11167] Updated weights for policy 0, policy_version 820 (0.0023) [2023-02-24 13:52:52,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3362816. Throughput: 0: 959.3. Samples: 839118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:52:52,942][00980] Avg episode reward: [(0, '26.542')] [2023-02-24 13:52:57,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3387392. Throughput: 0: 1013.3. Samples: 846134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:52:57,947][00980] Avg episode reward: [(0, '25.855')] [2023-02-24 13:53:00,300][11167] Updated weights for policy 0, policy_version 830 (0.0014) [2023-02-24 13:53:02,942][00980] Fps is (10 sec: 4504.6, 60 sec: 3959.3, 300 sec: 3943.2). Total num frames: 3407872. Throughput: 0: 997.3. Samples: 852752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-24 13:53:02,950][00980] Avg episode reward: [(0, '25.344')] [2023-02-24 13:53:07,945][00980] Fps is (10 sec: 3684.5, 60 sec: 3959.1, 300 sec: 3943.2). Total num frames: 3424256. Throughput: 0: 967.7. Samples: 855068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:07,950][00980] Avg episode reward: [(0, '25.157')] [2023-02-24 13:53:12,322][11167] Updated weights for policy 0, policy_version 840 (0.0017) [2023-02-24 13:53:12,940][00980] Fps is (10 sec: 3277.5, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3440640. Throughput: 0: 966.1. Samples: 859980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-24 13:53:12,942][00980] Avg episode reward: [(0, '24.986')] [2023-02-24 13:53:17,940][00980] Fps is (10 sec: 4098.0, 60 sec: 3959.5, 300 sec: 3915.5). Total num frames: 3465216. Throughput: 0: 1015.6. Samples: 867288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) [2023-02-24 13:53:17,942][00980] Avg episode reward: [(0, '23.994')] [2023-02-24 13:53:20,771][11167] Updated weights for policy 0, policy_version 850 (0.0017) [2023-02-24 13:53:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3485696. Throughput: 0: 1014.5. Samples: 870868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:53:22,946][00980] Avg episode reward: [(0, '23.515')] [2023-02-24 13:53:27,940][00980] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3502080. Throughput: 0: 961.0. Samples: 875576. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:27,942][00980] Avg episode reward: [(0, '22.749')] [2023-02-24 13:53:32,844][11167] Updated weights for policy 0, policy_version 860 (0.0013) [2023-02-24 13:53:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3522560. Throughput: 0: 980.3. Samples: 881004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:32,943][00980] Avg episode reward: [(0, '24.068')] [2023-02-24 13:53:37,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 3543040. Throughput: 0: 1011.7. Samples: 884644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:37,942][00980] Avg episode reward: [(0, '24.460')] [2023-02-24 13:53:41,525][11167] Updated weights for policy 0, policy_version 870 (0.0016) [2023-02-24 13:53:42,944][00980] Fps is (10 sec: 4503.7, 60 sec: 3959.2, 300 sec: 3957.1). Total num frames: 3567616. Throughput: 0: 1006.7. Samples: 891438. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:42,946][00980] Avg episode reward: [(0, '24.611')] [2023-02-24 13:53:42,962][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth... [2023-02-24 13:53:43,098][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000640_2621440.pth [2023-02-24 13:53:47,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3579904. Throughput: 0: 961.9. Samples: 896036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:47,956][00980] Avg episode reward: [(0, '24.148')] [2023-02-24 13:53:52,940][00980] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3600384. Throughput: 0: 962.2. Samples: 898364. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:53:52,947][00980] Avg episode reward: [(0, '24.942')] [2023-02-24 13:53:53,338][11167] Updated weights for policy 0, policy_version 880 (0.0013) [2023-02-24 13:53:57,940][00980] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3624960. Throughput: 0: 1015.3. Samples: 905668. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) [2023-02-24 13:53:57,943][00980] Avg episode reward: [(0, '26.245')] [2023-02-24 13:54:02,912][11167] Updated weights for policy 0, policy_version 890 (0.0014) [2023-02-24 13:54:02,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.6, 300 sec: 3943.3). Total num frames: 3645440. Throughput: 0: 989.6. Samples: 911818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:54:02,942][00980] Avg episode reward: [(0, '24.977')] [2023-02-24 13:54:07,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.5, 300 sec: 3929.4). Total num frames: 3657728. Throughput: 0: 959.3. Samples: 914038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:54:07,948][00980] Avg episode reward: [(0, '25.792')] [2023-02-24 13:54:12,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3678208. Throughput: 0: 971.8. Samples: 919308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-02-24 13:54:12,949][00980] Avg episode reward: [(0, '25.107')] [2023-02-24 13:54:14,064][11167] Updated weights for policy 0, policy_version 900 (0.0025) [2023-02-24 13:54:17,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3702784. Throughput: 0: 1013.9. Samples: 926628. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:54:17,943][00980] Avg episode reward: [(0, '26.024')] [2023-02-24 13:54:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3723264. Throughput: 0: 1010.1. Samples: 930100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:54:22,942][00980] Avg episode reward: [(0, '25.060')] [2023-02-24 13:54:23,755][11167] Updated weights for policy 0, policy_version 910 (0.0012) [2023-02-24 13:54:27,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3739648. Throughput: 0: 961.2. Samples: 934686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:54:27,946][00980] Avg episode reward: [(0, '25.466')] [2023-02-24 13:54:32,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3760128. Throughput: 0: 986.6. Samples: 940434. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:54:32,945][00980] Avg episode reward: [(0, '26.161')] [2023-02-24 13:54:34,537][11167] Updated weights for policy 0, policy_version 920 (0.0017) [2023-02-24 13:54:37,940][00980] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3784704. Throughput: 0: 1016.0. Samples: 944084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:54:37,944][00980] Avg episode reward: [(0, '26.766')] [2023-02-24 13:54:37,949][11152] Saving new best policy, reward=26.766! [2023-02-24 13:54:42,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.5, 300 sec: 3943.3). Total num frames: 3801088. Throughput: 0: 998.6. Samples: 950606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:54:42,947][00980] Avg episode reward: [(0, '26.990')] [2023-02-24 13:54:42,958][11152] Saving new best policy, reward=26.990! [2023-02-24 13:54:44,675][11167] Updated weights for policy 0, policy_version 930 (0.0017) [2023-02-24 13:54:47,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3817472. Throughput: 0: 961.6. Samples: 955088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) [2023-02-24 13:54:47,944][00980] Avg episode reward: [(0, '27.957')] [2023-02-24 13:54:47,950][11152] Saving new best policy, reward=27.957! [2023-02-24 13:54:52,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3837952. Throughput: 0: 970.5. Samples: 957710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:54:52,942][00980] Avg episode reward: [(0, '26.032')] [2023-02-24 13:54:55,178][11167] Updated weights for policy 0, policy_version 940 (0.0032) [2023-02-24 13:54:57,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3862528. Throughput: 0: 1016.5. Samples: 965052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:54:57,942][00980] Avg episode reward: [(0, '25.783')] [2023-02-24 13:55:02,940][00980] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3878912. Throughput: 0: 988.7. Samples: 971118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:55:02,944][00980] Avg episode reward: [(0, '25.187')] [2023-02-24 13:55:05,669][11167] Updated weights for policy 0, policy_version 950 (0.0013) [2023-02-24 13:55:07,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3895296. Throughput: 0: 961.2. Samples: 973356. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:55:07,949][00980] Avg episode reward: [(0, '24.429')] [2023-02-24 13:55:12,940][00980] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3915776. Throughput: 0: 982.3. Samples: 978890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2023-02-24 13:55:12,942][00980] Avg episode reward: [(0, '23.922')] [2023-02-24 13:55:15,707][11167] Updated weights for policy 0, policy_version 960 (0.0025) [2023-02-24 13:55:17,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3940352. Throughput: 0: 1015.5. Samples: 986132. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:55:17,943][00980] Avg episode reward: [(0, '25.560')] [2023-02-24 13:55:22,940][00980] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3960832. Throughput: 0: 1002.9. Samples: 989216. 
Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2023-02-24 13:55:22,943][00980] Avg episode reward: [(0, '25.104')] [2023-02-24 13:55:26,867][11167] Updated weights for policy 0, policy_version 970 (0.0013) [2023-02-24 13:55:27,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 3973120. Throughput: 0: 961.3. Samples: 993864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) [2023-02-24 13:55:27,948][00980] Avg episode reward: [(0, '26.268')] [2023-02-24 13:55:32,940][00980] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3993600. Throughput: 0: 993.0. Samples: 999774. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2023-02-24 13:55:32,947][00980] Avg episode reward: [(0, '25.975')] [2023-02-24 13:55:34,585][11152] Stopping Batcher_0... [2023-02-24 13:55:34,586][00980] Component Batcher_0 stopped! [2023-02-24 13:55:34,589][11152] Loop batcher_evt_loop terminating... [2023-02-24 13:55:34,588][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-24 13:55:34,643][11167] Weights refcount: 2 0 [2023-02-24 13:55:34,658][00980] Component InferenceWorker_p0-w0 stopped! [2023-02-24 13:55:34,660][11167] Stopping InferenceWorker_p0-w0... [2023-02-24 13:55:34,661][11167] Loop inference_proc0-0_evt_loop terminating... [2023-02-24 13:55:34,674][11171] Stopping RolloutWorker_w5... [2023-02-24 13:55:34,674][00980] Component RolloutWorker_w5 stopped! [2023-02-24 13:55:34,684][11173] Stopping RolloutWorker_w3... [2023-02-24 13:55:34,684][00980] Component RolloutWorker_w4 stopped! [2023-02-24 13:55:34,686][00980] Component RolloutWorker_w3 stopped! [2023-02-24 13:55:34,683][11170] Stopping RolloutWorker_w4... [2023-02-24 13:55:34,692][00980] Component RolloutWorker_w0 stopped! [2023-02-24 13:55:34,694][11169] Stopping RolloutWorker_w1... [2023-02-24 13:55:34,695][00980] Component RolloutWorker_w1 stopped! [2023-02-24 13:55:34,689][11170] Loop rollout_proc4_evt_loop terminating... [2023-02-24 13:55:34,694][11166] Stopping RolloutWorker_w0... [2023-02-24 13:55:34,701][00980] Component RolloutWorker_w2 stopped! [2023-02-24 13:55:34,701][11168] Stopping RolloutWorker_w2... [2023-02-24 13:55:34,710][11174] Stopping RolloutWorker_w7... [2023-02-24 13:55:34,710][11174] Loop rollout_proc7_evt_loop terminating... [2023-02-24 13:55:34,709][00980] Component RolloutWorker_w6 stopped! [2023-02-24 13:55:34,680][11171] Loop rollout_proc5_evt_loop terminating... [2023-02-24 13:55:34,709][11173] Loop rollout_proc3_evt_loop terminating... [2023-02-24 13:55:34,709][11172] Stopping RolloutWorker_w6... [2023-02-24 13:55:34,711][00980] Component RolloutWorker_w7 stopped! [2023-02-24 13:55:34,695][11169] Loop rollout_proc1_evt_loop terminating... [2023-02-24 13:55:34,700][11166] Loop rollout_proc0_evt_loop terminating... [2023-02-24 13:55:34,717][11168] Loop rollout_proc2_evt_loop terminating... [2023-02-24 13:55:34,719][11172] Loop rollout_proc6_evt_loop terminating... [2023-02-24 13:55:34,781][11152] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000755_3092480.pth [2023-02-24 13:55:34,793][11152] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-24 13:55:34,954][00980] Component LearnerWorker_p0 stopped! [2023-02-24 13:55:34,963][00980] Waiting for process learner_proc0 to stop... [2023-02-24 13:55:34,967][11152] Stopping LearnerWorker_p0... [2023-02-24 13:55:34,968][11152] Loop learner_proc0_evt_loop terminating... 
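Note: the run above is an APPO training job on doom_health_gathering_supreme that reaches its 4,000,000-frame budget (final checkpoint checkpoint_000000978_4005888.pth, i.e. policy version 978 at 4,005,888 env steps) and then tears down its batcher, inference, rollout, and learner components. Below is a minimal sketch of how such a run is typically launched with Sample Factory's Python API; the flags mirror the command_line echoed in the config dump later in this log, while the registration helper is modeled on the bundled sf_examples VizDoom code and is an assumption, not something printed here.

```python
# Minimal sketch, assuming sample-factory 2.x with its bundled VizDoom
# examples (sf_examples). Flags mirror the command_line echoed in the
# config dump later in this log; the registration helper is an assumption.
import functools
import sys

from sample_factory.algo.utils.context import global_model_factory
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.envs.env_utils import register_env
from sample_factory.train import run_rl
from sf_examples.vizdoom.doom.doom_model import make_vizdoom_encoder
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.doom.doom_utils import DOOM_ENVS, make_doom_env_from_spec


def register_vizdoom_components():
    # One env per spec; re-running this later is what produces the
    # "Environment ... already registered, overwriting..." lines below.
    for env_spec in DOOM_ENVS:
        register_env(env_spec.name, functools.partial(make_doom_env_from_spec, env_spec))
    # Registers the conv encoder (VizdoomEncoder) whose architecture is printed above.
    global_model_factory().register_encoder_factory(make_vizdoom_encoder)


def main():
    register_vizdoom_components()
    argv = [
        "--env=doom_health_gathering_supreme",
        "--num_workers=8",
        "--num_envs_per_worker=4",
        "--train_for_env_steps=4000000",
    ]
    parser, _ = parse_sf_args(argv=argv, evaluation=False)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    cfg = parse_full_cfg(parser, argv=argv)
    return run_rl(cfg)


if __name__ == "__main__":
    sys.exit(main())
```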
[2023-02-24 13:55:36,742][00980] Waiting for process inference_proc0-0 to join...
[2023-02-24 13:55:37,070][00980] Waiting for process rollout_proc0 to join...
[2023-02-24 13:55:37,548][00980] Waiting for process rollout_proc1 to join...
[2023-02-24 13:55:37,549][00980] Waiting for process rollout_proc2 to join...
[2023-02-24 13:55:37,551][00980] Waiting for process rollout_proc3 to join...
[2023-02-24 13:55:37,552][00980] Waiting for process rollout_proc4 to join...
[2023-02-24 13:55:37,570][00980] Waiting for process rollout_proc5 to join...
[2023-02-24 13:55:37,571][00980] Waiting for process rollout_proc6 to join...
[2023-02-24 13:55:37,572][00980] Waiting for process rollout_proc7 to join...
[2023-02-24 13:55:37,573][00980] Batcher 0 profile tree view:
batching: 25.3280, releasing_batches: 0.0221
[2023-02-24 13:55:37,575][00980] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0005
  wait_policy_total: 498.3727
update_model: 7.4088
  weight_update: 0.0013
one_step: 0.0023
  handle_policy_step: 495.8979
    deserialize: 14.1770, stack: 2.8848, obs_to_device_normalize: 112.3387, forward: 236.1196, send_messages: 25.3594
    prepare_outputs: 80.6792
      to_cpu: 50.7526
[2023-02-24 13:55:37,576][00980] Learner 0 profile tree view:
misc: 0.0053, prepare_batch: 17.0874
train: 74.1658
  epoch_init: 0.0057, minibatch_init: 0.0202, losses_postprocess: 0.5531, kl_divergence: 0.5942, after_optimizer: 32.2145
  calculate_losses: 25.9284
    losses_init: 0.0048, forward_head: 1.6499, bptt_initial: 17.2608, tail: 0.9288, advantages_returns: 0.3566, losses: 3.2248
    bptt: 2.1849
      bptt_forward_core: 2.1100
  update: 14.2904
    clip: 1.4789
[2023-02-24 13:55:37,578][00980] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.3031, enqueue_policy_requests: 129.9487, env_step: 791.6916, overhead: 18.9921, complete_rollouts: 6.7884
save_policy_outputs: 19.2065
  split_output_tensors: 9.2842
[2023-02-24 13:55:37,579][00980] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.3690, enqueue_policy_requests: 129.6848, env_step: 791.0784, overhead: 18.7556, complete_rollouts: 6.6190
save_policy_outputs: 18.7805
  split_output_tensors: 9.0138
[2023-02-24 13:55:37,581][00980] Loop Runner_EvtLoop terminating...
[2023-02-24 13:55:37,583][00980] Runner profile tree view:
main_loop: 1068.9353
[2023-02-24 13:55:37,585][00980] Collected {0: 4005888}, FPS: 3747.5
[2023-02-24 13:55:37,828][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-24 13:55:37,830][00980] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-24 13:55:37,833][00980] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-24 13:55:37,836][00980] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-24 13:55:37,838][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 13:55:37,840][00980] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-24 13:55:37,842][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 13:55:37,843][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-24 13:55:37,844][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-24 13:55:37,845][00980] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-24 13:55:37,847][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 13:55:37,849][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 13:55:37,850][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 13:55:37,851][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-24 13:55:37,853][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 13:55:37,880][00980] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 13:55:37,882][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 13:55:37,884][00980] RunningMeanStd input shape: (1,) [2023-02-24 13:55:37,900][00980] ConvEncoder: input_channels=3 [2023-02-24 13:55:38,593][00980] Conv encoder output size: 512 [2023-02-24 13:55:38,595][00980] Policy head output size: 512 [2023-02-24 13:55:41,532][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-24 13:55:43,247][00980] Num frames 100... [2023-02-24 13:55:43,370][00980] Num frames 200... [2023-02-24 13:55:43,480][00980] Num frames 300... [2023-02-24 13:55:43,592][00980] Num frames 400... [2023-02-24 13:55:43,709][00980] Num frames 500... [2023-02-24 13:55:43,824][00980] Num frames 600... [2023-02-24 13:55:43,947][00980] Num frames 700... [2023-02-24 13:55:44,057][00980] Num frames 800... [2023-02-24 13:55:44,170][00980] Num frames 900... [2023-02-24 13:55:44,283][00980] Num frames 1000... [2023-02-24 13:55:44,409][00980] Num frames 1100... [2023-02-24 13:55:44,526][00980] Num frames 1200... [2023-02-24 13:55:44,643][00980] Num frames 1300... [2023-02-24 13:55:44,757][00980] Num frames 1400... [2023-02-24 13:55:44,870][00980] Num frames 1500... [2023-02-24 13:55:44,982][00980] Num frames 1600... [2023-02-24 13:55:45,098][00980] Num frames 1700... [2023-02-24 13:55:45,212][00980] Num frames 1800... [2023-02-24 13:55:45,331][00980] Num frames 1900... [2023-02-24 13:55:45,452][00980] Num frames 2000... [2023-02-24 13:55:45,569][00980] Num frames 2100... [2023-02-24 13:55:45,622][00980] Avg episode rewards: #0: 55.999, true rewards: #0: 21.000 [2023-02-24 13:55:45,624][00980] Avg episode reward: 55.999, avg true_objective: 21.000 [2023-02-24 13:55:45,739][00980] Num frames 2200... [2023-02-24 13:55:45,851][00980] Num frames 2300... [2023-02-24 13:55:45,962][00980] Num frames 2400... [2023-02-24 13:55:46,071][00980] Num frames 2500... [2023-02-24 13:55:46,192][00980] Num frames 2600... [2023-02-24 13:55:46,311][00980] Num frames 2700... [2023-02-24 13:55:46,430][00980] Num frames 2800... [2023-02-24 13:55:46,541][00980] Num frames 2900... [2023-02-24 13:55:46,668][00980] Num frames 3000... [2023-02-24 13:55:46,778][00980] Num frames 3100... [2023-02-24 13:55:46,891][00980] Num frames 3200... [2023-02-24 13:55:47,003][00980] Num frames 3300... [2023-02-24 13:55:47,121][00980] Num frames 3400... [2023-02-24 13:55:47,240][00980] Num frames 3500... [2023-02-24 13:55:47,360][00980] Num frames 3600... [2023-02-24 13:55:47,480][00980] Num frames 3700... [2023-02-24 13:55:47,591][00980] Num frames 3800... [2023-02-24 13:55:47,704][00980] Num frames 3900... [2023-02-24 13:55:47,868][00980] Avg episode rewards: #0: 52.459, true rewards: #0: 19.960 [2023-02-24 13:55:47,869][00980] Avg episode reward: 52.459, avg true_objective: 19.960 [2023-02-24 13:55:47,884][00980] Num frames 4000... 
[2023-02-24 13:55:47,999][00980] Num frames 4100... [2023-02-24 13:55:48,109][00980] Num frames 4200... [2023-02-24 13:55:48,219][00980] Num frames 4300... [2023-02-24 13:55:48,332][00980] Num frames 4400... [2023-02-24 13:55:48,468][00980] Num frames 4500... [2023-02-24 13:55:48,588][00980] Num frames 4600... [2023-02-24 13:55:48,709][00980] Num frames 4700... [2023-02-24 13:55:48,822][00980] Num frames 4800... [2023-02-24 13:55:48,937][00980] Num frames 4900... [2023-02-24 13:55:49,050][00980] Num frames 5000... [2023-02-24 13:55:49,206][00980] Avg episode rewards: #0: 43.623, true rewards: #0: 16.957 [2023-02-24 13:55:49,208][00980] Avg episode reward: 43.623, avg true_objective: 16.957 [2023-02-24 13:55:49,228][00980] Num frames 5100... [2023-02-24 13:55:49,341][00980] Num frames 5200... [2023-02-24 13:55:49,466][00980] Num frames 5300... [2023-02-24 13:55:49,578][00980] Num frames 5400... [2023-02-24 13:55:49,690][00980] Num frames 5500... [2023-02-24 13:55:49,804][00980] Num frames 5600... [2023-02-24 13:55:49,968][00980] Avg episode rewards: #0: 35.492, true rewards: #0: 14.243 [2023-02-24 13:55:49,970][00980] Avg episode reward: 35.492, avg true_objective: 14.243 [2023-02-24 13:55:49,976][00980] Num frames 5700... [2023-02-24 13:55:50,090][00980] Num frames 5800... [2023-02-24 13:55:50,202][00980] Num frames 5900... [2023-02-24 13:55:50,314][00980] Num frames 6000... [2023-02-24 13:55:50,439][00980] Num frames 6100... [2023-02-24 13:55:50,519][00980] Avg episode rewards: #0: 30.026, true rewards: #0: 12.226 [2023-02-24 13:55:50,521][00980] Avg episode reward: 30.026, avg true_objective: 12.226 [2023-02-24 13:55:50,632][00980] Num frames 6200... [2023-02-24 13:55:50,747][00980] Num frames 6300... [2023-02-24 13:55:50,861][00980] Num frames 6400... [2023-02-24 13:55:50,987][00980] Num frames 6500... [2023-02-24 13:55:51,101][00980] Num frames 6600... [2023-02-24 13:55:51,224][00980] Num frames 6700... [2023-02-24 13:55:51,338][00980] Num frames 6800... [2023-02-24 13:55:51,469][00980] Num frames 6900... [2023-02-24 13:55:51,585][00980] Num frames 7000... [2023-02-24 13:55:51,665][00980] Avg episode rewards: #0: 28.361, true rewards: #0: 11.695 [2023-02-24 13:55:51,669][00980] Avg episode reward: 28.361, avg true_objective: 11.695 [2023-02-24 13:55:51,760][00980] Num frames 7100... [2023-02-24 13:55:51,873][00980] Num frames 7200... [2023-02-24 13:55:51,994][00980] Num frames 7300... [2023-02-24 13:55:52,108][00980] Num frames 7400... [2023-02-24 13:55:52,221][00980] Num frames 7500... [2023-02-24 13:55:52,336][00980] Num frames 7600... [2023-02-24 13:55:52,448][00980] Num frames 7700... [2023-02-24 13:55:52,572][00980] Num frames 7800... [2023-02-24 13:55:52,725][00980] Avg episode rewards: #0: 27.265, true rewards: #0: 11.266 [2023-02-24 13:55:52,727][00980] Avg episode reward: 27.265, avg true_objective: 11.266 [2023-02-24 13:55:52,746][00980] Num frames 7900... [2023-02-24 13:55:52,858][00980] Num frames 8000... [2023-02-24 13:55:52,967][00980] Num frames 8100... [2023-02-24 13:55:53,081][00980] Num frames 8200... [2023-02-24 13:55:53,228][00980] Num frames 8300... [2023-02-24 13:55:53,398][00980] Num frames 8400... [2023-02-24 13:55:53,602][00980] Avg episode rewards: #0: 25.492, true rewards: #0: 10.617 [2023-02-24 13:55:53,605][00980] Avg episode reward: 25.492, avg true_objective: 10.617 [2023-02-24 13:55:53,619][00980] Num frames 8500... [2023-02-24 13:55:53,775][00980] Num frames 8600... [2023-02-24 13:55:53,934][00980] Num frames 8700... 
[2023-02-24 13:55:54,021][00980] Avg episode rewards: #0: 23.020, true rewards: #0: 9.687 [2023-02-24 13:55:54,024][00980] Avg episode reward: 23.020, avg true_objective: 9.687 [2023-02-24 13:55:54,159][00980] Num frames 8800... [2023-02-24 13:55:54,318][00980] Num frames 8900... [2023-02-24 13:55:54,487][00980] Num frames 9000... [2023-02-24 13:55:54,648][00980] Num frames 9100... [2023-02-24 13:55:54,815][00980] Num frames 9200... [2023-02-24 13:55:54,976][00980] Num frames 9300... [2023-02-24 13:55:55,142][00980] Num frames 9400... [2023-02-24 13:55:55,310][00980] Num frames 9500... [2023-02-24 13:55:55,486][00980] Num frames 9600... [2023-02-24 13:55:55,656][00980] Num frames 9700... [2023-02-24 13:55:55,819][00980] Num frames 9800... [2023-02-24 13:55:55,981][00980] Num frames 9900... [2023-02-24 13:55:56,199][00980] Avg episode rewards: #0: 23.598, true rewards: #0: 9.998 [2023-02-24 13:55:56,202][00980] Avg episode reward: 23.598, avg true_objective: 9.998 [2023-02-24 13:55:56,206][00980] Num frames 10000... [2023-02-24 13:56:56,166][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 13:57:22,832][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 13:57:22,834][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 13:57:22,837][00980] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-24 13:57:22,839][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 13:57:22,841][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 13:57:22,844][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 13:57:22,848][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-02-24 13:57:22,850][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 13:57:22,851][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-02-24 13:57:22,852][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-02-24 13:57:22,854][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 13:57:22,855][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 13:57:22,857][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 13:57:22,858][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-24 13:57:22,860][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 13:57:22,883][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 13:57:22,885][00980] RunningMeanStd input shape: (1,) [2023-02-24 13:57:22,898][00980] ConvEncoder: input_channels=3 [2023-02-24 13:57:22,934][00980] Conv encoder output size: 512 [2023-02-24 13:57:22,935][00980] Policy head output size: 512 [2023-02-24 13:57:22,956][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-24 13:57:23,383][00980] Num frames 100... [2023-02-24 13:57:23,490][00980] Num frames 200... [2023-02-24 13:57:23,605][00980] Num frames 300... [2023-02-24 13:57:23,714][00980] Num frames 400... 
[2023-02-24 13:57:23,828][00980] Num frames 500... [2023-02-24 13:57:23,940][00980] Num frames 600... [2023-02-24 13:57:24,054][00980] Num frames 700... [2023-02-24 13:57:24,167][00980] Num frames 800... [2023-02-24 13:57:24,279][00980] Num frames 900... [2023-02-24 13:57:24,389][00980] Num frames 1000... [2023-02-24 13:57:24,472][00980] Avg episode rewards: #0: 22.240, true rewards: #0: 10.240 [2023-02-24 13:57:24,475][00980] Avg episode reward: 22.240, avg true_objective: 10.240 [2023-02-24 13:57:24,575][00980] Num frames 1100... [2023-02-24 13:57:24,689][00980] Num frames 1200... [2023-02-24 13:57:24,811][00980] Num frames 1300... [2023-02-24 13:57:24,924][00980] Num frames 1400... [2023-02-24 13:57:25,036][00980] Num frames 1500... [2023-02-24 13:57:25,148][00980] Num frames 1600... [2023-02-24 13:57:25,270][00980] Num frames 1700... [2023-02-24 13:57:25,389][00980] Num frames 1800... [2023-02-24 13:57:25,499][00980] Num frames 1900... [2023-02-24 13:57:25,577][00980] Avg episode rewards: #0: 22.600, true rewards: #0: 9.600 [2023-02-24 13:57:25,579][00980] Avg episode reward: 22.600, avg true_objective: 9.600 [2023-02-24 13:57:25,669][00980] Num frames 2000... [2023-02-24 13:57:25,783][00980] Num frames 2100... [2023-02-24 13:57:25,905][00980] Num frames 2200... [2023-02-24 13:57:26,018][00980] Num frames 2300... [2023-02-24 13:57:26,130][00980] Num frames 2400... [2023-02-24 13:57:26,242][00980] Num frames 2500... [2023-02-24 13:57:26,361][00980] Num frames 2600... [2023-02-24 13:57:26,477][00980] Num frames 2700... [2023-02-24 13:57:26,607][00980] Num frames 2800... [2023-02-24 13:57:26,693][00980] Avg episode rewards: #0: 21.094, true rewards: #0: 9.427 [2023-02-24 13:57:26,696][00980] Avg episode reward: 21.094, avg true_objective: 9.427 [2023-02-24 13:57:26,779][00980] Num frames 2900... [2023-02-24 13:57:26,891][00980] Num frames 3000... [2023-02-24 13:57:27,001][00980] Num frames 3100... [2023-02-24 13:57:27,115][00980] Num frames 3200... [2023-02-24 13:57:27,231][00980] Num frames 3300... [2023-02-24 13:57:27,342][00980] Num frames 3400... [2023-02-24 13:57:27,453][00980] Num frames 3500... [2023-02-24 13:57:27,556][00980] Avg episode rewards: #0: 19.358, true rewards: #0: 8.857 [2023-02-24 13:57:27,558][00980] Avg episode reward: 19.358, avg true_objective: 8.857 [2023-02-24 13:57:27,635][00980] Num frames 3600... [2023-02-24 13:57:27,748][00980] Num frames 3700... [2023-02-24 13:57:27,870][00980] Num frames 3800... [2023-02-24 13:57:27,981][00980] Num frames 3900... [2023-02-24 13:57:28,109][00980] Num frames 4000... [2023-02-24 13:57:28,204][00980] Avg episode rewards: #0: 16.846, true rewards: #0: 8.046 [2023-02-24 13:57:28,206][00980] Avg episode reward: 16.846, avg true_objective: 8.046 [2023-02-24 13:57:28,337][00980] Num frames 4100... [2023-02-24 13:57:28,500][00980] Num frames 4200... [2023-02-24 13:57:28,661][00980] Num frames 4300... [2023-02-24 13:57:28,821][00980] Num frames 4400... [2023-02-24 13:57:28,984][00980] Num frames 4500... [2023-02-24 13:57:29,140][00980] Num frames 4600... [2023-02-24 13:57:29,302][00980] Num frames 4700... [2023-02-24 13:57:29,457][00980] Num frames 4800... [2023-02-24 13:57:29,617][00980] Num frames 4900... [2023-02-24 13:57:29,793][00980] Num frames 5000... [2023-02-24 13:57:29,958][00980] Num frames 5100... [2023-02-24 13:57:30,119][00980] Num frames 5200... [2023-02-24 13:57:30,283][00980] Num frames 5300... [2023-02-24 13:57:30,457][00980] Num frames 5400... [2023-02-24 13:57:30,620][00980] Num frames 5500... 
[2023-02-24 13:57:30,788][00980] Num frames 5600... [2023-02-24 13:57:30,955][00980] Num frames 5700... [2023-02-24 13:57:31,092][00980] Avg episode rewards: #0: 21.418, true rewards: #0: 9.585 [2023-02-24 13:57:31,094][00980] Avg episode reward: 21.418, avg true_objective: 9.585 [2023-02-24 13:57:31,179][00980] Num frames 5800... [2023-02-24 13:57:31,343][00980] Num frames 5900... [2023-02-24 13:57:31,515][00980] Num frames 6000... [2023-02-24 13:57:31,682][00980] Num frames 6100... [2023-02-24 13:57:31,813][00980] Num frames 6200... [2023-02-24 13:57:31,932][00980] Num frames 6300... [2023-02-24 13:57:32,043][00980] Num frames 6400... [2023-02-24 13:57:32,153][00980] Num frames 6500... [2023-02-24 13:57:32,264][00980] Num frames 6600... [2023-02-24 13:57:32,382][00980] Num frames 6700... [2023-02-24 13:57:32,495][00980] Num frames 6800... [2023-02-24 13:57:32,613][00980] Num frames 6900... [2023-02-24 13:57:32,725][00980] Avg episode rewards: #0: 22.359, true rewards: #0: 9.930 [2023-02-24 13:57:32,726][00980] Avg episode reward: 22.359, avg true_objective: 9.930 [2023-02-24 13:57:32,788][00980] Num frames 7000... [2023-02-24 13:57:32,898][00980] Num frames 7100... [2023-02-24 13:57:33,014][00980] Num frames 7200... [2023-02-24 13:57:33,124][00980] Num frames 7300... [2023-02-24 13:57:33,239][00980] Num frames 7400... [2023-02-24 13:57:33,351][00980] Num frames 7500... [2023-02-24 13:57:33,507][00980] Avg episode rewards: #0: 20.989, true rewards: #0: 9.489 [2023-02-24 13:57:33,509][00980] Avg episode reward: 20.989, avg true_objective: 9.489 [2023-02-24 13:57:33,522][00980] Num frames 7600... [2023-02-24 13:57:33,636][00980] Num frames 7700... [2023-02-24 13:57:33,769][00980] Num frames 7800... [2023-02-24 13:57:33,889][00980] Num frames 7900... [2023-02-24 13:57:33,999][00980] Num frames 8000... [2023-02-24 13:57:34,109][00980] Num frames 8100... [2023-02-24 13:57:34,218][00980] Num frames 8200... [2023-02-24 13:57:34,330][00980] Num frames 8300... [2023-02-24 13:57:34,446][00980] Num frames 8400... [2023-02-24 13:57:34,563][00980] Num frames 8500... [2023-02-24 13:57:34,678][00980] Num frames 8600... [2023-02-24 13:57:34,796][00980] Num frames 8700... [2023-02-24 13:57:34,909][00980] Num frames 8800... [2023-02-24 13:57:35,045][00980] Avg episode rewards: #0: 22.523, true rewards: #0: 9.857 [2023-02-24 13:57:35,047][00980] Avg episode reward: 22.523, avg true_objective: 9.857 [2023-02-24 13:57:35,083][00980] Num frames 8900... [2023-02-24 13:57:35,194][00980] Num frames 9000... [2023-02-24 13:57:35,310][00980] Num frames 9100... [2023-02-24 13:57:35,422][00980] Num frames 9200... [2023-02-24 13:57:35,550][00980] Num frames 9300... [2023-02-24 13:57:35,664][00980] Num frames 9400... [2023-02-24 13:57:35,775][00980] Num frames 9500... [2023-02-24 13:57:35,895][00980] Num frames 9600... [2023-02-24 13:57:36,007][00980] Num frames 9700... [2023-02-24 13:57:36,121][00980] Num frames 9800... [2023-02-24 13:57:36,241][00980] Num frames 9900... [2023-02-24 13:57:36,392][00980] Avg episode rewards: #0: 22.988, true rewards: #0: 9.988 [2023-02-24 13:57:36,394][00980] Avg episode reward: 22.988, avg true_objective: 9.988 [2023-02-24 13:58:35,822][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 13:58:39,900][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme [2023-02-24 14:05:27,094][00980] Environment doom_basic already registered, overwriting... 
[2023-02-24 14:05:27,098][00980] Environment doom_two_colors_easy already registered, overwriting... [2023-02-24 14:05:27,100][00980] Environment doom_two_colors_hard already registered, overwriting... [2023-02-24 14:05:27,101][00980] Environment doom_dm already registered, overwriting... [2023-02-24 14:05:27,105][00980] Environment doom_dwango5 already registered, overwriting... [2023-02-24 14:05:27,107][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-02-24 14:05:27,109][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-02-24 14:05:27,110][00980] Environment doom_my_way_home already registered, overwriting... [2023-02-24 14:05:27,114][00980] Environment doom_deadly_corridor already registered, overwriting... [2023-02-24 14:05:27,115][00980] Environment doom_defend_the_center already registered, overwriting... [2023-02-24 14:05:27,119][00980] Environment doom_defend_the_line already registered, overwriting... [2023-02-24 14:05:27,120][00980] Environment doom_health_gathering already registered, overwriting... [2023-02-24 14:05:27,121][00980] Environment doom_health_gathering_supreme already registered, overwriting... [2023-02-24 14:05:27,122][00980] Environment doom_battle already registered, overwriting... [2023-02-24 14:05:27,124][00980] Environment doom_battle2 already registered, overwriting... [2023-02-24 14:05:27,128][00980] Environment doom_duel_bots already registered, overwriting... [2023-02-24 14:05:27,130][00980] Environment doom_deathmatch_bots already registered, overwriting... [2023-02-24 14:05:27,131][00980] Environment doom_duel already registered, overwriting... [2023-02-24 14:05:27,132][00980] Environment doom_deathmatch_full already registered, overwriting... [2023-02-24 14:05:27,133][00980] Environment doom_benchmark already registered, overwriting... [2023-02-24 14:05:27,135][00980] register_encoder_factory: [2023-02-24 14:05:27,169][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:05:27,170][00980] Overriding arg 'train_for_env_steps' with value 1000000 passed from command line [2023-02-24 14:05:27,179][00980] Experiment dir /content/train_dir/default_experiment already exists! [2023-02-24 14:05:27,185][00980] Resuming existing experiment from /content/train_dir/default_experiment... 
[2023-02-24 14:05:27,186][00980] Weights and Biases integration disabled [2023-02-24 14:05:27,191][00980] Environment var CUDA_VISIBLE_DEVICES is 0 [2023-02-24 14:05:29,280][00980] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=8 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 lr_adaptive_min=1e-06 lr_adaptive_max=0.01 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=1000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository [2023-02-24 14:05:29,284][00980] Saving configuration to /content/train_dir/default_experiment/config.json... 
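Note: the configuration above keeps restart_behavior=resume and load_checkpoint_kind=latest, so the learner below restores checkpoint_000000978_4005888.pth (train_step=978, env_steps=4,005,888). Because train_for_env_steps was overridden to 1,000,000, which the restored state already exceeds, the run signals the inference workers to stop experience collection within seconds of startup, saves a fresh checkpoint, and shuts down; the short run is expected, not a failure. A minimal sketch of this override, assuming the same launcher and VizDoom registration as in the earlier training sketch:

```python
# Minimal sketch, assuming the same sample-factory 2.x launcher and the
# VizDoom registration from the training sketch above; the only change is
# the lower step budget passed from the command line.
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl

argv = [
    "--env=doom_health_gathering_supreme",
    "--num_workers=8",
    "--num_envs_per_worker=4",
    "--train_for_env_steps=1000000",  # already exceeded by the restored state
]
parser, _ = parse_sf_args(argv=argv, evaluation=False)
cfg = parse_full_cfg(parser, argv=argv)

# restart_behavior=resume + load_checkpoint_kind=latest restore the latest
# checkpoint (4,005,888 env steps >= 1,000,000), so run_rl() saves a new
# checkpoint and terminates almost immediately.
run_rl(cfg)
```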
[2023-02-24 14:05:29,288][00980] Rollout worker 0 uses device cpu [2023-02-24 14:05:29,293][00980] Rollout worker 1 uses device cpu [2023-02-24 14:05:29,295][00980] Rollout worker 2 uses device cpu [2023-02-24 14:05:29,297][00980] Rollout worker 3 uses device cpu [2023-02-24 14:05:29,299][00980] Rollout worker 4 uses device cpu [2023-02-24 14:05:29,301][00980] Rollout worker 5 uses device cpu [2023-02-24 14:05:29,303][00980] Rollout worker 6 uses device cpu [2023-02-24 14:05:29,305][00980] Rollout worker 7 uses device cpu [2023-02-24 14:05:29,424][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:05:29,426][00980] InferenceWorker_p0-w0: min num requests: 2 [2023-02-24 14:05:29,459][00980] Starting all processes... [2023-02-24 14:05:29,461][00980] Starting process learner_proc0 [2023-02-24 14:05:29,592][00980] Starting all processes... [2023-02-24 14:05:29,603][00980] Starting process inference_proc0-0 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc0 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc1 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc2 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc3 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc4 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc5 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc6 [2023-02-24 14:05:29,604][00980] Starting process rollout_proc7 [2023-02-24 14:05:37,475][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:05:37,492][20720] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-02-24 14:05:37,530][20720] Num visible devices: 1 [2023-02-24 14:05:37,562][20720] Starting seed is not provided [2023-02-24 14:05:37,563][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:05:37,564][20720] Initializing actor-critic model on device cuda:0 [2023-02-24 14:05:37,565][20720] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:05:37,567][20720] RunningMeanStd input shape: (1,) [2023-02-24 14:05:37,652][20720] ConvEncoder: input_channels=3 [2023-02-24 14:05:38,454][20720] Conv encoder output size: 512 [2023-02-24 14:05:38,462][20720] Policy head output size: 512 [2023-02-24 14:05:38,547][20720] Created Actor Critic model with architecture: [2023-02-24 14:05:38,560][20720] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, 
out_features=5, bias=True) ) ) [2023-02-24 14:05:38,848][20735] Worker 0 uses CPU cores [0] [2023-02-24 14:05:38,874][20734] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:05:38,874][20734] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-02-24 14:05:38,948][20734] Num visible devices: 1 [2023-02-24 14:05:39,017][20737] Worker 2 uses CPU cores [0] [2023-02-24 14:05:39,818][20747] Worker 4 uses CPU cores [0] [2023-02-24 14:05:39,866][20739] Worker 3 uses CPU cores [1] [2023-02-24 14:05:40,106][20741] Worker 1 uses CPU cores [1] [2023-02-24 14:05:40,463][20748] Worker 7 uses CPU cores [1] [2023-02-24 14:05:40,605][20750] Worker 5 uses CPU cores [1] [2023-02-24 14:05:40,683][20753] Worker 6 uses CPU cores [0] [2023-02-24 14:05:43,236][20720] Using optimizer [2023-02-24 14:05:43,237][20720] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... [2023-02-24 14:05:43,279][20720] Loading model from checkpoint [2023-02-24 14:05:43,287][20720] Loaded experiment state at self.train_step=978, self.env_steps=4005888 [2023-02-24 14:05:43,287][20720] Initialized policy 0 weights for model version 978 [2023-02-24 14:05:43,293][20720] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:05:43,302][20720] LearnerWorker_p0 finished initialization! [2023-02-24 14:05:43,650][20734] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:05:43,652][20734] RunningMeanStd input shape: (1,) [2023-02-24 14:05:43,671][20734] ConvEncoder: input_channels=3 [2023-02-24 14:05:43,805][20734] Conv encoder output size: 512 [2023-02-24 14:05:43,805][20734] Policy head output size: 512 [2023-02-24 14:05:46,152][00980] Inference worker 0-0 is ready! [2023-02-24 14:05:46,154][00980] All inference workers are ready! Signal rollout workers to start! [2023-02-24 14:05:46,278][20747] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,277][20735] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,274][20753] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,298][20737] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,310][20739] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,313][20748] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,317][20750] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:46,320][20741] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:05:47,191][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:05:47,495][20747] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,496][20735] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,497][20753] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,497][20750] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,500][20739] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,503][20741] Decorrelating experience for 0 frames... [2023-02-24 14:05:47,857][20741] Decorrelating experience for 32 frames... [2023-02-24 14:05:48,254][20741] Decorrelating experience for 64 frames... [2023-02-24 14:05:48,444][20753] Decorrelating experience for 32 frames... [2023-02-24 14:05:48,461][20735] Decorrelating experience for 32 frames... 
[2023-02-24 14:05:48,471][20737] Decorrelating experience for 0 frames... [2023-02-24 14:05:48,844][20750] Decorrelating experience for 32 frames... [2023-02-24 14:05:49,216][20737] Decorrelating experience for 32 frames... [2023-02-24 14:05:49,254][20748] Decorrelating experience for 0 frames... [2023-02-24 14:05:49,317][20753] Decorrelating experience for 64 frames... [2023-02-24 14:05:49,417][00980] Heartbeat connected on Batcher_0 [2023-02-24 14:05:49,427][00980] Heartbeat connected on LearnerWorker_p0 [2023-02-24 14:05:49,469][00980] Heartbeat connected on InferenceWorker_p0-w0 [2023-02-24 14:05:50,064][20735] Decorrelating experience for 64 frames... [2023-02-24 14:05:50,153][20753] Decorrelating experience for 96 frames... [2023-02-24 14:05:50,238][00980] Heartbeat connected on RolloutWorker_w6 [2023-02-24 14:05:50,333][20748] Decorrelating experience for 32 frames... [2023-02-24 14:05:50,358][20750] Decorrelating experience for 64 frames... [2023-02-24 14:05:50,369][20741] Decorrelating experience for 96 frames... [2023-02-24 14:05:50,673][00980] Heartbeat connected on RolloutWorker_w1 [2023-02-24 14:05:51,453][20737] Decorrelating experience for 64 frames... [2023-02-24 14:05:51,472][20747] Decorrelating experience for 32 frames... [2023-02-24 14:05:51,931][20750] Decorrelating experience for 96 frames... [2023-02-24 14:05:52,032][20748] Decorrelating experience for 64 frames... [2023-02-24 14:05:52,195][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:05:52,267][00980] Heartbeat connected on RolloutWorker_w5 [2023-02-24 14:05:53,434][20735] Decorrelating experience for 96 frames... [2023-02-24 14:05:53,617][20737] Decorrelating experience for 96 frames... [2023-02-24 14:05:53,926][00980] Heartbeat connected on RolloutWorker_w0 [2023-02-24 14:05:54,196][00980] Heartbeat connected on RolloutWorker_w2 [2023-02-24 14:05:54,491][20747] Decorrelating experience for 64 frames... [2023-02-24 14:05:54,847][20748] Decorrelating experience for 96 frames... [2023-02-24 14:05:55,791][00980] Heartbeat connected on RolloutWorker_w7 [2023-02-24 14:05:57,191][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 152.4. Samples: 1524. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:05:57,201][00980] Avg episode reward: [(0, '2.762')] [2023-02-24 14:05:57,549][20720] Signal inference workers to stop experience collection... [2023-02-24 14:05:57,572][20739] Decorrelating experience for 32 frames... [2023-02-24 14:05:57,574][20734] InferenceWorker_p0-w0: stopping experience collection [2023-02-24 14:05:58,375][20747] Decorrelating experience for 96 frames... [2023-02-24 14:05:58,585][00980] Heartbeat connected on RolloutWorker_w4 [2023-02-24 14:05:58,843][20739] Decorrelating experience for 64 frames... [2023-02-24 14:05:59,511][20739] Decorrelating experience for 96 frames... [2023-02-24 14:05:59,622][00980] Heartbeat connected on RolloutWorker_w3 [2023-02-24 14:06:00,370][20720] Signal inference workers to resume experience collection... [2023-02-24 14:06:00,371][20734] InferenceWorker_p0-w0: resuming experience collection [2023-02-24 14:06:00,372][20720] Stopping Batcher_0... [2023-02-24 14:06:00,373][20720] Loop batcher_evt_loop terminating... [2023-02-24 14:06:00,398][00980] Component Batcher_0 stopped! 
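The "Decorrelating experience for N frames" entries above show each rollout worker warming up its environment slots by different amounts (0, 32, 64, 96 frames) before real collection begins, so the workers' trajectories do not start in lockstep. A toy sketch of that staggering idea; the environment class is a stand-in, not the Sample Factory API:

import random

class _DummyEnv:
    # Stand-in for a VizDoom env; only what the sketch needs.
    n_actions = 5
    def reset(self): pass
    def step(self, action): pass

def decorrelate(envs, frames_per_slot=32):
    # Env slot k is warmed up for k * 32 frames with random actions,
    # mirroring the 0/32/64/96 progression in the log above.
    for slot, env in enumerate(envs):
        env.reset()
        for _ in range(slot * frames_per_slot):
            env.step(random.randrange(env.n_actions))

decorrelate([_DummyEnv() for _ in range(4)])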
[2023-02-24 14:06:00,464][20734] Weights refcount: 2 0 [2023-02-24 14:06:00,467][20734] Stopping InferenceWorker_p0-w0... [2023-02-24 14:06:00,467][00980] Component InferenceWorker_p0-w0 stopped! [2023-02-24 14:06:00,467][20734] Loop inference_proc0-0_evt_loop terminating... [2023-02-24 14:06:00,567][20747] Stopping RolloutWorker_w4... [2023-02-24 14:06:00,575][20747] Loop rollout_proc4_evt_loop terminating... [2023-02-24 14:06:00,571][00980] Component RolloutWorker_w4 stopped! [2023-02-24 14:06:00,580][20753] Stopping RolloutWorker_w6... [2023-02-24 14:06:00,581][20753] Loop rollout_proc6_evt_loop terminating... [2023-02-24 14:06:00,581][00980] Component RolloutWorker_w6 stopped! [2023-02-24 14:06:00,585][20737] Stopping RolloutWorker_w2... [2023-02-24 14:06:00,585][20737] Loop rollout_proc2_evt_loop terminating... [2023-02-24 14:06:00,588][00980] Component RolloutWorker_w2 stopped! [2023-02-24 14:06:00,594][20735] Stopping RolloutWorker_w0... [2023-02-24 14:06:00,595][00980] Component RolloutWorker_w0 stopped! [2023-02-24 14:06:00,594][20735] Loop rollout_proc0_evt_loop terminating... [2023-02-24 14:06:00,623][00980] Component RolloutWorker_w7 stopped! [2023-02-24 14:06:00,629][20748] Stopping RolloutWorker_w7... [2023-02-24 14:06:00,630][20748] Loop rollout_proc7_evt_loop terminating... [2023-02-24 14:06:00,659][00980] Component RolloutWorker_w1 stopped! [2023-02-24 14:06:00,664][00980] Component RolloutWorker_w5 stopped! [2023-02-24 14:06:00,668][20750] Stopping RolloutWorker_w5... [2023-02-24 14:06:00,669][20750] Loop rollout_proc5_evt_loop terminating... [2023-02-24 14:06:00,670][20741] Stopping RolloutWorker_w1... [2023-02-24 14:06:00,670][20741] Loop rollout_proc1_evt_loop terminating... [2023-02-24 14:06:00,699][00980] Component RolloutWorker_w3 stopped! [2023-02-24 14:06:00,710][20739] Stopping RolloutWorker_w3... [2023-02-24 14:06:00,710][20739] Loop rollout_proc3_evt_loop terminating... [2023-02-24 14:06:03,543][20720] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... [2023-02-24 14:06:03,663][20720] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000871_3567616.pth [2023-02-24 14:06:03,677][20720] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... [2023-02-24 14:06:03,858][20720] Stopping LearnerWorker_p0... [2023-02-24 14:06:03,859][20720] Loop learner_proc0_evt_loop terminating... [2023-02-24 14:06:03,858][00980] Component LearnerWorker_p0 stopped! [2023-02-24 14:06:03,869][00980] Waiting for process learner_proc0 to stop... [2023-02-24 14:06:04,891][00980] Waiting for process inference_proc0-0 to join... [2023-02-24 14:06:04,893][00980] Waiting for process rollout_proc0 to join... [2023-02-24 14:06:04,896][00980] Waiting for process rollout_proc1 to join... [2023-02-24 14:06:04,901][00980] Waiting for process rollout_proc2 to join... [2023-02-24 14:06:04,902][00980] Waiting for process rollout_proc3 to join... [2023-02-24 14:06:04,904][00980] Waiting for process rollout_proc4 to join... [2023-02-24 14:06:04,909][00980] Waiting for process rollout_proc5 to join... [2023-02-24 14:06:04,911][00980] Waiting for process rollout_proc6 to join... [2023-02-24 14:06:04,913][00980] Waiting for process rollout_proc7 to join... 
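The save/remove pair above reflects a bounded checkpoint history (the configuration dump later in this log shows keep_checkpoints=2): once checkpoint_000000980_4014080.pth is written, the oldest surviving file, checkpoint_000000871_3567616.pth, is deleted. A minimal sketch of that retention rule, as an illustration rather than Sample Factory's actual code:

from pathlib import Path

def prune_checkpoints(ckpt_dir, keep=2):
    # Zero-padded train_step prefixes make lexicographic order chronological.
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep]:
        old.unlink()  # e.g. "Removing .../checkpoint_000000871_3567616.pth"

prune_checkpoints("/content/train_dir/default_experiment/checkpoint_p0")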
[2023-02-24 14:06:04,914][00980] Batcher 0 profile tree view: batching: 0.0456, releasing_batches: 0.0011 [2023-02-24 14:06:04,916][00980] InferenceWorker_p0-w0 profile tree view: wait_policy: 0.0051 wait_policy_total: 7.3640 update_model: 0.0356 weight_update: 0.0127 one_step: 0.0326 handle_policy_step: 3.9057 deserialize: 0.0505, stack: 0.0081, obs_to_device_normalize: 0.3641, forward: 3.0388, send_messages: 0.0944 prepare_outputs: 0.2652 to_cpu: 0.1469 [2023-02-24 14:06:04,917][00980] Learner 0 profile tree view: misc: 0.0000, prepare_batch: 8.0217 train: 1.7323 epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0005, kl_divergence: 0.0026, after_optimizer: 0.0345 calculate_losses: 0.2408 losses_init: 0.0000, forward_head: 0.1146, bptt_initial: 0.0866, tail: 0.0014, advantages_returns: 0.0010, losses: 0.0318 bptt: 0.0049 bptt_forward_core: 0.0048 update: 1.4525 clip: 0.0120 [2023-02-24 14:06:04,918][00980] RolloutWorker_w0 profile tree view: wait_for_trajectories: 0.0005, enqueue_policy_requests: 0.7739, env_step: 2.6918, overhead: 0.0454, complete_rollouts: 0.0249 save_policy_outputs: 0.0684 split_output_tensors: 0.0449 [2023-02-24 14:06:04,920][00980] RolloutWorker_w7 profile tree view: wait_for_trajectories: 0.0015, enqueue_policy_requests: 0.2734, env_step: 1.2700, overhead: 0.0664, complete_rollouts: 0.0008 save_policy_outputs: 0.0364 split_output_tensors: 0.0053 [2023-02-24 14:06:04,922][00980] Loop Runner_EvtLoop terminating... [2023-02-24 14:06:04,924][00980] Runner profile tree view: main_loop: 35.4650 [2023-02-24 14:06:04,927][00980] Collected {0: 4014080}, FPS: 231.0 [2023-02-24 14:06:04,960][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:06:04,963][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 14:06:04,964][00980] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-24 14:06:04,966][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 14:06:04,971][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:06:04,972][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 14:06:04,976][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:06:04,977][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 14:06:04,980][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file! [2023-02-24 14:06:04,982][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! [2023-02-24 14:06:04,984][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 14:06:04,987][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 14:06:04,989][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 14:06:04,990][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
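Before the evaluation output continues below, note that two numbers in the runner summary above tie out exactly: this resumed session collected 4,014,080 - 4,005,888 = 8,192 frames, and dividing by the 35.4650 s main loop reproduces the reported throughput.

frames_this_run = 4_014_080 - 4_005_888  # env steps at exit minus env steps restored = 8192
fps = frames_this_run / 35.4650          # main_loop seconds from the Runner profile
print(round(fps, 1))                     # 231.0 -> "Collected {0: 4014080}, FPS: 231.0"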
[2023-02-24 14:06:04,991][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:06:05,021][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:06:05,024][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:06:05,042][00980] ConvEncoder: input_channels=3 [2023-02-24 14:06:05,093][00980] Conv encoder output size: 512 [2023-02-24 14:06:05,095][00980] Policy head output size: 512 [2023-02-24 14:06:05,123][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... [2023-02-24 14:06:05,565][00980] Num frames 100... [2023-02-24 14:06:05,685][00980] Num frames 200... [2023-02-24 14:06:05,798][00980] Num frames 300... [2023-02-24 14:06:05,912][00980] Num frames 400... [2023-02-24 14:06:06,025][00980] Num frames 500... [2023-02-24 14:06:06,142][00980] Num frames 600... [2023-02-24 14:06:06,254][00980] Num frames 700... [2023-02-24 14:06:06,363][00980] Num frames 800... [2023-02-24 14:06:06,480][00980] Num frames 900... [2023-02-24 14:06:06,611][00980] Num frames 1000... [2023-02-24 14:06:06,767][00980] Avg episode rewards: #0: 22.880, true rewards: #0: 10.880 [2023-02-24 14:06:06,770][00980] Avg episode reward: 22.880, avg true_objective: 10.880 [2023-02-24 14:06:06,788][00980] Num frames 1100... [2023-02-24 14:06:06,898][00980] Num frames 1200... [2023-02-24 14:06:07,011][00980] Num frames 1300... [2023-02-24 14:06:07,122][00980] Num frames 1400... [2023-02-24 14:06:07,234][00980] Num frames 1500... [2023-02-24 14:06:07,364][00980] Num frames 1600... [2023-02-24 14:06:07,487][00980] Num frames 1700... [2023-02-24 14:06:07,603][00980] Num frames 1800... [2023-02-24 14:06:07,718][00980] Num frames 1900... [2023-02-24 14:06:07,835][00980] Num frames 2000... [2023-02-24 14:06:07,949][00980] Num frames 2100... [2023-02-24 14:06:08,032][00980] Avg episode rewards: #0: 23.120, true rewards: #0: 10.620 [2023-02-24 14:06:08,034][00980] Avg episode reward: 23.120, avg true_objective: 10.620 [2023-02-24 14:06:08,123][00980] Num frames 2200... [2023-02-24 14:06:08,238][00980] Num frames 2300... [2023-02-24 14:06:08,364][00980] Num frames 2400... [2023-02-24 14:06:08,483][00980] Num frames 2500... [2023-02-24 14:06:08,607][00980] Num frames 2600... [2023-02-24 14:06:08,738][00980] Num frames 2700... [2023-02-24 14:06:08,854][00980] Num frames 2800... [2023-02-24 14:06:08,970][00980] Num frames 2900... [2023-02-24 14:06:09,084][00980] Num frames 3000... [2023-02-24 14:06:09,201][00980] Num frames 3100... [2023-02-24 14:06:09,319][00980] Num frames 3200... [2023-02-24 14:06:09,437][00980] Num frames 3300... [2023-02-24 14:06:09,559][00980] Num frames 3400... [2023-02-24 14:06:09,674][00980] Num frames 3500... [2023-02-24 14:06:09,795][00980] Num frames 3600... [2023-02-24 14:06:09,910][00980] Num frames 3700... [2023-02-24 14:06:10,035][00980] Num frames 3800... [2023-02-24 14:06:10,160][00980] Num frames 3900... [2023-02-24 14:06:10,281][00980] Num frames 4000... [2023-02-24 14:06:10,443][00980] Avg episode rewards: #0: 31.307, true rewards: #0: 13.640 [2023-02-24 14:06:10,444][00980] Avg episode reward: 31.307, avg true_objective: 13.640 [2023-02-24 14:06:10,460][00980] Num frames 4100... [2023-02-24 14:06:10,583][00980] Num frames 4200... [2023-02-24 14:06:10,703][00980] Num frames 4300... [2023-02-24 14:06:10,816][00980] Num frames 4400... [2023-02-24 14:06:10,934][00980] Num frames 4500... [2023-02-24 14:06:11,046][00980] Num frames 4600... [2023-02-24 14:06:11,171][00980] Num frames 4700... 
[2023-02-24 14:06:11,286][00980] Num frames 4800... [2023-02-24 14:06:11,389][00980] Avg episode rewards: #0: 26.857, true rewards: #0: 12.107 [2023-02-24 14:06:11,392][00980] Avg episode reward: 26.857, avg true_objective: 12.107 [2023-02-24 14:06:11,458][00980] Num frames 4900... [2023-02-24 14:06:11,579][00980] Num frames 5000... [2023-02-24 14:06:11,692][00980] Num frames 5100... [2023-02-24 14:06:11,808][00980] Num frames 5200... [2023-02-24 14:06:11,922][00980] Num frames 5300... [2023-02-24 14:06:12,040][00980] Num frames 5400... [2023-02-24 14:06:12,155][00980] Num frames 5500... [2023-02-24 14:06:12,306][00980] Avg episode rewards: #0: 24.558, true rewards: #0: 11.158 [2023-02-24 14:06:12,308][00980] Avg episode reward: 24.558, avg true_objective: 11.158 [2023-02-24 14:06:12,333][00980] Num frames 5600... [2023-02-24 14:06:12,453][00980] Num frames 5700... [2023-02-24 14:06:12,571][00980] Num frames 5800... [2023-02-24 14:06:12,692][00980] Num frames 5900... [2023-02-24 14:06:12,805][00980] Num frames 6000... [2023-02-24 14:06:12,925][00980] Num frames 6100... [2023-02-24 14:06:13,040][00980] Num frames 6200... [2023-02-24 14:06:13,152][00980] Num frames 6300... [2023-02-24 14:06:13,263][00980] Num frames 6400... [2023-02-24 14:06:13,378][00980] Num frames 6500... [2023-02-24 14:06:13,492][00980] Num frames 6600... [2023-02-24 14:06:13,622][00980] Num frames 6700... [2023-02-24 14:06:13,713][00980] Avg episode rewards: #0: 25.052, true rewards: #0: 11.218 [2023-02-24 14:06:13,718][00980] Avg episode reward: 25.052, avg true_objective: 11.218 [2023-02-24 14:06:13,801][00980] Num frames 6800... [2023-02-24 14:06:13,913][00980] Num frames 6900... [2023-02-24 14:06:14,031][00980] Num frames 7000... [2023-02-24 14:06:14,144][00980] Num frames 7100... [2023-02-24 14:06:14,264][00980] Num frames 7200... [2023-02-24 14:06:14,407][00980] Num frames 7300... [2023-02-24 14:06:14,587][00980] Num frames 7400... [2023-02-24 14:06:14,653][00980] Avg episode rewards: #0: 23.147, true rewards: #0: 10.576 [2023-02-24 14:06:14,655][00980] Avg episode reward: 23.147, avg true_objective: 10.576 [2023-02-24 14:06:14,815][00980] Num frames 7500... [2023-02-24 14:06:14,976][00980] Num frames 7600... [2023-02-24 14:06:15,133][00980] Num frames 7700... [2023-02-24 14:06:15,299][00980] Num frames 7800... [2023-02-24 14:06:15,451][00980] Num frames 7900... [2023-02-24 14:06:15,614][00980] Num frames 8000... [2023-02-24 14:06:15,775][00980] Num frames 8100... [2023-02-24 14:06:15,937][00980] Num frames 8200... [2023-02-24 14:06:16,099][00980] Num frames 8300... [2023-02-24 14:06:16,250][00980] Avg episode rewards: #0: 23.198, true rewards: #0: 10.447 [2023-02-24 14:06:16,255][00980] Avg episode reward: 23.198, avg true_objective: 10.447 [2023-02-24 14:06:16,325][00980] Num frames 8400... [2023-02-24 14:06:16,489][00980] Num frames 8500... [2023-02-24 14:06:16,656][00980] Num frames 8600... [2023-02-24 14:06:16,818][00980] Num frames 8700... [2023-02-24 14:06:16,985][00980] Num frames 8800... [2023-02-24 14:06:17,151][00980] Num frames 8900... [2023-02-24 14:06:17,319][00980] Num frames 9000... [2023-02-24 14:06:17,482][00980] Num frames 9100... [2023-02-24 14:06:17,648][00980] Num frames 9200... [2023-02-24 14:06:17,812][00980] Num frames 9300... [2023-02-24 14:06:17,974][00980] Num frames 9400... [2023-02-24 14:06:18,116][00980] Num frames 9500... 
[2023-02-24 14:06:18,261][00980] Avg episode rewards: #0: 23.638, true rewards: #0: 10.638 [2023-02-24 14:06:18,262][00980] Avg episode reward: 23.638, avg true_objective: 10.638 [2023-02-24 14:06:18,294][00980] Num frames 9600... [2023-02-24 14:06:18,410][00980] Num frames 9700... [2023-02-24 14:06:18,527][00980] Num frames 9800... [2023-02-24 14:06:18,644][00980] Num frames 9900... [2023-02-24 14:06:18,775][00980] Num frames 10000... [2023-02-24 14:06:18,887][00980] Num frames 10100... [2023-02-24 14:06:18,997][00980] Num frames 10200... [2023-02-24 14:06:19,079][00980] Avg episode rewards: #0: 22.522, true rewards: #0: 10.222 [2023-02-24 14:06:19,080][00980] Avg episode reward: 22.522, avg true_objective: 10.222 [2023-02-24 14:07:20,449][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 14:07:20,479][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:07:20,481][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 14:07:20,482][00980] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-24 14:07:20,485][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 14:07:20,486][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:07:20,487][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 14:07:20,488][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-02-24 14:07:20,490][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 14:07:20,491][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-02-24 14:07:20,492][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-02-24 14:07:20,493][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 14:07:20,494][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 14:07:20,495][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 14:07:20,496][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-24 14:07:20,498][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:07:20,521][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:07:20,526][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:07:20,541][00980] ConvEncoder: input_channels=3 [2023-02-24 14:07:20,579][00980] Conv encoder output size: 512 [2023-02-24 14:07:20,580][00980] Policy head output size: 512 [2023-02-24 14:07:20,603][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... [2023-02-24 14:07:21,046][00980] Num frames 100... [2023-02-24 14:07:21,160][00980] Num frames 200... [2023-02-24 14:07:21,286][00980] Num frames 300... [2023-02-24 14:07:21,417][00980] Num frames 400... [2023-02-24 14:07:21,535][00980] Num frames 500... [2023-02-24 14:07:21,654][00980] Num frames 600... [2023-02-24 14:07:21,774][00980] Num frames 700... [2023-02-24 14:07:21,897][00980] Num frames 800... [2023-02-24 14:07:22,009][00980] Num frames 900... [2023-02-24 14:07:22,126][00980] Num frames 1000... 
[2023-02-24 14:07:22,242][00980] Num frames 1100... [2023-02-24 14:07:22,360][00980] Num frames 1200... [2023-02-24 14:07:22,476][00980] Num frames 1300... [2023-02-24 14:07:22,587][00980] Num frames 1400... [2023-02-24 14:07:22,688][00980] Avg episode rewards: #0: 38.400, true rewards: #0: 14.400 [2023-02-24 14:07:22,690][00980] Avg episode reward: 38.400, avg true_objective: 14.400 [2023-02-24 14:07:22,761][00980] Num frames 1500... [2023-02-24 14:07:22,884][00980] Num frames 1600... [2023-02-24 14:07:22,997][00980] Num frames 1700... [2023-02-24 14:07:23,113][00980] Num frames 1800... [2023-02-24 14:07:23,225][00980] Num frames 1900... [2023-02-24 14:07:23,350][00980] Num frames 2000... [2023-02-24 14:07:23,465][00980] Num frames 2100... [2023-02-24 14:07:23,585][00980] Num frames 2200... [2023-02-24 14:07:23,706][00980] Num frames 2300... [2023-02-24 14:07:23,820][00980] Num frames 2400... [2023-02-24 14:07:23,943][00980] Num frames 2500... [2023-02-24 14:07:24,060][00980] Num frames 2600... [2023-02-24 14:07:24,180][00980] Num frames 2700... [2023-02-24 14:07:24,295][00980] Num frames 2800... [2023-02-24 14:07:24,437][00980] Avg episode rewards: #0: 39.320, true rewards: #0: 14.320 [2023-02-24 14:07:24,439][00980] Avg episode reward: 39.320, avg true_objective: 14.320 [2023-02-24 14:07:24,483][00980] Num frames 2900... [2023-02-24 14:07:24,593][00980] Num frames 3000... [2023-02-24 14:07:24,707][00980] Num frames 3100... [2023-02-24 14:07:24,818][00980] Num frames 3200... [2023-02-24 14:07:24,931][00980] Num frames 3300... [2023-02-24 14:07:25,050][00980] Num frames 3400... [2023-02-24 14:07:25,172][00980] Num frames 3500... [2023-02-24 14:07:25,297][00980] Num frames 3600... [2023-02-24 14:07:25,451][00980] Num frames 3700... [2023-02-24 14:07:25,657][00980] Avg episode rewards: #0: 32.650, true rewards: #0: 12.650 [2023-02-24 14:07:25,659][00980] Avg episode reward: 32.650, avg true_objective: 12.650 [2023-02-24 14:07:25,673][00980] Num frames 3800... [2023-02-24 14:07:25,833][00980] Num frames 3900... [2023-02-24 14:07:25,993][00980] Num frames 4000... [2023-02-24 14:07:26,148][00980] Num frames 4100... [2023-02-24 14:07:26,308][00980] Num frames 4200... [2023-02-24 14:07:26,444][00980] Avg episode rewards: #0: 26.877, true rewards: #0: 10.627 [2023-02-24 14:07:26,446][00980] Avg episode reward: 26.877, avg true_objective: 10.627 [2023-02-24 14:07:26,529][00980] Num frames 4300... [2023-02-24 14:07:26,686][00980] Num frames 4400... [2023-02-24 14:07:26,847][00980] Num frames 4500... [2023-02-24 14:07:27,009][00980] Num frames 4600... [2023-02-24 14:07:27,168][00980] Num frames 4700... [2023-02-24 14:07:27,331][00980] Num frames 4800... [2023-02-24 14:07:27,483][00980] Avg episode rewards: #0: 24.118, true rewards: #0: 9.718 [2023-02-24 14:07:27,485][00980] Avg episode reward: 24.118, avg true_objective: 9.718 [2023-02-24 14:07:27,552][00980] Num frames 4900... [2023-02-24 14:07:27,730][00980] Num frames 5000... [2023-02-24 14:07:27,901][00980] Num frames 5100... [2023-02-24 14:07:28,066][00980] Num frames 5200... [2023-02-24 14:07:28,232][00980] Num frames 5300... [2023-02-24 14:07:28,400][00980] Num frames 5400... [2023-02-24 14:07:28,564][00980] Num frames 5500... [2023-02-24 14:07:28,733][00980] Num frames 5600... [2023-02-24 14:07:28,903][00980] Num frames 5700... [2023-02-24 14:07:29,022][00980] Num frames 5800... [2023-02-24 14:07:29,134][00980] Num frames 5900... [2023-02-24 14:07:29,251][00980] Num frames 6000... [2023-02-24 14:07:29,362][00980] Num frames 6100... 
[2023-02-24 14:07:29,475][00980] Num frames 6200... [2023-02-24 14:07:29,592][00980] Num frames 6300... [2023-02-24 14:07:29,711][00980] Num frames 6400... [2023-02-24 14:07:29,826][00980] Num frames 6500... [2023-02-24 14:07:29,949][00980] Num frames 6600... [2023-02-24 14:07:30,066][00980] Num frames 6700... [2023-02-24 14:07:30,189][00980] Num frames 6800... [2023-02-24 14:07:30,303][00980] Num frames 6900... [2023-02-24 14:07:30,431][00980] Avg episode rewards: #0: 30.098, true rewards: #0: 11.598 [2023-02-24 14:07:30,433][00980] Avg episode reward: 30.098, avg true_objective: 11.598 [2023-02-24 14:07:30,484][00980] Num frames 7000... [2023-02-24 14:07:30,609][00980] Num frames 7100... [2023-02-24 14:07:30,725][00980] Num frames 7200... [2023-02-24 14:07:30,843][00980] Num frames 7300... [2023-02-24 14:07:30,958][00980] Num frames 7400... [2023-02-24 14:07:31,078][00980] Num frames 7500... [2023-02-24 14:07:31,202][00980] Num frames 7600... [2023-02-24 14:07:31,320][00980] Num frames 7700... [2023-02-24 14:07:31,437][00980] Num frames 7800... [2023-02-24 14:07:31,556][00980] Num frames 7900... [2023-02-24 14:07:31,679][00980] Num frames 8000... [2023-02-24 14:07:31,799][00980] Num frames 8100... [2023-02-24 14:07:31,926][00980] Num frames 8200... [2023-02-24 14:07:32,015][00980] Avg episode rewards: #0: 29.753, true rewards: #0: 11.753 [2023-02-24 14:07:32,016][00980] Avg episode reward: 29.753, avg true_objective: 11.753 [2023-02-24 14:07:32,100][00980] Num frames 8300... [2023-02-24 14:07:32,215][00980] Num frames 8400... [2023-02-24 14:07:32,326][00980] Num frames 8500... [2023-02-24 14:07:32,439][00980] Num frames 8600... [2023-02-24 14:07:32,558][00980] Num frames 8700... [2023-02-24 14:07:32,675][00980] Num frames 8800... [2023-02-24 14:07:32,786][00980] Num frames 8900... [2023-02-24 14:07:32,900][00980] Num frames 9000... [2023-02-24 14:07:33,022][00980] Num frames 9100... [2023-02-24 14:07:33,096][00980] Avg episode rewards: #0: 28.519, true rewards: #0: 11.394 [2023-02-24 14:07:33,097][00980] Avg episode reward: 28.519, avg true_objective: 11.394 [2023-02-24 14:07:33,197][00980] Num frames 9200... [2023-02-24 14:07:33,309][00980] Num frames 9300... [2023-02-24 14:07:33,431][00980] Num frames 9400... [2023-02-24 14:07:33,547][00980] Num frames 9500... [2023-02-24 14:07:33,667][00980] Num frames 9600... [2023-02-24 14:07:33,792][00980] Num frames 9700... [2023-02-24 14:07:33,909][00980] Num frames 9800... [2023-02-24 14:07:34,026][00980] Num frames 9900... [2023-02-24 14:07:34,137][00980] Num frames 10000... [2023-02-24 14:07:34,271][00980] Avg episode rewards: #0: 27.523, true rewards: #0: 11.190 [2023-02-24 14:07:34,273][00980] Avg episode reward: 27.523, avg true_objective: 11.190 [2023-02-24 14:07:34,312][00980] Num frames 10100... [2023-02-24 14:07:34,431][00980] Num frames 10200... [2023-02-24 14:07:34,546][00980] Num frames 10300... [2023-02-24 14:07:34,672][00980] Num frames 10400... [2023-02-24 14:07:34,785][00980] Num frames 10500... [2023-02-24 14:07:34,907][00980] Num frames 10600... [2023-02-24 14:07:35,024][00980] Num frames 10700... [2023-02-24 14:07:35,142][00980] Num frames 10800... [2023-02-24 14:07:35,255][00980] Num frames 10900... [2023-02-24 14:07:35,378][00980] Num frames 11000... [2023-02-24 14:07:35,489][00980] Num frames 11100... [2023-02-24 14:07:35,614][00980] Num frames 11200... [2023-02-24 14:07:35,734][00980] Num frames 11300... 
[2023-02-24 14:07:35,852][00980] Avg episode rewards: #0: 27.851, true rewards: #0: 11.351 [2023-02-24 14:07:35,853][00980] Avg episode reward: 27.851, avg true_objective: 11.351 [2023-02-24 14:08:43,682][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 14:08:46,837][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme [2023-02-24 14:09:29,770][00980] Environment doom_basic already registered, overwriting... [2023-02-24 14:09:29,773][00980] Environment doom_two_colors_easy already registered, overwriting... [2023-02-24 14:09:29,775][00980] Environment doom_two_colors_hard already registered, overwriting... [2023-02-24 14:09:29,776][00980] Environment doom_dm already registered, overwriting... [2023-02-24 14:09:29,777][00980] Environment doom_dwango5 already registered, overwriting... [2023-02-24 14:09:29,783][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-02-24 14:09:29,784][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-02-24 14:09:29,785][00980] Environment doom_my_way_home already registered, overwriting... [2023-02-24 14:09:29,786][00980] Environment doom_deadly_corridor already registered, overwriting... [2023-02-24 14:09:29,787][00980] Environment doom_defend_the_center already registered, overwriting... [2023-02-24 14:09:29,788][00980] Environment doom_defend_the_line already registered, overwriting... [2023-02-24 14:09:29,789][00980] Environment doom_health_gathering already registered, overwriting... [2023-02-24 14:09:29,791][00980] Environment doom_health_gathering_supreme already registered, overwriting... [2023-02-24 14:09:29,792][00980] Environment doom_battle already registered, overwriting... [2023-02-24 14:09:29,793][00980] Environment doom_battle2 already registered, overwriting... [2023-02-24 14:09:29,794][00980] Environment doom_duel_bots already registered, overwriting... [2023-02-24 14:09:29,796][00980] Environment doom_deathmatch_bots already registered, overwriting... [2023-02-24 14:09:29,797][00980] Environment doom_duel already registered, overwriting... [2023-02-24 14:09:29,798][00980] Environment doom_deathmatch_full already registered, overwriting... [2023-02-24 14:09:29,799][00980] Environment doom_benchmark already registered, overwriting... [2023-02-24 14:09:29,801][00980] register_encoder_factory: [2023-02-24 14:09:29,829][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:09:29,831][00980] Overriding arg 'train_for_env_steps' with value 2000000 passed from command line [2023-02-24 14:09:29,845][00980] Experiment dir /content/train_dir/default_experiment already exists! [2023-02-24 14:09:29,846][00980] Resuming existing experiment from /content/train_dir/default_experiment... 
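This resume lowers train_for_env_steps to 2,000,000, but the checkpoint loaded just below already sits at 4,014,080 env steps, so the stopping condition is satisfied from the start; that is why the run that follows collects only a single accumulated batch before shutting down.

env_steps_restored = 4_014_080   # from checkpoint_000000980_4014080.pth, loaded below
train_for_env_steps = 2_000_000  # the command-line override logged above
print(env_steps_restored >= train_for_env_steps)  # True -> learner stops almost immediately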
[2023-02-24 14:09:29,848][00980] Weights and Biases integration disabled
[2023-02-24 14:09:29,855][00980] Environment var CUDA_VISIBLE_DEVICES is 0
[2023-02-24 14:09:32,673][00980] Starting experiment with the following configuration:
help=False
algo=APPO
env=doom_health_gathering_supreme
experiment=default_experiment
train_dir=/content/train_dir
restart_behavior=resume
device=gpu
seed=None
num_policies=1
async_rl=True
serial_mode=False
batched_sampling=False
num_batches_to_accumulate=2
worker_num_splits=2
policy_workers_per_policy=1
max_policy_lag=1000
num_workers=8
num_envs_per_worker=4
batch_size=1024
num_batches_per_epoch=1
num_epochs=1
rollout=32
recurrence=32
shuffle_minibatches=False
gamma=0.99
reward_scale=1.0
reward_clip=1000.0
value_bootstrap=False
normalize_returns=True
exploration_loss_coeff=0.001
value_loss_coeff=0.5
kl_loss_coeff=0.0
exploration_loss=symmetric_kl
gae_lambda=0.95
ppo_clip_ratio=0.1
ppo_clip_value=0.2
with_vtrace=False
vtrace_rho=1.0
vtrace_c=1.0
optimizer=adam
adam_eps=1e-06
adam_beta1=0.9
adam_beta2=0.999
max_grad_norm=4.0
learning_rate=0.0001
lr_schedule=constant
lr_schedule_kl_threshold=0.008
lr_adaptive_min=1e-06
lr_adaptive_max=0.01
obs_subtract_mean=0.0
obs_scale=255.0
normalize_input=True
normalize_input_keys=None
decorrelate_experience_max_seconds=0
decorrelate_envs_on_one_worker=True
actor_worker_gpus=[]
set_workers_cpu_affinity=True
force_envs_single_thread=False
default_niceness=0
log_to_file=True
experiment_summaries_interval=10
flush_summaries_interval=30
stats_avg=100
summaries_use_frameskip=True
heartbeat_interval=20
heartbeat_reporting_interval=600
train_for_env_steps=2000000
train_for_seconds=10000000000
save_every_sec=120
keep_checkpoints=2
load_checkpoint_kind=latest
save_milestones_sec=-1
save_best_every_sec=5
save_best_metric=reward
save_best_after=100000
benchmark=False
encoder_mlp_layers=[512, 512]
encoder_conv_architecture=convnet_simple
encoder_conv_mlp_layers=[512]
use_rnn=True
rnn_size=512
rnn_type=gru
rnn_num_layers=1
decoder_mlp_layers=[]
nonlinearity=elu
policy_initialization=orthogonal
policy_init_gain=1.0
actor_critic_share_weights=True
adaptive_stddev=True
continuous_tanh_scale=0.0
initial_stddev=1.0
use_env_info_cache=False
env_gpu_actions=False
env_gpu_observations=True
env_frameskip=4
env_framestack=1
pixel_format=CHW
use_record_episode_statistics=False
with_wandb=False
wandb_user=None
wandb_project=sample_factory
wandb_group=None
wandb_job_type=SF
wandb_tags=[]
with_pbt=False
pbt_mix_policies_in_one_env=True
pbt_period_env_steps=5000000
pbt_start_mutation=20000000
pbt_replace_fraction=0.3
pbt_mutation_rate=0.15
pbt_replace_reward_gap=0.1
pbt_replace_reward_gap_absolute=1e-06
pbt_optimize_gamma=False
pbt_target_objective=true_objective
pbt_perturb_min=1.1
pbt_perturb_max=1.5
num_agents=-1
num_humans=0
num_bots=-1
start_bot_difficulty=None
timelimit=None
res_w=128
res_h=72
wide_aspect_ratio=False
eval_env_frameskip=1
fps=35
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
git_hash=unknown
git_repo_name=not a git repository
[2023-02-24 14:09:32,676][00980] Saving configuration to /content/train_dir/default_experiment/config.json...
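A sketch of the resume-time bookkeeping implied by the entries above (load the saved config.json, apply command-line overrides, write the merged result back); this illustrates the logged behavior rather than Sample Factory's actual implementation:

import json

cfg_path = "/content/train_dir/default_experiment/config.json"
with open(cfg_path) as f:
    cfg = json.load(f)  # "Loading existing experiment configuration..."

cfg.update({"train_for_env_steps": 2_000_000})  # "Overriding arg ... passed from command line"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)  # "Saving configuration to .../config.json..."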
[2023-02-24 14:09:32,680][00980] Rollout worker 0 uses device cpu [2023-02-24 14:09:32,683][00980] Rollout worker 1 uses device cpu [2023-02-24 14:09:32,684][00980] Rollout worker 2 uses device cpu [2023-02-24 14:09:32,685][00980] Rollout worker 3 uses device cpu [2023-02-24 14:09:32,686][00980] Rollout worker 4 uses device cpu [2023-02-24 14:09:32,688][00980] Rollout worker 5 uses device cpu [2023-02-24 14:09:32,689][00980] Rollout worker 6 uses device cpu [2023-02-24 14:09:32,691][00980] Rollout worker 7 uses device cpu [2023-02-24 14:09:32,810][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:09:32,813][00980] InferenceWorker_p0-w0: min num requests: 2 [2023-02-24 14:09:32,846][00980] Starting all processes... [2023-02-24 14:09:32,847][00980] Starting process learner_proc0 [2023-02-24 14:09:32,984][00980] Starting all processes... [2023-02-24 14:09:32,994][00980] Starting process inference_proc0-0 [2023-02-24 14:09:32,994][00980] Starting process rollout_proc0 [2023-02-24 14:09:32,996][00980] Starting process rollout_proc1 [2023-02-24 14:09:32,999][00980] Starting process rollout_proc2 [2023-02-24 14:09:32,999][00980] Starting process rollout_proc3 [2023-02-24 14:09:32,999][00980] Starting process rollout_proc4 [2023-02-24 14:09:33,000][00980] Starting process rollout_proc5 [2023-02-24 14:09:33,000][00980] Starting process rollout_proc6 [2023-02-24 14:09:33,000][00980] Starting process rollout_proc7 [2023-02-24 14:09:41,559][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:09:41,559][24666] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2023-02-24 14:09:41,666][24666] Num visible devices: 1 [2023-02-24 14:09:41,726][24666] Starting seed is not provided [2023-02-24 14:09:41,727][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:09:41,728][24666] Initializing actor-critic model on device cuda:0 [2023-02-24 14:09:41,729][24666] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:09:41,730][24666] RunningMeanStd input shape: (1,) [2023-02-24 14:09:41,877][24666] ConvEncoder: input_channels=3 [2023-02-24 14:09:43,471][24666] Conv encoder output size: 512 [2023-02-24 14:09:43,472][24666] Policy head output size: 512 [2023-02-24 14:09:43,665][24666] Created Actor Critic model with architecture: [2023-02-24 14:09:43,674][24666] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-02-24 14:09:43,933][24680] Worker 0 uses CPU cores [0] [2023-02-24 14:09:43,977][24681] Worker 1 uses CPU cores [1] [2023-02-24 14:09:44,335][24683] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:09:44,340][24683] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2023-02-24 14:09:44,430][24683] Num visible devices: 1 [2023-02-24 14:09:45,037][24690] Worker 2 uses CPU cores [0] [2023-02-24 14:09:45,309][24688] Worker 3 uses CPU cores [1] [2023-02-24 14:09:45,561][24695] Worker 5 uses CPU cores [1] [2023-02-24 14:09:45,612][24693] Worker 4 uses CPU cores [0] [2023-02-24 14:09:45,799][24701] Worker 7 uses CPU cores [1] [2023-02-24 14:09:45,881][24703] Worker 6 uses CPU cores [0] [2023-02-24 14:09:49,269][24666] Using optimizer [2023-02-24 14:09:49,270][24666] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... [2023-02-24 14:09:49,304][24666] Loading model from checkpoint [2023-02-24 14:09:49,308][24666] Loaded experiment state at self.train_step=980, self.env_steps=4014080 [2023-02-24 14:09:49,308][24666] Initialized policy 0 weights for model version 980 [2023-02-24 14:09:49,312][24666] LearnerWorker_p0 finished initialization! [2023-02-24 14:09:49,314][24666] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2023-02-24 14:09:49,529][24683] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:09:49,531][24683] RunningMeanStd input shape: (1,) [2023-02-24 14:09:49,543][24683] ConvEncoder: input_channels=3 [2023-02-24 14:09:49,645][24683] Conv encoder output size: 512 [2023-02-24 14:09:49,645][24683] Policy head output size: 512 [2023-02-24 14:09:49,856][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4014080. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:09:51,910][00980] Inference worker 0-0 is ready! [2023-02-24 14:09:51,911][00980] All inference workers are ready! Signal rollout workers to start! [2023-02-24 14:09:52,016][24680] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,012][24693] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,014][24703] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,017][24690] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,031][24688] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,032][24695] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,028][24701] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,030][24681] Doom resolution: 160x120, resize resolution: (128, 72) [2023-02-24 14:09:52,803][00980] Heartbeat connected on Batcher_0 [2023-02-24 14:09:52,810][00980] Heartbeat connected on LearnerWorker_p0 [2023-02-24 14:09:52,840][24701] Decorrelating experience for 0 frames... [2023-02-24 14:09:52,841][24695] Decorrelating experience for 0 frames... [2023-02-24 14:09:52,845][00980] Heartbeat connected on InferenceWorker_p0-w0 [2023-02-24 14:09:53,214][24693] Decorrelating experience for 0 frames... [2023-02-24 14:09:53,221][24690] Decorrelating experience for 0 frames... [2023-02-24 14:09:53,227][24703] Decorrelating experience for 0 frames... [2023-02-24 14:09:53,926][24688] Decorrelating experience for 0 frames... [2023-02-24 14:09:53,929][24695] Decorrelating experience for 32 frames...
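The recurring "RunningMeanStd input shape" entries refer to the running normalizers the model carries: shape (3, 72, 128) for the resized RGB observations and (1,) for scalar returns (normalize_input=True and normalize_returns=True in the configuration). A self-contained sketch of the standard running mean/variance update (the parallel-batch formula used by common RL baselines; Sample Factory's in-place variant may differ in detail):

import numpy as np

class RunningMeanStd:
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4  # avoids division by zero before the first update

    def update(self, batch):  # batch has shape (N, *shape)
        b_mean, b_var, b_n = batch.mean(0), batch.var(0), batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + b_n
        self.mean = self.mean + delta * b_n / total
        m_a = self.var * self.count
        m_b = b_var * b_n
        self.var = (m_a + m_b + delta ** 2 * self.count * b_n / total) / total
        self.count = total

rms = RunningMeanStd((1,))
rms.update(np.random.randn(512, 1))  # e.g. a batch of scalar returns
print(rms.mean, rms.var)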
[2023-02-24 14:09:53,931][24701] Decorrelating experience for 32 frames... [2023-02-24 14:09:54,439][24703] Decorrelating experience for 32 frames... [2023-02-24 14:09:54,441][24693] Decorrelating experience for 32 frames... [2023-02-24 14:09:54,501][24680] Decorrelating experience for 0 frames... [2023-02-24 14:09:54,732][24688] Decorrelating experience for 32 frames... [2023-02-24 14:09:54,847][24695] Decorrelating experience for 64 frames... [2023-02-24 14:09:54,855][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:09:55,223][24690] Decorrelating experience for 32 frames... [2023-02-24 14:09:55,748][24680] Decorrelating experience for 32 frames... [2023-02-24 14:09:55,944][24688] Decorrelating experience for 64 frames... [2023-02-24 14:09:55,960][24701] Decorrelating experience for 64 frames... [2023-02-24 14:09:55,990][24703] Decorrelating experience for 64 frames... [2023-02-24 14:09:56,934][24681] Decorrelating experience for 0 frames... [2023-02-24 14:09:57,310][24690] Decorrelating experience for 64 frames... [2023-02-24 14:09:58,045][24695] Decorrelating experience for 96 frames... [2023-02-24 14:09:58,046][24693] Decorrelating experience for 64 frames... [2023-02-24 14:09:58,090][24680] Decorrelating experience for 64 frames... [2023-02-24 14:09:58,226][24688] Decorrelating experience for 96 frames... [2023-02-24 14:09:58,248][24701] Decorrelating experience for 96 frames... [2023-02-24 14:09:58,487][00980] Heartbeat connected on RolloutWorker_w5 [2023-02-24 14:09:58,896][00980] Heartbeat connected on RolloutWorker_w3 [2023-02-24 14:09:58,898][00980] Heartbeat connected on RolloutWorker_w7 [2023-02-24 14:09:59,487][24703] Decorrelating experience for 96 frames... [2023-02-24 14:09:59,560][24681] Decorrelating experience for 32 frames... [2023-02-24 14:09:59,855][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:10:00,072][00980] Heartbeat connected on RolloutWorker_w6 [2023-02-24 14:10:00,336][24690] Decorrelating experience for 96 frames... [2023-02-24 14:10:00,841][00980] Heartbeat connected on RolloutWorker_w2 [2023-02-24 14:10:01,251][24693] Decorrelating experience for 96 frames... [2023-02-24 14:10:01,256][24680] Decorrelating experience for 96 frames... [2023-02-24 14:10:01,914][00980] Heartbeat connected on RolloutWorker_w0 [2023-02-24 14:10:01,917][24681] Decorrelating experience for 64 frames... [2023-02-24 14:10:01,957][00980] Heartbeat connected on RolloutWorker_w4 [2023-02-24 14:10:04,856][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4014080. Throughput: 0: 116.8. Samples: 1752. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:10:04,862][00980] Avg episode reward: [(0, '1.850')] [2023-02-24 14:10:04,999][24666] Signal inference workers to stop experience collection... [2023-02-24 14:10:05,017][24683] InferenceWorker_p0-w0: stopping experience collection [2023-02-24 14:10:05,324][24681] Decorrelating experience for 96 frames... [2023-02-24 14:10:05,455][00980] Heartbeat connected on RolloutWorker_w1 [2023-02-24 14:10:07,790][24666] Signal inference workers to resume experience collection... [2023-02-24 14:10:07,790][24683] InferenceWorker_p0-w0: resuming experience collection [2023-02-24 14:10:07,796][24666] Stopping Batcher_0... 
[2023-02-24 14:10:07,798][24666] Loop batcher_evt_loop terminating... [2023-02-24 14:10:07,799][00980] Component Batcher_0 stopped! [2023-02-24 14:10:07,851][24703] Stopping RolloutWorker_w6... [2023-02-24 14:10:07,851][00980] Component RolloutWorker_w6 stopped! [2023-02-24 14:10:07,860][24693] Stopping RolloutWorker_w4... [2023-02-24 14:10:07,861][24693] Loop rollout_proc4_evt_loop terminating... [2023-02-24 14:10:07,854][24680] Stopping RolloutWorker_w0... [2023-02-24 14:10:07,863][24680] Loop rollout_proc0_evt_loop terminating... [2023-02-24 14:10:07,855][00980] Component RolloutWorker_w0 stopped! [2023-02-24 14:10:07,863][00980] Component RolloutWorker_w4 stopped! [2023-02-24 14:10:07,866][00980] Component RolloutWorker_w2 stopped! [2023-02-24 14:10:07,866][24690] Stopping RolloutWorker_w2... [2023-02-24 14:10:07,851][24703] Loop rollout_proc6_evt_loop terminating... [2023-02-24 14:10:07,870][24690] Loop rollout_proc2_evt_loop terminating... [2023-02-24 14:10:07,886][00980] Component RolloutWorker_w1 stopped! [2023-02-24 14:10:07,886][24681] Stopping RolloutWorker_w1... [2023-02-24 14:10:07,908][00980] Component RolloutWorker_w7 stopped! [2023-02-24 14:10:07,916][00980] Component RolloutWorker_w3 stopped! [2023-02-24 14:10:07,917][24688] Stopping RolloutWorker_w3... [2023-02-24 14:10:07,923][00980] Component RolloutWorker_w5 stopped! [2023-02-24 14:10:07,909][24701] Stopping RolloutWorker_w7... [2023-02-24 14:10:07,904][24681] Loop rollout_proc1_evt_loop terminating... [2023-02-24 14:10:07,924][24695] Stopping RolloutWorker_w5... [2023-02-24 14:10:07,922][24688] Loop rollout_proc3_evt_loop terminating... [2023-02-24 14:10:07,928][24701] Loop rollout_proc7_evt_loop terminating... [2023-02-24 14:10:07,930][24695] Loop rollout_proc5_evt_loop terminating... [2023-02-24 14:10:07,947][24683] Weights refcount: 2 0 [2023-02-24 14:10:07,957][24683] Stopping InferenceWorker_p0-w0... [2023-02-24 14:10:07,957][00980] Component InferenceWorker_p0-w0 stopped! [2023-02-24 14:10:07,962][24683] Loop inference_proc0-0_evt_loop terminating... [2023-02-24 14:10:10,081][24666] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... [2023-02-24 14:10:10,180][24666] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth [2023-02-24 14:10:10,186][24666] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... [2023-02-24 14:10:10,319][24666] Stopping LearnerWorker_p0... [2023-02-24 14:10:10,320][24666] Loop learner_proc0_evt_loop terminating... [2023-02-24 14:10:10,321][00980] Component LearnerWorker_p0 stopped! [2023-02-24 14:10:10,323][00980] Waiting for process learner_proc0 to stop... [2023-02-24 14:10:11,370][00980] Waiting for process inference_proc0-0 to join... [2023-02-24 14:10:11,373][00980] Waiting for process rollout_proc0 to join... [2023-02-24 14:10:11,376][00980] Waiting for process rollout_proc1 to join... [2023-02-24 14:10:11,379][00980] Waiting for process rollout_proc2 to join... [2023-02-24 14:10:11,382][00980] Waiting for process rollout_proc3 to join... [2023-02-24 14:10:11,385][00980] Waiting for process rollout_proc4 to join... [2023-02-24 14:10:11,386][00980] Waiting for process rollout_proc5 to join... [2023-02-24 14:10:11,388][00980] Waiting for process rollout_proc6 to join... [2023-02-24 14:10:11,390][00980] Waiting for process rollout_proc7 to join... 
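The orderly teardown above (signal every component's event loop to stop, then wait for each child process to join) is the standard shutdown pattern for a multi-process runner. In miniature, with plain multiprocessing rather than Sample Factory's signal/event-loop machinery:

import multiprocessing as mp

def worker(stop_evt):
    while not stop_evt.is_set():
        pass  # rollout / inference / learner work would happen here

if __name__ == "__main__":
    stop = mp.Event()
    procs = [mp.Process(target=worker, args=(stop,)) for _ in range(3)]
    for p in procs:
        p.start()
    stop.set()      # "Stopping RolloutWorker_w..." / "Loop ... terminating..."
    for p in procs:
        p.join()    # "Waiting for process ... to join..."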
[2023-02-24 14:10:11,391][00980] Batcher 0 profile tree view: batching: 0.1455, releasing_batches: 0.0007 [2023-02-24 14:10:11,392][00980] InferenceWorker_p0-w0 profile tree view: wait_policy: 0.0126 wait_policy_total: 8.5993 update_model: 0.0278 weight_update: 0.0017 one_step: 0.0564 handle_policy_step: 4.2885 deserialize: 0.0514, stack: 0.0076, obs_to_device_normalize: 0.4279, forward: 3.3584, send_messages: 0.1076 prepare_outputs: 0.2513 to_cpu: 0.1288 [2023-02-24 14:10:11,393][00980] Learner 0 profile tree view: misc: 0.0000, prepare_batch: 5.3320 train: 1.3646 epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0003, kl_divergence: 0.0015, after_optimizer: 0.0074 calculate_losses: 0.2007 losses_init: 0.0000, forward_head: 0.1107, bptt_initial: 0.0672, tail: 0.0012, advantages_returns: 0.0008, losses: 0.0182 bptt: 0.0023 bptt_forward_core: 0.0021 update: 1.1537 clip: 0.0045 [2023-02-24 14:10:11,395][00980] RolloutWorker_w0 profile tree view: wait_for_trajectories: 0.0015, enqueue_policy_requests: 0.6653, env_step: 1.8552, overhead: 0.1000, complete_rollouts: 0.0290 save_policy_outputs: 0.1110 split_output_tensors: 0.0652 [2023-02-24 14:10:11,400][00980] RolloutWorker_w7 profile tree view: wait_for_trajectories: 0.0007, enqueue_policy_requests: 0.5677, env_step: 2.9069, overhead: 0.1583, complete_rollouts: 0.0367 save_policy_outputs: 0.1729 split_output_tensors: 0.1089 [2023-02-24 14:10:11,401][00980] Loop Runner_EvtLoop terminating... [2023-02-24 14:10:11,403][00980] Runner profile tree view: main_loop: 38.5571 [2023-02-24 14:10:11,405][00980] Collected {0: 4022272}, FPS: 212.5 [2023-02-24 14:10:11,454][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:10:11,455][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 14:10:11,457][00980] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-24 14:10:11,459][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 14:10:11,463][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:10:11,469][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 14:10:11,472][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:10:11,473][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 14:10:11,475][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file! [2023-02-24 14:10:11,476][00980] Adding new argument 'hf_repository'=None that is not in the saved config file! [2023-02-24 14:10:11,478][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 14:10:11,479][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 14:10:11,480][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 14:10:11,481][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! 
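In the evaluation that follows, each "Avg episode rewards" entry is a running mean over completed episodes, printed next to a "true rewards" running mean (the unshaped objective; the config's pbt_target_objective=true_objective points at the same quantity). Consecutive averages let you recover individual episodes, e.g. from the first two episodes printed below:

avg1, avg2 = 22.240, 23.880   # running shaped-reward averages after episodes 1 and 2
ep2_reward = 2 * avg2 - avg1  # 25.52: episode 2's shaped reward on its own
true1, true2 = 8.240, 9.880
ep2_true = 2 * true2 - true1  # 11.52: episode 2's true objective
print(ep2_reward, ep2_true)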
[2023-02-24 14:10:11,483][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:10:11,505][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:10:11,512][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:10:11,531][00980] ConvEncoder: input_channels=3 [2023-02-24 14:10:11,573][00980] Conv encoder output size: 512 [2023-02-24 14:10:11,575][00980] Policy head output size: 512 [2023-02-24 14:10:11,597][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... [2023-02-24 14:10:12,226][00980] Num frames 100... [2023-02-24 14:10:12,342][00980] Num frames 200... [2023-02-24 14:10:12,466][00980] Num frames 300... [2023-02-24 14:10:12,599][00980] Num frames 400... [2023-02-24 14:10:12,711][00980] Num frames 500... [2023-02-24 14:10:12,828][00980] Num frames 600... [2023-02-24 14:10:12,947][00980] Num frames 700... [2023-02-24 14:10:13,071][00980] Num frames 800... [2023-02-24 14:10:13,155][00980] Avg episode rewards: #0: 22.240, true rewards: #0: 8.240 [2023-02-24 14:10:13,157][00980] Avg episode reward: 22.240, avg true_objective: 8.240 [2023-02-24 14:10:13,254][00980] Num frames 900... [2023-02-24 14:10:13,391][00980] Num frames 1000... [2023-02-24 14:10:13,503][00980] Num frames 1100... [2023-02-24 14:10:13,625][00980] Num frames 1200... [2023-02-24 14:10:13,742][00980] Num frames 1300... [2023-02-24 14:10:13,869][00980] Num frames 1400... [2023-02-24 14:10:13,995][00980] Num frames 1500... [2023-02-24 14:10:14,112][00980] Num frames 1600... [2023-02-24 14:10:14,228][00980] Num frames 1700... [2023-02-24 14:10:14,350][00980] Num frames 1800... [2023-02-24 14:10:14,468][00980] Num frames 1900... [2023-02-24 14:10:14,612][00980] Avg episode rewards: #0: 23.880, true rewards: #0: 9.880 [2023-02-24 14:10:14,614][00980] Avg episode reward: 23.880, avg true_objective: 9.880 [2023-02-24 14:10:14,646][00980] Num frames 2000... [2023-02-24 14:10:14,768][00980] Num frames 2100... [2023-02-24 14:10:14,893][00980] Num frames 2200... [2023-02-24 14:10:14,946][00980] Avg episode rewards: #0: 17.000, true rewards: #0: 7.333 [2023-02-24 14:10:14,948][00980] Avg episode reward: 17.000, avg true_objective: 7.333 [2023-02-24 14:10:15,119][00980] Num frames 2300... [2023-02-24 14:10:15,316][00980] Num frames 2400... [2023-02-24 14:10:15,494][00980] Num frames 2500... [2023-02-24 14:10:15,671][00980] Num frames 2600... [2023-02-24 14:10:15,840][00980] Num frames 2700... [2023-02-24 14:10:16,019][00980] Num frames 2800... [2023-02-24 14:10:16,200][00980] Num frames 2900... [2023-02-24 14:10:16,366][00980] Num frames 3000... [2023-02-24 14:10:16,531][00980] Num frames 3100... [2023-02-24 14:10:16,699][00980] Num frames 3200... [2023-02-24 14:10:16,857][00980] Num frames 3300... [2023-02-24 14:10:17,023][00980] Num frames 3400... [2023-02-24 14:10:17,190][00980] Num frames 3500... [2023-02-24 14:10:17,357][00980] Num frames 3600... [2023-02-24 14:10:17,518][00980] Num frames 3700... [2023-02-24 14:10:17,635][00980] Avg episode rewards: #0: 22.590, true rewards: #0: 9.340 [2023-02-24 14:10:17,636][00980] Avg episode reward: 22.590, avg true_objective: 9.340 [2023-02-24 14:10:17,746][00980] Num frames 3800... [2023-02-24 14:10:17,911][00980] Num frames 3900... [2023-02-24 14:10:18,074][00980] Num frames 4000... [2023-02-24 14:10:18,236][00980] Num frames 4100... 
[2023-02-24 14:10:18,371][00980] Avg episode rewards: #0: 19.304, true rewards: #0: 8.304 [2023-02-24 14:10:18,373][00980] Avg episode reward: 19.304, avg true_objective: 8.304 [2023-02-24 14:10:18,451][00980] Num frames 4200... [2023-02-24 14:10:18,613][00980] Num frames 4300... [2023-02-24 14:10:18,785][00980] Num frames 4400... [2023-02-24 14:10:18,963][00980] Num frames 4500... [2023-02-24 14:10:19,145][00980] Num frames 4600... [2023-02-24 14:10:19,318][00980] Num frames 4700... [2023-02-24 14:10:19,493][00980] Num frames 4800... [2023-02-24 14:10:19,629][00980] Num frames 4900... [2023-02-24 14:10:19,750][00980] Num frames 5000... [2023-02-24 14:10:19,875][00980] Num frames 5100... [2023-02-24 14:10:19,993][00980] Num frames 5200... [2023-02-24 14:10:20,122][00980] Num frames 5300... [2023-02-24 14:10:20,241][00980] Num frames 5400... [2023-02-24 14:10:20,361][00980] Num frames 5500... [2023-02-24 14:10:20,480][00980] Num frames 5600... [2023-02-24 14:10:20,592][00980] Num frames 5700... [2023-02-24 14:10:20,709][00980] Num frames 5800... [2023-02-24 14:10:20,823][00980] Num frames 5900... [2023-02-24 14:10:20,893][00980] Avg episode rewards: #0: 23.187, true rewards: #0: 9.853 [2023-02-24 14:10:20,895][00980] Avg episode reward: 23.187, avg true_objective: 9.853 [2023-02-24 14:10:20,999][00980] Num frames 6000... [2023-02-24 14:10:21,065][00980] Avg episode rewards: #0: 20.154, true rewards: #0: 8.583 [2023-02-24 14:10:21,066][00980] Avg episode reward: 20.154, avg true_objective: 8.583 [2023-02-24 14:10:21,180][00980] Num frames 6100... [2023-02-24 14:10:21,305][00980] Num frames 6200... [2023-02-24 14:10:21,421][00980] Num frames 6300... [2023-02-24 14:10:21,544][00980] Num frames 6400... [2023-02-24 14:10:21,669][00980] Num frames 6500... [2023-02-24 14:10:21,781][00980] Num frames 6600... [2023-02-24 14:10:21,903][00980] Num frames 6700... [2023-02-24 14:10:22,023][00980] Num frames 6800... [2023-02-24 14:10:22,151][00980] Num frames 6900... [2023-02-24 14:10:22,270][00980] Num frames 7000... [2023-02-24 14:10:22,391][00980] Num frames 7100... [2023-02-24 14:10:22,520][00980] Avg episode rewards: #0: 20.951, true rewards: #0: 8.951 [2023-02-24 14:10:22,521][00980] Avg episode reward: 20.951, avg true_objective: 8.951 [2023-02-24 14:10:22,572][00980] Num frames 7200... [2023-02-24 14:10:22,691][00980] Num frames 7300... [2023-02-24 14:10:22,808][00980] Num frames 7400... [2023-02-24 14:10:22,931][00980] Num frames 7500... [2023-02-24 14:10:23,056][00980] Num frames 7600... [2023-02-24 14:10:23,189][00980] Num frames 7700... [2023-02-24 14:10:23,314][00980] Num frames 7800... [2023-02-24 14:10:23,440][00980] Num frames 7900... [2023-02-24 14:10:23,556][00980] Num frames 8000... [2023-02-24 14:10:23,682][00980] Num frames 8100... [2023-02-24 14:10:23,839][00980] Avg episode rewards: #0: 21.100, true rewards: #0: 9.100 [2023-02-24 14:10:23,841][00980] Avg episode reward: 21.100, avg true_objective: 9.100 [2023-02-24 14:10:23,857][00980] Num frames 8200... [2023-02-24 14:10:23,974][00980] Num frames 8300... [2023-02-24 14:10:24,100][00980] Num frames 8400... [2023-02-24 14:10:24,219][00980] Num frames 8500... [2023-02-24 14:10:24,333][00980] Num frames 8600... [2023-02-24 14:10:24,450][00980] Num frames 8700... [2023-02-24 14:10:24,566][00980] Num frames 8800... [2023-02-24 14:10:24,690][00980] Num frames 8900... [2023-02-24 14:10:24,807][00980] Num frames 9000... [2023-02-24 14:10:24,928][00980] Num frames 9100... [2023-02-24 14:10:25,045][00980] Num frames 9200... 
[2023-02-24 14:10:25,176][00980] Num frames 9300... [2023-02-24 14:10:25,295][00980] Num frames 9400... [2023-02-24 14:10:25,420][00980] Num frames 9500... [2023-02-24 14:10:25,546][00980] Num frames 9600... [2023-02-24 14:10:25,620][00980] Avg episode rewards: #0: 22.314, true rewards: #0: 9.614 [2023-02-24 14:10:25,622][00980] Avg episode reward: 22.314, avg true_objective: 9.614 [2023-02-24 14:11:23,996][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 14:11:24,026][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:11:24,027][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 14:11:24,028][00980] Adding new argument 'no_render'=True that is not in the saved config file! [2023-02-24 14:11:24,031][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 14:11:24,033][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:11:24,035][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 14:11:24,045][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-02-24 14:11:24,047][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 14:11:24,050][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-02-24 14:11:24,053][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-02-24 14:11:24,055][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 14:11:24,058][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 14:11:24,059][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 14:11:24,060][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-24 14:11:24,062][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:11:24,081][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:11:24,083][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:11:24,098][00980] ConvEncoder: input_channels=3 [2023-02-24 14:11:24,135][00980] Conv encoder output size: 512 [2023-02-24 14:11:24,139][00980] Policy head output size: 512 [2023-02-24 14:11:24,159][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth... [2023-02-24 14:11:24,602][00980] Num frames 100... [2023-02-24 14:11:24,723][00980] Num frames 200... [2023-02-24 14:11:24,864][00980] Avg episode rewards: #0: 5.760, true rewards: #0: 2.760 [2023-02-24 14:11:24,866][00980] Avg episode reward: 5.760, avg true_objective: 2.760 [2023-02-24 14:11:24,899][00980] Num frames 300... [2023-02-24 14:11:25,014][00980] Num frames 400... [2023-02-24 14:11:25,127][00980] Num frames 500... [2023-02-24 14:11:25,239][00980] Num frames 600... [2023-02-24 14:11:25,361][00980] Num frames 700... [2023-02-24 14:11:25,478][00980] Num frames 800... [2023-02-24 14:11:25,598][00980] Num frames 900... [2023-02-24 14:11:25,720][00980] Num frames 1000... 
[2023-02-24 14:11:25,780][00980] Avg episode rewards: #0: 11.015, true rewards: #0: 5.015 [2023-02-24 14:11:25,781][00980] Avg episode reward: 11.015, avg true_objective: 5.015 [2023-02-24 14:11:25,893][00980] Num frames 1100... [2023-02-24 14:11:26,017][00980] Num frames 1200... [2023-02-24 14:11:26,132][00980] Num frames 1300... [2023-02-24 14:11:26,254][00980] Num frames 1400... [2023-02-24 14:11:26,368][00980] Num frames 1500... [2023-02-24 14:11:26,484][00980] Num frames 1600... [2023-02-24 14:11:26,621][00980] Num frames 1700... [2023-02-24 14:11:26,803][00980] Num frames 1800... [2023-02-24 14:11:26,965][00980] Num frames 1900... [2023-02-24 14:11:27,126][00980] Num frames 2000... [2023-02-24 14:11:27,288][00980] Num frames 2100... [2023-02-24 14:11:27,438][00980] Avg episode rewards: #0: 15.183, true rewards: #0: 7.183 [2023-02-24 14:11:27,441][00980] Avg episode reward: 15.183, avg true_objective: 7.183 [2023-02-24 14:11:27,529][00980] Num frames 2200... [2023-02-24 14:11:27,688][00980] Num frames 2300... [2023-02-24 14:11:27,853][00980] Num frames 2400... [2023-02-24 14:11:28,016][00980] Num frames 2500... [2023-02-24 14:11:28,173][00980] Num frames 2600... [2023-02-24 14:11:28,336][00980] Num frames 2700... [2023-02-24 14:11:28,499][00980] Num frames 2800... [2023-02-24 14:11:28,668][00980] Num frames 2900... [2023-02-24 14:11:28,828][00980] Num frames 3000... [2023-02-24 14:11:29,003][00980] Num frames 3100... [2023-02-24 14:11:29,184][00980] Num frames 3200... [2023-02-24 14:11:29,363][00980] Num frames 3300... [2023-02-24 14:11:29,537][00980] Num frames 3400... [2023-02-24 14:11:29,710][00980] Num frames 3500... [2023-02-24 14:11:29,884][00980] Num frames 3600... [2023-02-24 14:11:30,060][00980] Num frames 3700... [2023-02-24 14:11:30,160][00980] Avg episode rewards: #0: 21.807, true rewards: #0: 9.307 [2023-02-24 14:11:30,162][00980] Avg episode reward: 21.807, avg true_objective: 9.307 [2023-02-24 14:11:30,272][00980] Num frames 3800... [2023-02-24 14:11:30,387][00980] Num frames 3900... [2023-02-24 14:11:30,504][00980] Num frames 4000... [2023-02-24 14:11:30,625][00980] Num frames 4100... [2023-02-24 14:11:30,746][00980] Num frames 4200... [2023-02-24 14:11:30,851][00980] Avg episode rewards: #0: 19.284, true rewards: #0: 8.484 [2023-02-24 14:11:30,853][00980] Avg episode reward: 19.284, avg true_objective: 8.484 [2023-02-24 14:11:30,922][00980] Num frames 4300... [2023-02-24 14:11:31,038][00980] Num frames 4400... [2023-02-24 14:11:31,164][00980] Num frames 4500... [2023-02-24 14:11:31,287][00980] Num frames 4600... [2023-02-24 14:11:31,406][00980] Num frames 4700... [2023-02-24 14:11:31,517][00980] Num frames 4800... [2023-02-24 14:11:31,633][00980] Num frames 4900... [2023-02-24 14:11:31,746][00980] Num frames 5000... [2023-02-24 14:11:31,851][00980] Avg episode rewards: #0: 19.237, true rewards: #0: 8.403 [2023-02-24 14:11:31,853][00980] Avg episode reward: 19.237, avg true_objective: 8.403 [2023-02-24 14:11:31,924][00980] Num frames 5100... [2023-02-24 14:11:32,046][00980] Num frames 5200... [2023-02-24 14:11:32,170][00980] Num frames 5300... [2023-02-24 14:11:32,286][00980] Num frames 5400... [2023-02-24 14:11:32,397][00980] Num frames 5500... [2023-02-24 14:11:32,512][00980] Avg episode rewards: #0: 17.506, true rewards: #0: 7.934 [2023-02-24 14:11:32,513][00980] Avg episode reward: 17.506, avg true_objective: 7.934 [2023-02-24 14:11:32,568][00980] Num frames 5600... [2023-02-24 14:11:32,687][00980] Num frames 5700... 
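Note: each evaluation pass rebuilds the model and restores weights from the newest .pth checkpoint before rolling out episodes. In plain PyTorch terms that step looks roughly like the sketch below; the "model" key is an assumption for illustration, not Sample Factory's documented checkpoint layout:

    import torch

    def load_policy_weights(model, path):
        # A .pth checkpoint is a plain dict saved with torch.save();
        # map_location lets a GPU-trained file load anywhere.
        checkpoint = torch.load(path, map_location="cpu")
        model.load_state_dict(checkpoint["model"])  # key name assumed
        model.eval()  # inference mode for evaluation rollouts
        return model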
[2023-02-24 14:11:32,800][00980] Num frames 5800... [2023-02-24 14:11:32,920][00980] Num frames 5900... [2023-02-24 14:11:33,040][00980] Num frames 6000... [2023-02-24 14:11:33,157][00980] Num frames 6100... [2023-02-24 14:11:33,280][00980] Num frames 6200... [2023-02-24 14:11:33,441][00980] Avg episode rewards: #0: 17.363, true rewards: #0: 7.862 [2023-02-24 14:11:33,442][00980] Avg episode reward: 17.363, avg true_objective: 7.862 [2023-02-24 14:11:33,457][00980] Num frames 6300... [2023-02-24 14:11:33,571][00980] Num frames 6400... [2023-02-24 14:11:33,685][00980] Num frames 6500... [2023-02-24 14:11:33,807][00980] Num frames 6600... [2023-02-24 14:11:33,927][00980] Num frames 6700... [2023-02-24 14:11:34,099][00980] Avg episode rewards: #0: 16.440, true rewards: #0: 7.551 [2023-02-24 14:11:34,102][00980] Avg episode reward: 16.440, avg true_objective: 7.551 [2023-02-24 14:11:34,110][00980] Num frames 6800... [2023-02-24 14:11:34,224][00980] Num frames 6900... [2023-02-24 14:11:34,343][00980] Num frames 7000... [2023-02-24 14:11:34,457][00980] Num frames 7100... [2023-02-24 14:11:34,576][00980] Num frames 7200... [2023-02-24 14:11:34,691][00980] Num frames 7300... [2023-02-24 14:11:34,805][00980] Num frames 7400... [2023-02-24 14:11:34,926][00980] Num frames 7500... [2023-02-24 14:11:35,044][00980] Num frames 7600... [2023-02-24 14:11:35,165][00980] Num frames 7700... [2023-02-24 14:11:35,281][00980] Num frames 7800... [2023-02-24 14:11:35,399][00980] Num frames 7900... [2023-02-24 14:11:35,514][00980] Num frames 8000... [2023-02-24 14:11:35,643][00980] Num frames 8100... [2023-02-24 14:11:35,756][00980] Num frames 8200... [2023-02-24 14:11:35,872][00980] Num frames 8300... [2023-02-24 14:11:36,001][00980] Num frames 8400... [2023-02-24 14:11:36,126][00980] Num frames 8500... [2023-02-24 14:11:36,241][00980] Num frames 8600... [2023-02-24 14:11:36,360][00980] Num frames 8700... [2023-02-24 14:11:36,486][00980] Num frames 8800... [2023-02-24 14:11:36,650][00980] Avg episode rewards: #0: 20.196, true rewards: #0: 8.896 [2023-02-24 14:11:36,652][00980] Avg episode reward: 20.196, avg true_objective: 8.896 [2023-02-24 14:12:30,762][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 14:12:33,506][00980] The model has been pushed to https://huggingface.co/mnavas/rl_course_vizdoom_health_gathering_supreme [2023-02-24 14:14:51,332][00980] Environment doom_basic already registered, overwriting... [2023-02-24 14:14:51,335][00980] Environment doom_two_colors_easy already registered, overwriting... [2023-02-24 14:14:51,337][00980] Environment doom_two_colors_hard already registered, overwriting... [2023-02-24 14:14:51,338][00980] Environment doom_dm already registered, overwriting... [2023-02-24 14:14:51,339][00980] Environment doom_dwango5 already registered, overwriting... [2023-02-24 14:14:51,341][00980] Environment doom_my_way_home_flat_actions already registered, overwriting... [2023-02-24 14:14:51,342][00980] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2023-02-24 14:14:51,343][00980] Environment doom_my_way_home already registered, overwriting... [2023-02-24 14:14:51,344][00980] Environment doom_deadly_corridor already registered, overwriting... [2023-02-24 14:14:51,345][00980] Environment doom_defend_the_center already registered, overwriting... [2023-02-24 14:14:51,347][00980] Environment doom_defend_the_line already registered, overwriting... 
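Note: the burst of "already registered, overwriting..." messages around this point is harmless: re-running the registration step inside the same process simply replaces each entry in the env registry. A dict-based illustration of that overwrite-on-re-register behavior (not Sample Factory's actual implementation):

    import logging

    ENV_REGISTRY = {}

    def register_env(name, make_env_fn):
        if name in ENV_REGISTRY:
            logging.warning("Environment %s already registered, overwriting...", name)
        ENV_REGISTRY[name] = make_env_fn  # last registration wins

    register_env("doom_basic", lambda cfg: ...)
    register_env("doom_basic", lambda cfg: ...)  # triggers the warning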
[2023-02-24 14:14:51,348][00980] Environment doom_health_gathering already registered, overwriting...
[2023-02-24 14:14:51,349][00980] Environment doom_health_gathering_supreme already registered, overwriting...
[2023-02-24 14:14:51,350][00980] Environment doom_battle already registered, overwriting...
[2023-02-24 14:14:51,351][00980] Environment doom_battle2 already registered, overwriting...
[2023-02-24 14:14:51,353][00980] Environment doom_duel_bots already registered, overwriting...
[2023-02-24 14:14:51,354][00980] Environment doom_deathmatch_bots already registered, overwriting...
[2023-02-24 14:14:51,356][00980] Environment doom_duel already registered, overwriting...
[2023-02-24 14:14:51,357][00980] Environment doom_deathmatch_full already registered, overwriting...
[2023-02-24 14:14:51,358][00980] Environment doom_benchmark already registered, overwriting...
[2023-02-24 14:14:51,359][00980] register_encoder_factory:
[2023-02-24 14:14:51,386][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-24 14:14:51,387][00980] Overriding arg 'train_for_env_steps' with value 1000000 passed from command line
[2023-02-24 14:14:51,393][00980] Experiment dir /content/train_dir/default_experiment already exists!
[2023-02-24 14:14:51,398][00980] Resuming existing experiment from /content/train_dir/default_experiment...
[2023-02-24 14:14:51,399][00980] Weights and Biases integration disabled
[2023-02-24 14:14:51,406][00980] Environment var CUDA_VISIBLE_DEVICES is 0
[2023-02-24 14:14:53,401][00980] Starting experiment with the following configuration:
help=False
algo=APPO
env=doom_health_gathering_supreme
experiment=default_experiment
train_dir=/content/train_dir
restart_behavior=resume
device=gpu
seed=None
num_policies=1
async_rl=True
serial_mode=False
batched_sampling=False
num_batches_to_accumulate=2
worker_num_splits=2
policy_workers_per_policy=1
max_policy_lag=1000
num_workers=8
num_envs_per_worker=4
batch_size=1024
num_batches_per_epoch=1
num_epochs=1
rollout=32
recurrence=32
shuffle_minibatches=False
gamma=0.99
reward_scale=1.0
reward_clip=1000.0
value_bootstrap=False
normalize_returns=True
exploration_loss_coeff=0.001
value_loss_coeff=0.5
kl_loss_coeff=0.0
exploration_loss=symmetric_kl
gae_lambda=0.95
ppo_clip_ratio=0.1
ppo_clip_value=0.2
with_vtrace=False
vtrace_rho=1.0
vtrace_c=1.0
optimizer=adam
adam_eps=1e-06
adam_beta1=0.9
adam_beta2=0.999
max_grad_norm=4.0
learning_rate=0.0001
lr_schedule=constant
lr_schedule_kl_threshold=0.008
lr_adaptive_min=1e-06
lr_adaptive_max=0.01
obs_subtract_mean=0.0
obs_scale=255.0
normalize_input=True
normalize_input_keys=None
decorrelate_experience_max_seconds=0
decorrelate_envs_on_one_worker=True
actor_worker_gpus=[]
set_workers_cpu_affinity=True
force_envs_single_thread=False
default_niceness=0
log_to_file=True
experiment_summaries_interval=10
flush_summaries_interval=30
stats_avg=100
summaries_use_frameskip=True
heartbeat_interval=20
heartbeat_reporting_interval=600
train_for_env_steps=1000000
train_for_seconds=10000000000
save_every_sec=120
keep_checkpoints=2
load_checkpoint_kind=latest
save_milestones_sec=-1
save_best_every_sec=5
save_best_metric=reward
save_best_after=100000
benchmark=False
encoder_mlp_layers=[512, 512]
encoder_conv_architecture=convnet_simple
encoder_conv_mlp_layers=[512]
use_rnn=True
rnn_size=512
rnn_type=gru
rnn_num_layers=1
decoder_mlp_layers=[]
nonlinearity=elu
policy_initialization=orthogonal
policy_init_gain=1.0
actor_critic_share_weights=True
adaptive_stddev=True
continuous_tanh_scale=0.0
initial_stddev=1.0
use_env_info_cache=False
env_gpu_actions=False
env_gpu_observations=True
env_frameskip=4
env_framestack=1
pixel_format=CHW
use_record_episode_statistics=False
with_wandb=False
wandb_user=None
wandb_project=sample_factory
wandb_group=None
wandb_job_type=SF
wandb_tags=[]
with_pbt=False
pbt_mix_policies_in_one_env=True
pbt_period_env_steps=5000000
pbt_start_mutation=20000000
pbt_replace_fraction=0.3
pbt_mutation_rate=0.15
pbt_replace_reward_gap=0.1
pbt_replace_reward_gap_absolute=1e-06
pbt_optimize_gamma=False
pbt_target_objective=true_objective
pbt_perturb_min=1.1
pbt_perturb_max=1.5
num_agents=-1
num_humans=0
num_bots=-1
start_bot_difficulty=None
timelimit=None
res_w=128
res_h=72
wide_aspect_ratio=False
eval_env_frameskip=1
fps=35
command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
git_hash=unknown
git_repo_name=not a git repository
[2023-02-24 14:14:53,404][00980] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-02-24 14:14:53,407][00980] Rollout worker 0 uses device cpu
[2023-02-24 14:14:53,408][00980] Rollout worker 1 uses device cpu
[2023-02-24 14:14:53,409][00980] Rollout worker 2 uses device cpu
[2023-02-24 14:14:53,414][00980] Rollout worker 3 uses device cpu
[2023-02-24 14:14:53,415][00980] Rollout worker 4 uses device cpu
[2023-02-24 14:14:53,416][00980] Rollout worker 5 uses device cpu
[2023-02-24 14:14:53,417][00980] Rollout worker 6 uses device cpu
[2023-02-24 14:14:53,418][00980] Rollout worker 7 uses device cpu
[2023-02-24 14:14:53,580][00980] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 14:14:53,582][00980] InferenceWorker_p0-w0: min num requests: 2
[2023-02-24 14:14:53,630][00980] Starting all processes...
[2023-02-24 14:14:53,634][00980] Starting process learner_proc0
[2023-02-24 14:14:53,825][00980] Starting all processes...
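Note: because restart_behavior=resume and the experiment dir already exists, the runner reloads the saved config.json and then applies only the values explicitly passed on the command line (here train_for_env_steps=1000000, per the "Overriding arg" entry above). A minimal sketch of that merge, under an assumed file layout:

    import json
    from pathlib import Path

    def resume_config(train_dir, experiment, cli_overrides):
        cfg_path = Path(train_dir) / experiment / "config.json"
        cfg = json.loads(cfg_path.read_text())  # saved training config
        cfg.update(cli_overrides)  # CLI wins; new keys are simply added
        return cfg

    cfg = resume_config("/content/train_dir", "default_experiment",
                        {"train_for_env_steps": 1_000_000})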
[2023-02-24 14:14:53,836][00980] Starting process inference_proc0-0
[2023-02-24 14:14:53,837][00980] Starting process rollout_proc0
[2023-02-24 14:14:53,837][00980] Starting process rollout_proc1
[2023-02-24 14:14:53,837][00980] Starting process rollout_proc2
[2023-02-24 14:14:53,837][00980] Starting process rollout_proc3
[2023-02-24 14:14:53,968][00980] Starting process rollout_proc4
[2023-02-24 14:14:53,982][00980] Starting process rollout_proc5
[2023-02-24 14:14:53,987][00980] Starting process rollout_proc6
[2023-02-24 14:14:53,993][00980] Starting process rollout_proc7
[2023-02-24 14:15:03,371][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 14:15:03,375][26253] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-02-24 14:15:03,434][26253] Num visible devices: 1
[2023-02-24 14:15:03,466][26253] Starting seed is not provided
[2023-02-24 14:15:03,467][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 14:15:03,468][26253] Initializing actor-critic model on device cuda:0
[2023-02-24 14:15:03,469][26253] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 14:15:03,470][26253] RunningMeanStd input shape: (1,)
[2023-02-24 14:15:03,546][26253] ConvEncoder: input_channels=3
[2023-02-24 14:15:04,380][26267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 14:15:04,381][26267] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-02-24 14:15:04,438][26267] Num visible devices: 1
[2023-02-24 14:15:04,447][26253] Conv encoder output size: 512
[2023-02-24 14:15:04,451][26253] Policy head output size: 512
[2023-02-24 14:15:04,535][26253] Created Actor Critic model with architecture:
[2023-02-24 14:15:04,537][26253] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-02-24 14:15:04,916][26268] Worker 1 uses CPU cores [1]
[2023-02-24 14:15:05,071][26270] Worker 0 uses CPU cores [0]
[2023-02-24 14:15:05,141][26272] Worker 3 uses CPU cores [1]
[2023-02-24 14:15:05,438][26278] Worker 2 uses CPU cores [0]
[2023-02-24 14:15:05,710][26282] Worker 6 uses CPU cores [0]
[2023-02-24 14:15:05,772][26280] Worker 4 uses CPU cores [0]
[2023-02-24 14:15:05,851][26288] Worker 7 uses CPU cores [1]
[2023-02-24 14:15:05,918][26290] Worker 5 uses CPU cores [1]
[2023-02-24 14:15:08,019][26253] Using optimizer
[2023-02-24 14:15:08,021][26253] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000982_4022272.pth...
[2023-02-24 14:15:08,064][26253] Loading model from checkpoint
[2023-02-24 14:15:08,071][26253] Loaded experiment state at self.train_step=982, self.env_steps=4022272
[2023-02-24 14:15:08,072][26253] Initialized policy 0 weights for model version 982
[2023-02-24 14:15:08,083][26253] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-02-24 14:15:08,090][26253] LearnerWorker_p0 finished initialization!
[2023-02-24 14:15:08,365][26267] RunningMeanStd input shape: (3, 72, 128)
[2023-02-24 14:15:08,367][26267] RunningMeanStd input shape: (1,)
[2023-02-24 14:15:08,389][26267] ConvEncoder: input_channels=3
[2023-02-24 14:15:08,548][26267] Conv encoder output size: 512
[2023-02-24 14:15:08,549][26267] Policy head output size: 512
[2023-02-24 14:15:11,407][00980] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4022272. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-02-24 14:15:11,633][00980] Inference worker 0-0 is ready!
[2023-02-24 14:15:11,635][00980] All inference workers are ready! Signal rollout workers to start!
[2023-02-24 14:15:11,741][26272] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,744][26288] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,740][26290] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,738][26268] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,837][26270] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,846][26278] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,840][26282] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:11,854][26280] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-02-24 14:15:12,819][26280] Decorrelating experience for 0 frames...
[2023-02-24 14:15:12,826][26270] Decorrelating experience for 0 frames...
[2023-02-24 14:15:13,068][26288] Decorrelating experience for 0 frames...
[2023-02-24 14:15:13,073][26290] Decorrelating experience for 0 frames...
[2023-02-24 14:15:13,078][26272] Decorrelating experience for 0 frames...
[2023-02-24 14:15:13,323][26280] Decorrelating experience for 32 frames...
[2023-02-24 14:15:13,570][00980] Heartbeat connected on Batcher_0
[2023-02-24 14:15:13,575][00980] Heartbeat connected on LearnerWorker_p0
[2023-02-24 14:15:13,606][00980] Heartbeat connected on InferenceWorker_p0-w0
[2023-02-24 14:15:13,949][26270] Decorrelating experience for 32 frames...
[2023-02-24 14:15:13,964][26278] Decorrelating experience for 0 frames...
[2023-02-24 14:15:14,383][26278] Decorrelating experience for 32 frames...
[2023-02-24 14:15:14,500][26290] Decorrelating experience for 32 frames...
[2023-02-24 14:15:14,514][26288] Decorrelating experience for 32 frames...
[2023-02-24 14:15:14,521][26268] Decorrelating experience for 0 frames...
[2023-02-24 14:15:14,519][26272] Decorrelating experience for 32 frames...
[2023-02-24 14:15:15,359][26278] Decorrelating experience for 64 frames...
[2023-02-24 14:15:15,390][26268] Decorrelating experience for 32 frames...
[2023-02-24 14:15:15,574][26290] Decorrelating experience for 64 frames...
[2023-02-24 14:15:15,614][26280] Decorrelating experience for 64 frames...
[2023-02-24 14:15:15,688][26270] Decorrelating experience for 64 frames...
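Note: the "Decorrelating experience for N frames..." entries show each worker warming up its env copies by different amounts (0, 32, 64, 96 frames, i.e. multiples of the rollout length of 32) so that parallel episodes are staggered rather than advancing in lockstep. Roughly, per environment instance (a generic sketch, not the library's exact scheme):

    def decorrelate(envs, rollout_len=32):
        # Give the i-th env copy a head start of i * rollout_len frames
        # so episode boundaries spread out across the fleet.
        for i, env in enumerate(envs):
            env.reset()
            for _ in range(i * rollout_len):
                obs, reward, done, info = env.step(env.action_space.sample())
                if done:
                    env.reset()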
[2023-02-24 14:15:16,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:15:16,520][26272] Decorrelating experience for 64 frames... [2023-02-24 14:15:16,649][26278] Decorrelating experience for 96 frames... [2023-02-24 14:15:16,700][26268] Decorrelating experience for 64 frames... [2023-02-24 14:15:16,752][26288] Decorrelating experience for 64 frames... [2023-02-24 14:15:16,844][00980] Heartbeat connected on RolloutWorker_w2 [2023-02-24 14:15:16,853][26282] Decorrelating experience for 0 frames... [2023-02-24 14:15:16,994][26280] Decorrelating experience for 96 frames... [2023-02-24 14:15:17,218][00980] Heartbeat connected on RolloutWorker_w4 [2023-02-24 14:15:17,608][26270] Decorrelating experience for 96 frames... [2023-02-24 14:15:17,961][00980] Heartbeat connected on RolloutWorker_w0 [2023-02-24 14:15:18,366][26272] Decorrelating experience for 96 frames... [2023-02-24 14:15:18,585][26282] Decorrelating experience for 32 frames... [2023-02-24 14:15:18,665][00980] Heartbeat connected on RolloutWorker_w3 [2023-02-24 14:15:18,671][26290] Decorrelating experience for 96 frames... [2023-02-24 14:15:18,677][26268] Decorrelating experience for 96 frames... [2023-02-24 14:15:18,724][26288] Decorrelating experience for 96 frames... [2023-02-24 14:15:19,012][00980] Heartbeat connected on RolloutWorker_w5 [2023-02-24 14:15:19,020][00980] Heartbeat connected on RolloutWorker_w1 [2023-02-24 14:15:19,070][00980] Heartbeat connected on RolloutWorker_w7 [2023-02-24 14:15:21,407][00980] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4022272. Throughput: 0: 175.6. Samples: 1756. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2023-02-24 14:15:21,413][00980] Avg episode reward: [(0, '2.045')] [2023-02-24 14:15:21,489][26253] Signal inference workers to stop experience collection... [2023-02-24 14:15:21,510][26267] InferenceWorker_p0-w0: stopping experience collection [2023-02-24 14:15:21,564][26282] Decorrelating experience for 64 frames... [2023-02-24 14:15:22,135][26282] Decorrelating experience for 96 frames... [2023-02-24 14:15:22,232][00980] Heartbeat connected on RolloutWorker_w6 [2023-02-24 14:15:24,615][26253] Signal inference workers to resume experience collection... [2023-02-24 14:15:24,647][26253] Stopping Batcher_0... [2023-02-24 14:15:24,648][26253] Loop batcher_evt_loop terminating... [2023-02-24 14:15:24,644][26267] Weights refcount: 2 0 [2023-02-24 14:15:24,648][00980] Component Batcher_0 stopped! [2023-02-24 14:15:24,658][26267] Stopping InferenceWorker_p0-w0... [2023-02-24 14:15:24,659][26267] Loop inference_proc0-0_evt_loop terminating... [2023-02-24 14:15:24,658][00980] Component InferenceWorker_p0-w0 stopped! [2023-02-24 14:15:24,856][00980] Component RolloutWorker_w7 stopped! [2023-02-24 14:15:24,860][26272] Stopping RolloutWorker_w3... [2023-02-24 14:15:24,861][00980] Component RolloutWorker_w3 stopped! [2023-02-24 14:15:24,861][26288] Stopping RolloutWorker_w7... [2023-02-24 14:15:24,868][26288] Loop rollout_proc7_evt_loop terminating... [2023-02-24 14:15:24,871][26272] Loop rollout_proc3_evt_loop terminating... [2023-02-24 14:15:24,877][00980] Component RolloutWorker_w0 stopped! [2023-02-24 14:15:24,877][26268] Stopping RolloutWorker_w1... [2023-02-24 14:15:24,878][26290] Stopping RolloutWorker_w5... [2023-02-24 14:15:24,881][00980] Component RolloutWorker_w1 stopped! 
[2023-02-24 14:15:24,886][00980] Component RolloutWorker_w5 stopped! [2023-02-24 14:15:24,881][26268] Loop rollout_proc1_evt_loop terminating... [2023-02-24 14:15:24,881][26290] Loop rollout_proc5_evt_loop terminating... [2023-02-24 14:15:24,899][00980] Component RolloutWorker_w4 stopped! [2023-02-24 14:15:24,905][26280] Stopping RolloutWorker_w4... [2023-02-24 14:15:24,906][26280] Loop rollout_proc4_evt_loop terminating... [2023-02-24 14:15:24,911][26282] Stopping RolloutWorker_w6... [2023-02-24 14:15:24,911][26282] Loop rollout_proc6_evt_loop terminating... [2023-02-24 14:15:24,880][26270] Stopping RolloutWorker_w0... [2023-02-24 14:15:24,914][26270] Loop rollout_proc0_evt_loop terminating... [2023-02-24 14:15:24,910][00980] Component RolloutWorker_w6 stopped! [2023-02-24 14:15:24,931][26278] Stopping RolloutWorker_w2... [2023-02-24 14:15:24,932][26278] Loop rollout_proc2_evt_loop terminating... [2023-02-24 14:15:24,931][00980] Component RolloutWorker_w2 stopped! [2023-02-24 14:15:28,114][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... [2023-02-24 14:15:28,273][26253] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth [2023-02-24 14:15:28,279][26253] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... [2023-02-24 14:15:28,479][00980] Component LearnerWorker_p0 stopped! [2023-02-24 14:15:28,483][00980] Waiting for process learner_proc0 to stop... [2023-02-24 14:15:28,485][26253] Stopping LearnerWorker_p0... [2023-02-24 14:15:28,486][26253] Loop learner_proc0_evt_loop terminating... [2023-02-24 14:15:29,668][00980] Waiting for process inference_proc0-0 to join... [2023-02-24 14:15:29,670][00980] Waiting for process rollout_proc0 to join... [2023-02-24 14:15:29,672][00980] Waiting for process rollout_proc1 to join... [2023-02-24 14:15:29,674][00980] Waiting for process rollout_proc2 to join... [2023-02-24 14:15:29,678][00980] Waiting for process rollout_proc3 to join... [2023-02-24 14:15:29,680][00980] Waiting for process rollout_proc4 to join... [2023-02-24 14:15:29,682][00980] Waiting for process rollout_proc5 to join... [2023-02-24 14:15:29,685][00980] Waiting for process rollout_proc6 to join... [2023-02-24 14:15:29,687][00980] Waiting for process rollout_proc7 to join... 
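Note: this resumed run shuts down almost immediately by design, not by error: the checkpoint restores env_steps=4022272, which already exceeds the new target train_for_env_steps=1000000, so the learner processes one round of experience, saves checkpoint 984 and terminates. Only 4030464 - 4022272 = 8192 new frames are collected; spread over the ~36 s main loop reported below, that is the logged overall FPS of about 227:

    env_steps_restored = 4_022_272   # from checkpoint_000000982_4022272.pth
    train_for_env_steps = 1_000_000  # CLI override for this run

    # Hypothetical form of the stop condition checked on resume:
    assert env_steps_restored >= train_for_env_steps  # stop right away

    print((4_030_464 - 4_022_272) / 36.0757)  # ~227.1 frames/sec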
[2023-02-24 14:15:29,689][00980] Batcher 0 profile tree view:
batching: 0.0539, releasing_batches: 0.0311
[2023-02-24 14:15:29,691][00980] InferenceWorker_p0-w0 profile tree view:
update_model: 0.0124
wait_policy: 0.0012
  wait_policy_total: 6.5012
one_step: 0.0023
  handle_policy_step: 3.1320
    deserialize: 0.0360, stack: 0.0068, obs_to_device_normalize: 0.2785, forward: 2.5249, send_messages: 0.0576
    prepare_outputs: 0.1663
      to_cpu: 0.0961
[2023-02-24 14:15:29,693][00980] Learner 0 profile tree view:
misc: 0.0000, prepare_batch: 6.1913
train: 1.6359
  epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0004, kl_divergence: 0.0013, after_optimizer: 0.0296
  calculate_losses: 0.2418
    losses_init: 0.0000, forward_head: 0.1123, bptt_initial: 0.1049, tail: 0.0029, advantages_returns: 0.0009, losses: 0.0165
    bptt: 0.0040
      bptt_forward_core: 0.0039
  update: 1.3507
    clip: 0.0144
[2023-02-24 14:15:29,696][00980] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.0009, enqueue_policy_requests: 0.4267, env_step: 2.1053, overhead: 0.0685, complete_rollouts: 0.0505
save_policy_outputs: 0.0325
  split_output_tensors: 0.0155
[2023-02-24 14:15:29,698][00980] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.0007, enqueue_policy_requests: 0.4189, env_step: 1.8447, overhead: 0.0353, complete_rollouts: 0.0094
save_policy_outputs: 0.0314
  split_output_tensors: 0.0157
[2023-02-24 14:15:29,702][00980] Loop Runner_EvtLoop terminating...
[2023-02-24 14:15:29,706][00980] Runner profile tree view:
main_loop: 36.0757
[2023-02-24 14:15:29,709][00980] Collected {0: 4030464}, FPS: 227.1
[2023-02-24 14:15:29,766][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-02-24 14:15:29,767][00980] Overriding arg 'num_workers' with value 1 passed from command line
[2023-02-24 14:15:29,770][00980] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-02-24 14:15:29,772][00980] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-02-24 14:15:29,773][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 14:15:29,777][00980] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-02-24 14:15:29,779][00980] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-02-24 14:15:29,785][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-02-24 14:15:29,787][00980] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-02-24 14:15:29,790][00980] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-02-24 14:15:29,792][00980] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-02-24 14:15:29,794][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-02-24 14:15:29,796][00980] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-02-24 14:15:29,798][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file!
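Note: the profile tree views above attribute wall-clock time hierarchically: the inference worker spends 2.52 s of its 3.13 s handle_policy_step in the policy forward pass, the learner's 6.19 s prepare_batch dwarfs its 1.64 s train step, and the rollout workers spend most of their time in env_step. A minimal nested-timer sketch of how such a tree can be collected:

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    timings = defaultdict(float)
    _stack = []

    @contextmanager
    def timeit(name):
        _stack.append(name)
        key = "/".join(_stack)  # full path, so nested scopes form a tree
        start = time.perf_counter()
        try:
            yield
        finally:
            timings[key] += time.perf_counter() - start
            _stack.pop()

    with timeit("handle_policy_step"):
        with timeit("forward"):
            time.sleep(0.01)  # stand-in for the policy forward pass
    print(dict(timings))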
[2023-02-24 14:15:29,799][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:15:29,822][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:15:29,824][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:15:29,839][00980] ConvEncoder: input_channels=3 [2023-02-24 14:15:29,891][00980] Conv encoder output size: 512 [2023-02-24 14:15:29,893][00980] Policy head output size: 512 [2023-02-24 14:15:29,920][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... [2023-02-24 14:15:30,396][00980] Num frames 100... [2023-02-24 14:15:30,515][00980] Num frames 200... [2023-02-24 14:15:30,642][00980] Num frames 300... [2023-02-24 14:15:30,770][00980] Num frames 400... [2023-02-24 14:15:30,891][00980] Num frames 500... [2023-02-24 14:15:31,011][00980] Num frames 600... [2023-02-24 14:15:31,130][00980] Num frames 700... [2023-02-24 14:15:31,247][00980] Num frames 800... [2023-02-24 14:15:31,380][00980] Num frames 900... [2023-02-24 14:15:31,500][00980] Num frames 1000... [2023-02-24 14:15:31,622][00980] Num frames 1100... [2023-02-24 14:15:31,753][00980] Num frames 1200... [2023-02-24 14:15:31,868][00980] Num frames 1300... [2023-02-24 14:15:31,937][00980] Avg episode rewards: #0: 32.120, true rewards: #0: 13.120 [2023-02-24 14:15:31,941][00980] Avg episode reward: 32.120, avg true_objective: 13.120 [2023-02-24 14:15:32,038][00980] Num frames 1400... [2023-02-24 14:15:32,150][00980] Num frames 1500... [2023-02-24 14:15:32,267][00980] Num frames 1600... [2023-02-24 14:15:32,426][00980] Num frames 1700... [2023-02-24 14:15:32,547][00980] Num frames 1800... [2023-02-24 14:15:32,662][00980] Num frames 1900... [2023-02-24 14:15:32,779][00980] Num frames 2000... [2023-02-24 14:15:32,896][00980] Num frames 2100... [2023-02-24 14:15:33,028][00980] Num frames 2200... [2023-02-24 14:15:33,171][00980] Avg episode rewards: #0: 26.360, true rewards: #0: 11.360 [2023-02-24 14:15:33,172][00980] Avg episode reward: 26.360, avg true_objective: 11.360 [2023-02-24 14:15:33,210][00980] Num frames 2300... [2023-02-24 14:15:33,326][00980] Num frames 2400... [2023-02-24 14:15:33,457][00980] Num frames 2500... [2023-02-24 14:15:33,571][00980] Num frames 2600... [2023-02-24 14:15:33,687][00980] Num frames 2700... [2023-02-24 14:15:33,804][00980] Num frames 2800... [2023-02-24 14:15:33,921][00980] Num frames 2900... [2023-02-24 14:15:34,041][00980] Num frames 3000... [2023-02-24 14:15:34,159][00980] Num frames 3100... [2023-02-24 14:15:34,273][00980] Num frames 3200... [2023-02-24 14:15:34,400][00980] Num frames 3300... [2023-02-24 14:15:34,516][00980] Num frames 3400... [2023-02-24 14:15:34,640][00980] Num frames 3500... [2023-02-24 14:15:34,761][00980] Num frames 3600... [2023-02-24 14:15:34,880][00980] Num frames 3700... [2023-02-24 14:15:35,005][00980] Num frames 3800... [2023-02-24 14:15:35,126][00980] Num frames 3900... [2023-02-24 14:15:35,241][00980] Num frames 4000... [2023-02-24 14:15:35,361][00980] Num frames 4100... [2023-02-24 14:15:35,490][00980] Num frames 4200... [2023-02-24 14:15:35,608][00980] Num frames 4300... [2023-02-24 14:15:35,744][00980] Avg episode rewards: #0: 36.906, true rewards: #0: 14.573 [2023-02-24 14:15:35,747][00980] Avg episode reward: 36.906, avg true_objective: 14.573 [2023-02-24 14:15:35,781][00980] Num frames 4400... [2023-02-24 14:15:35,898][00980] Num frames 4500... [2023-02-24 14:15:36,012][00980] Num frames 4600... [2023-02-24 14:15:36,127][00980] Num frames 4700... 
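Note: the recurring "RunningMeanStd input shape" entries refer to the observation and returns normalizers ((3, 72, 128) for pixels, (1,) for returns), whose running statistics are saved and restored with the checkpoint. A simplified, standalone version of the running mean/variance update (the in-place TorchScript module named in the log handles more details):

    import numpy as np

    class RunningMeanStd:
        def __init__(self, shape):
            self.mean = np.zeros(shape)
            self.var = np.ones(shape)
            self.count = 1e-4  # avoids division by zero before the first update

        def update(self, batch):  # batch shape: (N, *shape)
            b_mean, b_var, b_count = batch.mean(0), batch.var(0), batch.shape[0]
            delta = b_mean - self.mean
            total = self.count + b_count
            self.mean = self.mean + delta * b_count / total
            # parallel variance combination (Chan et al.)
            m2 = (self.var * self.count + b_var * b_count
                  + delta**2 * self.count * b_count / total)
            self.var = m2 / total
            self.count = total

        def normalize(self, x):
            return (x - self.mean) / np.sqrt(self.var + 1e-8)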
[2023-02-24 14:15:36,241][00980] Num frames 4800... [2023-02-24 14:15:36,362][00980] Num frames 4900... [2023-02-24 14:15:36,488][00980] Num frames 5000... [2023-02-24 14:15:36,611][00980] Num frames 5100... [2023-02-24 14:15:36,733][00980] Num frames 5200... [2023-02-24 14:15:36,851][00980] Num frames 5300... [2023-02-24 14:15:36,973][00980] Num frames 5400... [2023-02-24 14:15:37,089][00980] Num frames 5500... [2023-02-24 14:15:37,211][00980] Num frames 5600... [2023-02-24 14:15:37,290][00980] Avg episode rewards: #0: 34.550, true rewards: #0: 14.050 [2023-02-24 14:15:37,292][00980] Avg episode reward: 34.550, avg true_objective: 14.050 [2023-02-24 14:15:37,387][00980] Num frames 5700... [2023-02-24 14:15:37,516][00980] Num frames 5800... [2023-02-24 14:15:37,633][00980] Num frames 5900... [2023-02-24 14:15:37,749][00980] Num frames 6000... [2023-02-24 14:15:37,864][00980] Num frames 6100... [2023-02-24 14:15:37,987][00980] Num frames 6200... [2023-02-24 14:15:38,103][00980] Num frames 6300... [2023-02-24 14:15:38,225][00980] Num frames 6400... [2023-02-24 14:15:38,353][00980] Num frames 6500... [2023-02-24 14:15:38,486][00980] Num frames 6600... [2023-02-24 14:15:38,604][00980] Num frames 6700... [2023-02-24 14:15:38,729][00980] Num frames 6800... [2023-02-24 14:15:38,852][00980] Num frames 6900... [2023-02-24 14:15:38,971][00980] Num frames 7000... [2023-02-24 14:15:39,085][00980] Num frames 7100... [2023-02-24 14:15:39,209][00980] Num frames 7200... [2023-02-24 14:15:39,329][00980] Num frames 7300... [2023-02-24 14:15:39,455][00980] Num frames 7400... [2023-02-24 14:15:39,636][00980] Num frames 7500... [2023-02-24 14:15:39,812][00980] Num frames 7600... [2023-02-24 14:15:39,987][00980] Num frames 7700... [2023-02-24 14:15:40,080][00980] Avg episode rewards: #0: 38.440, true rewards: #0: 15.440 [2023-02-24 14:15:40,085][00980] Avg episode reward: 38.440, avg true_objective: 15.440 [2023-02-24 14:15:40,231][00980] Num frames 7800... [2023-02-24 14:15:40,392][00980] Num frames 7900... [2023-02-24 14:15:40,558][00980] Num frames 8000... [2023-02-24 14:15:40,738][00980] Num frames 8100... [2023-02-24 14:15:40,913][00980] Num frames 8200... [2023-02-24 14:15:41,077][00980] Num frames 8300... [2023-02-24 14:15:41,241][00980] Num frames 8400... [2023-02-24 14:15:41,422][00980] Num frames 8500... [2023-02-24 14:15:41,602][00980] Num frames 8600... [2023-02-24 14:15:41,767][00980] Num frames 8700... [2023-02-24 14:15:41,940][00980] Num frames 8800... [2023-02-24 14:15:42,110][00980] Num frames 8900... [2023-02-24 14:15:42,289][00980] Num frames 9000... [2023-02-24 14:15:42,463][00980] Num frames 9100... [2023-02-24 14:15:42,635][00980] Num frames 9200... [2023-02-24 14:15:42,803][00980] Num frames 9300... [2023-02-24 14:15:42,946][00980] Avg episode rewards: #0: 39.086, true rewards: #0: 15.587 [2023-02-24 14:15:42,949][00980] Avg episode reward: 39.086, avg true_objective: 15.587 [2023-02-24 14:15:43,029][00980] Num frames 9400... [2023-02-24 14:15:43,183][00980] Num frames 9500... [2023-02-24 14:15:43,304][00980] Num frames 9600... [2023-02-24 14:15:43,420][00980] Num frames 9700... [2023-02-24 14:15:43,533][00980] Num frames 9800... [2023-02-24 14:15:43,655][00980] Num frames 9900... [2023-02-24 14:15:43,779][00980] Num frames 10000... [2023-02-24 14:15:43,897][00980] Num frames 10100... 
[2023-02-24 14:15:44,021][00980] Avg episode rewards: #0: 35.645, true rewards: #0: 14.503 [2023-02-24 14:15:44,023][00980] Avg episode reward: 35.645, avg true_objective: 14.503 [2023-02-24 14:15:44,080][00980] Num frames 10200... [2023-02-24 14:15:44,204][00980] Num frames 10300... [2023-02-24 14:15:44,320][00980] Num frames 10400... [2023-02-24 14:15:44,439][00980] Num frames 10500... [2023-02-24 14:15:44,561][00980] Num frames 10600... [2023-02-24 14:15:44,685][00980] Num frames 10700... [2023-02-24 14:15:44,805][00980] Num frames 10800... [2023-02-24 14:15:44,890][00980] Avg episode rewards: #0: 33.030, true rewards: #0: 13.530 [2023-02-24 14:15:44,892][00980] Avg episode reward: 33.030, avg true_objective: 13.530 [2023-02-24 14:15:44,983][00980] Num frames 10900... [2023-02-24 14:15:45,100][00980] Num frames 11000... [2023-02-24 14:15:45,217][00980] Num frames 11100... [2023-02-24 14:15:45,335][00980] Num frames 11200... [2023-02-24 14:15:45,451][00980] Num frames 11300... [2023-02-24 14:15:45,575][00980] Num frames 11400... [2023-02-24 14:15:45,700][00980] Num frames 11500... [2023-02-24 14:15:45,823][00980] Num frames 11600... [2023-02-24 14:15:45,942][00980] Num frames 11700... [2023-02-24 14:15:46,060][00980] Num frames 11800... [2023-02-24 14:15:46,186][00980] Num frames 11900... [2023-02-24 14:15:46,306][00980] Num frames 12000... [2023-02-24 14:15:46,428][00980] Num frames 12100... [2023-02-24 14:15:46,545][00980] Num frames 12200... [2023-02-24 14:15:46,675][00980] Num frames 12300... [2023-02-24 14:15:46,792][00980] Num frames 12400... [2023-02-24 14:15:46,916][00980] Num frames 12500... [2023-02-24 14:15:47,032][00980] Num frames 12600... [2023-02-24 14:15:47,152][00980] Num frames 12700... [2023-02-24 14:15:47,277][00980] Num frames 12800... [2023-02-24 14:15:47,398][00980] Num frames 12900... [2023-02-24 14:15:47,490][00980] Avg episode rewards: #0: 36.026, true rewards: #0: 14.360 [2023-02-24 14:15:47,491][00980] Avg episode reward: 36.026, avg true_objective: 14.360 [2023-02-24 14:15:47,582][00980] Num frames 13000... [2023-02-24 14:15:47,716][00980] Num frames 13100... [2023-02-24 14:15:47,833][00980] Num frames 13200... [2023-02-24 14:15:47,956][00980] Num frames 13300... [2023-02-24 14:15:48,076][00980] Num frames 13400... [2023-02-24 14:15:48,190][00980] Num frames 13500... [2023-02-24 14:15:48,313][00980] Num frames 13600... [2023-02-24 14:15:48,435][00980] Num frames 13700... [2023-02-24 14:15:48,560][00980] Num frames 13800... [2023-02-24 14:15:48,680][00980] Num frames 13900... [2023-02-24 14:15:48,804][00980] Num frames 14000... [2023-02-24 14:15:48,921][00980] Num frames 14100... [2023-02-24 14:15:49,040][00980] Num frames 14200... [2023-02-24 14:15:49,157][00980] Num frames 14300... [2023-02-24 14:15:49,279][00980] Num frames 14400... [2023-02-24 14:15:49,411][00980] Num frames 14500... [2023-02-24 14:15:49,532][00980] Avg episode rewards: #0: 36.556, true rewards: #0: 14.556 [2023-02-24 14:15:49,535][00980] Avg episode reward: 36.556, avg true_objective: 14.556 [2023-02-24 14:17:18,905][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4! [2023-02-24 14:17:18,985][00980] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2023-02-24 14:17:18,989][00980] Overriding arg 'num_workers' with value 1 passed from command line [2023-02-24 14:17:18,992][00980] Adding new argument 'no_render'=True that is not in the saved config file! 
[2023-02-24 14:17:18,996][00980] Adding new argument 'save_video'=True that is not in the saved config file! [2023-02-24 14:17:18,999][00980] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! [2023-02-24 14:17:19,001][00980] Adding new argument 'video_name'=None that is not in the saved config file! [2023-02-24 14:17:19,003][00980] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! [2023-02-24 14:17:19,006][00980] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! [2023-02-24 14:17:19,007][00980] Adding new argument 'push_to_hub'=True that is not in the saved config file! [2023-02-24 14:17:19,012][00980] Adding new argument 'hf_repository'='mnavas/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! [2023-02-24 14:17:19,013][00980] Adding new argument 'policy_index'=0 that is not in the saved config file! [2023-02-24 14:17:19,014][00980] Adding new argument 'eval_deterministic'=False that is not in the saved config file! [2023-02-24 14:17:19,015][00980] Adding new argument 'train_script'=None that is not in the saved config file! [2023-02-24 14:17:19,016][00980] Adding new argument 'enjoy_script'=None that is not in the saved config file! [2023-02-24 14:17:19,018][00980] Using frameskip 1 and render_action_repeat=4 for evaluation [2023-02-24 14:17:19,052][00980] RunningMeanStd input shape: (3, 72, 128) [2023-02-24 14:17:19,055][00980] RunningMeanStd input shape: (1,) [2023-02-24 14:17:19,074][00980] ConvEncoder: input_channels=3 [2023-02-24 14:17:19,143][00980] Conv encoder output size: 512 [2023-02-24 14:17:19,145][00980] Policy head output size: 512 [2023-02-24 14:17:19,181][00980] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000984_4030464.pth... [2023-02-24 14:17:19,838][00980] Num frames 100... [2023-02-24 14:17:20,012][00980] Num frames 200... [2023-02-24 14:17:20,171][00980] Num frames 300... [2023-02-24 14:17:20,352][00980] Num frames 400... [2023-02-24 14:17:20,438][00980] Avg episode rewards: #0: 6.160, true rewards: #0: 4.160 [2023-02-24 14:17:20,439][00980] Avg episode reward: 6.160, avg true_objective: 4.160 [2023-02-24 14:17:20,580][00980] Num frames 500... [2023-02-24 14:17:20,745][00980] Num frames 600... [2023-02-24 14:17:20,914][00980] Num frames 700... [2023-02-24 14:17:21,087][00980] Num frames 800... [2023-02-24 14:17:21,258][00980] Num frames 900... [2023-02-24 14:17:21,441][00980] Num frames 1000... [2023-02-24 14:17:21,622][00980] Num frames 1100... [2023-02-24 14:17:21,792][00980] Num frames 1200... [2023-02-24 14:17:21,958][00980] Num frames 1300... [2023-02-24 14:17:22,089][00980] Num frames 1400... [2023-02-24 14:17:22,210][00980] Num frames 1500... [2023-02-24 14:17:22,333][00980] Num frames 1600... [2023-02-24 14:17:22,450][00980] Num frames 1700... [2023-02-24 14:17:22,567][00980] Num frames 1800... [2023-02-24 14:17:22,680][00980] Num frames 1900... [2023-02-24 14:17:22,797][00980] Num frames 2000... [2023-02-24 14:17:22,912][00980] Num frames 2100... [2023-02-24 14:17:23,025][00980] Num frames 2200... [2023-02-24 14:17:23,148][00980] Num frames 2300... [2023-02-24 14:17:23,270][00980] Num frames 2400... [2023-02-24 14:17:23,398][00980] Num frames 2500... [2023-02-24 14:17:23,475][00980] Avg episode rewards: #0: 29.079, true rewards: #0: 12.580 [2023-02-24 14:17:23,478][00980] Avg episode reward: 29.079, avg true_objective: 12.580 [2023-02-24 14:17:23,575][00980] Num frames 2600... 
[2023-02-24 14:17:23,691][00980] Num frames 2700... [2023-02-24 14:17:23,831][00980] Avg episode rewards: #0: 20.906, true rewards: #0: 9.240 [2023-02-24 14:17:23,833][00980] Avg episode reward: 20.906, avg true_objective: 9.240 [2023-02-24 14:17:23,868][00980] Num frames 2800... [2023-02-24 14:17:23,988][00980] Num frames 2900... [2023-02-24 14:17:24,105][00980] Num frames 3000... [2023-02-24 14:17:24,223][00980] Num frames 3100... [2023-02-24 14:17:24,347][00980] Num frames 3200... [2023-02-24 14:17:24,463][00980] Num frames 3300... [2023-02-24 14:17:24,578][00980] Num frames 3400... [2023-02-24 14:17:24,692][00980] Num frames 3500... [2023-02-24 14:17:24,812][00980] Num frames 3600... [2023-02-24 14:17:24,929][00980] Num frames 3700... [2023-02-24 14:17:25,002][00980] Avg episode rewards: #0: 21.285, true rewards: #0: 9.285 [2023-02-24 14:17:25,003][00980] Avg episode reward: 21.285, avg true_objective: 9.285 [2023-02-24 14:17:25,104][00980] Num frames 3800... [2023-02-24 14:17:25,229][00980] Num frames 3900... [2023-02-24 14:17:25,358][00980] Num frames 4000... [2023-02-24 14:17:25,475][00980] Num frames 4100... [2023-02-24 14:17:25,593][00980] Num frames 4200... [2023-02-24 14:17:25,712][00980] Num frames 4300... [2023-02-24 14:17:25,829][00980] Num frames 4400... [2023-02-24 14:17:25,953][00980] Num frames 4500... [2023-02-24 14:17:26,071][00980] Num frames 4600... [2023-02-24 14:17:26,194][00980] Num frames 4700... [2023-02-24 14:17:26,308][00980] Num frames 4800... [2023-02-24 14:17:26,431][00980] Num frames 4900... [2023-02-24 14:17:26,549][00980] Num frames 5000... [2023-02-24 14:17:26,623][00980] Avg episode rewards: #0: 24.032, true rewards: #0: 10.032 [2023-02-24 14:17:26,627][00980] Avg episode reward: 24.032, avg true_objective: 10.032 [2023-02-24 14:17:26,723][00980] Num frames 5100... [2023-02-24 14:17:26,837][00980] Num frames 5200... [2023-02-24 14:17:26,959][00980] Num frames 5300... [2023-02-24 14:17:27,078][00980] Num frames 5400... [2023-02-24 14:17:27,197][00980] Num frames 5500... [2023-02-24 14:17:27,320][00980] Num frames 5600... [2023-02-24 14:17:27,438][00980] Num frames 5700... [2023-02-24 14:17:27,560][00980] Num frames 5800... [2023-02-24 14:17:27,707][00980] Avg episode rewards: #0: 23.467, true rewards: #0: 9.800 [2023-02-24 14:17:27,709][00980] Avg episode reward: 23.467, avg true_objective: 9.800 [2023-02-24 14:17:27,735][00980] Num frames 5900... [2023-02-24 14:17:27,855][00980] Num frames 6000... [2023-02-24 14:17:27,971][00980] Num frames 6100... [2023-02-24 14:17:28,088][00980] Num frames 6200... [2023-02-24 14:17:28,208][00980] Num frames 6300... [2023-02-24 14:17:28,326][00980] Num frames 6400... [2023-02-24 14:17:28,447][00980] Num frames 6500... [2023-02-24 14:17:28,572][00980] Num frames 6600... [2023-02-24 14:17:28,748][00980] Avg episode rewards: #0: 22.854, true rewards: #0: 9.569 [2023-02-24 14:17:28,751][00980] Avg episode reward: 22.854, avg true_objective: 9.569 [2023-02-24 14:17:28,755][00980] Num frames 6700... [2023-02-24 14:17:28,880][00980] Num frames 6800... [2023-02-24 14:17:28,994][00980] Num frames 6900... [2023-02-24 14:17:29,113][00980] Num frames 7000... [2023-02-24 14:17:29,237][00980] Num frames 7100... [2023-02-24 14:17:29,355][00980] Num frames 7200... [2023-02-24 14:17:29,470][00980] Avg episode rewards: #0: 21.427, true rewards: #0: 9.052 [2023-02-24 14:17:29,473][00980] Avg episode reward: 21.427, avg true_objective: 9.052 [2023-02-24 14:17:29,547][00980] Num frames 7300... 
[2023-02-24 14:17:29,672][00980] Num frames 7400... [2023-02-24 14:17:29,787][00980] Num frames 7500... [2023-02-24 14:17:29,913][00980] Num frames 7600... [2023-02-24 14:17:30,036][00980] Num frames 7700... [2023-02-24 14:17:30,152][00980] Num frames 7800... [2023-02-24 14:17:30,268][00980] Num frames 7900... [2023-02-24 14:17:30,391][00980] Num frames 8000... [2023-02-24 14:17:30,516][00980] Num frames 8100... [2023-02-24 14:17:30,644][00980] Num frames 8200... [2023-02-24 14:17:30,764][00980] Num frames 8300... [2023-02-24 14:17:30,883][00980] Num frames 8400... [2023-02-24 14:17:31,008][00980] Num frames 8500... [2023-02-24 14:17:31,128][00980] Num frames 8600... [2023-02-24 14:17:31,250][00980] Num frames 8700... [2023-02-24 14:17:31,399][00980] Avg episode rewards: #0: 23.308, true rewards: #0: 9.752 [2023-02-24 14:17:31,400][00980] Avg episode reward: 23.308, avg true_objective: 9.752 [2023-02-24 14:17:31,435][00980] Num frames 8800... [2023-02-24 14:17:31,553][00980] Num frames 8900... [2023-02-24 14:17:31,671][00980] Num frames 9000... [2023-02-24 14:17:31,794][00980] Num frames 9100... [2023-02-24 14:17:31,913][00980] Num frames 9200... [2023-02-24 14:17:32,045][00980] Num frames 9300... [2023-02-24 14:17:32,220][00980] Num frames 9400... [2023-02-24 14:17:32,391][00980] Num frames 9500... [2023-02-24 14:17:32,585][00980] Num frames 9600... [2023-02-24 14:17:32,749][00980] Num frames 9700... [2023-02-24 14:17:32,909][00980] Num frames 9800... [2023-02-24 14:17:33,124][00980] Avg episode rewards: #0: 23.597, true rewards: #0: 9.897 [2023-02-24 14:17:33,127][00980] Avg episode reward: 23.597, avg true_objective: 9.897 [2023-02-24 14:17:33,136][00980] Num frames 9900... [2023-02-24 14:18:34,473][00980] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
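Closing note: with only max_num_episodes=10 per evaluation, the reported means are noisy; the same checkpoint (984) scored an average true reward of 14.556 in the previous run and 9.897 here, while the earlier checkpoint 982 scored 9.614. The per-episode values can be recovered from the logged running means, which gives a feel for the spread:

    import statistics

    # Per-episode true rewards for the 14.556 run, recovered from its
    # running means (e.g. episode 2: 2 * 11.360 - 13.120 = 9.60).
    rewards = [13.12, 9.60, 21.00, 12.48, 21.00, 16.32, 8.00, 6.72, 21.00, 16.32]
    mean = statistics.mean(rewards)                        # 14.556
    sem = statistics.stdev(rewards) / len(rewards) ** 0.5  # ~1.72
    print(f"avg true reward: {mean:.3f} +/- {sem:.2f}")

With a standard error of ~1.7 per 10-episode run, the gap between the two checkpoint-984 evaluations is of the same order as the sampling noise.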