diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,973 @@
+[2023-02-25 03:12:16,593][00684] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-25 03:12:16,597][00684] Rollout worker 0 uses device cpu
+[2023-02-25 03:12:16,599][00684] Rollout worker 1 uses device cpu
+[2023-02-25 03:12:16,600][00684] Rollout worker 2 uses device cpu
+[2023-02-25 03:12:16,601][00684] Rollout worker 3 uses device cpu
+[2023-02-25 03:12:16,603][00684] Rollout worker 4 uses device cpu
+[2023-02-25 03:12:16,605][00684] Rollout worker 5 uses device cpu
+[2023-02-25 03:12:16,606][00684] Rollout worker 6 uses device cpu
+[2023-02-25 03:12:16,608][00684] Rollout worker 7 uses device cpu
+[2023-02-25 03:12:16,876][00684] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 03:12:16,879][00684] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-25 03:12:16,931][00684] Starting all processes...
+[2023-02-25 03:12:16,934][00684] Starting process learner_proc0
+[2023-02-25 03:12:17,035][00684] Starting all processes...
+[2023-02-25 03:12:17,068][00684] Starting process inference_proc0-0
+[2023-02-25 03:12:17,069][00684] Starting process rollout_proc0
+[2023-02-25 03:12:17,072][00684] Starting process rollout_proc1
+[2023-02-25 03:12:17,101][00684] Starting process rollout_proc2
+[2023-02-25 03:12:17,101][00684] Starting process rollout_proc3
+[2023-02-25 03:12:17,101][00684] Starting process rollout_proc4
+[2023-02-25 03:12:17,101][00684] Starting process rollout_proc5
+[2023-02-25 03:12:17,102][00684] Starting process rollout_proc6
+[2023-02-25 03:12:17,102][00684] Starting process rollout_proc7
+[2023-02-25 03:12:29,112][10724] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 03:12:29,114][10724] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-25 03:12:29,528][10739] Worker 0 uses CPU cores [0]
+[2023-02-25 03:12:29,558][10745] Worker 2 uses CPU cores [0]
+[2023-02-25 03:12:29,614][10749] Worker 5 uses CPU cores [1]
+[2023-02-25 03:12:29,772][10740] Worker 1 uses CPU cores [1]
+[2023-02-25 03:12:29,830][10738] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 03:12:29,832][10738] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-25 03:12:29,841][10747] Worker 3 uses CPU cores [1]
+[2023-02-25 03:12:29,876][10746] Worker 4 uses CPU cores [0]
+[2023-02-25 03:12:29,896][10750] Worker 7 uses CPU cores [1]
+[2023-02-25 03:12:30,041][10748] Worker 6 uses CPU cores [0]
+[2023-02-25 03:12:30,366][10738] Num visible devices: 1
+[2023-02-25 03:12:30,368][10724] Num visible devices: 1
+[2023-02-25 03:12:30,370][10724] Starting seed is not provided
+[2023-02-25 03:12:30,370][10724] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 03:12:30,370][10724] Initializing actor-critic model on device cuda:0
+[2023-02-25 03:12:30,370][10724] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 03:12:30,372][10724] RunningMeanStd input shape: (1,)
+[2023-02-25 03:12:30,394][10724] ConvEncoder: input_channels=3
+[2023-02-25 03:12:30,728][10724] Conv encoder output size: 512
+[2023-02-25 03:12:30,728][10724] Policy head output size: 512
+[2023-02-25 03:12:30,782][10724] Created Actor Critic model with architecture:
+[2023-02-25 03:12:30,782][10724] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-25 03:12:36,864][00684] Heartbeat connected on Batcher_0
+[2023-02-25 03:12:36,877][00684] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-25 03:12:36,893][00684] Heartbeat connected on RolloutWorker_w0
+[2023-02-25 03:12:36,898][00684] Heartbeat connected on RolloutWorker_w1
+[2023-02-25 03:12:36,902][00684] Heartbeat connected on RolloutWorker_w2
+[2023-02-25 03:12:36,913][00684] Heartbeat connected on RolloutWorker_w4
+[2023-02-25 03:12:36,914][00684] Heartbeat connected on RolloutWorker_w3
+[2023-02-25 03:12:36,917][00684] Heartbeat connected on RolloutWorker_w5
+[2023-02-25 03:12:36,921][00684] Heartbeat connected on RolloutWorker_w6
+[2023-02-25 03:12:36,926][00684] Heartbeat connected on RolloutWorker_w7
+[2023-02-25 03:12:38,481][10724] Using optimizer
+[2023-02-25 03:12:38,482][10724] No checkpoints found
+[2023-02-25 03:12:38,482][10724] Did not load from checkpoint, starting from scratch!
+[2023-02-25 03:12:38,482][10724] Initialized policy 0 weights for model version 0
+[2023-02-25 03:12:38,487][10724] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-25 03:12:38,494][10724] LearnerWorker_p0 finished initialization!
+[2023-02-25 03:12:38,495][00684] Heartbeat connected on LearnerWorker_p0
+[2023-02-25 03:12:38,707][10738] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 03:12:38,708][10738] RunningMeanStd input shape: (1,)
+[2023-02-25 03:12:38,721][10738] ConvEncoder: input_channels=3
+[2023-02-25 03:12:38,821][10738] Conv encoder output size: 512
+[2023-02-25 03:12:38,821][10738] Policy head output size: 512
+[2023-02-25 03:12:38,900][00684] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 03:12:41,239][00684] Inference worker 0-0 is ready!
+[2023-02-25 03:12:41,250][00684] All inference workers are ready! Signal rollout workers to start!
+[2023-02-25 03:12:41,406][10748] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,414][10739] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,396][10745] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,437][10750] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,439][10749] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,441][10740] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,450][10747] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:41,463][10746] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:12:43,334][10745] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,336][10739] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,347][10748] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,439][10750] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,453][10747] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,460][10740] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,468][10749] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:43,900][00684] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 03:12:44,655][10739] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:44,678][10745] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:44,692][10750] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:44,708][10747] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:44,921][10746] Decorrelating experience for 0 frames...
+[2023-02-25 03:12:46,136][10748] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:46,403][10740] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:46,433][10749] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:46,464][10739] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:46,470][10745] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:46,696][10750] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:47,810][10747] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:47,949][10740] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:48,022][10749] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:48,152][10746] Decorrelating experience for 32 frames...
+[2023-02-25 03:12:48,474][10748] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:48,490][10739] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:48,497][10745] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:48,901][00684] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 03:12:49,020][10747] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:49,093][10750] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:49,230][10740] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:49,662][10746] Decorrelating experience for 64 frames...
+[2023-02-25 03:12:49,969][10749] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:50,047][10748] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:50,314][10746] Decorrelating experience for 96 frames...
+[2023-02-25 03:12:53,900][00684] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 60.5. Samples: 908. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-25 03:12:53,903][00684] Avg episode reward: [(0, '1.184')]
+[2023-02-25 03:12:55,020][10724] Signal inference workers to stop experience collection...
+[2023-02-25 03:12:55,038][10738] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-25 03:12:57,711][10724] Signal inference workers to resume experience collection...
+[2023-02-25 03:12:57,713][10738] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-25 03:12:58,901][00684] Fps is (10 sec: 409.6, 60 sec: 204.8, 300 sec: 204.8). Total num frames: 4096. Throughput: 0: 117.1. Samples: 2342. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-02-25 03:12:58,905][00684] Avg episode reward: [(0, '1.904')]
+[2023-02-25 03:13:03,900][00684] Fps is (10 sec: 2048.0, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 20480. Throughput: 0: 186.9. Samples: 4672. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-25 03:13:03,905][00684] Avg episode reward: [(0, '3.482')]
+[2023-02-25 03:13:08,900][00684] Fps is (10 sec: 3276.8, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 36864. Throughput: 0: 317.2. Samples: 9516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:13:08,908][00684] Avg episode reward: [(0, '3.869')]
+[2023-02-25 03:13:09,468][10738] Updated weights for policy 0, policy_version 10 (0.0552)
+[2023-02-25 03:13:13,900][00684] Fps is (10 sec: 4096.0, 60 sec: 1755.4, 300 sec: 1755.4). Total num frames: 61440. Throughput: 0: 370.3. Samples: 12962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:13:13,904][00684] Avg episode reward: [(0, '4.234')]
+[2023-02-25 03:13:18,245][10738] Updated weights for policy 0, policy_version 20 (0.0026)
+[2023-02-25 03:13:18,901][00684] Fps is (10 sec: 4505.4, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 81920. Throughput: 0: 498.8. Samples: 19954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:13:18,906][00684] Avg episode reward: [(0, '4.389')]
+[2023-02-25 03:13:23,900][00684] Fps is (10 sec: 3276.8, 60 sec: 2093.5, 300 sec: 2093.5). Total num frames: 94208. Throughput: 0: 544.4. Samples: 24500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:13:23,904][00684] Avg episode reward: [(0, '4.421')]
+[2023-02-25 03:13:28,900][00684] Fps is (10 sec: 3277.0, 60 sec: 2293.8, 300 sec: 2293.8). Total num frames: 114688. Throughput: 0: 595.4. Samples: 26792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:13:28,903][00684] Avg episode reward: [(0, '4.384')]
+[2023-02-25 03:13:28,915][10724] Saving new best policy, reward=4.384!
+[2023-02-25 03:13:30,528][10738] Updated weights for policy 0, policy_version 30 (0.0048)
+[2023-02-25 03:13:33,905][00684] Fps is (10 sec: 4094.1, 60 sec: 2457.4, 300 sec: 2457.4). Total num frames: 135168. Throughput: 0: 741.5. Samples: 33370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:13:33,916][00684] Avg episode reward: [(0, '4.446')]
+[2023-02-25 03:13:33,919][10724] Saving new best policy, reward=4.446!
+[2023-02-25 03:13:38,908][00684] Fps is (10 sec: 4093.0, 60 sec: 2593.8, 300 sec: 2593.8). Total num frames: 155648. Throughput: 0: 864.2. Samples: 39804. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:13:38,916][00684] Avg episode reward: [(0, '4.443')]
+[2023-02-25 03:13:40,466][10738] Updated weights for policy 0, policy_version 40 (0.0013)
+[2023-02-25 03:13:43,900][00684] Fps is (10 sec: 3688.1, 60 sec: 2867.2, 300 sec: 2646.6). Total num frames: 172032. Throughput: 0: 881.5. Samples: 42010. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:13:43,904][00684] Avg episode reward: [(0, '4.415')]
+[2023-02-25 03:13:48,900][00684] Fps is (10 sec: 3279.2, 60 sec: 3140.3, 300 sec: 2691.7). Total num frames: 188416. Throughput: 0: 930.9. Samples: 46564. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:13:48,907][00684] Avg episode reward: [(0, '4.374')]
+[2023-02-25 03:13:51,910][10738] Updated weights for policy 0, policy_version 50 (0.0012)
+[2023-02-25 03:13:53,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 2839.9). Total num frames: 212992. Throughput: 0: 975.5. Samples: 53414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:13:53,902][00684] Avg episode reward: [(0, '4.397')]
+[2023-02-25 03:13:58,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 975.4. Samples: 56856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:13:58,908][00684] Avg episode reward: [(0, '4.462')]
+[2023-02-25 03:13:58,919][10724] Saving new best policy, reward=4.462!
+[2023-02-25 03:14:03,903][00684] Fps is (10 sec: 2866.5, 60 sec: 3686.3, 300 sec: 2843.0). Total num frames: 241664. Throughput: 0: 907.3. Samples: 60782. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:03,907][00684] Avg episode reward: [(0, '4.507')]
+[2023-02-25 03:14:03,912][10724] Saving new best policy, reward=4.507!
+[2023-02-25 03:14:04,736][10738] Updated weights for policy 0, policy_version 60 (0.0017)
+[2023-02-25 03:14:08,903][00684] Fps is (10 sec: 2047.4, 60 sec: 3549.7, 300 sec: 2776.1). Total num frames: 249856. Throughput: 0: 871.8. Samples: 63732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:08,906][00684] Avg episode reward: [(0, '4.458')]
+[2023-02-25 03:14:08,917][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth...
+[2023-02-25 03:14:13,900][00684] Fps is (10 sec: 2458.2, 60 sec: 3413.3, 300 sec: 2802.5). Total num frames: 266240. Throughput: 0: 858.2. Samples: 65410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:14:13,902][00684] Avg episode reward: [(0, '4.465')]
+[2023-02-25 03:14:17,747][10738] Updated weights for policy 0, policy_version 70 (0.0023)
+[2023-02-25 03:14:18,901][00684] Fps is (10 sec: 4097.1, 60 sec: 3481.6, 300 sec: 2908.2). Total num frames: 290816. Throughput: 0: 852.4. Samples: 71724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:14:18,903][00684] Avg episode reward: [(0, '4.602')]
+[2023-02-25 03:14:18,912][10724] Saving new best policy, reward=4.602!
+[2023-02-25 03:14:23,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 2964.7). Total num frames: 311296. Throughput: 0: 856.8. Samples: 78352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:23,903][00684] Avg episode reward: [(0, '4.461')]
+[2023-02-25 03:14:28,850][10738] Updated weights for policy 0, policy_version 80 (0.0028)
+[2023-02-25 03:14:28,900][00684] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 2978.9). Total num frames: 327680. Throughput: 0: 855.7. Samples: 80518. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:14:28,904][00684] Avg episode reward: [(0, '4.354')]
+[2023-02-25 03:14:33,901][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.9, 300 sec: 2991.9). Total num frames: 344064. Throughput: 0: 854.8. Samples: 85028. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:33,907][00684] Avg episode reward: [(0, '4.342')]
+[2023-02-25 03:14:38,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3482.0, 300 sec: 3037.9). Total num frames: 364544. Throughput: 0: 852.4. Samples: 91770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:38,905][00684] Avg episode reward: [(0, '4.408')]
+[2023-02-25 03:14:39,268][10738] Updated weights for policy 0, policy_version 90 (0.0019)
+[2023-02-25 03:14:43,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3080.2). Total num frames: 385024. Throughput: 0: 852.3. Samples: 95210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:43,903][00684] Avg episode reward: [(0, '4.442')]
+[2023-02-25 03:14:48,902][00684] Fps is (10 sec: 3685.9, 60 sec: 3549.8, 300 sec: 3087.7). Total num frames: 401408. Throughput: 0: 875.8. Samples: 100190. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:48,909][00684] Avg episode reward: [(0, '4.586')]
+[2023-02-25 03:14:51,176][10738] Updated weights for policy 0, policy_version 100 (0.0039)
+[2023-02-25 03:14:53,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3094.8). Total num frames: 417792. Throughput: 0: 914.7. Samples: 104890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:14:53,902][00684] Avg episode reward: [(0, '4.507')]
+[2023-02-25 03:14:58,900][00684] Fps is (10 sec: 4096.6, 60 sec: 3549.9, 300 sec: 3159.8). Total num frames: 442368. Throughput: 0: 954.0. Samples: 108340. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:14:58,909][00684] Avg episode reward: [(0, '4.405')]
+[2023-02-25 03:15:00,391][10738] Updated weights for policy 0, policy_version 110 (0.0016)
+[2023-02-25 03:15:03,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3686.5, 300 sec: 3192.1). Total num frames: 462848. Throughput: 0: 968.0. Samples: 115282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:03,903][00684] Avg episode reward: [(0, '4.505')]
+[2023-02-25 03:15:08,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3167.6). Total num frames: 475136. Throughput: 0: 921.7. Samples: 119830. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:15:08,906][00684] Avg episode reward: [(0, '4.511')]
+[2023-02-25 03:15:13,051][10738] Updated weights for policy 0, policy_version 120 (0.0026)
+[2023-02-25 03:15:13,901][00684] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3197.5). Total num frames: 495616. Throughput: 0: 921.4. Samples: 121980. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:15:13,902][00684] Avg episode reward: [(0, '4.640')]
+[2023-02-25 03:15:13,913][10724] Saving new best policy, reward=4.640!
+[2023-02-25 03:15:18,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3225.6). Total num frames: 516096. Throughput: 0: 961.6. Samples: 128302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:15:18,902][00684] Avg episode reward: [(0, '4.604')]
+[2023-02-25 03:15:22,028][10738] Updated weights for policy 0, policy_version 130 (0.0013)
+[2023-02-25 03:15:23,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3252.0). Total num frames: 536576. Throughput: 0: 957.9. Samples: 134876. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:15:23,907][00684] Avg episode reward: [(0, '4.503')]
+[2023-02-25 03:15:28,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3252.7). Total num frames: 552960. Throughput: 0: 930.5. Samples: 137084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:28,903][00684] Avg episode reward: [(0, '4.521')]
+[2023-02-25 03:15:33,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3253.4). Total num frames: 569344. Throughput: 0: 917.8. Samples: 141488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:33,909][00684] Avg episode reward: [(0, '4.579')]
+[2023-02-25 03:15:34,416][10738] Updated weights for policy 0, policy_version 140 (0.0017)
+[2023-02-25 03:15:38,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3299.6). Total num frames: 593920. Throughput: 0: 966.5. Samples: 148382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:15:38,902][00684] Avg episode reward: [(0, '4.542')]
+[2023-02-25 03:15:43,665][10738] Updated weights for policy 0, policy_version 150 (0.0018)
+[2023-02-25 03:15:43,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3321.1). Total num frames: 614400. Throughput: 0: 968.3. Samples: 151914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:43,906][00684] Avg episode reward: [(0, '4.535')]
+[2023-02-25 03:15:48,901][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3298.4). Total num frames: 626688. Throughput: 0: 925.1. Samples: 156912. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:48,907][00684] Avg episode reward: [(0, '4.429')]
+[2023-02-25 03:15:53,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3318.8). Total num frames: 647168. Throughput: 0: 931.7. Samples: 161756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:15:53,909][00684] Avg episode reward: [(0, '4.431')]
+[2023-02-25 03:15:55,697][10738] Updated weights for policy 0, policy_version 160 (0.0012)
+[2023-02-25 03:15:58,901][00684] Fps is (10 sec: 4095.9, 60 sec: 3754.6, 300 sec: 3338.2). Total num frames: 667648. Throughput: 0: 960.7. Samples: 165210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:15:58,908][00684] Avg episode reward: [(0, '4.584')]
+[2023-02-25 03:16:03,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3356.7). Total num frames: 688128. Throughput: 0: 975.0. Samples: 172176. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:16:03,908][00684] Avg episode reward: [(0, '4.682')]
+[2023-02-25 03:16:03,910][10724] Saving new best policy, reward=4.682!
+[2023-02-25 03:16:05,406][10738] Updated weights for policy 0, policy_version 170 (0.0012)
+[2023-02-25 03:16:08,900][00684] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3354.8). Total num frames: 704512. Throughput: 0: 926.9. Samples: 176586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:16:08,902][00684] Avg episode reward: [(0, '4.631')]
+[2023-02-25 03:16:08,920][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth...
+[2023-02-25 03:16:13,902][00684] Fps is (10 sec: 3276.3, 60 sec: 3754.6, 300 sec: 3353.0). Total num frames: 720896. Throughput: 0: 925.8. Samples: 178746. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:16:13,910][00684] Avg episode reward: [(0, '4.590')]
+[2023-02-25 03:16:18,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3332.7). Total num frames: 733184. Throughput: 0: 924.6. Samples: 183094. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:16:18,905][00684] Avg episode reward: [(0, '4.454')]
+[2023-02-25 03:16:19,660][10738] Updated weights for policy 0, policy_version 180 (0.0022)
+[2023-02-25 03:16:23,900][00684] Fps is (10 sec: 2867.6, 60 sec: 3549.9, 300 sec: 3331.4). Total num frames: 749568. Throughput: 0: 871.2. Samples: 187588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:16:23,916][00684] Avg episode reward: [(0, '4.519')]
+[2023-02-25 03:16:28,901][00684] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3312.4). Total num frames: 761856. Throughput: 0: 841.2. Samples: 189770. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:16:28,908][00684] Avg episode reward: [(0, '4.585')]
+[2023-02-25 03:16:33,148][10738] Updated weights for policy 0, policy_version 190 (0.0039)
+[2023-02-25 03:16:33,903][00684] Fps is (10 sec: 2866.5, 60 sec: 3481.5, 300 sec: 3311.6). Total num frames: 778240. Throughput: 0: 829.3. Samples: 194232. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:16:33,906][00684] Avg episode reward: [(0, '4.844')]
+[2023-02-25 03:16:33,911][10724] Saving new best policy, reward=4.844!
+[2023-02-25 03:16:38,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3345.1). Total num frames: 802816. Throughput: 0: 869.6. Samples: 200886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:16:38,903][00684] Avg episode reward: [(0, '4.890')]
+[2023-02-25 03:16:38,918][10724] Saving new best policy, reward=4.890!
+[2023-02-25 03:16:42,147][10738] Updated weights for policy 0, policy_version 200 (0.0015)
+[2023-02-25 03:16:43,901][00684] Fps is (10 sec: 4506.5, 60 sec: 3481.6, 300 sec: 3360.4). Total num frames: 823296. Throughput: 0: 867.5. Samples: 204248. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:16:43,905][00684] Avg episode reward: [(0, '4.621')]
+[2023-02-25 03:16:48,901][00684] Fps is (10 sec: 3686.2, 60 sec: 3549.8, 300 sec: 3358.7). Total num frames: 839680. Throughput: 0: 825.9. Samples: 209340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:16:48,907][00684] Avg episode reward: [(0, '4.538')]
+[2023-02-25 03:16:53,901][00684] Fps is (10 sec: 3276.9, 60 sec: 3481.6, 300 sec: 3357.1). Total num frames: 856064. Throughput: 0: 831.4. Samples: 213998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:16:53,908][00684] Avg episode reward: [(0, '4.601')]
+[2023-02-25 03:16:54,623][10738] Updated weights for policy 0, policy_version 210 (0.0019)
+[2023-02-25 03:16:58,900][00684] Fps is (10 sec: 3686.6, 60 sec: 3481.6, 300 sec: 3371.3). Total num frames: 876544. Throughput: 0: 860.4. Samples: 217462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:16:58,903][00684] Avg episode reward: [(0, '4.448')]
+[2023-02-25 03:17:03,900][00684] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3385.0). Total num frames: 897024. Throughput: 0: 918.2. Samples: 224412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:03,902][00684] Avg episode reward: [(0, '4.440')]
+[2023-02-25 03:17:03,923][10738] Updated weights for policy 0, policy_version 220 (0.0025)
+[2023-02-25 03:17:08,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3383.0). Total num frames: 913408. Throughput: 0: 920.0. Samples: 228986. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:08,910][00684] Avg episode reward: [(0, '4.531')]
+[2023-02-25 03:17:13,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3381.1). Total num frames: 929792. Throughput: 0: 920.7. Samples: 231202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:17:13,904][00684] Avg episode reward: [(0, '4.494')]
+[2023-02-25 03:17:15,971][10738] Updated weights for policy 0, policy_version 230 (0.0036)
+[2023-02-25 03:17:18,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3408.5). Total num frames: 954368. Throughput: 0: 961.3. Samples: 237488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:17:18,903][00684] Avg episode reward: [(0, '4.482')]
+[2023-02-25 03:17:23,904][00684] Fps is (10 sec: 4504.1, 60 sec: 3754.4, 300 sec: 3420.5). Total num frames: 974848. Throughput: 0: 963.8. Samples: 244262. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:23,907][00684] Avg episode reward: [(0, '4.529')]
+[2023-02-25 03:17:26,212][10738] Updated weights for policy 0, policy_version 240 (0.0016)
+[2023-02-25 03:17:28,907][00684] Fps is (10 sec: 3274.7, 60 sec: 3754.3, 300 sec: 3403.8). Total num frames: 987136. Throughput: 0: 936.5. Samples: 246398. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:17:28,914][00684] Avg episode reward: [(0, '4.559')]
+[2023-02-25 03:17:33,900][00684] Fps is (10 sec: 3277.9, 60 sec: 3823.1, 300 sec: 3415.6). Total num frames: 1007616. Throughput: 0: 923.2. Samples: 250884. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:33,908][00684] Avg episode reward: [(0, '4.622')]
+[2023-02-25 03:17:37,434][10738] Updated weights for policy 0, policy_version 250 (0.0019)
+[2023-02-25 03:17:38,900][00684] Fps is (10 sec: 4098.6, 60 sec: 3754.7, 300 sec: 3485.1). Total num frames: 1028096. Throughput: 0: 966.3. Samples: 257482. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:17:38,903][00684] Avg episode reward: [(0, '4.766')]
+[2023-02-25 03:17:43,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3554.5). Total num frames: 1048576. Throughput: 0: 964.4. Samples: 260860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:43,903][00684] Avg episode reward: [(0, '4.908')]
+[2023-02-25 03:17:43,910][10724] Saving new best policy, reward=4.908!
+[2023-02-25 03:17:48,392][10738] Updated weights for policy 0, policy_version 260 (0.0024)
+[2023-02-25 03:17:48,901][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3610.0). Total num frames: 1064960. Throughput: 0: 923.2. Samples: 265956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:48,905][00684] Avg episode reward: [(0, '5.070')]
+[2023-02-25 03:17:48,923][10724] Saving new best policy, reward=5.070!
+[2023-02-25 03:17:53,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1081344. Throughput: 0: 921.9. Samples: 270472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:17:53,903][00684] Avg episode reward: [(0, '4.888')]
+[2023-02-25 03:17:58,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1101824. Throughput: 0: 949.5. Samples: 273928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:17:58,907][00684] Avg episode reward: [(0, '4.373')]
+[2023-02-25 03:17:58,993][10738] Updated weights for policy 0, policy_version 270 (0.0020)
+[2023-02-25 03:18:03,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1126400. Throughput: 0: 964.7. Samples: 280900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:18:03,906][00684] Avg episode reward: [(0, '4.430')]
+[2023-02-25 03:18:08,901][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1138688. Throughput: 0: 914.8. Samples: 285426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:08,910][00684] Avg episode reward: [(0, '4.464')]
+[2023-02-25 03:18:08,925][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000278_1138688.pth...
+[2023-02-25 03:18:09,129][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000061_249856.pth
+[2023-02-25 03:18:11,112][10738] Updated weights for policy 0, policy_version 280 (0.0014)
+[2023-02-25 03:18:13,901][00684] Fps is (10 sec: 2867.2, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 1155072. Throughput: 0: 913.1. Samples: 287480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:13,903][00684] Avg episode reward: [(0, '4.678')]
+[2023-02-25 03:18:18,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 1179648. Throughput: 0: 952.6. Samples: 293750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:18:18,908][00684] Avg episode reward: [(0, '4.773')]
+[2023-02-25 03:18:20,535][10738] Updated weights for policy 0, policy_version 290 (0.0013)
+[2023-02-25 03:18:23,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3754.9, 300 sec: 3679.5). Total num frames: 1200128. Throughput: 0: 956.0. Samples: 300500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:23,904][00684] Avg episode reward: [(0, '4.699')]
+[2023-02-25 03:18:28,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3755.1, 300 sec: 3651.7). Total num frames: 1212416. Throughput: 0: 922.2. Samples: 302358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:28,909][00684] Avg episode reward: [(0, '4.841')]
+[2023-02-25 03:18:33,900][00684] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3624.0). Total num frames: 1224704. Throughput: 0: 885.7. Samples: 305814. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:18:33,903][00684] Avg episode reward: [(0, '4.724')]
+[2023-02-25 03:18:35,677][10738] Updated weights for policy 0, policy_version 300 (0.0013)
+[2023-02-25 03:18:38,900][00684] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 1236992. Throughput: 0: 868.4. Samples: 309552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:18:38,907][00684] Avg episode reward: [(0, '4.774')]
+[2023-02-25 03:18:43,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 1257472. Throughput: 0: 865.1. Samples: 312858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:18:43,903][00684] Avg episode reward: [(0, '4.744')]
+[2023-02-25 03:18:45,703][10738] Updated weights for policy 0, policy_version 310 (0.0017)
+[2023-02-25 03:18:48,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1277952. Throughput: 0: 863.2. Samples: 319742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:18:48,908][00684] Avg episode reward: [(0, '4.617')]
+[2023-02-25 03:18:53,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3610.0). Total num frames: 1294336. Throughput: 0: 860.0. Samples: 324124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:53,902][00684] Avg episode reward: [(0, '4.588')]
+[2023-02-25 03:18:58,265][10738] Updated weights for policy 0, policy_version 320 (0.0030)
+[2023-02-25 03:18:58,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 1310720. Throughput: 0: 862.1. Samples: 326276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:18:58,903][00684] Avg episode reward: [(0, '4.577')]
+[2023-02-25 03:19:03,901][00684] Fps is (10 sec: 4095.8, 60 sec: 3481.6, 300 sec: 3679.5). Total num frames: 1335296. Throughput: 0: 869.5. Samples: 332876. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:03,903][00684] Avg episode reward: [(0, '4.786')]
+[2023-02-25 03:19:07,247][10738] Updated weights for policy 0, policy_version 330 (0.0014)
+[2023-02-25 03:19:08,904][00684] Fps is (10 sec: 4504.0, 60 sec: 3617.9, 300 sec: 3693.3). Total num frames: 1355776. Throughput: 0: 863.4. Samples: 339358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:08,907][00684] Avg episode reward: [(0, '4.774')]
+[2023-02-25 03:19:13,900][00684] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3651.7). Total num frames: 1368064. Throughput: 0: 870.2. Samples: 341516. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:19:13,903][00684] Avg episode reward: [(0, '4.671')]
+[2023-02-25 03:19:18,901][00684] Fps is (10 sec: 3277.9, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 1388544. Throughput: 0: 889.2. Samples: 345830. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:18,903][00684] Avg episode reward: [(0, '4.441')]
+[2023-02-25 03:19:19,858][10738] Updated weights for policy 0, policy_version 340 (0.0038)
+[2023-02-25 03:19:23,901][00684] Fps is (10 sec: 4095.8, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 1409024. Throughput: 0: 960.4. Samples: 352770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:23,907][00684] Avg episode reward: [(0, '4.452')]
+[2023-02-25 03:19:28,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1429504. Throughput: 0: 962.8. Samples: 356184. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:19:28,909][00684] Avg episode reward: [(0, '4.497')]
+[2023-02-25 03:19:29,653][10738] Updated weights for policy 0, policy_version 350 (0.0019)
+[2023-02-25 03:19:33,900][00684] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 1445888. Throughput: 0: 914.1. Samples: 360878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:19:33,907][00684] Avg episode reward: [(0, '4.352')]
+[2023-02-25 03:19:38,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1462272. Throughput: 0: 928.1. Samples: 365888. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:19:38,906][00684] Avg episode reward: [(0, '4.462')]
+[2023-02-25 03:19:41,214][10738] Updated weights for policy 0, policy_version 360 (0.0012)
+[2023-02-25 03:19:43,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 1486848. Throughput: 0: 955.5. Samples: 369274. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:43,906][00684] Avg episode reward: [(0, '4.568')]
+[2023-02-25 03:19:48,903][00684] Fps is (10 sec: 4504.5, 60 sec: 3822.8, 300 sec: 3693.3). Total num frames: 1507328. Throughput: 0: 964.3. Samples: 376272. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:48,910][00684] Avg episode reward: [(0, '4.557')]
+[2023-02-25 03:19:51,718][10738] Updated weights for policy 0, policy_version 370 (0.0032)
+[2023-02-25 03:19:53,908][00684] Fps is (10 sec: 3274.4, 60 sec: 3754.2, 300 sec: 3651.6). Total num frames: 1519616. Throughput: 0: 914.6. Samples: 380520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:19:53,912][00684] Avg episode reward: [(0, '4.748')]
+[2023-02-25 03:19:58,900][00684] Fps is (10 sec: 2867.9, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 1536000. Throughput: 0: 913.2. Samples: 382610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:19:58,908][00684] Avg episode reward: [(0, '4.792')]
+[2023-02-25 03:20:02,863][10738] Updated weights for policy 0, policy_version 380 (0.0017)
+[2023-02-25 03:20:03,900][00684] Fps is (10 sec: 4099.0, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 1560576. Throughput: 0: 958.6. Samples: 388966. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:20:03,908][00684] Avg episode reward: [(0, '4.753')]
+[2023-02-25 03:20:08,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3754.9, 300 sec: 3679.5). Total num frames: 1581056. Throughput: 0: 947.8. Samples: 395422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:08,905][00684] Avg episode reward: [(0, '4.741')]
+[2023-02-25 03:20:08,923][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000386_1581056.pth...
+[2023-02-25 03:20:09,060][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth
+[2023-02-25 03:20:13,905][00684] Fps is (10 sec: 3275.3, 60 sec: 3754.4, 300 sec: 3651.6). Total num frames: 1593344. Throughput: 0: 918.3. Samples: 397510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:13,916][00684] Avg episode reward: [(0, '4.656')]
+[2023-02-25 03:20:14,694][10738] Updated weights for policy 0, policy_version 390 (0.0019)
+[2023-02-25 03:20:18,901][00684] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 1609728. Throughput: 0: 913.2. Samples: 401974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:20:18,903][00684] Avg episode reward: [(0, '4.762')]
+[2023-02-25 03:20:23,904][00684] Fps is (10 sec: 4096.4, 60 sec: 3754.5, 300 sec: 3665.5). Total num frames: 1634304. Throughput: 0: 954.1. Samples: 408824. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:23,908][00684] Avg episode reward: [(0, '4.495')]
+[2023-02-25 03:20:24,471][10738] Updated weights for policy 0, policy_version 400 (0.0026)
+[2023-02-25 03:20:28,904][00684] Fps is (10 sec: 4504.1, 60 sec: 3754.4, 300 sec: 3679.4). Total num frames: 1654784. Throughput: 0: 956.6. Samples: 412324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:28,907][00684] Avg episode reward: [(0, '4.568')]
+[2023-02-25 03:20:33,900][00684] Fps is (10 sec: 3687.7, 60 sec: 3754.7, 300 sec: 3651.7). Total num frames: 1671168. Throughput: 0: 907.6. Samples: 417112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:33,910][00684] Avg episode reward: [(0, '4.618')]
+[2023-02-25 03:20:36,550][10738] Updated weights for policy 0, policy_version 410 (0.0022)
+[2023-02-25 03:20:38,901][00684] Fps is (10 sec: 2868.2, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 1683456. Throughput: 0: 916.0. Samples: 421734. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:20:38,904][00684] Avg episode reward: [(0, '4.960')]
+[2023-02-25 03:20:43,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1699840. Throughput: 0: 917.1. Samples: 423878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:43,907][00684] Avg episode reward: [(0, '5.017')]
+[2023-02-25 03:20:48,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3413.5, 300 sec: 3610.0). Total num frames: 1712128. Throughput: 0: 871.5. Samples: 428182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:48,906][00684] Avg episode reward: [(0, '4.977')]
+[2023-02-25 03:20:50,516][10738] Updated weights for policy 0, policy_version 420 (0.0011)
+[2023-02-25 03:20:53,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3482.0, 300 sec: 3596.2). Total num frames: 1728512. Throughput: 0: 825.8. Samples: 432584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:20:53,903][00684] Avg episode reward: [(0, '4.810')]
+[2023-02-25 03:20:58,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 1744896. Throughput: 0: 826.4. Samples: 434696. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:20:58,902][00684] Avg episode reward: [(0, '4.618')]
+[2023-02-25 03:21:01,696][10738] Updated weights for policy 0, policy_version 430 (0.0022)
+[2023-02-25 03:21:03,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 1769472. Throughput: 0: 875.9. Samples: 441388. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:21:03,903][00684] Avg episode reward: [(0, '4.804')]
+[2023-02-25 03:21:08,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 1789952. Throughput: 0: 865.2. Samples: 447754. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:21:08,903][00684] Avg episode reward: [(0, '4.976')]
+[2023-02-25 03:21:12,988][10738] Updated weights for policy 0, policy_version 440 (0.0018)
+[2023-02-25 03:21:13,902][00684] Fps is (10 sec: 3276.3, 60 sec: 3481.8, 300 sec: 3623.9). Total num frames: 1802240. Throughput: 0: 835.5. Samples: 449922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:21:13,906][00684] Avg episode reward: [(0, '4.829')]
+[2023-02-25 03:21:18,901][00684] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1822720. Throughput: 0: 827.7. Samples: 454360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:21:18,903][00684] Avg episode reward: [(0, '4.925')]
+[2023-02-25 03:21:23,373][10738] Updated weights for policy 0, policy_version 450 (0.0017)
+[2023-02-25 03:21:23,900][00684] Fps is (10 sec: 4096.6, 60 sec: 3481.8, 300 sec: 3665.6). Total num frames: 1843200. Throughput: 0: 877.5. Samples: 461220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:21:23,909][00684] Avg episode reward: [(0, '4.644')]
+[2023-02-25 03:21:28,901][00684] Fps is (10 sec: 4096.0, 60 sec: 3481.8, 300 sec: 3679.5). Total num frames: 1863680. Throughput: 0: 905.2. Samples: 464610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:21:28,903][00684] Avg episode reward: [(0, '4.526')]
+[2023-02-25 03:21:33,906][00684] Fps is (10 sec: 3275.0, 60 sec: 3413.0, 300 sec: 3637.7). Total num frames: 1875968. Throughput: 0: 910.9. Samples: 469178. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:21:33,909][00684] Avg episode reward: [(0, '4.840')]
+[2023-02-25 03:21:35,562][10738] Updated weights for policy 0, policy_version 460 (0.0045)
+[2023-02-25 03:21:38,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 1896448. Throughput: 0: 924.4. Samples: 474184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:21:38,904][00684] Avg episode reward: [(0, '4.891')]
+[2023-02-25 03:21:43,900][00684] Fps is (10 sec: 4098.2, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 1916928. Throughput: 0: 955.0. Samples: 477672. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:21:43,907][00684] Avg episode reward: [(0, '4.818')]
+[2023-02-25 03:21:44,931][10738] Updated weights for policy 0, policy_version 470 (0.0015)
+[2023-02-25 03:21:48,901][00684] Fps is (10 sec: 4095.7, 60 sec: 3754.6, 300 sec: 3665.6). Total num frames: 1937408. Throughput: 0: 956.7. Samples: 484442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:21:48,904][00684] Avg episode reward: [(0, '4.765')]
+[2023-02-25 03:21:53,903][00684] Fps is (10 sec: 3685.6, 60 sec: 3754.5, 300 sec: 3651.7). Total num frames: 1953792. Throughput: 0: 911.6. Samples: 488778. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:21:53,905][00684] Avg episode reward: [(0, '4.740')]
+[2023-02-25 03:21:57,545][10738] Updated weights for policy 0, policy_version 480 (0.0043)
+[2023-02-25 03:21:58,900][00684] Fps is (10 sec: 3277.1, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 1970176. Throughput: 0: 912.5. Samples: 490984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:21:58,909][00684] Avg episode reward: [(0, '4.806')]
+[2023-02-25 03:22:03,900][00684] Fps is (10 sec: 4096.9, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 1994752. Throughput: 0: 962.1. Samples: 497654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:03,906][00684] Avg episode reward: [(0, '4.698')]
+[2023-02-25 03:22:06,355][10738] Updated weights for policy 0, policy_version 490 (0.0022)
+[2023-02-25 03:22:08,901][00684] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3679.5). Total num frames: 2015232. Throughput: 0: 950.6. Samples: 503998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:22:08,909][00684] Avg episode reward: [(0, '4.665')]
+[2023-02-25 03:22:08,918][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000492_2015232.pth...
+[2023-02-25 03:22:09,066][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000278_1138688.pth
+[2023-02-25 03:22:13,904][00684] Fps is (10 sec: 3275.6, 60 sec: 3754.5, 300 sec: 3637.8). Total num frames: 2027520. Throughput: 0: 922.9. Samples: 506144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:13,907][00684] Avg episode reward: [(0, '4.836')]
+[2023-02-25 03:22:18,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3624.0). Total num frames: 2043904. Throughput: 0: 920.4. Samples: 510592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:18,908][00684] Avg episode reward: [(0, '4.714')]
+[2023-02-25 03:22:18,997][10738] Updated weights for policy 0, policy_version 500 (0.0015)
+[2023-02-25 03:22:23,901][00684] Fps is (10 sec: 4097.4, 60 sec: 3754.7, 300 sec: 3665.7). Total num frames: 2068480. Throughput: 0: 963.5. Samples: 517542. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:23,908][00684] Avg episode reward: [(0, '4.515')]
+[2023-02-25 03:22:28,248][10738] Updated weights for policy 0, policy_version 510 (0.0014)
+[2023-02-25 03:22:28,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2088960. Throughput: 0: 961.1. Samples: 520922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:22:28,906][00684] Avg episode reward: [(0, '4.679')]
+[2023-02-25 03:22:33,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3755.0, 300 sec: 3637.8). Total num frames: 2101248. Throughput: 0: 916.8. Samples: 525698. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:22:33,910][00684] Avg episode reward: [(0, '4.766')]
+[2023-02-25 03:22:38,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2121728. Throughput: 0: 929.7. Samples: 530612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:22:38,906][00684] Avg episode reward: [(0, '4.823')]
+[2023-02-25 03:22:40,392][10738] Updated weights for policy 0, policy_version 520 (0.0039)
+[2023-02-25 03:22:43,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 2146304. Throughput: 0: 957.2. Samples: 534056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:43,906][00684] Avg episode reward: [(0, '4.762')]
+[2023-02-25 03:22:48,901][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2162688. Throughput: 0: 960.6. Samples: 540880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:22:48,906][00684] Avg episode reward: [(0, '4.607')]
+[2023-02-25 03:22:50,625][10738] Updated weights for policy 0, policy_version 530 (0.0012)
+[2023-02-25 03:22:53,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.8, 300 sec: 3651.7). Total num frames: 2179072. Throughput: 0: 911.6. Samples: 545022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:22:53,906][00684] Avg episode reward: [(0, '4.664')]
+[2023-02-25 03:22:58,901][00684] Fps is (10 sec: 2457.6, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 2187264. Throughput: 0: 901.0. Samples: 546684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:22:58,908][00684] Avg episode reward: [(0, '4.664')]
+[2023-02-25 03:23:03,900][00684] Fps is (10 sec: 2457.6, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 2203648. Throughput: 0: 890.4. Samples: 550658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:23:03,906][00684] Avg episode reward: [(0, '4.715')]
+[2023-02-25 03:23:05,416][10738] Updated weights for policy 0, policy_version 540 (0.0029)
+[2023-02-25 03:23:08,901][00684] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3623.9). Total num frames: 2224128. Throughput: 0: 875.3. Samples: 556930. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:23:08,902][00684] Avg episode reward: [(0, '4.818')]
+[2023-02-25 03:23:13,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3550.1, 300 sec: 3596.1). Total num frames: 2240512. Throughput: 0: 859.2. Samples: 559584. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:23:13,906][00684] Avg episode reward: [(0, '4.723')]
+[2023-02-25 03:23:16,749][10738] Updated weights for policy 0, policy_version 550 (0.0019)
+[2023-02-25 03:23:18,901][00684] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2256896. Throughput: 0: 850.4. Samples: 563968. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:23:18,908][00684] Avg episode reward: [(0, '4.679')]
+[2023-02-25 03:23:23,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 2277376. Throughput: 0: 875.6. Samples: 570014. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:23:23,903][00684] Avg episode reward: [(0, '4.747')]
+[2023-02-25 03:23:26,717][10738] Updated weights for policy 0, policy_version 560 (0.0039)
+[2023-02-25 03:23:28,900][00684] Fps is (10 sec: 4505.7, 60 sec: 3549.9, 300 sec: 3651.7). Total num frames: 2301952. Throughput: 0: 876.9. Samples: 573518. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:23:28,903][00684] Avg episode reward: [(0, '4.756')]
+[2023-02-25 03:23:33,902][00684] Fps is (10 sec: 4095.5, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2318336. Throughput: 0: 851.9. Samples: 579216. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:23:33,907][00684] Avg episode reward: [(0, '4.691')]
+[2023-02-25 03:23:38,901][00684] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3637.8). Total num frames: 2330624. Throughput: 0: 857.3. Samples: 583600. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:23:38,905][00684] Avg episode reward: [(0, '4.717')]
+[2023-02-25 03:23:39,209][10738] Updated weights for policy 0, policy_version 570 (0.0036)
+[2023-02-25 03:23:43,900][00684] Fps is (10 sec: 3686.8, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 2355200. Throughput: 0: 888.1. Samples: 586648. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:23:43,903][00684] Avg episode reward: [(0, '4.584')]
+[2023-02-25 03:23:47,930][10738] Updated weights for policy 0, policy_version 580 (0.0012)
+[2023-02-25 03:23:48,900][00684] Fps is (10 sec: 4915.4, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2379776. Throughput: 0: 954.8. Samples: 593624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:23:48,903][00684] Avg episode reward: [(0, '4.699')]
+[2023-02-25 03:23:53,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2392064. Throughput: 0: 931.8. Samples: 598860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:23:53,903][00684] Avg episode reward: [(0, '4.633')]
+[2023-02-25 03:23:58,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2408448. Throughput: 0: 921.0. Samples: 601030. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:23:58,904][00684] Avg episode reward: [(0, '4.664')]
+[2023-02-25 03:24:00,510][10738] Updated weights for policy 0, policy_version 590 (0.0030)
+[2023-02-25 03:24:03,901][00684] Fps is (10 sec: 3686.2, 60 sec: 3754.6, 300 sec: 3637.8). Total num frames: 2428928. Throughput: 0: 951.6. Samples: 606788. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:03,908][00684] Avg episode reward: [(0, '4.616')]
+[2023-02-25 03:24:08,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3679.5). Total num frames: 2453504. Throughput: 0: 968.7. Samples: 613604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:24:08,902][00684] Avg episode reward: [(0, '4.602')]
+[2023-02-25 03:24:08,919][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000599_2453504.pth...
+[2023-02-25 03:24:09,048][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000386_1581056.pth +[2023-02-25 03:24:09,616][10738] Updated weights for policy 0, policy_version 600 (0.0014) +[2023-02-25 03:24:13,900][00684] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 2469888. Throughput: 0: 945.5. Samples: 616066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-25 03:24:13,910][00684] Avg episode reward: [(0, '4.808')] +[2023-02-25 03:24:18,901][00684] Fps is (10 sec: 2867.1, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2482176. Throughput: 0: 914.6. Samples: 620374. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 03:24:18,905][00684] Avg episode reward: [(0, '4.789')] +[2023-02-25 03:24:22,153][10738] Updated weights for policy 0, policy_version 610 (0.0015) +[2023-02-25 03:24:23,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2502656. Throughput: 0: 953.1. Samples: 626490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-25 03:24:23,903][00684] Avg episode reward: [(0, '4.893')] +[2023-02-25 03:24:28,900][00684] Fps is (10 sec: 4505.7, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2527232. Throughput: 0: 963.1. Samples: 629988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 03:24:28,904][00684] Avg episode reward: [(0, '4.750')] +[2023-02-25 03:24:32,041][10738] Updated weights for policy 0, policy_version 620 (0.0022) +[2023-02-25 03:24:33,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2543616. Throughput: 0: 932.8. Samples: 635600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 03:24:33,910][00684] Avg episode reward: [(0, '4.579')] +[2023-02-25 03:24:38,901][00684] Fps is (10 sec: 2867.0, 60 sec: 3754.6, 300 sec: 3623.9). Total num frames: 2555904. Throughput: 0: 914.3. Samples: 640002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 03:24:38,903][00684] Avg episode reward: [(0, '4.739')] +[2023-02-25 03:24:43,528][10738] Updated weights for policy 0, policy_version 630 (0.0022) +[2023-02-25 03:24:43,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2580480. Throughput: 0: 936.4. Samples: 643166. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 03:24:43,909][00684] Avg episode reward: [(0, '4.858')] +[2023-02-25 03:24:48,901][00684] Fps is (10 sec: 4915.4, 60 sec: 3754.6, 300 sec: 3679.5). Total num frames: 2605056. Throughput: 0: 962.8. Samples: 650112. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-25 03:24:48,907][00684] Avg episode reward: [(0, '5.058')] +[2023-02-25 03:24:53,906][00684] Fps is (10 sec: 3684.3, 60 sec: 3754.3, 300 sec: 3665.5). Total num frames: 2617344. Throughput: 0: 923.6. Samples: 655172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-25 03:24:53,913][00684] Avg episode reward: [(0, '4.850')] +[2023-02-25 03:24:54,206][10738] Updated weights for policy 0, policy_version 640 (0.0034) +[2023-02-25 03:24:58,900][00684] Fps is (10 sec: 2867.3, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2633728. Throughput: 0: 915.9. Samples: 657282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-25 03:24:58,908][00684] Avg episode reward: [(0, '4.931')] +[2023-02-25 03:25:03,900][00684] Fps is (10 sec: 3688.5, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2654208. Throughput: 0: 949.1. Samples: 663082. 
+[2023-02-25 03:24:09,616][10738] Updated weights for policy 0, policy_version 600 (0.0014)
+[2023-02-25 03:24:13,900][00684] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 2469888. Throughput: 0: 945.5. Samples: 616066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:24:13,910][00684] Avg episode reward: [(0, '4.808')]
+[2023-02-25 03:24:18,901][00684] Fps is (10 sec: 2867.1, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2482176. Throughput: 0: 914.6. Samples: 620374. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:24:18,905][00684] Avg episode reward: [(0, '4.789')]
+[2023-02-25 03:24:22,153][10738] Updated weights for policy 0, policy_version 610 (0.0015)
+[2023-02-25 03:24:23,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2502656. Throughput: 0: 953.1. Samples: 626490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:24:23,903][00684] Avg episode reward: [(0, '4.893')]
+[2023-02-25 03:24:28,900][00684] Fps is (10 sec: 4505.7, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2527232. Throughput: 0: 963.1. Samples: 629988. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:28,904][00684] Avg episode reward: [(0, '4.750')]
+[2023-02-25 03:24:32,041][10738] Updated weights for policy 0, policy_version 620 (0.0022)
+[2023-02-25 03:24:33,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2543616. Throughput: 0: 932.8. Samples: 635600. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:33,910][00684] Avg episode reward: [(0, '4.579')]
+[2023-02-25 03:24:38,901][00684] Fps is (10 sec: 2867.0, 60 sec: 3754.6, 300 sec: 3623.9). Total num frames: 2555904. Throughput: 0: 914.3. Samples: 640002. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:38,903][00684] Avg episode reward: [(0, '4.739')]
+[2023-02-25 03:24:43,528][10738] Updated weights for policy 0, policy_version 630 (0.0022)
+[2023-02-25 03:24:43,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2580480. Throughput: 0: 936.4. Samples: 643166. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:43,909][00684] Avg episode reward: [(0, '4.858')]
+[2023-02-25 03:24:48,901][00684] Fps is (10 sec: 4915.4, 60 sec: 3754.6, 300 sec: 3679.5). Total num frames: 2605056. Throughput: 0: 962.8. Samples: 650112. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:24:48,907][00684] Avg episode reward: [(0, '5.058')]
+[2023-02-25 03:24:53,906][00684] Fps is (10 sec: 3684.3, 60 sec: 3754.3, 300 sec: 3665.5). Total num frames: 2617344. Throughput: 0: 923.6. Samples: 655172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:24:53,913][00684] Avg episode reward: [(0, '4.850')]
+[2023-02-25 03:24:54,206][10738] Updated weights for policy 0, policy_version 640 (0.0034)
+[2023-02-25 03:24:58,900][00684] Fps is (10 sec: 2867.3, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2633728. Throughput: 0: 915.9. Samples: 657282. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:24:58,908][00684] Avg episode reward: [(0, '4.931')]
+[2023-02-25 03:25:03,900][00684] Fps is (10 sec: 3688.5, 60 sec: 3754.7, 300 sec: 3637.8). Total num frames: 2654208. Throughput: 0: 949.1. Samples: 663082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:25:03,905][00684] Avg episode reward: [(0, '4.890')]
+[2023-02-25 03:25:04,929][10738] Updated weights for policy 0, policy_version 650 (0.0023)
+[2023-02-25 03:25:08,901][00684] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2674688. Throughput: 0: 953.6. Samples: 669404. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:25:08,903][00684] Avg episode reward: [(0, '5.026')]
+[2023-02-25 03:25:13,902][00684] Fps is (10 sec: 3276.3, 60 sec: 3618.0, 300 sec: 3651.7). Total num frames: 2686976. Throughput: 0: 917.5. Samples: 671278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:25:13,904][00684] Avg episode reward: [(0, '4.885')]
+[2023-02-25 03:25:18,902][00684] Fps is (10 sec: 2457.2, 60 sec: 3618.1, 300 sec: 3610.1). Total num frames: 2699264. Throughput: 0: 867.9. Samples: 674658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 03:25:18,906][00684] Avg episode reward: [(0, '4.974')]
+[2023-02-25 03:25:20,386][10738] Updated weights for policy 0, policy_version 660 (0.0048)
+[2023-02-25 03:25:23,900][00684] Fps is (10 sec: 2458.0, 60 sec: 3481.6, 300 sec: 3582.3). Total num frames: 2711552. Throughput: 0: 859.0. Samples: 678658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:25:23,906][00684] Avg episode reward: [(0, '4.874')]
+[2023-02-25 03:25:28,901][00684] Fps is (10 sec: 3686.9, 60 sec: 3481.6, 300 sec: 3610.0). Total num frames: 2736128. Throughput: 0: 865.4. Samples: 682108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:25:28,904][00684] Avg episode reward: [(0, '4.721')]
+[2023-02-25 03:25:30,200][10738] Updated weights for policy 0, policy_version 670 (0.0018)
+[2023-02-25 03:25:33,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3637.8). Total num frames: 2756608. Throughput: 0: 865.0. Samples: 689038. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:25:33,906][00684] Avg episode reward: [(0, '4.744')]
+[2023-02-25 03:25:38,901][00684] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2772992. Throughput: 0: 858.4. Samples: 693794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:25:38,905][00684] Avg episode reward: [(0, '4.722')]
+[2023-02-25 03:25:42,522][10738] Updated weights for policy 0, policy_version 680 (0.0026)
+[2023-02-25 03:25:43,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3651.7). Total num frames: 2789376. Throughput: 0: 860.4. Samples: 696000. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:25:43,903][00684] Avg episode reward: [(0, '4.970')]
+[2023-02-25 03:25:48,902][00684] Fps is (10 sec: 4095.5, 60 sec: 3481.5, 300 sec: 3679.4). Total num frames: 2813952. Throughput: 0: 870.2. Samples: 702244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:25:48,905][00684] Avg episode reward: [(0, '4.776')]
+[2023-02-25 03:25:51,416][10738] Updated weights for policy 0, policy_version 690 (0.0013)
+[2023-02-25 03:25:53,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3618.5, 300 sec: 3693.3). Total num frames: 2834432. Throughput: 0: 881.2. Samples: 709060. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:25:53,906][00684] Avg episode reward: [(0, '4.684')]
+[2023-02-25 03:25:58,901][00684] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3651.7). Total num frames: 2846720. Throughput: 0: 888.3. Samples: 711252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:25:58,907][00684] Avg episode reward: [(0, '4.859')]
+[2023-02-25 03:26:03,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3637.8). Total num frames: 2863104. Throughput: 0: 912.1. Samples: 715702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:26:03,908][00684] Avg episode reward: [(0, '4.802')]
+[2023-02-25 03:26:04,017][10738] Updated weights for policy 0, policy_version 700 (0.0018)
+[2023-02-25 03:26:08,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 2887680. Throughput: 0: 976.2. Samples: 722586. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:26:08,903][00684] Avg episode reward: [(0, '4.867')]
+[2023-02-25 03:26:08,911][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000705_2887680.pth...
+[2023-02-25 03:26:09,037][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000492_2015232.pth
+[2023-02-25 03:26:12,555][10738] Updated weights for policy 0, policy_version 710 (0.0018)
+[2023-02-25 03:26:13,900][00684] Fps is (10 sec: 4915.2, 60 sec: 3754.8, 300 sec: 3693.3). Total num frames: 2912256. Throughput: 0: 976.3. Samples: 726040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:26:13,903][00684] Avg episode reward: [(0, '5.064')]
+[2023-02-25 03:26:18,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3665.6). Total num frames: 2924544. Throughput: 0: 938.8. Samples: 731284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:26:18,908][00684] Avg episode reward: [(0, '4.940')]
+[2023-02-25 03:26:23,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3651.7). Total num frames: 2940928. Throughput: 0: 937.1. Samples: 735964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:26:23,902][00684] Avg episode reward: [(0, '4.950')]
+[2023-02-25 03:26:24,785][10738] Updated weights for policy 0, policy_version 720 (0.0019)
+[2023-02-25 03:26:28,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3693.4). Total num frames: 2965504. Throughput: 0: 969.2. Samples: 739614. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:26:28,903][00684] Avg episode reward: [(0, '5.028')]
+[2023-02-25 03:26:33,741][10738] Updated weights for policy 0, policy_version 730 (0.0026)
+[2023-02-25 03:26:33,900][00684] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2990080. Throughput: 0: 990.4. Samples: 746812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:26:33,906][00684] Avg episode reward: [(0, '5.051')]
+[2023-02-25 03:26:38,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3679.5). Total num frames: 3002368. Throughput: 0: 942.5. Samples: 751474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:26:38,902][00684] Avg episode reward: [(0, '5.111')]
+[2023-02-25 03:26:38,967][10724] Saving new best policy, reward=5.111!
+[2023-02-25 03:26:43,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 3018752. Throughput: 0: 942.2. Samples: 753650. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
+[2023-02-25 03:26:43,907][00684] Avg episode reward: [(0, '5.387')]
+[2023-02-25 03:26:43,956][10724] Saving new best policy, reward=5.387!
+[2023-02-25 03:26:45,798][10738] Updated weights for policy 0, policy_version 740 (0.0013)
+[2023-02-25 03:26:48,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3693.4). Total num frames: 3043328. Throughput: 0: 988.1. Samples: 760168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:26:48,908][00684] Avg episode reward: [(0, '5.705')]
+[2023-02-25 03:26:48,922][10724] Saving new best policy, reward=5.705!
+[2023-02-25 03:26:53,908][00684] Fps is (10 sec: 4502.2, 60 sec: 3822.5, 300 sec: 3707.1). Total num frames: 3063808. Throughput: 0: 984.2. Samples: 766882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:26:53,915][00684] Avg episode reward: [(0, '5.614')]
+[2023-02-25 03:26:55,606][10738] Updated weights for policy 0, policy_version 750 (0.0025)
+[2023-02-25 03:26:58,903][00684] Fps is (10 sec: 3685.5, 60 sec: 3891.0, 300 sec: 3679.4). Total num frames: 3080192. Throughput: 0: 956.9. Samples: 769104. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:26:58,907][00684] Avg episode reward: [(0, '5.734')]
+[2023-02-25 03:26:58,927][10724] Saving new best policy, reward=5.734!
+[2023-02-25 03:27:03,900][00684] Fps is (10 sec: 3279.3, 60 sec: 3891.2, 300 sec: 3665.6). Total num frames: 3096576. Throughput: 0: 939.5. Samples: 773560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:27:03,903][00684] Avg episode reward: [(0, '5.876')]
+[2023-02-25 03:27:03,906][10724] Saving new best policy, reward=5.876!
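
The "Saving new best policy" lines fire whenever the reported average episode reward beats the best value seen so far, which is why they come in an increasing sequence here (5.111, 5.387, 5.705, 5.734, 5.876). A minimal sketch of that bookkeeping, with save_policy a hypothetical stand-in for the actual checkpoint write:

    best_reward = float("-inf")

    def maybe_save_best(avg_episode_reward):
        global best_reward
        if avg_episode_reward > best_reward:
            best_reward = avg_episode_reward
            print(f"Saving new best policy, reward={avg_episode_reward:.3f}!")
            # save_policy(...)  # hypothetical persistence call

    for r in (5.111, 5.387, 5.705, 5.734, 5.876):
        maybe_save_best(r)  # each call improves on the last, so each one saves
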
+[2023-02-25 03:27:06,748][10738] Updated weights for policy 0, policy_version 760 (0.0023)
+[2023-02-25 03:27:08,900][00684] Fps is (10 sec: 4097.0, 60 sec: 3891.2, 300 sec: 3707.3). Total num frames: 3121152. Throughput: 0: 989.6. Samples: 780496. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:27:08,906][00684] Avg episode reward: [(0, '5.371')]
+[2023-02-25 03:27:13,901][00684] Fps is (10 sec: 4505.5, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 3141632. Throughput: 0: 987.1. Samples: 784034. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:27:13,905][00684] Avg episode reward: [(0, '5.193')]
+[2023-02-25 03:27:17,170][10738] Updated weights for policy 0, policy_version 770 (0.0016)
+[2023-02-25 03:27:18,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3693.3). Total num frames: 3158016. Throughput: 0: 936.9. Samples: 788972. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:27:18,908][00684] Avg episode reward: [(0, '5.310')]
+[2023-02-25 03:27:23,900][00684] Fps is (10 sec: 2867.3, 60 sec: 3822.9, 300 sec: 3665.6). Total num frames: 3170304. Throughput: 0: 930.8. Samples: 793358. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:27:23,903][00684] Avg episode reward: [(0, '5.416')]
+[2023-02-25 03:27:28,905][00684] Fps is (10 sec: 2865.9, 60 sec: 3686.1, 300 sec: 3679.4). Total num frames: 3186688. Throughput: 0: 933.8. Samples: 795674. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:27:28,909][00684] Avg episode reward: [(0, '5.229')]
+[2023-02-25 03:27:31,060][10738] Updated weights for policy 0, policy_version 780 (0.0024)
+[2023-02-25 03:27:33,906][00684] Fps is (10 sec: 3274.9, 60 sec: 3549.5, 300 sec: 3665.5). Total num frames: 3203072. Throughput: 0: 896.8. Samples: 800528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:27:33,909][00684] Avg episode reward: [(0, '5.247')]
+[2023-02-25 03:27:38,900][00684] Fps is (10 sec: 3278.3, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3219456. Throughput: 0: 850.6. Samples: 805152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:27:38,903][00684] Avg episode reward: [(0, '5.219')]
+[2023-02-25 03:27:43,373][10738] Updated weights for policy 0, policy_version 790 (0.0033)
+[2023-02-25 03:27:43,900][00684] Fps is (10 sec: 3278.7, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3235840. Throughput: 0: 851.7. Samples: 807428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:27:43,909][00684] Avg episode reward: [(0, '5.387')]
+[2023-02-25 03:27:48,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 3260416. Throughput: 0: 902.1. Samples: 814154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:27:48,903][00684] Avg episode reward: [(0, '5.213')]
+[2023-02-25 03:27:51,955][10738] Updated weights for policy 0, policy_version 800 (0.0012)
+[2023-02-25 03:27:53,901][00684] Fps is (10 sec: 4505.4, 60 sec: 3618.6, 300 sec: 3707.2). Total num frames: 3280896. Throughput: 0: 897.9. Samples: 820904. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:27:53,906][00684] Avg episode reward: [(0, '5.020')]
+[2023-02-25 03:27:58,902][00684] Fps is (10 sec: 3685.7, 60 sec: 3618.2, 300 sec: 3707.2). Total num frames: 3297280. Throughput: 0: 867.1. Samples: 823056. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-25 03:27:58,906][00684] Avg episode reward: [(0, '5.024')]
+[2023-02-25 03:28:03,900][00684] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 3313664. Throughput: 0: 860.9. Samples: 827712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:03,903][00684] Avg episode reward: [(0, '4.766')]
+[2023-02-25 03:28:04,253][10738] Updated weights for policy 0, policy_version 810 (0.0020)
+[2023-02-25 03:28:08,900][00684] Fps is (10 sec: 4096.8, 60 sec: 3618.1, 300 sec: 3721.1). Total num frames: 3338240. Throughput: 0: 920.2. Samples: 834766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:08,905][00684] Avg episode reward: [(0, '4.915')]
+[2023-02-25 03:28:08,927][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000815_3338240.pth...
+[2023-02-25 03:28:09,072][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000599_2453504.pth
+[2023-02-25 03:28:13,620][10738] Updated weights for policy 0, policy_version 820 (0.0015)
+[2023-02-25 03:28:13,902][00684] Fps is (10 sec: 4504.7, 60 sec: 3618.0, 300 sec: 3735.0). Total num frames: 3358720. Throughput: 0: 945.2. Samples: 838204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:13,905][00684] Avg episode reward: [(0, '5.100')]
+[2023-02-25 03:28:18,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3707.2). Total num frames: 3371008. Throughput: 0: 942.0. Samples: 842914. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:18,909][00684] Avg episode reward: [(0, '4.992')]
+[2023-02-25 03:28:23,902][00684] Fps is (10 sec: 3277.0, 60 sec: 3686.3, 300 sec: 3693.3). Total num frames: 3391488. Throughput: 0: 952.3. Samples: 848006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:23,904][00684] Avg episode reward: [(0, '5.062')]
+[2023-02-25 03:28:25,438][10738] Updated weights for policy 0, policy_version 830 (0.0021)
+[2023-02-25 03:28:28,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3754.9, 300 sec: 3707.2). Total num frames: 3411968. Throughput: 0: 978.0. Samples: 851436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:28,903][00684] Avg episode reward: [(0, '4.711')]
+[2023-02-25 03:28:33,900][00684] Fps is (10 sec: 4096.5, 60 sec: 3823.3, 300 sec: 3735.0). Total num frames: 3432448. Throughput: 0: 982.9. Samples: 858384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:28:33,908][00684] Avg episode reward: [(0, '4.744')]
+[2023-02-25 03:28:35,311][10738] Updated weights for policy 0, policy_version 840 (0.0025)
+[2023-02-25 03:28:38,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3448832. Throughput: 0: 932.1. Samples: 862848. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:38,907][00684] Avg episode reward: [(0, '4.620')]
+[2023-02-25 03:28:43,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3693.3). Total num frames: 3469312. Throughput: 0: 934.6. Samples: 865112. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:28:43,909][00684] Avg episode reward: [(0, '4.645')]
+[2023-02-25 03:28:46,374][10738] Updated weights for policy 0, policy_version 850 (0.0038)
+[2023-02-25 03:28:48,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 3489792. Throughput: 0: 984.6. Samples: 872020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:28:48,905][00684] Avg episode reward: [(0, '4.665')]
+[2023-02-25 03:28:53,901][00684] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3735.0). Total num frames: 3510272. Throughput: 0: 969.4. Samples: 878390. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:28:53,903][00684] Avg episode reward: [(0, '4.650')]
+[2023-02-25 03:28:56,887][10738] Updated weights for policy 0, policy_version 860 (0.0014)
+[2023-02-25 03:28:58,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3721.1). Total num frames: 3526656. Throughput: 0: 942.5. Samples: 880614. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-25 03:28:58,906][00684] Avg episode reward: [(0, '4.650')]
+[2023-02-25 03:29:03,901][00684] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 3547136. Throughput: 0: 944.8. Samples: 885432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:29:03,906][00684] Avg episode reward: [(0, '4.670')]
+[2023-02-25 03:29:07,375][10738] Updated weights for policy 0, policy_version 870 (0.0015)
+[2023-02-25 03:29:08,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 3567616. Throughput: 0: 987.6. Samples: 892448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:29:08,902][00684] Avg episode reward: [(0, '4.474')]
+[2023-02-25 03:29:13,900][00684] Fps is (10 sec: 4096.1, 60 sec: 3823.1, 300 sec: 3748.9). Total num frames: 3588096. Throughput: 0: 986.3. Samples: 895820. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:13,903][00684] Avg episode reward: [(0, '4.474')]
+[2023-02-25 03:29:18,905][00684] Fps is (10 sec: 3275.3, 60 sec: 3822.6, 300 sec: 3721.1). Total num frames: 3600384. Throughput: 0: 934.2. Samples: 900426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:18,909][00684] Avg episode reward: [(0, '4.525')]
+[2023-02-25 03:29:19,014][10738] Updated weights for policy 0, policy_version 880 (0.0017)
+[2023-02-25 03:29:23,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3754.8, 300 sec: 3693.3). Total num frames: 3616768. Throughput: 0: 929.4. Samples: 904670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:23,908][00684] Avg episode reward: [(0, '4.417')]
+[2023-02-25 03:29:28,900][00684] Fps is (10 sec: 3278.3, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 3633152. Throughput: 0: 921.1. Samples: 906562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-25 03:29:28,906][00684] Avg episode reward: [(0, '4.414')]
+[2023-02-25 03:29:30,966][10738] Updated weights for policy 0, policy_version 890 (0.0025)
+[2023-02-25 03:29:33,902][00684] Fps is (10 sec: 3685.9, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 3653632. Throughput: 0: 913.8. Samples: 913144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:29:33,904][00684] Avg episode reward: [(0, '4.600')]
+[2023-02-25 03:29:38,903][00684] Fps is (10 sec: 3275.9, 60 sec: 3618.0, 300 sec: 3679.4). Total num frames: 3665920. Throughput: 0: 865.1. Samples: 917324. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-25 03:29:38,907][00684] Avg episode reward: [(0, '4.577')]
+[2023-02-25 03:29:43,901][00684] Fps is (10 sec: 2457.7, 60 sec: 3481.6, 300 sec: 3637.8). Total num frames: 3678208. Throughput: 0: 853.5. Samples: 919022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:43,909][00684] Avg episode reward: [(0, '4.673')]
+[2023-02-25 03:29:46,093][10738] Updated weights for policy 0, policy_version 900 (0.0015)
+[2023-02-25 03:29:48,900][00684] Fps is (10 sec: 2868.0, 60 sec: 3413.3, 300 sec: 3651.8). Total num frames: 3694592. Throughput: 0: 836.6. Samples: 923078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:48,906][00684] Avg episode reward: [(0, '4.643')]
+[2023-02-25 03:29:53,900][00684] Fps is (10 sec: 3686.7, 60 sec: 3413.3, 300 sec: 3665.6). Total num frames: 3715072. Throughput: 0: 824.7. Samples: 929560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:29:53,903][00684] Avg episode reward: [(0, '4.650')]
+[2023-02-25 03:29:56,373][10738] Updated weights for policy 0, policy_version 910 (0.0015)
+[2023-02-25 03:29:58,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3651.7). Total num frames: 3731456. Throughput: 0: 811.2. Samples: 932326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:29:58,902][00684] Avg episode reward: [(0, '4.647')]
+[2023-02-25 03:30:03,900][00684] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3637.8). Total num frames: 3747840. Throughput: 0: 807.1. Samples: 936744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:30:03,906][00684] Avg episode reward: [(0, '4.710')]
+[2023-02-25 03:30:08,038][10738] Updated weights for policy 0, policy_version 920 (0.0028)
+[2023-02-25 03:30:08,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 3679.5). Total num frames: 3772416. Throughput: 0: 851.2. Samples: 942974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:30:08,907][00684] Avg episode reward: [(0, '4.653')]
+[2023-02-25 03:30:08,919][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000921_3772416.pth...
+[2023-02-25 03:30:09,055][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000705_2887680.pth
+[2023-02-25 03:30:13,900][00684] Fps is (10 sec: 4505.6, 60 sec: 3413.3, 300 sec: 3707.2). Total num frames: 3792896. Throughput: 0: 886.4. Samples: 946452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
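
"Policy #0 lag" summarizes how many policy versions behind the learner the rollout data in each report was: in this asynchronous setup the learner keeps publishing new weights (policy_version 920 just above) while workers are still finishing trajectories collected with slightly older ones. A minimal sketch of the statistic under that reading:

    def lag_stats(learner_version, trajectory_versions):
        # Lag of a trajectory = the learner's current policy version
        # minus the version that collected it.
        lags = [learner_version - v for v in trajectory_versions]
        return min(lags), sum(lags) / len(lags), max(lags)

    print(lag_stats(920, [920, 919, 920, 918]))  # (0, 0.75, 2), i.e. "(min: 0.0, avg: 0.8, max: 2.0)" after rounding
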
+[2023-02-25 03:30:13,903][00684] Avg episode reward: [(0, '4.579')]
+[2023-02-25 03:30:18,306][10738] Updated weights for policy 0, policy_version 930 (0.0018)
+[2023-02-25 03:30:18,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3481.9, 300 sec: 3721.1). Total num frames: 3809280. Throughput: 0: 867.5. Samples: 952180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:30:18,906][00684] Avg episode reward: [(0, '4.557')]
+[2023-02-25 03:30:23,901][00684] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3693.3). Total num frames: 3825664. Throughput: 0: 872.4. Samples: 956578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:30:23,908][00684] Avg episode reward: [(0, '4.610')]
+[2023-02-25 03:30:28,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 3846144. Throughput: 0: 900.6. Samples: 959550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-25 03:30:28,903][00684] Avg episode reward: [(0, '4.774')]
+[2023-02-25 03:30:29,459][10738] Updated weights for policy 0, policy_version 940 (0.0022)
+[2023-02-25 03:30:33,900][00684] Fps is (10 sec: 4505.7, 60 sec: 3618.2, 300 sec: 3721.1). Total num frames: 3870720. Throughput: 0: 969.0. Samples: 966684. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:30:33,902][00684] Avg episode reward: [(0, '4.831')]
+[2023-02-25 03:30:38,900][00684] Fps is (10 sec: 4096.0, 60 sec: 3686.6, 300 sec: 3721.1). Total num frames: 3887104. Throughput: 0: 946.6. Samples: 972158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:30:38,909][00684] Avg episode reward: [(0, '4.905')]
+[2023-02-25 03:30:39,917][10738] Updated weights for policy 0, policy_version 950 (0.0017)
+[2023-02-25 03:30:43,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 3899392. Throughput: 0: 933.9. Samples: 974352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-25 03:30:43,908][00684] Avg episode reward: [(0, '5.266')]
+[2023-02-25 03:30:48,900][00684] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 3923968. Throughput: 0: 967.3. Samples: 980274. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:30:48,910][00684] Avg episode reward: [(0, '5.269')]
+[2023-02-25 03:30:50,332][10738] Updated weights for policy 0, policy_version 960 (0.0021)
+[2023-02-25 03:30:53,900][00684] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3735.0). Total num frames: 3948544. Throughput: 0: 982.6. Samples: 987192. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-25 03:30:53,907][00684] Avg episode reward: [(0, '4.902')]
+[2023-02-25 03:30:58,902][00684] Fps is (10 sec: 3685.9, 60 sec: 3822.8, 300 sec: 3721.1). Total num frames: 3960832. Throughput: 0: 964.7. Samples: 989864. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-25 03:30:58,904][00684] Avg episode reward: [(0, '4.536')]
+[2023-02-25 03:31:01,694][10738] Updated weights for policy 0, policy_version 970 (0.0024)
+[2023-02-25 03:31:03,900][00684] Fps is (10 sec: 2867.2, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 3977216. Throughput: 0: 935.5. Samples: 994278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:31:03,910][00684] Avg episode reward: [(0, '4.528')]
+[2023-02-25 03:31:08,900][00684] Fps is (10 sec: 4096.6, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 4001792. Throughput: 0: 979.6. Samples: 1000658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-25 03:31:08,909][00684] Avg episode reward: [(0, '4.740')]
+[2023-02-25 03:31:09,481][10724] Stopping Batcher_0...
+[2023-02-25 03:31:09,481][10724] Loop batcher_evt_loop terminating...
+[2023-02-25 03:31:09,484][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 03:31:09,482][00684] Component Batcher_0 stopped!
+[2023-02-25 03:31:09,539][00684] Component RolloutWorker_w6 stopped!
+[2023-02-25 03:31:09,545][10748] Stopping RolloutWorker_w6...
+[2023-02-25 03:31:09,549][10745] Stopping RolloutWorker_w2...
+[2023-02-25 03:31:09,550][00684] Component RolloutWorker_w2 stopped!
+[2023-02-25 03:31:09,556][10748] Loop rollout_proc6_evt_loop terminating...
+[2023-02-25 03:31:09,556][10745] Loop rollout_proc2_evt_loop terminating...
+[2023-02-25 03:31:09,558][00684] Component RolloutWorker_w0 stopped!
+[2023-02-25 03:31:09,561][00684] Component RolloutWorker_w4 stopped!
+[2023-02-25 03:31:09,559][10746] Stopping RolloutWorker_w4...
+[2023-02-25 03:31:09,572][10738] Weights refcount: 2 0
+[2023-02-25 03:31:09,559][10739] Stopping RolloutWorker_w0...
+[2023-02-25 03:31:09,578][10739] Loop rollout_proc0_evt_loop terminating...
+[2023-02-25 03:31:09,566][10746] Loop rollout_proc4_evt_loop terminating...
+[2023-02-25 03:31:09,579][10738] Stopping InferenceWorker_p0-w0...
+[2023-02-25 03:31:09,579][10738] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-25 03:31:09,578][00684] Component InferenceWorker_p0-w0 stopped!
+[2023-02-25 03:31:09,581][00684] Component RolloutWorker_w7 stopped!
+[2023-02-25 03:31:09,586][00684] Component RolloutWorker_w1 stopped!
+[2023-02-25 03:31:09,587][10740] Stopping RolloutWorker_w1...
+[2023-02-25 03:31:09,596][10749] Stopping RolloutWorker_w5...
+[2023-02-25 03:31:09,593][00684] Component RolloutWorker_w5 stopped!
+[2023-02-25 03:31:09,578][10750] Stopping RolloutWorker_w7...
+[2023-02-25 03:31:09,604][00684] Component RolloutWorker_w3 stopped!
+[2023-02-25 03:31:09,604][10747] Stopping RolloutWorker_w3...
+[2023-02-25 03:31:09,599][10740] Loop rollout_proc1_evt_loop terminating...
+[2023-02-25 03:31:09,601][10750] Loop rollout_proc7_evt_loop terminating...
+[2023-02-25 03:31:09,607][10747] Loop rollout_proc3_evt_loop terminating...
+[2023-02-25 03:31:09,609][10749] Loop rollout_proc5_evt_loop terminating...
+[2023-02-25 03:31:09,672][10724] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000815_3338240.pth
+[2023-02-25 03:31:09,685][10724] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 03:31:09,840][00684] Component LearnerWorker_p0 stopped!
+[2023-02-25 03:31:09,847][00684] Waiting for process learner_proc0 to stop...
+[2023-02-25 03:31:09,850][10724] Stopping LearnerWorker_p0...
+[2023-02-25 03:31:09,856][10724] Loop learner_proc0_evt_loop terminating...
+[2023-02-25 03:31:11,709][00684] Waiting for process inference_proc0-0 to join...
+[2023-02-25 03:31:12,105][00684] Waiting for process rollout_proc0 to join...
+[2023-02-25 03:31:12,447][00684] Waiting for process rollout_proc1 to join...
+[2023-02-25 03:31:12,449][00684] Waiting for process rollout_proc2 to join...
+[2023-02-25 03:31:12,450][00684] Waiting for process rollout_proc3 to join...
+[2023-02-25 03:31:12,454][00684] Waiting for process rollout_proc4 to join...
+[2023-02-25 03:31:12,459][00684] Waiting for process rollout_proc5 to join...
+[2023-02-25 03:31:12,461][00684] Waiting for process rollout_proc6 to join...
+[2023-02-25 03:31:12,462][00684] Waiting for process rollout_proc7 to join...
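
Shutdown runs in dependency order: the runner signals every component to stop, each event loop terminates, the learner writes a final checkpoint (policy_version 978 at 4,005,888 frames), and only then are the child processes joined. An illustrative stop-then-join pattern in bare multiprocessing terms (not Sample Factory's actual event-loop code):

    import multiprocessing as mp
    import time

    def worker(stop_event):
        # Stand-in for a rollout/inference event loop: run until told to stop.
        while not stop_event.is_set():
            time.sleep(0.01)

    if __name__ == "__main__":
        stop = mp.Event()
        procs = [mp.Process(target=worker, args=(stop,)) for _ in range(8)]
        for p in procs:
            p.start()
        stop.set()        # "Stopping RolloutWorker_w*..."
        for p in procs:
            p.join()      # "Waiting for process rollout_proc* to join..."
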
+[2023-02-25 03:31:12,463][00684] Batcher 0 profile tree view:
+batching: 26.1172, releasing_batches: 0.0235
+[2023-02-25 03:31:12,464][00684] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+  wait_policy_total: 543.8694
+update_model: 8.3784
+  weight_update: 0.0028
+one_step: 0.0046
+  handle_policy_step: 512.2967
+    deserialize: 14.4449, stack: 3.0640, obs_to_device_normalize: 114.9805, forward: 245.1865, send_messages: 26.5766
+    prepare_outputs: 82.8011
+      to_cpu: 51.6852
+[2023-02-25 03:31:12,465][00684] Learner 0 profile tree view:
+misc: 0.0055, prepare_batch: 16.5850
+train: 76.8693
+  epoch_init: 0.0057, minibatch_init: 0.0060, losses_postprocess: 0.5953, kl_divergence: 0.5442, after_optimizer: 33.0519
+  calculate_losses: 27.7165
+    losses_init: 0.0050, forward_head: 1.8492, bptt_initial: 18.2454, tail: 0.9933, advantages_returns: 0.2640, losses: 3.6414
+    bptt: 2.4031
+      bptt_forward_core: 2.2952
+  update: 14.3705
+    clip: 1.3783
+[2023-02-25 03:31:12,467][00684] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3985, enqueue_policy_requests: 148.0838, env_step: 831.2662, overhead: 20.1279, complete_rollouts: 6.9592
+save_policy_outputs: 20.1943
+  split_output_tensors: 9.6090
+[2023-02-25 03:31:12,470][00684] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.2843, enqueue_policy_requests: 145.8879, env_step: 831.6847, overhead: 20.7200, complete_rollouts: 6.9964
+save_policy_outputs: 19.6809
+  split_output_tensors: 9.4047
+[2023-02-25 03:31:12,472][00684] Loop Runner_EvtLoop terminating...
+[2023-02-25 03:31:12,475][00684] Runner profile tree view:
+main_loop: 1135.5441
+[2023-02-25 03:31:12,476][00684] Collected {0: 4005888}, FPS: 3527.7
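
The closing figure is self-consistent with the runner profile: 4,005,888 collected frames over the 1135.5441 s main loop reproduces the reported average throughput.

    print(4005888 / 1135.5441)  # ~3527.7, matching "Collected {0: 4005888}, FPS: 3527.7"
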
+[2023-02-25 03:31:12,593][00684] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-25 03:31:12,595][00684] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-25 03:31:12,597][00684] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-25 03:31:12,599][00684] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-25 03:31:12,601][00684] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 03:31:12,603][00684] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-25 03:31:12,605][00684] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 03:31:12,607][00684] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-25 03:31:12,608][00684] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-25 03:31:12,609][00684] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-25 03:31:12,610][00684] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-25 03:31:12,612][00684] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-25 03:31:12,614][00684] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-25 03:31:12,616][00684] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-25 03:31:12,617][00684] Using frameskip 1 and render_action_repeat=4 for evaluation
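
The block above layers an evaluation ("enjoy") run on top of the saved training config: num_workers is forced to 1 and the video/episode limits are injected as new arguments. A rough sketch of an equivalent Sample Factory 2.x invocation; the registration import path and the env name are assumptions inferred from the sf_examples tree and the health-gathering scenario, not confirmed by this log:

    from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
    from sample_factory.enjoy import enjoy
    from sf_examples.vizdoom.train_vizdoom import register_vizdoom_components  # assumed path

    register_vizdoom_components()
    argv = [
        "--env=doom_health_gathering_supreme",  # assumed env name
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_episodes=10",
    ]
    parser, _ = parse_sf_args(argv=argv, evaluation=True)
    cfg = parse_full_cfg(parser, argv=argv)
    enjoy(cfg)  # loads the newest checkpoint and rolls out the episodes below
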
+[2023-02-25 03:31:12,642][00684] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-25 03:31:12,646][00684] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 03:31:12,649][00684] RunningMeanStd input shape: (1,)
+[2023-02-25 03:31:12,668][00684] ConvEncoder: input_channels=3
+[2023-02-25 03:31:13,334][00684] Conv encoder output size: 512
+[2023-02-25 03:31:13,337][00684] Policy head output size: 512
+[2023-02-25 03:31:16,340][00684] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 03:31:17,967][00684] Num frames 100...
+[2023-02-25 03:31:18,089][00684] Num frames 200...
+[2023-02-25 03:31:18,215][00684] Num frames 300...
+[2023-02-25 03:31:18,374][00684] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-25 03:31:18,376][00684] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-25 03:31:18,399][00684] Num frames 400...
+[2023-02-25 03:31:18,519][00684] Num frames 500...
+[2023-02-25 03:31:18,637][00684] Num frames 600...
+[2023-02-25 03:31:18,761][00684] Num frames 700...
+[2023-02-25 03:31:18,882][00684] Num frames 800...
+[2023-02-25 03:31:18,975][00684] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
+[2023-02-25 03:31:18,977][00684] Avg episode reward: 4.660, avg true_objective: 4.160
+[2023-02-25 03:31:19,060][00684] Num frames 900...
+[2023-02-25 03:31:19,184][00684] Num frames 1000...
+[2023-02-25 03:31:19,302][00684] Num frames 1100...
+[2023-02-25 03:31:19,414][00684] Num frames 1200...
+[2023-02-25 03:31:19,494][00684] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
+[2023-02-25 03:31:19,496][00684] Avg episode reward: 4.387, avg true_objective: 4.053
+[2023-02-25 03:31:19,593][00684] Num frames 1300...
+[2023-02-25 03:31:19,720][00684] Num frames 1400...
+[2023-02-25 03:31:19,846][00684] Num frames 1500...
+[2023-02-25 03:31:19,966][00684] Num frames 1600...
+[2023-02-25 03:31:20,059][00684] Avg episode rewards: #0: 4.580, true rewards: #0: 4.080
+[2023-02-25 03:31:20,060][00684] Avg episode reward: 4.580, avg true_objective: 4.080
+[2023-02-25 03:31:20,146][00684] Num frames 1700...
+[2023-02-25 03:31:20,269][00684] Num frames 1800...
+[2023-02-25 03:31:20,391][00684] Num frames 1900...
+[2023-02-25 03:31:20,510][00684] Num frames 2000...
+[2023-02-25 03:31:20,676][00684] Avg episode rewards: #0: 4.760, true rewards: #0: 4.160
+[2023-02-25 03:31:20,678][00684] Avg episode reward: 4.760, avg true_objective: 4.160
+[2023-02-25 03:31:20,704][00684] Num frames 2100...
+[2023-02-25 03:31:20,827][00684] Num frames 2200...
+[2023-02-25 03:31:20,954][00684] Num frames 2300...
+[2023-02-25 03:31:21,078][00684] Num frames 2400...
+[2023-02-25 03:31:21,208][00684] Avg episode rewards: #0: 4.607, true rewards: #0: 4.107
+[2023-02-25 03:31:21,209][00684] Avg episode reward: 4.607, avg true_objective: 4.107
+[2023-02-25 03:31:21,253][00684] Num frames 2500...
+[2023-02-25 03:31:21,375][00684] Num frames 2600...
+[2023-02-25 03:31:21,492][00684] Num frames 2700...
+[2023-02-25 03:31:21,610][00684] Num frames 2800...
+[2023-02-25 03:31:21,724][00684] Avg episode rewards: #0: 4.497, true rewards: #0: 4.069
+[2023-02-25 03:31:21,727][00684] Avg episode reward: 4.497, avg true_objective: 4.069
+[2023-02-25 03:31:21,800][00684] Num frames 2900...
+[2023-02-25 03:31:21,913][00684] Num frames 3000...
+[2023-02-25 03:31:22,032][00684] Num frames 3100...
+[2023-02-25 03:31:22,153][00684] Num frames 3200...
+[2023-02-25 03:31:22,245][00684] Avg episode rewards: #0: 4.415, true rewards: #0: 4.040
+[2023-02-25 03:31:22,246][00684] Avg episode reward: 4.415, avg true_objective: 4.040
+[2023-02-25 03:31:22,335][00684] Num frames 3300...
+[2023-02-25 03:31:22,449][00684] Num frames 3400...
+[2023-02-25 03:31:22,570][00684] Num frames 3500...
+[2023-02-25 03:31:22,688][00684] Num frames 3600...
+[2023-02-25 03:31:22,798][00684] Avg episode rewards: #0: 4.609, true rewards: #0: 4.053
+[2023-02-25 03:31:22,800][00684] Avg episode reward: 4.609, avg true_objective: 4.053
+[2023-02-25 03:31:22,867][00684] Num frames 3700...
+[2023-02-25 03:31:22,987][00684] Num frames 3800...
+[2023-02-25 03:31:23,113][00684] Num frames 3900...
+[2023-02-25 03:31:23,235][00684] Num frames 4000...
+[2023-02-25 03:31:23,412][00684] Avg episode rewards: #0: 4.696, true rewards: #0: 4.096
+[2023-02-25 03:31:23,414][00684] Avg episode reward: 4.696, avg true_objective: 4.096
+[2023-02-25 03:31:44,419][00684] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
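
The "Avg episode rewards" lines are running means over the episodes completed so far, which is why they drift as the ten episodes finish: episode 1 scored 3.840, and for the mean to rise to 4.660 after episode 2, that episode must have scored about 5.480. A one-line check of that reading:

    print((3.840 + 5.480) / 2)  # 4.66, matching the second "Avg episode rewards" report
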
+[2023-02-25 03:33:48,199][00684] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-25 03:33:48,202][00684] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-25 03:33:48,204][00684] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-25 03:33:48,207][00684] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-25 03:33:48,213][00684] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-25 03:33:48,214][00684] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-25 03:33:48,215][00684] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-25 03:33:48,217][00684] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-25 03:33:48,218][00684] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-25 03:33:48,219][00684] Adding new argument 'hf_repository'='menoua/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-25 03:33:48,220][00684] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-25 03:33:48,222][00684] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-25 03:33:48,223][00684] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-25 03:33:48,224][00684] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-25 03:33:48,226][00684] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-25 03:33:48,255][00684] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-25 03:33:48,258][00684] RunningMeanStd input shape: (1,)
+[2023-02-25 03:33:48,278][00684] ConvEncoder: input_channels=3
+[2023-02-25 03:33:48,341][00684] Conv encoder output size: 512
+[2023-02-25 03:33:48,345][00684] Policy head output size: 512
+[2023-02-25 03:33:48,380][00684] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-25 03:33:49,045][00684] Num frames 100...
+[2023-02-25 03:33:49,213][00684] Num frames 200...
+[2023-02-25 03:33:49,379][00684] Num frames 300...
+[2023-02-25 03:33:49,548][00684] Num frames 400...
+[2023-02-25 03:33:49,634][00684] Avg episode rewards: #0: 4.160, true rewards: #0: 4.160
+[2023-02-25 03:33:49,637][00684] Avg episode reward: 4.160, avg true_objective: 4.160
+[2023-02-25 03:33:49,776][00684] Num frames 500...
+[2023-02-25 03:33:49,950][00684] Num frames 600...
+[2023-02-25 03:33:50,128][00684] Num frames 700...
+[2023-02-25 03:33:50,308][00684] Num frames 800...
+[2023-02-25 03:33:50,363][00684] Avg episode rewards: #0: 4.000, true rewards: #0: 4.000
+[2023-02-25 03:33:50,365][00684] Avg episode reward: 4.000, avg true_objective: 4.000
+[2023-02-25 03:33:50,530][00684] Num frames 900...
+[2023-02-25 03:33:50,700][00684] Num frames 1000...
+[2023-02-25 03:33:50,837][00684] Num frames 1100...
+[2023-02-25 03:33:50,950][00684] Num frames 1200...
+[2023-02-25 03:33:51,059][00684] Avg episode rewards: #0: 4.493, true rewards: #0: 4.160
+[2023-02-25 03:33:51,063][00684] Avg episode reward: 4.493, avg true_objective: 4.160
+[2023-02-25 03:33:51,124][00684] Num frames 1300...
+[2023-02-25 03:33:51,251][00684] Num frames 1400...
+[2023-02-25 03:33:51,374][00684] Num frames 1500...
+[2023-02-25 03:33:51,491][00684] Num frames 1600...
+[2023-02-25 03:33:51,582][00684] Avg episode rewards: #0: 4.330, true rewards: #0: 4.080
+[2023-02-25 03:33:51,584][00684] Avg episode reward: 4.330, avg true_objective: 4.080
+[2023-02-25 03:33:51,661][00684] Num frames 1700...
+[2023-02-25 03:33:51,770][00684] Num frames 1800...
+[2023-02-25 03:33:51,882][00684] Num frames 1900...
+[2023-02-25 03:33:51,999][00684] Num frames 2000...
+[2023-02-25 03:33:52,079][00684] Avg episode rewards: #0: 4.232, true rewards: #0: 4.032
+[2023-02-25 03:33:52,081][00684] Avg episode reward: 4.232, avg true_objective: 4.032
+[2023-02-25 03:33:52,192][00684] Num frames 2100...
+[2023-02-25 03:33:52,315][00684] Num frames 2200...
+[2023-02-25 03:33:52,438][00684] Num frames 2300...
+[2023-02-25 03:33:52,554][00684] Num frames 2400...
+[2023-02-25 03:33:52,606][00684] Avg episode rewards: #0: 4.167, true rewards: #0: 4.000
+[2023-02-25 03:33:52,608][00684] Avg episode reward: 4.167, avg true_objective: 4.000
+[2023-02-25 03:33:52,728][00684] Num frames 2500...
+[2023-02-25 03:33:52,845][00684] Num frames 2600...
+[2023-02-25 03:33:52,969][00684] Num frames 2700...
+[2023-02-25 03:33:53,082][00684] Num frames 2800...
+[2023-02-25 03:33:53,156][00684] Avg episode rewards: #0: 4.309, true rewards: #0: 4.023
+[2023-02-25 03:33:53,158][00684] Avg episode reward: 4.309, avg true_objective: 4.023
+[2023-02-25 03:33:53,261][00684] Num frames 2900...
+[2023-02-25 03:33:53,385][00684] Num frames 3000...
+[2023-02-25 03:33:53,503][00684] Num frames 3100...
+[2023-02-25 03:33:53,636][00684] Num frames 3200...
+[2023-02-25 03:33:53,688][00684] Avg episode rewards: #0: 4.250, true rewards: #0: 4.000
+[2023-02-25 03:33:53,689][00684] Avg episode reward: 4.250, avg true_objective: 4.000
+[2023-02-25 03:33:53,805][00684] Num frames 3300...
+[2023-02-25 03:33:53,926][00684] Num frames 3400...
+[2023-02-25 03:33:54,048][00684] Num frames 3500...
+[2023-02-25 03:33:54,203][00684] Avg episode rewards: #0: 4.204, true rewards: #0: 3.982
+[2023-02-25 03:33:54,206][00684] Avg episode reward: 4.204, avg true_objective: 3.982
+[2023-02-25 03:33:54,231][00684] Num frames 3600...
+[2023-02-25 03:33:54,350][00684] Num frames 3700...
+[2023-02-25 03:33:54,468][00684] Num frames 3800...
+[2023-02-25 03:33:54,589][00684] Num frames 3900...
+[2023-02-25 03:33:54,714][00684] Num frames 4000...
+[2023-02-25 03:33:54,810][00684] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
+[2023-02-25 03:33:54,812][00684] Avg episode reward: 4.332, avg true_objective: 4.032
+[2023-02-25 03:34:14,342][00684] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
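
This second evaluation pass was configured with push_to_hub=True and hf_repository='menoua/rl_course_vizdoom_health_gathering_supreme', so after the replay is saved the experiment directory is prepared for upload to the Hugging Face Hub. A sketch of the corresponding invocation, under the same assumptions (and reusing the imports) of the evaluation example above:

    argv = [
        "--env=doom_health_gathering_supreme",  # assumed env name, as above
        "--train_dir=/content/train_dir",
        "--experiment=default_experiment",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_frames=100000",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=menoua/rl_course_vizdoom_health_gathering_supreme",
    ]
    parser, _ = parse_sf_args(argv=argv, evaluation=True)
    cfg = parse_full_cfg(parser, argv=argv)
    enjoy(cfg)  # writes replay.mp4 and pushes the experiment to the Hub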