diff --git "a/sf_log.txt" "b/sf_log.txt" --- "a/sf_log.txt" +++ "b/sf_log.txt" @@ -14320,3 +14320,7315 @@ [2024-03-29 16:44:26,841][00497] Updated weights for policy 0, policy_version 41965 (0.0019) [2024-03-29 16:44:28,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42052.2, 300 sec: 42376.3). Total num frames: 687652864. Throughput: 0: 42609.8. Samples: 569876960. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) [2024-03-29 16:44:28,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 16:44:30,026][00497] Updated weights for policy 0, policy_version 41975 (0.0022) +[2024-03-29 16:44:33,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 687849472. Throughput: 0: 42238.2. Samples: 569985880. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 16:44:33,841][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 16:44:34,463][00497] Updated weights for policy 0, policy_version 41985 (0.0025) +[2024-03-29 16:44:38,466][00497] Updated weights for policy 0, policy_version 41995 (0.0020) +[2024-03-29 16:44:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42598.4, 300 sec: 42431.8). Total num frames: 688062464. Throughput: 0: 42332.1. Samples: 570263420. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 16:44:38,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 16:44:42,010][00497] Updated weights for policy 0, policy_version 42005 (0.0028) +[2024-03-29 16:44:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 42376.2). Total num frames: 688275456. Throughput: 0: 42427.6. Samples: 570513860. Policy #0 lag: (min: 0.0, avg: 19.6, max: 40.0) +[2024-03-29 16:44:43,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 16:44:45,350][00497] Updated weights for policy 0, policy_version 42015 (0.0025) +[2024-03-29 16:44:48,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 688504832. Throughput: 0: 42608.4. Samples: 570626660. Policy #0 lag: (min: 0.0, avg: 19.6, max: 40.0) +[2024-03-29 16:44:48,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 16:44:49,672][00497] Updated weights for policy 0, policy_version 42025 (0.0029) +[2024-03-29 16:44:53,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.4, 300 sec: 42431.8). Total num frames: 688685056. Throughput: 0: 42668.5. Samples: 570901180. Policy #0 lag: (min: 0.0, avg: 19.6, max: 40.0) +[2024-03-29 16:44:53,840][00126] Avg episode reward: [(0, '0.463')] +[2024-03-29 16:44:54,102][00497] Updated weights for policy 0, policy_version 42035 (0.0022) +[2024-03-29 16:44:54,154][00476] Signal inference workers to stop experience collection... (20300 times) +[2024-03-29 16:44:54,195][00497] InferenceWorker_p0-w0: stopping experience collection (20300 times) +[2024-03-29 16:44:54,374][00476] Signal inference workers to resume experience collection... (20300 times) +[2024-03-29 16:44:54,374][00497] InferenceWorker_p0-w0: resuming experience collection (20300 times) +[2024-03-29 16:44:57,576][00497] Updated weights for policy 0, policy_version 42045 (0.0025) +[2024-03-29 16:44:58,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 688914432. Throughput: 0: 42472.0. Samples: 571141960. Policy #0 lag: (min: 0.0, avg: 19.6, max: 40.0) +[2024-03-29 16:44:58,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 16:45:01,038][00497] Updated weights for policy 0, policy_version 42055 (0.0028) +[2024-03-29 16:45:03,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42325.3, 300 sec: 42320.7). 
Total num frames: 689127424. Throughput: 0: 42493.6. Samples: 571254400. Policy #0 lag: (min: 0.0, avg: 19.6, max: 40.0) +[2024-03-29 16:45:03,841][00126] Avg episode reward: [(0, '0.474')] +[2024-03-29 16:45:03,862][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042061_689127424.pth... +[2024-03-29 16:45:04,154][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000041439_678936576.pth +[2024-03-29 16:45:05,512][00497] Updated weights for policy 0, policy_version 42065 (0.0020) +[2024-03-29 16:45:08,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.4, 300 sec: 42376.3). Total num frames: 689307648. Throughput: 0: 42366.3. Samples: 571522900. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 16:45:08,840][00126] Avg episode reward: [(0, '0.406')] +[2024-03-29 16:45:09,766][00497] Updated weights for policy 0, policy_version 42075 (0.0023) +[2024-03-29 16:45:13,268][00497] Updated weights for policy 0, policy_version 42085 (0.0023) +[2024-03-29 16:45:13,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 689537024. Throughput: 0: 42346.3. Samples: 571782540. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 16:45:13,840][00126] Avg episode reward: [(0, '0.375')] +[2024-03-29 16:45:16,352][00497] Updated weights for policy 0, policy_version 42095 (0.0024) +[2024-03-29 16:45:18,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42598.4, 300 sec: 42376.3). Total num frames: 689766400. Throughput: 0: 42504.5. Samples: 571898580. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 16:45:18,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 16:45:20,707][00497] Updated weights for policy 0, policy_version 42105 (0.0019) +[2024-03-29 16:45:23,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42325.3, 300 sec: 42431.8). Total num frames: 689963008. Throughput: 0: 42247.0. Samples: 572164540. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 16:45:23,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 16:45:24,899][00497] Updated weights for policy 0, policy_version 42115 (0.0017) +[2024-03-29 16:45:28,584][00497] Updated weights for policy 0, policy_version 42125 (0.0019) +[2024-03-29 16:45:28,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 42431.8). Total num frames: 690176000. Throughput: 0: 42449.3. Samples: 572424080. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 16:45:28,840][00126] Avg episode reward: [(0, '0.455')] +[2024-03-29 16:45:29,498][00476] Signal inference workers to stop experience collection... (20350 times) +[2024-03-29 16:45:29,499][00476] Signal inference workers to resume experience collection... (20350 times) +[2024-03-29 16:45:29,539][00497] InferenceWorker_p0-w0: stopping experience collection (20350 times) +[2024-03-29 16:45:29,539][00497] InferenceWorker_p0-w0: resuming experience collection (20350 times) +[2024-03-29 16:45:31,688][00497] Updated weights for policy 0, policy_version 42135 (0.0026) +[2024-03-29 16:45:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 42376.3). Total num frames: 690405376. Throughput: 0: 42617.3. Samples: 572544440. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:33,840][00126] Avg episode reward: [(0, '0.507')] +[2024-03-29 16:45:36,152][00497] Updated weights for policy 0, policy_version 42145 (0.0021) +[2024-03-29 16:45:38,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.3, 300 sec: 42320.7). 
Total num frames: 690585600. Throughput: 0: 42279.1. Samples: 572803740. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:38,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 16:45:40,271][00497] Updated weights for policy 0, policy_version 42155 (0.0022) +[2024-03-29 16:45:43,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 690814976. Throughput: 0: 42567.9. Samples: 573057520. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:43,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 16:45:43,941][00497] Updated weights for policy 0, policy_version 42165 (0.0025) +[2024-03-29 16:45:47,482][00497] Updated weights for policy 0, policy_version 42175 (0.0027) +[2024-03-29 16:45:48,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42325.3, 300 sec: 42431.8). Total num frames: 691044352. Throughput: 0: 42632.5. Samples: 573172860. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:48,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 16:45:51,747][00497] Updated weights for policy 0, policy_version 42185 (0.0021) +[2024-03-29 16:45:53,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 691224576. Throughput: 0: 42318.2. Samples: 573427220. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:53,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 16:45:55,983][00497] Updated weights for policy 0, policy_version 42195 (0.0018) +[2024-03-29 16:45:58,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.2, 300 sec: 42376.2). Total num frames: 691437568. Throughput: 0: 42406.2. Samples: 573690820. Policy #0 lag: (min: 0.0, avg: 23.8, max: 41.0) +[2024-03-29 16:45:58,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 16:45:59,648][00497] Updated weights for policy 0, policy_version 42205 (0.0025) +[2024-03-29 16:46:02,882][00497] Updated weights for policy 0, policy_version 42215 (0.0031) +[2024-03-29 16:46:03,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42598.5, 300 sec: 42487.3). Total num frames: 691683328. Throughput: 0: 42577.8. Samples: 573814580. Policy #0 lag: (min: 1.0, avg: 21.4, max: 41.0) +[2024-03-29 16:46:03,840][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 16:46:07,279][00497] Updated weights for policy 0, policy_version 42225 (0.0018) +[2024-03-29 16:46:07,299][00476] Signal inference workers to stop experience collection... (20400 times) +[2024-03-29 16:46:07,299][00476] Signal inference workers to resume experience collection... (20400 times) +[2024-03-29 16:46:07,322][00497] InferenceWorker_p0-w0: stopping experience collection (20400 times) +[2024-03-29 16:46:07,344][00497] InferenceWorker_p0-w0: resuming experience collection (20400 times) +[2024-03-29 16:46:08,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42598.3, 300 sec: 42376.2). Total num frames: 691863552. Throughput: 0: 42131.1. Samples: 574060440. Policy #0 lag: (min: 1.0, avg: 21.4, max: 41.0) +[2024-03-29 16:46:08,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 16:46:11,704][00497] Updated weights for policy 0, policy_version 42235 (0.0018) +[2024-03-29 16:46:13,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42325.3, 300 sec: 42431.8). Total num frames: 692076544. Throughput: 0: 42334.3. Samples: 574329120. 
Policy #0 lag: (min: 1.0, avg: 21.4, max: 41.0) +[2024-03-29 16:46:13,840][00126] Avg episode reward: [(0, '0.481')] +[2024-03-29 16:46:15,121][00497] Updated weights for policy 0, policy_version 42245 (0.0026) +[2024-03-29 16:46:18,271][00497] Updated weights for policy 0, policy_version 42255 (0.0023) +[2024-03-29 16:46:18,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42598.4, 300 sec: 42487.3). Total num frames: 692322304. Throughput: 0: 42281.0. Samples: 574447080. Policy #0 lag: (min: 1.0, avg: 21.4, max: 41.0) +[2024-03-29 16:46:18,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 16:46:22,721][00497] Updated weights for policy 0, policy_version 42265 (0.0019) +[2024-03-29 16:46:23,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.4, 300 sec: 42431.8). Total num frames: 692502528. Throughput: 0: 42093.8. Samples: 574697960. Policy #0 lag: (min: 1.0, avg: 21.4, max: 41.0) +[2024-03-29 16:46:23,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 16:46:27,203][00497] Updated weights for policy 0, policy_version 42275 (0.0018) +[2024-03-29 16:46:28,839][00126] Fps is (10 sec: 37682.9, 60 sec: 42052.3, 300 sec: 42376.2). Total num frames: 692699136. Throughput: 0: 42424.0. Samples: 574966600. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:28,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 16:46:30,621][00497] Updated weights for policy 0, policy_version 42285 (0.0020) +[2024-03-29 16:46:33,823][00497] Updated weights for policy 0, policy_version 42295 (0.0023) +[2024-03-29 16:46:33,839][00126] Fps is (10 sec: 45874.6, 60 sec: 42598.4, 300 sec: 42487.3). Total num frames: 692961280. Throughput: 0: 42266.6. Samples: 575074860. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:33,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 16:46:38,090][00497] Updated weights for policy 0, policy_version 42305 (0.0023) +[2024-03-29 16:46:38,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42598.3, 300 sec: 42431.8). Total num frames: 693141504. Throughput: 0: 42351.5. Samples: 575333040. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:38,840][00126] Avg episode reward: [(0, '0.462')] +[2024-03-29 16:46:42,958][00497] Updated weights for policy 0, policy_version 42315 (0.0018) +[2024-03-29 16:46:43,839][00126] Fps is (10 sec: 37683.3, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 693338112. Throughput: 0: 42537.7. Samples: 575605020. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:43,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 16:46:45,151][00476] Signal inference workers to stop experience collection... (20450 times) +[2024-03-29 16:46:45,191][00497] InferenceWorker_p0-w0: stopping experience collection (20450 times) +[2024-03-29 16:46:45,367][00476] Signal inference workers to resume experience collection... (20450 times) +[2024-03-29 16:46:45,367][00497] InferenceWorker_p0-w0: resuming experience collection (20450 times) +[2024-03-29 16:46:45,925][00497] Updated weights for policy 0, policy_version 42325 (0.0021) +[2024-03-29 16:46:48,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 693583872. Throughput: 0: 42168.8. Samples: 575712180. 
Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:48,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 16:46:49,390][00497] Updated weights for policy 0, policy_version 42335 (0.0027) +[2024-03-29 16:46:53,551][00497] Updated weights for policy 0, policy_version 42345 (0.0027) +[2024-03-29 16:46:53,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42598.3, 300 sec: 42376.2). Total num frames: 693780480. Throughput: 0: 42254.2. Samples: 575961880. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 16:46:53,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 16:46:58,508][00497] Updated weights for policy 0, policy_version 42355 (0.0029) +[2024-03-29 16:46:58,839][00126] Fps is (10 sec: 37683.2, 60 sec: 42052.2, 300 sec: 42376.2). Total num frames: 693960704. Throughput: 0: 42379.5. Samples: 576236200. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 16:46:58,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 16:47:01,652][00497] Updated weights for policy 0, policy_version 42365 (0.0024) +[2024-03-29 16:47:03,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 694222848. Throughput: 0: 42341.7. Samples: 576352460. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 16:47:03,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 16:47:04,067][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042373_694239232.pth... +[2024-03-29 16:47:04,389][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000041750_684032000.pth +[2024-03-29 16:47:04,934][00497] Updated weights for policy 0, policy_version 42375 (0.0024) +[2024-03-29 16:47:08,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 694403072. Throughput: 0: 42046.2. Samples: 576590040. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 16:47:08,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 16:47:09,381][00497] Updated weights for policy 0, policy_version 42385 (0.0022) +[2024-03-29 16:47:13,839][00126] Fps is (10 sec: 36044.7, 60 sec: 41779.1, 300 sec: 42265.1). Total num frames: 694583296. Throughput: 0: 42073.7. Samples: 576859920. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 16:47:13,841][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 16:47:14,000][00497] Updated weights for policy 0, policy_version 42395 (0.0027) +[2024-03-29 16:47:17,256][00497] Updated weights for policy 0, policy_version 42405 (0.0023) +[2024-03-29 16:47:17,286][00476] Signal inference workers to stop experience collection... (20500 times) +[2024-03-29 16:47:17,320][00497] InferenceWorker_p0-w0: stopping experience collection (20500 times) +[2024-03-29 16:47:17,509][00476] Signal inference workers to resume experience collection... (20500 times) +[2024-03-29 16:47:17,510][00497] InferenceWorker_p0-w0: resuming experience collection (20500 times) +[2024-03-29 16:47:18,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42052.2, 300 sec: 42431.8). Total num frames: 694845440. Throughput: 0: 42451.6. Samples: 576985180. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 16:47:18,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 16:47:20,235][00497] Updated weights for policy 0, policy_version 42415 (0.0027) +[2024-03-29 16:47:23,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 695042048. Throughput: 0: 42169.4. Samples: 577230660. 
Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:23,840][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 16:47:24,893][00497] Updated weights for policy 0, policy_version 42425 (0.0026) +[2024-03-29 16:47:28,839][00126] Fps is (10 sec: 37683.5, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 695222272. Throughput: 0: 42129.9. Samples: 577500860. Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:28,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 16:47:29,261][00497] Updated weights for policy 0, policy_version 42435 (0.0026) +[2024-03-29 16:47:32,453][00497] Updated weights for policy 0, policy_version 42445 (0.0028) +[2024-03-29 16:47:33,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.4, 300 sec: 42431.8). Total num frames: 695484416. Throughput: 0: 42616.5. Samples: 577629920. Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:33,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 16:47:35,418][00497] Updated weights for policy 0, policy_version 42455 (0.0019) +[2024-03-29 16:47:38,839][00126] Fps is (10 sec: 47513.6, 60 sec: 42598.5, 300 sec: 42376.2). Total num frames: 695697408. Throughput: 0: 42340.1. Samples: 577867180. Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:38,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:47:40,048][00497] Updated weights for policy 0, policy_version 42465 (0.0021) +[2024-03-29 16:47:43,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 695877632. Throughput: 0: 42321.0. Samples: 578140640. Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:43,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 16:47:44,725][00497] Updated weights for policy 0, policy_version 42475 (0.0018) +[2024-03-29 16:47:47,721][00497] Updated weights for policy 0, policy_version 42485 (0.0027) +[2024-03-29 16:47:48,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42325.3, 300 sec: 42431.8). Total num frames: 696123392. Throughput: 0: 42683.1. Samples: 578273200. Policy #0 lag: (min: 0.0, avg: 23.9, max: 41.0) +[2024-03-29 16:47:48,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 16:47:50,303][00476] Signal inference workers to stop experience collection... (20550 times) +[2024-03-29 16:47:50,378][00476] Signal inference workers to resume experience collection... (20550 times) +[2024-03-29 16:47:50,380][00497] InferenceWorker_p0-w0: stopping experience collection (20550 times) +[2024-03-29 16:47:50,415][00497] InferenceWorker_p0-w0: resuming experience collection (20550 times) +[2024-03-29 16:47:50,943][00497] Updated weights for policy 0, policy_version 42495 (0.0035) +[2024-03-29 16:47:53,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 42265.2). Total num frames: 696320000. Throughput: 0: 42287.1. Samples: 578492960. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 16:47:53,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 16:47:55,531][00497] Updated weights for policy 0, policy_version 42505 (0.0027) +[2024-03-29 16:47:58,839][00126] Fps is (10 sec: 39322.1, 60 sec: 42598.5, 300 sec: 42320.7). Total num frames: 696516608. Throughput: 0: 42483.3. Samples: 578771660. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 16:47:58,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 16:48:00,218][00497] Updated weights for policy 0, policy_version 42515 (0.0025) +[2024-03-29 16:48:03,506][00497] Updated weights for policy 0, policy_version 42525 (0.0025) +[2024-03-29 16:48:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 42320.7). Total num frames: 696729600. Throughput: 0: 42556.9. Samples: 578900240. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 16:48:03,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 16:48:06,537][00497] Updated weights for policy 0, policy_version 42535 (0.0024) +[2024-03-29 16:48:08,839][00126] Fps is (10 sec: 45875.0, 60 sec: 42871.5, 300 sec: 42320.7). Total num frames: 696975360. Throughput: 0: 42057.8. Samples: 579123260. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 16:48:08,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 16:48:11,285][00497] Updated weights for policy 0, policy_version 42545 (0.0023) +[2024-03-29 16:48:13,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42598.5, 300 sec: 42265.2). Total num frames: 697139200. Throughput: 0: 42116.9. Samples: 579396120. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 16:48:13,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 16:48:15,783][00497] Updated weights for policy 0, policy_version 42555 (0.0024) +[2024-03-29 16:48:18,839][00126] Fps is (10 sec: 39321.4, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 697368576. Throughput: 0: 42324.8. Samples: 579534540. Policy #0 lag: (min: 1.0, avg: 18.1, max: 41.0) +[2024-03-29 16:48:18,840][00126] Avg episode reward: [(0, '0.419')] +[2024-03-29 16:48:19,067][00497] Updated weights for policy 0, policy_version 42565 (0.0032) +[2024-03-29 16:48:22,089][00497] Updated weights for policy 0, policy_version 42575 (0.0028) +[2024-03-29 16:48:23,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42598.4, 300 sec: 42265.1). Total num frames: 697597952. Throughput: 0: 41956.3. Samples: 579755220. Policy #0 lag: (min: 1.0, avg: 18.1, max: 41.0) +[2024-03-29 16:48:23,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 16:48:26,796][00497] Updated weights for policy 0, policy_version 42585 (0.0021) +[2024-03-29 16:48:28,213][00476] Signal inference workers to stop experience collection... (20600 times) +[2024-03-29 16:48:28,214][00476] Signal inference workers to resume experience collection... (20600 times) +[2024-03-29 16:48:28,240][00497] InferenceWorker_p0-w0: stopping experience collection (20600 times) +[2024-03-29 16:48:28,240][00497] InferenceWorker_p0-w0: resuming experience collection (20600 times) +[2024-03-29 16:48:28,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42871.5, 300 sec: 42265.2). Total num frames: 697794560. Throughput: 0: 41960.9. Samples: 580028880. Policy #0 lag: (min: 1.0, avg: 18.1, max: 41.0) +[2024-03-29 16:48:28,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 16:48:31,310][00497] Updated weights for policy 0, policy_version 42595 (0.0024) +[2024-03-29 16:48:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.1, 300 sec: 42320.7). Total num frames: 697991168. Throughput: 0: 42109.8. Samples: 580168140. 
Policy #0 lag: (min: 1.0, avg: 18.1, max: 41.0) +[2024-03-29 16:48:33,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:48:34,598][00497] Updated weights for policy 0, policy_version 42605 (0.0023) +[2024-03-29 16:48:37,903][00497] Updated weights for policy 0, policy_version 42615 (0.0029) +[2024-03-29 16:48:38,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 698236928. Throughput: 0: 41974.2. Samples: 580381800. Policy #0 lag: (min: 1.0, avg: 18.1, max: 41.0) +[2024-03-29 16:48:38,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 16:48:42,333][00497] Updated weights for policy 0, policy_version 42625 (0.0035) +[2024-03-29 16:48:43,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 698417152. Throughput: 0: 41931.1. Samples: 580658560. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:48:43,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 16:48:46,965][00497] Updated weights for policy 0, policy_version 42635 (0.0020) +[2024-03-29 16:48:48,839][00126] Fps is (10 sec: 37683.4, 60 sec: 41506.2, 300 sec: 42265.2). Total num frames: 698613760. Throughput: 0: 42150.7. Samples: 580797020. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:48:48,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 16:48:50,051][00497] Updated weights for policy 0, policy_version 42645 (0.0021) +[2024-03-29 16:48:53,199][00497] Updated weights for policy 0, policy_version 42655 (0.0030) +[2024-03-29 16:48:53,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 698875904. Throughput: 0: 42372.4. Samples: 581030020. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:48:53,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 16:48:57,666][00497] Updated weights for policy 0, policy_version 42665 (0.0025) +[2024-03-29 16:48:58,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 699056128. Throughput: 0: 42148.0. Samples: 581292780. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:48:58,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 16:49:02,250][00497] Updated weights for policy 0, policy_version 42675 (0.0031) +[2024-03-29 16:49:03,839][00126] Fps is (10 sec: 37683.0, 60 sec: 42052.2, 300 sec: 42320.7). Total num frames: 699252736. Throughput: 0: 42199.9. Samples: 581433540. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:49:03,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 16:49:03,912][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042680_699269120.pth... +[2024-03-29 16:49:04,247][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042061_689127424.pth +[2024-03-29 16:49:05,658][00497] Updated weights for policy 0, policy_version 42685 (0.0029) +[2024-03-29 16:49:07,006][00476] Signal inference workers to stop experience collection... (20650 times) +[2024-03-29 16:49:07,006][00476] Signal inference workers to resume experience collection... 
(20650 times) +[2024-03-29 16:49:07,043][00497] InferenceWorker_p0-w0: stopping experience collection (20650 times) +[2024-03-29 16:49:07,044][00497] InferenceWorker_p0-w0: resuming experience collection (20650 times) +[2024-03-29 16:49:08,747][00497] Updated weights for policy 0, policy_version 42695 (0.0017) +[2024-03-29 16:49:08,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 699514880. Throughput: 0: 42513.4. Samples: 581668320. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 16:49:08,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 16:49:13,389][00497] Updated weights for policy 0, policy_version 42705 (0.0018) +[2024-03-29 16:49:13,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42598.3, 300 sec: 42320.7). Total num frames: 699695104. Throughput: 0: 42058.6. Samples: 581921520. Policy #0 lag: (min: 1.0, avg: 24.1, max: 42.0) +[2024-03-29 16:49:13,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 16:49:17,714][00497] Updated weights for policy 0, policy_version 42715 (0.0022) +[2024-03-29 16:49:18,839][00126] Fps is (10 sec: 36045.1, 60 sec: 41779.2, 300 sec: 42209.6). Total num frames: 699875328. Throughput: 0: 41993.9. Samples: 582057860. Policy #0 lag: (min: 1.0, avg: 24.1, max: 42.0) +[2024-03-29 16:49:18,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 16:49:21,333][00497] Updated weights for policy 0, policy_version 42725 (0.0032) +[2024-03-29 16:49:23,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 700137472. Throughput: 0: 42737.4. Samples: 582304980. Policy #0 lag: (min: 1.0, avg: 24.1, max: 42.0) +[2024-03-29 16:49:23,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 16:49:24,224][00497] Updated weights for policy 0, policy_version 42735 (0.0036) +[2024-03-29 16:49:28,745][00497] Updated weights for policy 0, policy_version 42745 (0.0035) +[2024-03-29 16:49:28,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 700334080. Throughput: 0: 42287.6. Samples: 582561500. Policy #0 lag: (min: 1.0, avg: 24.1, max: 42.0) +[2024-03-29 16:49:28,840][00126] Avg episode reward: [(0, '0.474')] +[2024-03-29 16:49:33,157][00497] Updated weights for policy 0, policy_version 42755 (0.0017) +[2024-03-29 16:49:33,839][00126] Fps is (10 sec: 37682.9, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 700514304. Throughput: 0: 42145.7. Samples: 582693580. Policy #0 lag: (min: 1.0, avg: 24.1, max: 42.0) +[2024-03-29 16:49:33,840][00126] Avg episode reward: [(0, '0.455')] +[2024-03-29 16:49:36,500][00497] Updated weights for policy 0, policy_version 42765 (0.0025) +[2024-03-29 16:49:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 42376.2). Total num frames: 700776448. Throughput: 0: 42612.5. Samples: 582947580. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:49:38,840][00126] Avg episode reward: [(0, '0.441')] +[2024-03-29 16:49:39,620][00497] Updated weights for policy 0, policy_version 42775 (0.0029) +[2024-03-29 16:49:42,654][00476] Signal inference workers to stop experience collection... (20700 times) +[2024-03-29 16:49:42,720][00497] InferenceWorker_p0-w0: stopping experience collection (20700 times) +[2024-03-29 16:49:42,730][00476] Signal inference workers to resume experience collection... 
(20700 times) +[2024-03-29 16:49:42,746][00497] InferenceWorker_p0-w0: resuming experience collection (20700 times) +[2024-03-29 16:49:43,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.3, 300 sec: 42265.2). Total num frames: 700973056. Throughput: 0: 42289.6. Samples: 583195820. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:49:43,842][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 16:49:44,098][00497] Updated weights for policy 0, policy_version 42785 (0.0025) +[2024-03-29 16:49:48,610][00497] Updated weights for policy 0, policy_version 42795 (0.0022) +[2024-03-29 16:49:48,839][00126] Fps is (10 sec: 37683.0, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 701153280. Throughput: 0: 42270.7. Samples: 583335720. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:49:48,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 16:49:51,714][00497] Updated weights for policy 0, policy_version 42805 (0.0022) +[2024-03-29 16:49:53,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 701415424. Throughput: 0: 42653.3. Samples: 583587720. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:49:53,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 16:49:54,843][00497] Updated weights for policy 0, policy_version 42815 (0.0033) +[2024-03-29 16:49:58,839][00126] Fps is (10 sec: 45875.5, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 701612032. Throughput: 0: 42439.7. Samples: 583831300. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:49:58,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 16:49:59,443][00497] Updated weights for policy 0, policy_version 42825 (0.0017) +[2024-03-29 16:50:03,839][00126] Fps is (10 sec: 37683.6, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 701792256. Throughput: 0: 42505.3. Samples: 583970600. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:50:03,840][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 16:50:04,067][00497] Updated weights for policy 0, policy_version 42835 (0.0023) +[2024-03-29 16:50:07,386][00497] Updated weights for policy 0, policy_version 42845 (0.0023) +[2024-03-29 16:50:08,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 42376.2). Total num frames: 702038016. Throughput: 0: 42703.9. Samples: 584226660. Policy #0 lag: (min: 1.0, avg: 17.9, max: 41.0) +[2024-03-29 16:50:08,840][00126] Avg episode reward: [(0, '0.483')] +[2024-03-29 16:50:10,551][00497] Updated weights for policy 0, policy_version 42855 (0.0024) +[2024-03-29 16:50:13,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42598.5, 300 sec: 42320.7). Total num frames: 702251008. Throughput: 0: 42150.6. Samples: 584458280. Policy #0 lag: (min: 1.0, avg: 17.9, max: 41.0) +[2024-03-29 16:50:13,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:50:15,170][00497] Updated weights for policy 0, policy_version 42865 (0.0027) +[2024-03-29 16:50:18,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42598.3, 300 sec: 42265.2). Total num frames: 702431232. Throughput: 0: 42310.2. Samples: 584597540. Policy #0 lag: (min: 1.0, avg: 17.9, max: 41.0) +[2024-03-29 16:50:18,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 16:50:19,469][00497] Updated weights for policy 0, policy_version 42875 (0.0019) +[2024-03-29 16:50:20,629][00476] Signal inference workers to stop experience collection... (20750 times) +[2024-03-29 16:50:20,703][00476] Signal inference workers to resume experience collection... 
(20750 times) +[2024-03-29 16:50:20,704][00497] InferenceWorker_p0-w0: stopping experience collection (20750 times) +[2024-03-29 16:50:20,730][00497] InferenceWorker_p0-w0: resuming experience collection (20750 times) +[2024-03-29 16:50:22,977][00497] Updated weights for policy 0, policy_version 42885 (0.0025) +[2024-03-29 16:50:23,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 702676992. Throughput: 0: 42711.5. Samples: 584869600. Policy #0 lag: (min: 1.0, avg: 17.9, max: 41.0) +[2024-03-29 16:50:23,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 16:50:26,105][00497] Updated weights for policy 0, policy_version 42895 (0.0020) +[2024-03-29 16:50:28,839][00126] Fps is (10 sec: 45875.5, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 702889984. Throughput: 0: 42185.9. Samples: 585094180. Policy #0 lag: (min: 1.0, avg: 17.9, max: 41.0) +[2024-03-29 16:50:28,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 16:50:30,625][00497] Updated weights for policy 0, policy_version 42905 (0.0023) +[2024-03-29 16:50:33,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 703070208. Throughput: 0: 42156.0. Samples: 585232740. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:33,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 16:50:35,126][00497] Updated weights for policy 0, policy_version 42915 (0.0022) +[2024-03-29 16:50:38,660][00497] Updated weights for policy 0, policy_version 42925 (0.0023) +[2024-03-29 16:50:38,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 42265.2). Total num frames: 703283200. Throughput: 0: 42259.6. Samples: 585489400. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:38,840][00126] Avg episode reward: [(0, '0.415')] +[2024-03-29 16:50:41,881][00497] Updated weights for policy 0, policy_version 42935 (0.0024) +[2024-03-29 16:50:43,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.4, 300 sec: 42265.2). Total num frames: 703512576. Throughput: 0: 41855.1. Samples: 585714780. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:43,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 16:50:46,432][00497] Updated weights for policy 0, policy_version 42945 (0.0023) +[2024-03-29 16:50:48,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42598.5, 300 sec: 42320.7). Total num frames: 703709184. Throughput: 0: 41806.7. Samples: 585851900. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:48,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 16:50:50,723][00497] Updated weights for policy 0, policy_version 42955 (0.0018) +[2024-03-29 16:50:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41506.2, 300 sec: 42265.2). Total num frames: 703905792. Throughput: 0: 42268.0. Samples: 586128720. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:53,840][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 16:50:54,284][00497] Updated weights for policy 0, policy_version 42965 (0.0027) +[2024-03-29 16:50:54,317][00476] Signal inference workers to stop experience collection... (20800 times) +[2024-03-29 16:50:54,337][00497] InferenceWorker_p0-w0: stopping experience collection (20800 times) +[2024-03-29 16:50:54,531][00476] Signal inference workers to resume experience collection... 
(20800 times) +[2024-03-29 16:50:54,532][00497] InferenceWorker_p0-w0: resuming experience collection (20800 times) +[2024-03-29 16:50:57,309][00497] Updated weights for policy 0, policy_version 42975 (0.0025) +[2024-03-29 16:50:58,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 704151552. Throughput: 0: 41854.2. Samples: 586341720. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 16:50:58,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 16:51:01,928][00497] Updated weights for policy 0, policy_version 42985 (0.0026) +[2024-03-29 16:51:03,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 704348160. Throughput: 0: 41838.7. Samples: 586480280. Policy #0 lag: (min: 0.0, avg: 22.5, max: 42.0) +[2024-03-29 16:51:03,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 16:51:04,164][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042991_704364544.pth... +[2024-03-29 16:51:04,487][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042373_694239232.pth +[2024-03-29 16:51:06,228][00497] Updated weights for policy 0, policy_version 42995 (0.0016) +[2024-03-29 16:51:08,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 704528384. Throughput: 0: 41867.7. Samples: 586753640. Policy #0 lag: (min: 0.0, avg: 22.5, max: 42.0) +[2024-03-29 16:51:08,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 16:51:09,904][00497] Updated weights for policy 0, policy_version 43005 (0.0025) +[2024-03-29 16:51:12,972][00497] Updated weights for policy 0, policy_version 43015 (0.0032) +[2024-03-29 16:51:13,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 704790528. Throughput: 0: 41731.6. Samples: 586972100. Policy #0 lag: (min: 0.0, avg: 22.5, max: 42.0) +[2024-03-29 16:51:13,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 16:51:17,625][00497] Updated weights for policy 0, policy_version 43025 (0.0025) +[2024-03-29 16:51:18,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.4, 300 sec: 42265.2). Total num frames: 704970752. Throughput: 0: 41626.8. Samples: 587105940. Policy #0 lag: (min: 0.0, avg: 22.5, max: 42.0) +[2024-03-29 16:51:18,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 16:51:21,621][00497] Updated weights for policy 0, policy_version 43035 (0.0024) +[2024-03-29 16:51:23,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.2, 300 sec: 42265.2). Total num frames: 705167360. Throughput: 0: 42188.5. Samples: 587387880. Policy #0 lag: (min: 0.0, avg: 22.5, max: 42.0) +[2024-03-29 16:51:23,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 16:51:25,341][00497] Updated weights for policy 0, policy_version 43045 (0.0028) +[2024-03-29 16:51:28,566][00497] Updated weights for policy 0, policy_version 43055 (0.0022) +[2024-03-29 16:51:28,781][00476] Signal inference workers to stop experience collection... (20850 times) +[2024-03-29 16:51:28,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 705413120. Throughput: 0: 42160.0. Samples: 587611980. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:28,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 16:51:28,858][00497] InferenceWorker_p0-w0: stopping experience collection (20850 times) +[2024-03-29 16:51:28,869][00476] Signal inference workers to resume experience collection... 
(20850 times) +[2024-03-29 16:51:28,887][00497] InferenceWorker_p0-w0: resuming experience collection (20850 times) +[2024-03-29 16:51:32,977][00497] Updated weights for policy 0, policy_version 43065 (0.0016) +[2024-03-29 16:51:33,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 705609728. Throughput: 0: 42084.8. Samples: 587745720. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:33,840][00126] Avg episode reward: [(0, '0.636')] +[2024-03-29 16:51:36,956][00497] Updated weights for policy 0, policy_version 43075 (0.0025) +[2024-03-29 16:51:38,839][00126] Fps is (10 sec: 39321.2, 60 sec: 42052.2, 300 sec: 42265.2). Total num frames: 705806336. Throughput: 0: 42126.2. Samples: 588024400. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:38,840][00126] Avg episode reward: [(0, '0.461')] +[2024-03-29 16:51:40,851][00497] Updated weights for policy 0, policy_version 43085 (0.0026) +[2024-03-29 16:51:43,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 706052096. Throughput: 0: 42253.7. Samples: 588243140. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:43,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 16:51:44,119][00497] Updated weights for policy 0, policy_version 43095 (0.0030) +[2024-03-29 16:51:48,755][00497] Updated weights for policy 0, policy_version 43105 (0.0023) +[2024-03-29 16:51:48,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 706232320. Throughput: 0: 42018.2. Samples: 588371100. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:48,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 16:51:53,090][00497] Updated weights for policy 0, policy_version 43115 (0.0019) +[2024-03-29 16:51:53,839][00126] Fps is (10 sec: 36045.0, 60 sec: 41779.2, 300 sec: 42209.6). Total num frames: 706412544. Throughput: 0: 41888.0. Samples: 588638600. Policy #0 lag: (min: 0.0, avg: 21.3, max: 43.0) +[2024-03-29 16:51:53,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 16:51:56,546][00497] Updated weights for policy 0, policy_version 43125 (0.0028) +[2024-03-29 16:51:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 706658304. Throughput: 0: 42385.4. Samples: 588879440. Policy #0 lag: (min: 0.0, avg: 17.2, max: 41.0) +[2024-03-29 16:51:58,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 16:51:59,793][00497] Updated weights for policy 0, policy_version 43135 (0.0024) +[2024-03-29 16:52:03,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41779.1, 300 sec: 42209.6). Total num frames: 706854912. Throughput: 0: 41831.4. Samples: 588988360. Policy #0 lag: (min: 0.0, avg: 17.2, max: 41.0) +[2024-03-29 16:52:03,841][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 16:52:04,295][00497] Updated weights for policy 0, policy_version 43145 (0.0028) +[2024-03-29 16:52:06,698][00476] Signal inference workers to stop experience collection... (20900 times) +[2024-03-29 16:52:06,699][00476] Signal inference workers to resume experience collection... 
(20900 times) +[2024-03-29 16:52:06,723][00497] InferenceWorker_p0-w0: stopping experience collection (20900 times) +[2024-03-29 16:52:06,743][00497] InferenceWorker_p0-w0: resuming experience collection (20900 times) +[2024-03-29 16:52:08,661][00497] Updated weights for policy 0, policy_version 43155 (0.0022) +[2024-03-29 16:52:08,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 707051520. Throughput: 0: 41559.6. Samples: 589258060. Policy #0 lag: (min: 0.0, avg: 17.2, max: 41.0) +[2024-03-29 16:52:08,840][00126] Avg episode reward: [(0, '0.452')] +[2024-03-29 16:52:12,136][00497] Updated weights for policy 0, policy_version 43165 (0.0020) +[2024-03-29 16:52:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 42154.1). Total num frames: 707280896. Throughput: 0: 42122.1. Samples: 589507480. Policy #0 lag: (min: 0.0, avg: 17.2, max: 41.0) +[2024-03-29 16:52:13,840][00126] Avg episode reward: [(0, '0.440')] +[2024-03-29 16:52:15,586][00497] Updated weights for policy 0, policy_version 43175 (0.0021) +[2024-03-29 16:52:18,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42052.2, 300 sec: 42209.6). Total num frames: 707493888. Throughput: 0: 41725.4. Samples: 589623360. Policy #0 lag: (min: 0.0, avg: 17.2, max: 41.0) +[2024-03-29 16:52:18,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 16:52:19,851][00497] Updated weights for policy 0, policy_version 43185 (0.0017) +[2024-03-29 16:52:23,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 42265.1). Total num frames: 707690496. Throughput: 0: 41553.7. Samples: 589894320. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:23,840][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 16:52:24,214][00497] Updated weights for policy 0, policy_version 43195 (0.0028) +[2024-03-29 16:52:27,725][00497] Updated weights for policy 0, policy_version 43205 (0.0026) +[2024-03-29 16:52:28,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 707919872. Throughput: 0: 42030.7. Samples: 590134520. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:28,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 16:52:31,346][00497] Updated weights for policy 0, policy_version 43215 (0.0024) +[2024-03-29 16:52:33,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 708132864. Throughput: 0: 41661.8. Samples: 590245880. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:33,841][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 16:52:35,698][00497] Updated weights for policy 0, policy_version 43225 (0.0024) +[2024-03-29 16:52:38,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.3, 300 sec: 42154.1). Total num frames: 708313088. Throughput: 0: 41646.3. Samples: 590512680. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:38,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 16:52:39,972][00497] Updated weights for policy 0, policy_version 43235 (0.0034) +[2024-03-29 16:52:42,084][00476] Signal inference workers to stop experience collection... (20950 times) +[2024-03-29 16:52:42,124][00497] InferenceWorker_p0-w0: stopping experience collection (20950 times) +[2024-03-29 16:52:42,309][00476] Signal inference workers to resume experience collection... 
(20950 times) +[2024-03-29 16:52:42,310][00497] InferenceWorker_p0-w0: resuming experience collection (20950 times) +[2024-03-29 16:52:43,474][00497] Updated weights for policy 0, policy_version 43245 (0.0025) +[2024-03-29 16:52:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.2, 300 sec: 42098.6). Total num frames: 708542464. Throughput: 0: 42064.9. Samples: 590772360. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:43,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:52:46,789][00497] Updated weights for policy 0, policy_version 43255 (0.0023) +[2024-03-29 16:52:48,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 708755456. Throughput: 0: 42057.8. Samples: 590880960. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 16:52:48,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 16:52:51,319][00497] Updated weights for policy 0, policy_version 43265 (0.0022) +[2024-03-29 16:52:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 708935680. Throughput: 0: 41920.9. Samples: 591144500. Policy #0 lag: (min: 2.0, avg: 23.3, max: 43.0) +[2024-03-29 16:52:53,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 16:52:55,948][00497] Updated weights for policy 0, policy_version 43275 (0.0029) +[2024-03-29 16:52:58,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.1, 300 sec: 42098.6). Total num frames: 709148672. Throughput: 0: 42121.9. Samples: 591402960. Policy #0 lag: (min: 2.0, avg: 23.3, max: 43.0) +[2024-03-29 16:52:58,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 16:52:59,317][00497] Updated weights for policy 0, policy_version 43285 (0.0019) +[2024-03-29 16:53:02,656][00497] Updated weights for policy 0, policy_version 43295 (0.0025) +[2024-03-29 16:53:03,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42325.3, 300 sec: 42098.5). Total num frames: 709394432. Throughput: 0: 41915.9. Samples: 591509580. Policy #0 lag: (min: 2.0, avg: 23.3, max: 43.0) +[2024-03-29 16:53:03,840][00126] Avg episode reward: [(0, '0.438')] +[2024-03-29 16:53:03,861][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043298_709394432.pth... +[2024-03-29 16:53:04,186][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042680_699269120.pth +[2024-03-29 16:53:07,225][00497] Updated weights for policy 0, policy_version 43305 (0.0022) +[2024-03-29 16:53:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 709558272. Throughput: 0: 41171.3. Samples: 591747020. Policy #0 lag: (min: 2.0, avg: 23.3, max: 43.0) +[2024-03-29 16:53:08,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:53:11,828][00497] Updated weights for policy 0, policy_version 43315 (0.0023) +[2024-03-29 16:53:13,532][00476] Signal inference workers to stop experience collection... (21000 times) +[2024-03-29 16:53:13,569][00497] InferenceWorker_p0-w0: stopping experience collection (21000 times) +[2024-03-29 16:53:13,723][00476] Signal inference workers to resume experience collection... (21000 times) +[2024-03-29 16:53:13,724][00497] InferenceWorker_p0-w0: resuming experience collection (21000 times) +[2024-03-29 16:53:13,839][00126] Fps is (10 sec: 34406.7, 60 sec: 40960.1, 300 sec: 41931.9). Total num frames: 709738496. Throughput: 0: 41821.8. Samples: 592016500. 
Policy #0 lag: (min: 2.0, avg: 23.3, max: 43.0) +[2024-03-29 16:53:13,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 16:53:15,404][00497] Updated weights for policy 0, policy_version 43325 (0.0025) +[2024-03-29 16:53:18,774][00497] Updated weights for policy 0, policy_version 43335 (0.0019) +[2024-03-29 16:53:18,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 710000640. Throughput: 0: 41446.1. Samples: 592110960. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:18,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 16:53:23,185][00497] Updated weights for policy 0, policy_version 43345 (0.0022) +[2024-03-29 16:53:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 710180864. Throughput: 0: 41205.3. Samples: 592366920. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:23,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 16:53:27,488][00497] Updated weights for policy 0, policy_version 43355 (0.0027) +[2024-03-29 16:53:28,839][00126] Fps is (10 sec: 36044.8, 60 sec: 40686.9, 300 sec: 41931.9). Total num frames: 710361088. Throughput: 0: 41553.3. Samples: 592642260. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:28,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 16:53:30,987][00497] Updated weights for policy 0, policy_version 43365 (0.0020) +[2024-03-29 16:53:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 710623232. Throughput: 0: 41528.0. Samples: 592749720. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:33,840][00126] Avg episode reward: [(0, '0.662')] +[2024-03-29 16:53:34,021][00476] Saving new best policy, reward=0.662! +[2024-03-29 16:53:34,613][00497] Updated weights for policy 0, policy_version 43375 (0.0029) +[2024-03-29 16:53:38,839][00126] Fps is (10 sec: 44237.3, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 710803456. Throughput: 0: 40964.5. Samples: 592987900. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:38,841][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 16:53:39,069][00497] Updated weights for policy 0, policy_version 43385 (0.0025) +[2024-03-29 16:53:43,398][00497] Updated weights for policy 0, policy_version 43395 (0.0021) +[2024-03-29 16:53:43,839][00126] Fps is (10 sec: 36044.7, 60 sec: 40686.9, 300 sec: 41931.9). Total num frames: 710983680. Throughput: 0: 41146.2. Samples: 593254540. Policy #0 lag: (min: 0.0, avg: 21.6, max: 44.0) +[2024-03-29 16:53:43,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 16:53:45,381][00476] Signal inference workers to stop experience collection... (21050 times) +[2024-03-29 16:53:45,419][00497] InferenceWorker_p0-w0: stopping experience collection (21050 times) +[2024-03-29 16:53:45,574][00476] Signal inference workers to resume experience collection... (21050 times) +[2024-03-29 16:53:45,575][00497] InferenceWorker_p0-w0: resuming experience collection (21050 times) +[2024-03-29 16:53:46,706][00497] Updated weights for policy 0, policy_version 43405 (0.0020) +[2024-03-29 16:53:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 41876.4). Total num frames: 711229440. Throughput: 0: 41607.2. Samples: 593381900. 
Policy #0 lag: (min: 0.0, avg: 17.7, max: 41.0) +[2024-03-29 16:53:48,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 16:53:50,142][00497] Updated weights for policy 0, policy_version 43415 (0.0031) +[2024-03-29 16:53:53,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 711426048. Throughput: 0: 41482.6. Samples: 593613740. Policy #0 lag: (min: 0.0, avg: 17.7, max: 41.0) +[2024-03-29 16:53:53,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 16:53:55,017][00497] Updated weights for policy 0, policy_version 43425 (0.0024) +[2024-03-29 16:53:58,841][00126] Fps is (10 sec: 39315.7, 60 sec: 41232.0, 300 sec: 41931.7). Total num frames: 711622656. Throughput: 0: 41347.1. Samples: 593877180. Policy #0 lag: (min: 0.0, avg: 17.7, max: 41.0) +[2024-03-29 16:53:58,842][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 16:53:58,993][00497] Updated weights for policy 0, policy_version 43435 (0.0020) +[2024-03-29 16:54:02,653][00497] Updated weights for policy 0, policy_version 43445 (0.0021) +[2024-03-29 16:54:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 40686.9, 300 sec: 41765.3). Total num frames: 711835648. Throughput: 0: 42215.1. Samples: 594010640. Policy #0 lag: (min: 0.0, avg: 17.7, max: 41.0) +[2024-03-29 16:54:03,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 16:54:06,286][00497] Updated weights for policy 0, policy_version 43455 (0.0030) +[2024-03-29 16:54:08,839][00126] Fps is (10 sec: 44243.2, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 712065024. Throughput: 0: 41016.9. Samples: 594212680. Policy #0 lag: (min: 0.0, avg: 17.7, max: 41.0) +[2024-03-29 16:54:08,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 16:54:11,258][00497] Updated weights for policy 0, policy_version 43465 (0.0022) +[2024-03-29 16:54:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 712245248. Throughput: 0: 40646.7. Samples: 594471360. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:13,842][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 16:54:15,092][00497] Updated weights for policy 0, policy_version 43475 (0.0023) +[2024-03-29 16:54:18,278][00476] Signal inference workers to stop experience collection... (21100 times) +[2024-03-29 16:54:18,312][00497] InferenceWorker_p0-w0: stopping experience collection (21100 times) +[2024-03-29 16:54:18,501][00476] Signal inference workers to resume experience collection... (21100 times) +[2024-03-29 16:54:18,502][00497] InferenceWorker_p0-w0: resuming experience collection (21100 times) +[2024-03-29 16:54:18,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40687.0, 300 sec: 41709.8). Total num frames: 712441856. Throughput: 0: 41408.4. Samples: 594613100. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:18,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 16:54:19,003][00497] Updated weights for policy 0, policy_version 43485 (0.0029) +[2024-03-29 16:54:22,558][00497] Updated weights for policy 0, policy_version 43495 (0.0024) +[2024-03-29 16:54:23,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 712654848. Throughput: 0: 40851.1. Samples: 594826200. 
Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:23,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 16:54:27,202][00497] Updated weights for policy 0, policy_version 43505 (0.0025) +[2024-03-29 16:54:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 712851456. Throughput: 0: 40817.3. Samples: 595091320. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:28,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 16:54:30,828][00497] Updated weights for policy 0, policy_version 43515 (0.0023) +[2024-03-29 16:54:33,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40413.8, 300 sec: 41598.7). Total num frames: 713048064. Throughput: 0: 40716.8. Samples: 595214160. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:33,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 16:54:34,881][00497] Updated weights for policy 0, policy_version 43525 (0.0030) +[2024-03-29 16:54:38,399][00497] Updated weights for policy 0, policy_version 43535 (0.0025) +[2024-03-29 16:54:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 713293824. Throughput: 0: 40895.5. Samples: 595454040. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 16:54:38,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 16:54:43,291][00497] Updated weights for policy 0, policy_version 43545 (0.0021) +[2024-03-29 16:54:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 40960.0, 300 sec: 41654.2). Total num frames: 713441280. Throughput: 0: 40322.6. Samples: 595691640. Policy #0 lag: (min: 0.0, avg: 22.5, max: 43.0) +[2024-03-29 16:54:43,840][00126] Avg episode reward: [(0, '0.438')] +[2024-03-29 16:54:46,873][00497] Updated weights for policy 0, policy_version 43555 (0.0016) +[2024-03-29 16:54:48,839][00126] Fps is (10 sec: 32768.2, 60 sec: 39867.7, 300 sec: 41376.6). Total num frames: 713621504. Throughput: 0: 40112.1. Samples: 595815680. Policy #0 lag: (min: 0.0, avg: 22.5, max: 43.0) +[2024-03-29 16:54:48,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 16:54:50,790][00476] Signal inference workers to stop experience collection... (21150 times) +[2024-03-29 16:54:50,850][00497] InferenceWorker_p0-w0: stopping experience collection (21150 times) +[2024-03-29 16:54:50,874][00476] Signal inference workers to resume experience collection... (21150 times) +[2024-03-29 16:54:50,881][00497] InferenceWorker_p0-w0: resuming experience collection (21150 times) +[2024-03-29 16:54:51,191][00497] Updated weights for policy 0, policy_version 43565 (0.0024) +[2024-03-29 16:54:53,840][00126] Fps is (10 sec: 44231.8, 60 sec: 40959.2, 300 sec: 41598.5). Total num frames: 713883648. Throughput: 0: 40854.0. Samples: 596051160. Policy #0 lag: (min: 0.0, avg: 22.5, max: 43.0) +[2024-03-29 16:54:53,841][00126] Avg episode reward: [(0, '0.474')] +[2024-03-29 16:54:54,745][00497] Updated weights for policy 0, policy_version 43575 (0.0024) +[2024-03-29 16:54:58,839][00126] Fps is (10 sec: 45875.2, 60 sec: 40961.0, 300 sec: 41654.2). Total num frames: 714080256. Throughput: 0: 40579.2. Samples: 596297420. 
Policy #0 lag: (min: 0.0, avg: 22.5, max: 43.0) +[2024-03-29 16:54:58,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 16:54:58,968][00497] Updated weights for policy 0, policy_version 43585 (0.0017) +[2024-03-29 16:55:03,047][00497] Updated weights for policy 0, policy_version 43595 (0.0035) +[2024-03-29 16:55:03,839][00126] Fps is (10 sec: 37687.5, 60 sec: 40413.9, 300 sec: 41432.1). Total num frames: 714260480. Throughput: 0: 40446.1. Samples: 596433180. Policy #0 lag: (min: 0.0, avg: 22.5, max: 43.0) +[2024-03-29 16:55:03,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 16:55:03,861][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043595_714260480.pth... +[2024-03-29 16:55:04,168][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000042991_704364544.pth +[2024-03-29 16:55:07,531][00497] Updated weights for policy 0, policy_version 43605 (0.0021) +[2024-03-29 16:55:08,839][00126] Fps is (10 sec: 39321.5, 60 sec: 40140.8, 300 sec: 41432.1). Total num frames: 714473472. Throughput: 0: 40876.4. Samples: 596665640. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:08,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 16:55:11,292][00497] Updated weights for policy 0, policy_version 43615 (0.0018) +[2024-03-29 16:55:13,841][00126] Fps is (10 sec: 40955.0, 60 sec: 40413.1, 300 sec: 41487.4). Total num frames: 714670080. Throughput: 0: 40026.9. Samples: 596892580. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:13,841][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 16:55:15,753][00497] Updated weights for policy 0, policy_version 43625 (0.0023) +[2024-03-29 16:55:18,839][00126] Fps is (10 sec: 40959.8, 60 sec: 40686.9, 300 sec: 41376.5). Total num frames: 714883072. Throughput: 0: 40361.3. Samples: 597030420. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:18,842][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 16:55:19,336][00497] Updated weights for policy 0, policy_version 43635 (0.0018) +[2024-03-29 16:55:23,839][00126] Fps is (10 sec: 39326.7, 60 sec: 40140.8, 300 sec: 41265.5). Total num frames: 715063296. Throughput: 0: 40447.2. Samples: 597274160. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:23,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 16:55:23,903][00497] Updated weights for policy 0, policy_version 43645 (0.0018) +[2024-03-29 16:55:24,740][00476] Signal inference workers to stop experience collection... (21200 times) +[2024-03-29 16:55:24,776][00497] InferenceWorker_p0-w0: stopping experience collection (21200 times) +[2024-03-29 16:55:24,956][00476] Signal inference workers to resume experience collection... (21200 times) +[2024-03-29 16:55:24,957][00497] InferenceWorker_p0-w0: resuming experience collection (21200 times) +[2024-03-29 16:55:27,623][00497] Updated weights for policy 0, policy_version 43655 (0.0035) +[2024-03-29 16:55:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 715309056. Throughput: 0: 40408.0. Samples: 597510000. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:28,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 16:55:31,628][00497] Updated weights for policy 0, policy_version 43665 (0.0021) +[2024-03-29 16:55:33,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40413.9, 300 sec: 41321.0). Total num frames: 715472896. Throughput: 0: 40599.5. Samples: 597642660. 
Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0) +[2024-03-29 16:55:33,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 16:55:35,214][00497] Updated weights for policy 0, policy_version 43675 (0.0021) +[2024-03-29 16:55:38,839][00126] Fps is (10 sec: 37683.5, 60 sec: 39867.8, 300 sec: 41265.5). Total num frames: 715685888. Throughput: 0: 40755.3. Samples: 597885100. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:55:38,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 16:55:39,749][00497] Updated weights for policy 0, policy_version 43685 (0.0032) +[2024-03-29 16:55:43,464][00497] Updated weights for policy 0, policy_version 43695 (0.0020) +[2024-03-29 16:55:43,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41233.1, 300 sec: 41376.5). Total num frames: 715915264. Throughput: 0: 40576.0. Samples: 598123340. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:55:43,840][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 16:55:47,895][00497] Updated weights for policy 0, policy_version 43705 (0.0022) +[2024-03-29 16:55:48,839][00126] Fps is (10 sec: 39321.1, 60 sec: 40959.9, 300 sec: 41265.5). Total num frames: 716079104. Throughput: 0: 40465.7. Samples: 598254140. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:55:48,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 16:55:51,300][00497] Updated weights for policy 0, policy_version 43715 (0.0028) +[2024-03-29 16:55:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40414.7, 300 sec: 41209.9). Total num frames: 716308480. Throughput: 0: 40730.7. Samples: 598498520. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:55:53,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:55:55,601][00497] Updated weights for policy 0, policy_version 43725 (0.0019) +[2024-03-29 16:55:58,839][00126] Fps is (10 sec: 44237.3, 60 sec: 40686.9, 300 sec: 41265.5). Total num frames: 716521472. Throughput: 0: 41018.1. Samples: 598738340. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:55:58,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 16:55:59,268][00497] Updated weights for policy 0, policy_version 43735 (0.0022) +[2024-03-29 16:56:01,442][00476] Signal inference workers to stop experience collection... (21250 times) +[2024-03-29 16:56:01,443][00476] Signal inference workers to resume experience collection... (21250 times) +[2024-03-29 16:56:01,469][00497] InferenceWorker_p0-w0: stopping experience collection (21250 times) +[2024-03-29 16:56:01,490][00497] InferenceWorker_p0-w0: resuming experience collection (21250 times) +[2024-03-29 16:56:03,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40686.9, 300 sec: 41265.5). Total num frames: 716701696. Throughput: 0: 40989.3. Samples: 598874940. Policy #0 lag: (min: 0.0, avg: 21.0, max: 44.0) +[2024-03-29 16:56:03,840][00126] Avg episode reward: [(0, '0.489')] +[2024-03-29 16:56:03,920][00497] Updated weights for policy 0, policy_version 43745 (0.0022) +[2024-03-29 16:56:07,526][00497] Updated weights for policy 0, policy_version 43755 (0.0031) +[2024-03-29 16:56:08,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40960.0, 300 sec: 41154.4). Total num frames: 716931072. Throughput: 0: 40965.8. Samples: 599117620. 
Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 16:56:08,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 16:56:11,776][00497] Updated weights for policy 0, policy_version 43765 (0.0024) +[2024-03-29 16:56:13,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41233.9, 300 sec: 41265.4). Total num frames: 717144064. Throughput: 0: 40850.2. Samples: 599348260. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 16:56:13,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 16:56:15,620][00497] Updated weights for policy 0, policy_version 43775 (0.0017) +[2024-03-29 16:56:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40960.0, 300 sec: 41265.5). Total num frames: 717340672. Throughput: 0: 40788.0. Samples: 599478120. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 16:56:18,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 16:56:19,808][00497] Updated weights for policy 0, policy_version 43785 (0.0024) +[2024-03-29 16:56:23,697][00497] Updated weights for policy 0, policy_version 43795 (0.0021) +[2024-03-29 16:56:23,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41233.1, 300 sec: 41098.8). Total num frames: 717537280. Throughput: 0: 40966.6. Samples: 599728600. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 16:56:23,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 16:56:27,966][00497] Updated weights for policy 0, policy_version 43805 (0.0024) +[2024-03-29 16:56:28,839][00126] Fps is (10 sec: 40960.1, 60 sec: 40687.0, 300 sec: 41154.4). Total num frames: 717750272. Throughput: 0: 41043.6. Samples: 599970300. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 16:56:28,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 16:56:31,943][00497] Updated weights for policy 0, policy_version 43815 (0.0030) +[2024-03-29 16:56:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.0, 300 sec: 41154.4). Total num frames: 717946880. Throughput: 0: 40686.3. Samples: 600085020. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:33,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 16:56:35,716][00476] Signal inference workers to stop experience collection... (21300 times) +[2024-03-29 16:56:35,778][00497] InferenceWorker_p0-w0: stopping experience collection (21300 times) +[2024-03-29 16:56:35,803][00476] Signal inference workers to resume experience collection... (21300 times) +[2024-03-29 16:56:35,806][00497] InferenceWorker_p0-w0: resuming experience collection (21300 times) +[2024-03-29 16:56:36,113][00497] Updated weights for policy 0, policy_version 43825 (0.0018) +[2024-03-29 16:56:38,839][00126] Fps is (10 sec: 36044.6, 60 sec: 40413.8, 300 sec: 40876.7). Total num frames: 718110720. Throughput: 0: 41020.0. Samples: 600344420. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:38,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 16:56:39,990][00497] Updated weights for policy 0, policy_version 43835 (0.0028) +[2024-03-29 16:56:43,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40413.9, 300 sec: 41043.3). Total num frames: 718340096. Throughput: 0: 40623.1. Samples: 600566380. 
Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:43,840][00126] Avg episode reward: [(0, '0.502')] +[2024-03-29 16:56:44,139][00497] Updated weights for policy 0, policy_version 43845 (0.0020) +[2024-03-29 16:56:48,361][00497] Updated weights for policy 0, policy_version 43855 (0.0034) +[2024-03-29 16:56:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40960.1, 300 sec: 41098.8). Total num frames: 718536704. Throughput: 0: 39803.1. Samples: 600666080. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:48,841][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 16:56:52,356][00497] Updated weights for policy 0, policy_version 43865 (0.0018) +[2024-03-29 16:56:53,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40140.8, 300 sec: 40876.7). Total num frames: 718716928. Throughput: 0: 40351.1. Samples: 600933420. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:53,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 16:56:56,705][00497] Updated weights for policy 0, policy_version 43875 (0.0029) +[2024-03-29 16:56:58,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40413.8, 300 sec: 40987.8). Total num frames: 718946304. Throughput: 0: 40243.1. Samples: 601159200. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0) +[2024-03-29 16:56:58,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 16:57:00,615][00497] Updated weights for policy 0, policy_version 43885 (0.0024) +[2024-03-29 16:57:03,839][00126] Fps is (10 sec: 42598.2, 60 sec: 40686.9, 300 sec: 40987.8). Total num frames: 719142912. Throughput: 0: 40098.2. Samples: 601282540. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 16:57:03,841][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 16:57:04,069][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043894_719159296.pth... +[2024-03-29 16:57:04,442][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043298_709394432.pth +[2024-03-29 16:57:04,758][00497] Updated weights for policy 0, policy_version 43895 (0.0017) +[2024-03-29 16:57:08,499][00497] Updated weights for policy 0, policy_version 43905 (0.0022) +[2024-03-29 16:57:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40140.8, 300 sec: 40876.7). Total num frames: 719339520. Throughput: 0: 40045.8. Samples: 601530660. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 16:57:08,840][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 16:57:11,788][00476] Signal inference workers to stop experience collection... (21350 times) +[2024-03-29 16:57:11,813][00497] InferenceWorker_p0-w0: stopping experience collection (21350 times) +[2024-03-29 16:57:12,004][00476] Signal inference workers to resume experience collection... (21350 times) +[2024-03-29 16:57:12,004][00497] InferenceWorker_p0-w0: resuming experience collection (21350 times) +[2024-03-29 16:57:12,845][00497] Updated weights for policy 0, policy_version 43915 (0.0034) +[2024-03-29 16:57:13,839][00126] Fps is (10 sec: 40960.6, 60 sec: 40140.9, 300 sec: 40876.7). Total num frames: 719552512. Throughput: 0: 40029.8. Samples: 601771640. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 16:57:13,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 16:57:16,957][00497] Updated weights for policy 0, policy_version 43925 (0.0022) +[2024-03-29 16:57:18,839][00126] Fps is (10 sec: 42598.2, 60 sec: 40413.8, 300 sec: 40932.2). Total num frames: 719765504. Throughput: 0: 40171.1. Samples: 601892720. 
Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 16:57:18,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 16:57:21,036][00497] Updated weights for policy 0, policy_version 43935 (0.0022) +[2024-03-29 16:57:23,839][00126] Fps is (10 sec: 37682.8, 60 sec: 39867.7, 300 sec: 40710.1). Total num frames: 719929344. Throughput: 0: 39409.3. Samples: 602117840. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 16:57:23,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 16:57:25,324][00497] Updated weights for policy 0, policy_version 43945 (0.0018) +[2024-03-29 16:57:28,839][00126] Fps is (10 sec: 36045.1, 60 sec: 39594.7, 300 sec: 40654.5). Total num frames: 720125952. Throughput: 0: 40267.6. Samples: 602378420. Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:28,840][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 16:57:29,414][00497] Updated weights for policy 0, policy_version 43955 (0.0030) +[2024-03-29 16:57:33,312][00497] Updated weights for policy 0, policy_version 43965 (0.0032) +[2024-03-29 16:57:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 39867.7, 300 sec: 40765.6). Total num frames: 720338944. Throughput: 0: 40344.4. Samples: 602481580. Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:33,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 16:57:37,348][00497] Updated weights for policy 0, policy_version 43975 (0.0019) +[2024-03-29 16:57:38,839][00126] Fps is (10 sec: 42598.1, 60 sec: 40686.9, 300 sec: 40710.1). Total num frames: 720551936. Throughput: 0: 39900.9. Samples: 602728960. Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:38,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 16:57:41,881][00497] Updated weights for policy 0, policy_version 43985 (0.0017) +[2024-03-29 16:57:43,839][00126] Fps is (10 sec: 36044.9, 60 sec: 39321.6, 300 sec: 40487.9). Total num frames: 720699392. Throughput: 0: 40582.3. Samples: 602985400. Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:43,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 16:57:45,660][00497] Updated weights for policy 0, policy_version 43995 (0.0019) +[2024-03-29 16:57:46,983][00476] Signal inference workers to stop experience collection... (21400 times) +[2024-03-29 16:57:47,013][00497] InferenceWorker_p0-w0: stopping experience collection (21400 times) +[2024-03-29 16:57:47,173][00476] Signal inference workers to resume experience collection... (21400 times) +[2024-03-29 16:57:47,174][00497] InferenceWorker_p0-w0: resuming experience collection (21400 times) +[2024-03-29 16:57:48,839][00126] Fps is (10 sec: 37683.1, 60 sec: 39867.7, 300 sec: 40654.5). Total num frames: 720928768. Throughput: 0: 39909.8. Samples: 603078480. Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:48,840][00126] Avg episode reward: [(0, '0.460')] +[2024-03-29 16:57:49,852][00497] Updated weights for policy 0, policy_version 44005 (0.0021) +[2024-03-29 16:57:53,839][00126] Fps is (10 sec: 44236.9, 60 sec: 40413.9, 300 sec: 40654.5). Total num frames: 721141760. Throughput: 0: 39623.6. Samples: 603313720. 
Policy #0 lag: (min: 2.0, avg: 18.7, max: 43.0) +[2024-03-29 16:57:53,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 16:57:53,852][00497] Updated weights for policy 0, policy_version 44015 (0.0024) +[2024-03-29 16:57:58,480][00497] Updated weights for policy 0, policy_version 44025 (0.0018) +[2024-03-29 16:57:58,839][00126] Fps is (10 sec: 37683.4, 60 sec: 39321.6, 300 sec: 40376.8). Total num frames: 721305600. Throughput: 0: 40134.6. Samples: 603577700. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:57:58,840][00126] Avg episode reward: [(0, '0.442')] +[2024-03-29 16:58:02,192][00497] Updated weights for policy 0, policy_version 44035 (0.0022) +[2024-03-29 16:58:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40140.8, 300 sec: 40654.5). Total num frames: 721551360. Throughput: 0: 40131.1. Samples: 603698620. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:58:03,842][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 16:58:06,318][00497] Updated weights for policy 0, policy_version 44045 (0.0026) +[2024-03-29 16:58:08,839][00126] Fps is (10 sec: 44236.9, 60 sec: 40140.8, 300 sec: 40710.1). Total num frames: 721747968. Throughput: 0: 40122.2. Samples: 603923340. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:58:08,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 16:58:10,394][00497] Updated weights for policy 0, policy_version 44055 (0.0031) +[2024-03-29 16:58:13,839][00126] Fps is (10 sec: 36044.6, 60 sec: 39321.5, 300 sec: 40376.8). Total num frames: 721911808. Throughput: 0: 39519.0. Samples: 604156780. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:58:13,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 16:58:15,142][00497] Updated weights for policy 0, policy_version 44065 (0.0025) +[2024-03-29 16:58:18,822][00497] Updated weights for policy 0, policy_version 44075 (0.0031) +[2024-03-29 16:58:18,839][00126] Fps is (10 sec: 37683.5, 60 sec: 39321.7, 300 sec: 40487.9). Total num frames: 722124800. Throughput: 0: 40167.7. Samples: 604289120. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:58:18,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 16:58:22,692][00497] Updated weights for policy 0, policy_version 44085 (0.0022) +[2024-03-29 16:58:23,839][00126] Fps is (10 sec: 40960.4, 60 sec: 39867.8, 300 sec: 40543.5). Total num frames: 722321408. Throughput: 0: 39664.0. Samples: 604513840. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 16:58:23,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 16:58:24,540][00476] Signal inference workers to stop experience collection... (21450 times) +[2024-03-29 16:58:24,540][00476] Signal inference workers to resume experience collection... (21450 times) +[2024-03-29 16:58:24,583][00497] InferenceWorker_p0-w0: stopping experience collection (21450 times) +[2024-03-29 16:58:24,583][00497] InferenceWorker_p0-w0: resuming experience collection (21450 times) +[2024-03-29 16:58:26,764][00497] Updated weights for policy 0, policy_version 44095 (0.0017) +[2024-03-29 16:58:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 40140.8, 300 sec: 40376.8). Total num frames: 722534400. Throughput: 0: 39161.8. Samples: 604747680. 
Policy #0 lag: (min: 1.0, avg: 22.1, max: 40.0) +[2024-03-29 16:58:28,840][00126] Avg episode reward: [(0, '0.438')] +[2024-03-29 16:58:31,415][00497] Updated weights for policy 0, policy_version 44105 (0.0023) +[2024-03-29 16:58:33,839][00126] Fps is (10 sec: 37682.9, 60 sec: 39321.6, 300 sec: 40321.3). Total num frames: 722698240. Throughput: 0: 40300.5. Samples: 604892000. Policy #0 lag: (min: 1.0, avg: 22.1, max: 40.0) +[2024-03-29 16:58:33,840][00126] Avg episode reward: [(0, '0.476')] +[2024-03-29 16:58:35,072][00497] Updated weights for policy 0, policy_version 44115 (0.0023) +[2024-03-29 16:58:38,818][00497] Updated weights for policy 0, policy_version 44125 (0.0018) +[2024-03-29 16:58:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 39867.7, 300 sec: 40543.5). Total num frames: 722944000. Throughput: 0: 40017.3. Samples: 605114500. Policy #0 lag: (min: 1.0, avg: 22.1, max: 40.0) +[2024-03-29 16:58:38,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 16:58:42,957][00497] Updated weights for policy 0, policy_version 44135 (0.0019) +[2024-03-29 16:58:43,839][00126] Fps is (10 sec: 44237.0, 60 sec: 40686.9, 300 sec: 40376.8). Total num frames: 723140608. Throughput: 0: 39668.9. Samples: 605362800. Policy #0 lag: (min: 1.0, avg: 22.1, max: 40.0) +[2024-03-29 16:58:43,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 16:58:47,541][00497] Updated weights for policy 0, policy_version 44145 (0.0028) +[2024-03-29 16:58:48,839][00126] Fps is (10 sec: 36045.0, 60 sec: 39594.7, 300 sec: 40265.8). Total num frames: 723304448. Throughput: 0: 39959.2. Samples: 605496780. Policy #0 lag: (min: 1.0, avg: 22.1, max: 40.0) +[2024-03-29 16:58:48,840][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 16:58:51,231][00497] Updated weights for policy 0, policy_version 44155 (0.0026) +[2024-03-29 16:58:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 39867.7, 300 sec: 40377.0). Total num frames: 723533824. Throughput: 0: 39975.9. Samples: 605722260. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:58:53,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 16:58:55,629][00497] Updated weights for policy 0, policy_version 44165 (0.0028) +[2024-03-29 16:58:58,839][00126] Fps is (10 sec: 42598.0, 60 sec: 40413.8, 300 sec: 40321.3). Total num frames: 723730432. Throughput: 0: 40030.7. Samples: 605958160. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:58:58,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 16:58:59,417][00497] Updated weights for policy 0, policy_version 44175 (0.0026) +[2024-03-29 16:59:00,856][00476] Signal inference workers to stop experience collection... (21500 times) +[2024-03-29 16:59:00,897][00497] InferenceWorker_p0-w0: stopping experience collection (21500 times) +[2024-03-29 16:59:01,066][00476] Signal inference workers to resume experience collection... (21500 times) +[2024-03-29 16:59:01,066][00497] InferenceWorker_p0-w0: resuming experience collection (21500 times) +[2024-03-29 16:59:03,839][00126] Fps is (10 sec: 37683.3, 60 sec: 39321.6, 300 sec: 40154.7). Total num frames: 723910656. Throughput: 0: 39813.2. Samples: 606080720. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:59:03,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 16:59:04,054][00497] Updated weights for policy 0, policy_version 44185 (0.0018) +[2024-03-29 16:59:04,334][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044186_723943424.pth... 
+[2024-03-29 16:59:04,666][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043595_714260480.pth +[2024-03-29 16:59:07,789][00497] Updated weights for policy 0, policy_version 44195 (0.0024) +[2024-03-29 16:59:08,839][00126] Fps is (10 sec: 40960.1, 60 sec: 39867.7, 300 sec: 40321.3). Total num frames: 724140032. Throughput: 0: 40313.7. Samples: 606327960. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:59:08,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 16:59:12,307][00497] Updated weights for policy 0, policy_version 44205 (0.0022) +[2024-03-29 16:59:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40413.9, 300 sec: 40321.3). Total num frames: 724336640. Throughput: 0: 40237.7. Samples: 606558380. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:59:13,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 16:59:15,863][00497] Updated weights for policy 0, policy_version 44215 (0.0024) +[2024-03-29 16:59:18,839][00126] Fps is (10 sec: 37683.3, 60 sec: 39867.7, 300 sec: 40210.2). Total num frames: 724516864. Throughput: 0: 39705.8. Samples: 606678760. Policy #0 lag: (min: 1.0, avg: 23.8, max: 44.0) +[2024-03-29 16:59:18,840][00126] Avg episode reward: [(0, '0.467')] +[2024-03-29 16:59:20,248][00497] Updated weights for policy 0, policy_version 44225 (0.0024) +[2024-03-29 16:59:23,839][00126] Fps is (10 sec: 39321.7, 60 sec: 40140.8, 300 sec: 40265.8). Total num frames: 724729856. Throughput: 0: 40525.8. Samples: 606938160. Policy #0 lag: (min: 0.0, avg: 19.2, max: 42.0) +[2024-03-29 16:59:23,840][00126] Avg episode reward: [(0, '0.468')] +[2024-03-29 16:59:23,874][00497] Updated weights for policy 0, policy_version 44235 (0.0024) +[2024-03-29 16:59:28,319][00497] Updated weights for policy 0, policy_version 44245 (0.0019) +[2024-03-29 16:59:28,839][00126] Fps is (10 sec: 42598.5, 60 sec: 40140.8, 300 sec: 40321.3). Total num frames: 724942848. Throughput: 0: 40209.8. Samples: 607172240. Policy #0 lag: (min: 0.0, avg: 19.2, max: 42.0) +[2024-03-29 16:59:28,840][00126] Avg episode reward: [(0, '0.515')] +[2024-03-29 16:59:32,008][00497] Updated weights for policy 0, policy_version 44255 (0.0022) +[2024-03-29 16:59:33,839][00126] Fps is (10 sec: 42598.2, 60 sec: 40960.0, 300 sec: 40210.2). Total num frames: 725155840. Throughput: 0: 39792.8. Samples: 607287460. Policy #0 lag: (min: 0.0, avg: 19.2, max: 42.0) +[2024-03-29 16:59:33,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 16:59:35,769][00476] Signal inference workers to stop experience collection... (21550 times) +[2024-03-29 16:59:35,796][00497] InferenceWorker_p0-w0: stopping experience collection (21550 times) +[2024-03-29 16:59:35,967][00476] Signal inference workers to resume experience collection... (21550 times) +[2024-03-29 16:59:35,968][00497] InferenceWorker_p0-w0: resuming experience collection (21550 times) +[2024-03-29 16:59:36,681][00497] Updated weights for policy 0, policy_version 44265 (0.0024) +[2024-03-29 16:59:38,839][00126] Fps is (10 sec: 39321.4, 60 sec: 39867.7, 300 sec: 40321.3). Total num frames: 725336064. Throughput: 0: 40339.6. Samples: 607537540. Policy #0 lag: (min: 0.0, avg: 19.2, max: 42.0) +[2024-03-29 16:59:38,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 16:59:40,087][00497] Updated weights for policy 0, policy_version 44275 (0.0026) +[2024-03-29 16:59:43,839][00126] Fps is (10 sec: 37683.4, 60 sec: 39867.8, 300 sec: 40376.8). Total num frames: 725532672. Throughput: 0: 40230.3. 
Samples: 607768520. Policy #0 lag: (min: 0.0, avg: 19.2, max: 42.0) +[2024-03-29 16:59:43,840][00126] Avg episode reward: [(0, '0.431')] +[2024-03-29 16:59:44,381][00497] Updated weights for policy 0, policy_version 44285 (0.0017) +[2024-03-29 16:59:48,360][00497] Updated weights for policy 0, policy_version 44295 (0.0022) +[2024-03-29 16:59:48,842][00126] Fps is (10 sec: 40950.3, 60 sec: 40685.3, 300 sec: 40210.1). Total num frames: 725745664. Throughput: 0: 40339.7. Samples: 607896100. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:59:48,842][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 16:59:52,912][00497] Updated weights for policy 0, policy_version 44305 (0.0023) +[2024-03-29 16:59:53,839][00126] Fps is (10 sec: 40959.7, 60 sec: 40140.8, 300 sec: 40210.2). Total num frames: 725942272. Throughput: 0: 40550.6. Samples: 608152740. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:59:53,840][00126] Avg episode reward: [(0, '0.464')] +[2024-03-29 16:59:56,259][00497] Updated weights for policy 0, policy_version 44315 (0.0019) +[2024-03-29 16:59:58,839][00126] Fps is (10 sec: 42608.6, 60 sec: 40687.0, 300 sec: 40376.8). Total num frames: 726171648. Throughput: 0: 40395.6. Samples: 608376180. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 16:59:58,840][00126] Avg episode reward: [(0, '0.470')] +[2024-03-29 17:00:00,488][00497] Updated weights for policy 0, policy_version 44325 (0.0027) +[2024-03-29 17:00:03,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40686.9, 300 sec: 40265.8). Total num frames: 726351872. Throughput: 0: 40702.2. Samples: 608510360. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:00:03,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:00:04,820][00497] Updated weights for policy 0, policy_version 44335 (0.0018) +[2024-03-29 17:00:08,670][00497] Updated weights for policy 0, policy_version 44345 (0.0021) +[2024-03-29 17:00:08,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40140.8, 300 sec: 40265.9). Total num frames: 726548480. Throughput: 0: 40300.9. Samples: 608751700. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:00:08,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 17:00:09,006][00476] Signal inference workers to stop experience collection... (21600 times) +[2024-03-29 17:00:09,055][00497] InferenceWorker_p0-w0: stopping experience collection (21600 times) +[2024-03-29 17:00:09,191][00476] Signal inference workers to resume experience collection... (21600 times) +[2024-03-29 17:00:09,191][00497] InferenceWorker_p0-w0: resuming experience collection (21600 times) +[2024-03-29 17:00:12,451][00497] Updated weights for policy 0, policy_version 44355 (0.0022) +[2024-03-29 17:00:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40686.9, 300 sec: 40321.3). Total num frames: 726777856. Throughput: 0: 40491.0. Samples: 608994340. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:00:13,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:00:16,631][00497] Updated weights for policy 0, policy_version 44365 (0.0024) +[2024-03-29 17:00:18,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40413.8, 300 sec: 40265.8). Total num frames: 726941696. Throughput: 0: 40450.2. Samples: 609107720. 
Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:18,840][00126] Avg episode reward: [(0, '0.505')] +[2024-03-29 17:00:20,535][00497] Updated weights for policy 0, policy_version 44375 (0.0025) +[2024-03-29 17:00:23,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40413.8, 300 sec: 40154.7). Total num frames: 727154688. Throughput: 0: 40261.8. Samples: 609349320. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:23,841][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 17:00:24,884][00497] Updated weights for policy 0, policy_version 44385 (0.0020) +[2024-03-29 17:00:28,665][00497] Updated weights for policy 0, policy_version 44395 (0.0019) +[2024-03-29 17:00:28,839][00126] Fps is (10 sec: 42598.2, 60 sec: 40413.8, 300 sec: 40321.3). Total num frames: 727367680. Throughput: 0: 40391.9. Samples: 609586160. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:28,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:00:33,172][00497] Updated weights for policy 0, policy_version 44405 (0.0019) +[2024-03-29 17:00:33,839][00126] Fps is (10 sec: 39321.6, 60 sec: 39867.7, 300 sec: 40210.2). Total num frames: 727547904. Throughput: 0: 40249.7. Samples: 609707240. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:33,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:00:37,061][00497] Updated weights for policy 0, policy_version 44415 (0.0023) +[2024-03-29 17:00:38,840][00126] Fps is (10 sec: 39318.7, 60 sec: 40413.3, 300 sec: 40154.6). Total num frames: 727760896. Throughput: 0: 39903.8. Samples: 609948440. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:38,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 17:00:41,127][00497] Updated weights for policy 0, policy_version 44425 (0.0017) +[2024-03-29 17:00:43,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40413.9, 300 sec: 40265.8). Total num frames: 727957504. Throughput: 0: 40298.2. Samples: 610189600. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 17:00:43,840][00126] Avg episode reward: [(0, '0.507')] +[2024-03-29 17:00:44,862][00497] Updated weights for policy 0, policy_version 44435 (0.0020) +[2024-03-29 17:00:48,839][00126] Fps is (10 sec: 40963.2, 60 sec: 40415.4, 300 sec: 40210.2). Total num frames: 728170496. Throughput: 0: 40005.3. Samples: 610310600. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 17:00:48,841][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:00:49,400][00497] Updated weights for policy 0, policy_version 44445 (0.0018) +[2024-03-29 17:00:49,681][00476] Signal inference workers to stop experience collection... (21650 times) +[2024-03-29 17:00:49,702][00497] InferenceWorker_p0-w0: stopping experience collection (21650 times) +[2024-03-29 17:00:49,878][00476] Signal inference workers to resume experience collection... (21650 times) +[2024-03-29 17:00:49,878][00497] InferenceWorker_p0-w0: resuming experience collection (21650 times) +[2024-03-29 17:00:52,887][00497] Updated weights for policy 0, policy_version 44455 (0.0030) +[2024-03-29 17:00:53,839][00126] Fps is (10 sec: 40959.6, 60 sec: 40413.9, 300 sec: 40154.7). Total num frames: 728367104. Throughput: 0: 40392.8. Samples: 610569380. 
Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 17:00:53,840][00126] Avg episode reward: [(0, '0.651')] +[2024-03-29 17:00:57,048][00497] Updated weights for policy 0, policy_version 44465 (0.0027) +[2024-03-29 17:00:58,839][00126] Fps is (10 sec: 40959.8, 60 sec: 40140.7, 300 sec: 40265.8). Total num frames: 728580096. Throughput: 0: 40341.3. Samples: 610809700. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 17:00:58,842][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 17:01:00,737][00497] Updated weights for policy 0, policy_version 44475 (0.0023) +[2024-03-29 17:01:03,839][00126] Fps is (10 sec: 42598.7, 60 sec: 40687.0, 300 sec: 40210.2). Total num frames: 728793088. Throughput: 0: 40361.8. Samples: 610924000. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 17:01:03,841][00126] Avg episode reward: [(0, '0.465')] +[2024-03-29 17:01:04,065][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044483_728809472.pth... +[2024-03-29 17:01:04,399][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000043894_719159296.pth +[2024-03-29 17:01:05,365][00497] Updated weights for policy 0, policy_version 44485 (0.0030) +[2024-03-29 17:01:08,839][00126] Fps is (10 sec: 40960.6, 60 sec: 40687.0, 300 sec: 40154.7). Total num frames: 728989696. Throughput: 0: 41235.2. Samples: 611204900. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 17:01:08,840][00126] Avg episode reward: [(0, '0.715')] +[2024-03-29 17:01:08,862][00476] Saving new best policy, reward=0.715! +[2024-03-29 17:01:08,873][00497] Updated weights for policy 0, policy_version 44495 (0.0030) +[2024-03-29 17:01:12,849][00497] Updated weights for policy 0, policy_version 44505 (0.0029) +[2024-03-29 17:01:13,839][00126] Fps is (10 sec: 40959.8, 60 sec: 40413.9, 300 sec: 40210.2). Total num frames: 729202688. Throughput: 0: 40893.8. Samples: 611426380. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:13,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 17:01:16,566][00497] Updated weights for policy 0, policy_version 44515 (0.0022) +[2024-03-29 17:01:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 40265.8). Total num frames: 729415680. Throughput: 0: 41108.1. Samples: 611557100. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:18,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 17:01:21,190][00497] Updated weights for policy 0, policy_version 44525 (0.0020) +[2024-03-29 17:01:23,684][00476] Signal inference workers to stop experience collection... (21700 times) +[2024-03-29 17:01:23,702][00497] InferenceWorker_p0-w0: stopping experience collection (21700 times) +[2024-03-29 17:01:23,839][00126] Fps is (10 sec: 40960.3, 60 sec: 40960.0, 300 sec: 40210.2). Total num frames: 729612288. Throughput: 0: 41660.8. Samples: 611823140. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:23,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 17:01:23,897][00476] Signal inference workers to resume experience collection... (21700 times) +[2024-03-29 17:01:23,898][00497] InferenceWorker_p0-w0: resuming experience collection (21700 times) +[2024-03-29 17:01:24,528][00497] Updated weights for policy 0, policy_version 44535 (0.0022) +[2024-03-29 17:01:28,370][00497] Updated weights for policy 0, policy_version 44545 (0.0027) +[2024-03-29 17:01:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41233.1, 300 sec: 40321.3). 
Total num frames: 729841664. Throughput: 0: 41680.0. Samples: 612065200. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:28,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 17:01:31,945][00497] Updated weights for policy 0, policy_version 44555 (0.0036) +[2024-03-29 17:01:33,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.1, 300 sec: 40487.9). Total num frames: 730054656. Throughput: 0: 41840.8. Samples: 612193440. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:33,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 17:01:36,687][00497] Updated weights for policy 0, policy_version 44565 (0.0024) +[2024-03-29 17:01:38,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.7, 300 sec: 40321.3). Total num frames: 730234880. Throughput: 0: 41984.5. Samples: 612458680. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0) +[2024-03-29 17:01:38,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:01:39,924][00497] Updated weights for policy 0, policy_version 44575 (0.0019) +[2024-03-29 17:01:43,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 40432.4). Total num frames: 730464256. Throughput: 0: 41948.9. Samples: 612697400. Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:01:43,841][00126] Avg episode reward: [(0, '0.434')] +[2024-03-29 17:01:44,143][00497] Updated weights for policy 0, policy_version 44585 (0.0018) +[2024-03-29 17:01:47,666][00497] Updated weights for policy 0, policy_version 44595 (0.0023) +[2024-03-29 17:01:48,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42052.3, 300 sec: 40599.0). Total num frames: 730693632. Throughput: 0: 42186.3. Samples: 612822380. Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:01:48,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:01:52,066][00497] Updated weights for policy 0, policy_version 44605 (0.0022) +[2024-03-29 17:01:53,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41506.2, 300 sec: 40376.9). Total num frames: 730857472. Throughput: 0: 41908.9. Samples: 613090800. Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:01:53,840][00126] Avg episode reward: [(0, '0.654')] +[2024-03-29 17:01:55,493][00497] Updated weights for policy 0, policy_version 44615 (0.0027) +[2024-03-29 17:01:57,105][00476] Signal inference workers to stop experience collection... (21750 times) +[2024-03-29 17:01:57,174][00497] InferenceWorker_p0-w0: stopping experience collection (21750 times) +[2024-03-29 17:01:57,180][00476] Signal inference workers to resume experience collection... (21750 times) +[2024-03-29 17:01:57,201][00497] InferenceWorker_p0-w0: resuming experience collection (21750 times) +[2024-03-29 17:01:58,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.2, 300 sec: 40487.9). Total num frames: 731086848. Throughput: 0: 42220.4. Samples: 613326300. Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:01:58,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:01:59,775][00497] Updated weights for policy 0, policy_version 44625 (0.0027) +[2024-03-29 17:02:03,212][00497] Updated weights for policy 0, policy_version 44635 (0.0019) +[2024-03-29 17:02:03,839][00126] Fps is (10 sec: 47513.1, 60 sec: 42325.3, 300 sec: 40654.5). Total num frames: 731332608. Throughput: 0: 42088.3. Samples: 613451080. 
Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:02:03,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:02:07,781][00497] Updated weights for policy 0, policy_version 44645 (0.0019) +[2024-03-29 17:02:08,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 40487.9). Total num frames: 731496448. Throughput: 0: 41843.9. Samples: 613706120. Policy #0 lag: (min: 1.0, avg: 20.2, max: 41.0) +[2024-03-29 17:02:08,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 17:02:11,156][00497] Updated weights for policy 0, policy_version 44655 (0.0029) +[2024-03-29 17:02:13,839][00126] Fps is (10 sec: 39321.3, 60 sec: 42052.2, 300 sec: 40543.4). Total num frames: 731725824. Throughput: 0: 42102.5. Samples: 613959820. Policy #0 lag: (min: 2.0, avg: 20.7, max: 44.0) +[2024-03-29 17:02:13,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 17:02:15,268][00497] Updated weights for policy 0, policy_version 44665 (0.0018) +[2024-03-29 17:02:18,839][00126] Fps is (10 sec: 45875.5, 60 sec: 42325.3, 300 sec: 40765.6). Total num frames: 731955200. Throughput: 0: 42021.0. Samples: 614084380. Policy #0 lag: (min: 2.0, avg: 20.7, max: 44.0) +[2024-03-29 17:02:18,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 17:02:18,842][00497] Updated weights for policy 0, policy_version 44675 (0.0024) +[2024-03-29 17:02:23,392][00497] Updated weights for policy 0, policy_version 44685 (0.0024) +[2024-03-29 17:02:23,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42052.3, 300 sec: 40710.1). Total num frames: 732135424. Throughput: 0: 41915.1. Samples: 614344860. Policy #0 lag: (min: 2.0, avg: 20.7, max: 44.0) +[2024-03-29 17:02:23,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 17:02:25,806][00476] Signal inference workers to stop experience collection... (21800 times) +[2024-03-29 17:02:25,843][00497] InferenceWorker_p0-w0: stopping experience collection (21800 times) +[2024-03-29 17:02:25,895][00476] Signal inference workers to resume experience collection... (21800 times) +[2024-03-29 17:02:25,895][00497] InferenceWorker_p0-w0: resuming experience collection (21800 times) +[2024-03-29 17:02:26,454][00497] Updated weights for policy 0, policy_version 44695 (0.0034) +[2024-03-29 17:02:28,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 40765.6). Total num frames: 732364800. Throughput: 0: 42329.8. Samples: 614602240. Policy #0 lag: (min: 2.0, avg: 20.7, max: 44.0) +[2024-03-29 17:02:28,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 17:02:30,462][00497] Updated weights for policy 0, policy_version 44705 (0.0022) +[2024-03-29 17:02:33,829][00497] Updated weights for policy 0, policy_version 44715 (0.0020) +[2024-03-29 17:02:33,839][00126] Fps is (10 sec: 47513.5, 60 sec: 42598.5, 300 sec: 40876.7). Total num frames: 732610560. Throughput: 0: 42486.7. Samples: 614734280. Policy #0 lag: (min: 2.0, avg: 20.7, max: 44.0) +[2024-03-29 17:02:33,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:02:38,342][00497] Updated weights for policy 0, policy_version 44725 (0.0026) +[2024-03-29 17:02:38,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42598.4, 300 sec: 40987.8). Total num frames: 732790784. Throughput: 0: 42215.1. Samples: 614990480. 
Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:02:38,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:02:41,686][00497] Updated weights for policy 0, policy_version 44735 (0.0020) +[2024-03-29 17:02:43,839][00126] Fps is (10 sec: 39321.2, 60 sec: 42325.3, 300 sec: 40932.2). Total num frames: 733003776. Throughput: 0: 42776.4. Samples: 615251240. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:02:43,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:02:45,795][00497] Updated weights for policy 0, policy_version 44745 (0.0019) +[2024-03-29 17:02:48,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.2, 300 sec: 40987.8). Total num frames: 733233152. Throughput: 0: 42825.7. Samples: 615378240. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:02:48,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:02:49,276][00497] Updated weights for policy 0, policy_version 44755 (0.0027) +[2024-03-29 17:02:53,553][00497] Updated weights for policy 0, policy_version 44765 (0.0027) +[2024-03-29 17:02:53,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42871.4, 300 sec: 41098.9). Total num frames: 733429760. Throughput: 0: 42745.4. Samples: 615629660. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:02:53,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 17:02:57,029][00497] Updated weights for policy 0, policy_version 44775 (0.0028) +[2024-03-29 17:02:58,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42598.5, 300 sec: 40987.8). Total num frames: 733642752. Throughput: 0: 42804.1. Samples: 615886000. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:02:58,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 17:03:00,846][00476] Signal inference workers to stop experience collection... (21850 times) +[2024-03-29 17:03:00,923][00497] InferenceWorker_p0-w0: stopping experience collection (21850 times) +[2024-03-29 17:03:00,931][00476] Signal inference workers to resume experience collection... (21850 times) +[2024-03-29 17:03:00,953][00497] InferenceWorker_p0-w0: resuming experience collection (21850 times) +[2024-03-29 17:03:01,264][00497] Updated weights for policy 0, policy_version 44785 (0.0023) +[2024-03-29 17:03:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42598.4, 300 sec: 41154.4). Total num frames: 733888512. Throughput: 0: 42989.8. Samples: 616018920. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 17:03:03,840][00126] Avg episode reward: [(0, '0.502')] +[2024-03-29 17:03:04,117][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044794_733904896.pth... +[2024-03-29 17:03:04,465][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044186_723943424.pth +[2024-03-29 17:03:04,731][00497] Updated weights for policy 0, policy_version 44795 (0.0020) +[2024-03-29 17:03:08,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42871.4, 300 sec: 41209.9). Total num frames: 734068736. Throughput: 0: 42663.9. Samples: 616264740. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:08,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:03:09,075][00497] Updated weights for policy 0, policy_version 44805 (0.0023) +[2024-03-29 17:03:12,614][00497] Updated weights for policy 0, policy_version 44815 (0.0023) +[2024-03-29 17:03:13,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42598.5, 300 sec: 41209.9). Total num frames: 734281728. Throughput: 0: 42502.8. Samples: 616514860. 
Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:13,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:03:16,622][00497] Updated weights for policy 0, policy_version 44825 (0.0025) +[2024-03-29 17:03:18,839][00126] Fps is (10 sec: 45875.9, 60 sec: 42871.5, 300 sec: 41376.5). Total num frames: 734527488. Throughput: 0: 42496.5. Samples: 616646620. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:18,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:03:19,956][00497] Updated weights for policy 0, policy_version 44835 (0.0028) +[2024-03-29 17:03:23,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42871.5, 300 sec: 41265.5). Total num frames: 734707712. Throughput: 0: 42381.8. Samples: 616897660. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:23,841][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:03:24,606][00497] Updated weights for policy 0, policy_version 44845 (0.0018) +[2024-03-29 17:03:28,008][00497] Updated weights for policy 0, policy_version 44855 (0.0023) +[2024-03-29 17:03:28,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42598.5, 300 sec: 41432.1). Total num frames: 734920704. Throughput: 0: 42335.7. Samples: 617156340. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:28,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:03:31,857][00497] Updated weights for policy 0, policy_version 44865 (0.0028) +[2024-03-29 17:03:32,557][00476] Signal inference workers to stop experience collection... (21900 times) +[2024-03-29 17:03:32,593][00497] InferenceWorker_p0-w0: stopping experience collection (21900 times) +[2024-03-29 17:03:32,748][00476] Signal inference workers to resume experience collection... (21900 times) +[2024-03-29 17:03:32,748][00497] InferenceWorker_p0-w0: resuming experience collection (21900 times) +[2024-03-29 17:03:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 41376.5). Total num frames: 735150080. Throughput: 0: 42415.7. Samples: 617286940. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 17:03:33,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 17:03:35,212][00497] Updated weights for policy 0, policy_version 44875 (0.0025) +[2024-03-29 17:03:38,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42871.5, 300 sec: 41432.1). Total num frames: 735363072. Throughput: 0: 42578.6. Samples: 617545700. Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:03:38,840][00126] Avg episode reward: [(0, '0.459')] +[2024-03-29 17:03:39,699][00497] Updated weights for policy 0, policy_version 44885 (0.0017) +[2024-03-29 17:03:43,179][00497] Updated weights for policy 0, policy_version 44895 (0.0028) +[2024-03-29 17:03:43,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42871.6, 300 sec: 41598.7). Total num frames: 735576064. Throughput: 0: 42450.7. Samples: 617796280. Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:03:43,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:03:47,261][00497] Updated weights for policy 0, policy_version 44905 (0.0018) +[2024-03-29 17:03:48,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42871.6, 300 sec: 41598.7). Total num frames: 735805440. Throughput: 0: 42552.9. Samples: 617933800. 
Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:03:48,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:03:50,775][00497] Updated weights for policy 0, policy_version 44915 (0.0018) +[2024-03-29 17:03:53,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42871.4, 300 sec: 41598.7). Total num frames: 736002048. Throughput: 0: 42528.8. Samples: 618178540. Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:03:53,842][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 17:03:55,079][00497] Updated weights for policy 0, policy_version 44925 (0.0025) +[2024-03-29 17:03:58,515][00497] Updated weights for policy 0, policy_version 44935 (0.0025) +[2024-03-29 17:03:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 43144.5, 300 sec: 41765.3). Total num frames: 736231424. Throughput: 0: 42849.8. Samples: 618443100. Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:03:58,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 17:04:02,596][00497] Updated weights for policy 0, policy_version 44945 (0.0024) +[2024-03-29 17:04:03,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42325.3, 300 sec: 41654.2). Total num frames: 736428032. Throughput: 0: 42899.1. Samples: 618577080. Policy #0 lag: (min: 0.0, avg: 22.1, max: 43.0) +[2024-03-29 17:04:03,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 17:04:05,959][00497] Updated weights for policy 0, policy_version 44955 (0.0037) +[2024-03-29 17:04:07,931][00476] Signal inference workers to stop experience collection... (21950 times) +[2024-03-29 17:04:08,002][00497] InferenceWorker_p0-w0: stopping experience collection (21950 times) +[2024-03-29 17:04:08,007][00476] Signal inference workers to resume experience collection... (21950 times) +[2024-03-29 17:04:08,028][00497] InferenceWorker_p0-w0: resuming experience collection (21950 times) +[2024-03-29 17:04:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42871.5, 300 sec: 41709.8). Total num frames: 736641024. Throughput: 0: 42629.3. Samples: 618815980. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:08,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:04:10,515][00497] Updated weights for policy 0, policy_version 44965 (0.0018) +[2024-03-29 17:04:13,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42871.4, 300 sec: 41820.8). Total num frames: 736854016. Throughput: 0: 42905.7. Samples: 619087100. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:13,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 17:04:13,943][00497] Updated weights for policy 0, policy_version 44975 (0.0021) +[2024-03-29 17:04:17,905][00497] Updated weights for policy 0, policy_version 44985 (0.0017) +[2024-03-29 17:04:18,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42325.2, 300 sec: 41820.8). Total num frames: 737067008. Throughput: 0: 42791.8. Samples: 619212580. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:18,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:04:21,114][00497] Updated weights for policy 0, policy_version 44995 (0.0028) +[2024-03-29 17:04:23,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42598.4, 300 sec: 41765.3). Total num frames: 737263616. Throughput: 0: 42436.9. Samples: 619455360. 
Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:23,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:04:25,982][00497] Updated weights for policy 0, policy_version 45005 (0.0025) +[2024-03-29 17:04:28,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42871.4, 300 sec: 41820.9). Total num frames: 737492992. Throughput: 0: 42865.7. Samples: 619725240. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:28,841][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:04:29,338][00497] Updated weights for policy 0, policy_version 45015 (0.0035) +[2024-03-29 17:04:33,072][00497] Updated weights for policy 0, policy_version 45025 (0.0023) +[2024-03-29 17:04:33,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42871.4, 300 sec: 41987.5). Total num frames: 737722368. Throughput: 0: 42743.9. Samples: 619857280. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:33,840][00126] Avg episode reward: [(0, '0.619')] +[2024-03-29 17:04:36,468][00497] Updated weights for policy 0, policy_version 45035 (0.0023) +[2024-03-29 17:04:38,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42598.4, 300 sec: 41987.5). Total num frames: 737918976. Throughput: 0: 42724.1. Samples: 620101120. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:38,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 17:04:40,752][00476] Signal inference workers to stop experience collection... (22000 times) +[2024-03-29 17:04:40,774][00497] InferenceWorker_p0-w0: stopping experience collection (22000 times) +[2024-03-29 17:04:40,944][00476] Signal inference workers to resume experience collection... (22000 times) +[2024-03-29 17:04:40,945][00497] InferenceWorker_p0-w0: resuming experience collection (22000 times) +[2024-03-29 17:04:41,238][00497] Updated weights for policy 0, policy_version 45045 (0.0028) +[2024-03-29 17:04:43,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42325.3, 300 sec: 41932.3). Total num frames: 738115584. Throughput: 0: 42710.2. Samples: 620365060. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:43,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 17:04:44,749][00497] Updated weights for policy 0, policy_version 45055 (0.0022) +[2024-03-29 17:04:48,710][00497] Updated weights for policy 0, policy_version 45065 (0.0039) +[2024-03-29 17:04:48,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 738344960. Throughput: 0: 42593.3. Samples: 620493780. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:48,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 17:04:52,291][00497] Updated weights for policy 0, policy_version 45075 (0.0019) +[2024-03-29 17:04:53,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42598.5, 300 sec: 41987.5). Total num frames: 738557952. Throughput: 0: 42652.4. Samples: 620735340. Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:53,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 17:04:56,987][00497] Updated weights for policy 0, policy_version 45085 (0.0032) +[2024-03-29 17:04:58,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 738754560. Throughput: 0: 42586.3. Samples: 621003480. 
Policy #0 lag: (min: 1.0, avg: 20.9, max: 42.0) +[2024-03-29 17:04:58,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:05:00,166][00497] Updated weights for policy 0, policy_version 45095 (0.0025) +[2024-03-29 17:05:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42325.3, 300 sec: 42098.6). Total num frames: 738967552. Throughput: 0: 42516.2. Samples: 621125800. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:03,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 17:05:04,139][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045105_739000320.pth... +[2024-03-29 17:05:04,140][00497] Updated weights for policy 0, policy_version 45105 (0.0021) +[2024-03-29 17:05:04,459][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044483_728809472.pth +[2024-03-29 17:05:07,776][00497] Updated weights for policy 0, policy_version 45115 (0.0019) +[2024-03-29 17:05:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 42098.6). Total num frames: 739196928. Throughput: 0: 42347.1. Samples: 621360980. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:08,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 17:05:12,617][00497] Updated weights for policy 0, policy_version 45125 (0.0032) +[2024-03-29 17:05:12,952][00476] Signal inference workers to stop experience collection... (22050 times) +[2024-03-29 17:05:13,012][00497] InferenceWorker_p0-w0: stopping experience collection (22050 times) +[2024-03-29 17:05:13,048][00476] Signal inference workers to resume experience collection... (22050 times) +[2024-03-29 17:05:13,050][00497] InferenceWorker_p0-w0: resuming experience collection (22050 times) +[2024-03-29 17:05:13,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.4, 300 sec: 42154.1). Total num frames: 739377152. Throughput: 0: 42521.9. Samples: 621638720. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:13,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 17:05:15,886][00497] Updated weights for policy 0, policy_version 45135 (0.0028) +[2024-03-29 17:05:18,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42325.4, 300 sec: 42209.6). Total num frames: 739606528. Throughput: 0: 42094.6. Samples: 621751540. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:18,840][00126] Avg episode reward: [(0, '0.489')] +[2024-03-29 17:05:19,645][00497] Updated weights for policy 0, policy_version 45145 (0.0024) +[2024-03-29 17:05:23,356][00497] Updated weights for policy 0, policy_version 45155 (0.0022) +[2024-03-29 17:05:23,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42871.4, 300 sec: 42265.2). Total num frames: 739835904. Throughput: 0: 42165.8. Samples: 621998580. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:23,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 17:05:27,854][00497] Updated weights for policy 0, policy_version 45165 (0.0020) +[2024-03-29 17:05:28,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 740016128. Throughput: 0: 42256.9. Samples: 622266620. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 17:05:28,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:05:31,549][00497] Updated weights for policy 0, policy_version 45175 (0.0030) +[2024-03-29 17:05:33,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.3, 300 sec: 42265.3). Total num frames: 740229120. Throughput: 0: 42110.2. Samples: 622388740. 
Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:33,840][00126] Avg episode reward: [(0, '0.475')] +[2024-03-29 17:05:35,079][00497] Updated weights for policy 0, policy_version 45185 (0.0021) +[2024-03-29 17:05:38,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.4, 300 sec: 42376.2). Total num frames: 740458496. Throughput: 0: 42098.7. Samples: 622629780. Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:38,840][00126] Avg episode reward: [(0, '0.483')] +[2024-03-29 17:05:38,944][00497] Updated weights for policy 0, policy_version 45195 (0.0019) +[2024-03-29 17:05:43,724][00497] Updated weights for policy 0, policy_version 45205 (0.0028) +[2024-03-29 17:05:43,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 740638720. Throughput: 0: 41960.4. Samples: 622891700. Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:43,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 17:05:45,273][00476] Signal inference workers to stop experience collection... (22100 times) +[2024-03-29 17:05:45,307][00497] InferenceWorker_p0-w0: stopping experience collection (22100 times) +[2024-03-29 17:05:45,489][00476] Signal inference workers to resume experience collection... (22100 times) +[2024-03-29 17:05:45,490][00497] InferenceWorker_p0-w0: resuming experience collection (22100 times) +[2024-03-29 17:05:47,086][00497] Updated weights for policy 0, policy_version 45215 (0.0022) +[2024-03-29 17:05:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 42320.7). Total num frames: 740851712. Throughput: 0: 42009.4. Samples: 623016220. Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:48,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 17:05:50,810][00497] Updated weights for policy 0, policy_version 45225 (0.0025) +[2024-03-29 17:05:53,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42325.2, 300 sec: 42431.8). Total num frames: 741097472. Throughput: 0: 42118.5. Samples: 623256320. Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:53,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:05:54,544][00497] Updated weights for policy 0, policy_version 45235 (0.0023) +[2024-03-29 17:05:58,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42052.2, 300 sec: 42320.7). Total num frames: 741277696. Throughput: 0: 41848.3. Samples: 623521900. Policy #0 lag: (min: 1.0, avg: 19.7, max: 42.0) +[2024-03-29 17:05:58,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:05:59,393][00497] Updated weights for policy 0, policy_version 45245 (0.0024) +[2024-03-29 17:06:02,612][00497] Updated weights for policy 0, policy_version 45255 (0.0023) +[2024-03-29 17:06:03,839][00126] Fps is (10 sec: 39322.1, 60 sec: 42052.3, 300 sec: 42376.2). Total num frames: 741490688. Throughput: 0: 42228.5. Samples: 623651820. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 17:06:03,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:06:06,331][00497] Updated weights for policy 0, policy_version 45265 (0.0023) +[2024-03-29 17:06:08,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 741736448. Throughput: 0: 42189.8. Samples: 623897120. 
Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 17:06:08,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:06:09,884][00497] Updated weights for policy 0, policy_version 45275 (0.0024) +[2024-03-29 17:06:13,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42325.1, 300 sec: 42376.2). Total num frames: 741916672. Throughput: 0: 42047.4. Samples: 624158760. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 17:06:13,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:06:14,680][00497] Updated weights for policy 0, policy_version 45285 (0.0023) +[2024-03-29 17:06:17,370][00476] Signal inference workers to stop experience collection... (22150 times) +[2024-03-29 17:06:17,408][00497] InferenceWorker_p0-w0: stopping experience collection (22150 times) +[2024-03-29 17:06:17,588][00476] Signal inference workers to resume experience collection... (22150 times) +[2024-03-29 17:06:17,589][00497] InferenceWorker_p0-w0: resuming experience collection (22150 times) +[2024-03-29 17:06:17,852][00497] Updated weights for policy 0, policy_version 45295 (0.0020) +[2024-03-29 17:06:18,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 742129664. Throughput: 0: 42379.6. Samples: 624295820. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 17:06:18,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:06:21,559][00497] Updated weights for policy 0, policy_version 45305 (0.0024) +[2024-03-29 17:06:23,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 742359040. Throughput: 0: 42313.2. Samples: 624533880. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 17:06:23,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 17:06:25,308][00497] Updated weights for policy 0, policy_version 45315 (0.0020) +[2024-03-29 17:06:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.3, 300 sec: 42376.3). Total num frames: 742555648. Throughput: 0: 42309.8. Samples: 624795640. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:28,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:06:29,960][00497] Updated weights for policy 0, policy_version 45325 (0.0023) +[2024-03-29 17:06:33,517][00497] Updated weights for policy 0, policy_version 45335 (0.0030) +[2024-03-29 17:06:33,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42598.3, 300 sec: 42542.8). Total num frames: 742785024. Throughput: 0: 42447.8. Samples: 624926380. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:33,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 17:06:37,080][00497] Updated weights for policy 0, policy_version 45345 (0.0027) +[2024-03-29 17:06:38,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 742998016. Throughput: 0: 42371.3. Samples: 625163020. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:38,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:06:41,055][00497] Updated weights for policy 0, policy_version 45355 (0.0027) +[2024-03-29 17:06:43,839][00126] Fps is (10 sec: 40960.7, 60 sec: 42598.4, 300 sec: 42376.2). Total num frames: 743194624. Throughput: 0: 42374.8. Samples: 625428760. 
Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:43,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:06:45,687][00497] Updated weights for policy 0, policy_version 45365 (0.0023) +[2024-03-29 17:06:48,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 743391232. Throughput: 0: 42414.3. Samples: 625560460. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:48,841][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 17:06:49,283][00497] Updated weights for policy 0, policy_version 45375 (0.0025) +[2024-03-29 17:06:52,844][00497] Updated weights for policy 0, policy_version 45385 (0.0021) +[2024-03-29 17:06:53,059][00476] Signal inference workers to stop experience collection... (22200 times) +[2024-03-29 17:06:53,093][00497] InferenceWorker_p0-w0: stopping experience collection (22200 times) +[2024-03-29 17:06:53,266][00476] Signal inference workers to resume experience collection... (22200 times) +[2024-03-29 17:06:53,267][00497] InferenceWorker_p0-w0: resuming experience collection (22200 times) +[2024-03-29 17:06:53,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 42487.3). Total num frames: 743620608. Throughput: 0: 42386.7. Samples: 625804520. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:06:53,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:06:56,429][00497] Updated weights for policy 0, policy_version 45395 (0.0020) +[2024-03-29 17:06:58,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42598.5, 300 sec: 42376.3). Total num frames: 743833600. Throughput: 0: 42154.0. Samples: 626055680. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:06:58,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:07:01,205][00497] Updated weights for policy 0, policy_version 45405 (0.0029) +[2024-03-29 17:07:03,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42325.3, 300 sec: 42487.3). Total num frames: 744030208. Throughput: 0: 42156.9. Samples: 626192880. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:07:03,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 17:07:03,857][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045412_744030208.pth... +[2024-03-29 17:07:04,256][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000044794_733904896.pth +[2024-03-29 17:07:05,045][00497] Updated weights for policy 0, policy_version 45415 (0.0022) +[2024-03-29 17:07:08,501][00497] Updated weights for policy 0, policy_version 45425 (0.0017) +[2024-03-29 17:07:08,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 42487.3). Total num frames: 744259584. Throughput: 0: 42209.8. Samples: 626433320. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:07:08,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:07:12,087][00497] Updated weights for policy 0, policy_version 45435 (0.0021) +[2024-03-29 17:07:13,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42598.6, 300 sec: 42431.8). Total num frames: 744472576. Throughput: 0: 41915.2. Samples: 626681820. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:07:13,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 17:07:16,932][00497] Updated weights for policy 0, policy_version 45445 (0.0017) +[2024-03-29 17:07:18,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 744652800. Throughput: 0: 42026.4. Samples: 626817560. 
Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:07:18,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:07:20,520][00497] Updated weights for policy 0, policy_version 45455 (0.0030) +[2024-03-29 17:07:23,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 744882176. Throughput: 0: 42490.2. Samples: 627075080. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0) +[2024-03-29 17:07:23,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 17:07:23,843][00497] Updated weights for policy 0, policy_version 45465 (0.0019) +[2024-03-29 17:07:26,298][00476] Signal inference workers to stop experience collection... (22250 times) +[2024-03-29 17:07:26,318][00497] InferenceWorker_p0-w0: stopping experience collection (22250 times) +[2024-03-29 17:07:26,508][00476] Signal inference workers to resume experience collection... (22250 times) +[2024-03-29 17:07:26,508][00497] InferenceWorker_p0-w0: resuming experience collection (22250 times) +[2024-03-29 17:07:27,536][00497] Updated weights for policy 0, policy_version 45475 (0.0023) +[2024-03-29 17:07:28,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 745095168. Throughput: 0: 41955.1. Samples: 627316740. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:28,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:07:32,450][00497] Updated weights for policy 0, policy_version 45485 (0.0021) +[2024-03-29 17:07:33,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.2, 300 sec: 42376.2). Total num frames: 745291776. Throughput: 0: 42020.7. Samples: 627451400. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:33,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 17:07:35,984][00497] Updated weights for policy 0, policy_version 45495 (0.0021) +[2024-03-29 17:07:38,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 42431.8). Total num frames: 745521152. Throughput: 0: 42503.7. Samples: 627717180. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:38,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 17:07:39,150][00497] Updated weights for policy 0, policy_version 45505 (0.0025) +[2024-03-29 17:07:42,872][00497] Updated weights for policy 0, policy_version 45515 (0.0026) +[2024-03-29 17:07:43,839][00126] Fps is (10 sec: 45875.7, 60 sec: 42598.4, 300 sec: 42431.8). Total num frames: 745750528. Throughput: 0: 42240.4. Samples: 627956500. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:43,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:07:47,662][00497] Updated weights for policy 0, policy_version 45525 (0.0020) +[2024-03-29 17:07:48,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 745930752. Throughput: 0: 42184.4. Samples: 628091180. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:48,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:07:51,202][00497] Updated weights for policy 0, policy_version 45535 (0.0026) +[2024-03-29 17:07:53,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.4, 300 sec: 42431.8). Total num frames: 746160128. Throughput: 0: 42644.5. Samples: 628352320. 
Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:53,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:07:54,527][00497] Updated weights for policy 0, policy_version 45545 (0.0024) +[2024-03-29 17:07:58,316][00497] Updated weights for policy 0, policy_version 45555 (0.0018) +[2024-03-29 17:07:58,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42598.3, 300 sec: 42376.2). Total num frames: 746389504. Throughput: 0: 42559.9. Samples: 628597020. Policy #0 lag: (min: 1.0, avg: 22.2, max: 42.0) +[2024-03-29 17:07:58,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:08:03,204][00497] Updated weights for policy 0, policy_version 45565 (0.0024) +[2024-03-29 17:08:03,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.3, 300 sec: 42376.3). Total num frames: 746569728. Throughput: 0: 42587.5. Samples: 628734000. Policy #0 lag: (min: 1.0, avg: 22.2, max: 42.0) +[2024-03-29 17:08:03,840][00126] Avg episode reward: [(0, '0.484')] +[2024-03-29 17:08:06,379][00476] Signal inference workers to stop experience collection... (22300 times) +[2024-03-29 17:08:06,413][00497] InferenceWorker_p0-w0: stopping experience collection (22300 times) +[2024-03-29 17:08:06,559][00476] Signal inference workers to resume experience collection... (22300 times) +[2024-03-29 17:08:06,560][00497] InferenceWorker_p0-w0: resuming experience collection (22300 times) +[2024-03-29 17:08:06,819][00497] Updated weights for policy 0, policy_version 45575 (0.0026) +[2024-03-29 17:08:08,839][00126] Fps is (10 sec: 39322.1, 60 sec: 42052.3, 300 sec: 42376.2). Total num frames: 746782720. Throughput: 0: 42439.6. Samples: 628984860. Policy #0 lag: (min: 1.0, avg: 22.2, max: 42.0) +[2024-03-29 17:08:08,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 17:08:10,144][00497] Updated weights for policy 0, policy_version 45585 (0.0023) +[2024-03-29 17:08:13,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42598.3, 300 sec: 42376.2). Total num frames: 747028480. Throughput: 0: 42288.3. Samples: 629219720. Policy #0 lag: (min: 1.0, avg: 22.2, max: 42.0) +[2024-03-29 17:08:13,840][00126] Avg episode reward: [(0, '0.452')] +[2024-03-29 17:08:13,841][00497] Updated weights for policy 0, policy_version 45595 (0.0026) +[2024-03-29 17:08:18,840][00126] Fps is (10 sec: 39318.8, 60 sec: 42051.8, 300 sec: 42265.1). Total num frames: 747175936. Throughput: 0: 42241.2. Samples: 629352280. Policy #0 lag: (min: 1.0, avg: 22.2, max: 42.0) +[2024-03-29 17:08:18,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 17:08:18,897][00497] Updated weights for policy 0, policy_version 45605 (0.0023) +[2024-03-29 17:08:22,582][00497] Updated weights for policy 0, policy_version 45615 (0.0024) +[2024-03-29 17:08:23,839][00126] Fps is (10 sec: 37683.5, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 747405312. Throughput: 0: 42354.2. Samples: 629623120. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:23,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:08:25,783][00497] Updated weights for policy 0, policy_version 45625 (0.0025) +[2024-03-29 17:08:28,839][00126] Fps is (10 sec: 47516.5, 60 sec: 42598.3, 300 sec: 42376.2). Total num frames: 747651072. Throughput: 0: 42152.8. Samples: 629853380. 
Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:28,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:08:29,336][00497] Updated weights for policy 0, policy_version 45635 (0.0022) +[2024-03-29 17:08:33,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42052.2, 300 sec: 42209.6). Total num frames: 747814912. Throughput: 0: 42090.1. Samples: 629985240. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:33,840][00126] Avg episode reward: [(0, '0.456')] +[2024-03-29 17:08:34,519][00497] Updated weights for policy 0, policy_version 45645 (0.0022) +[2024-03-29 17:08:38,122][00497] Updated weights for policy 0, policy_version 45655 (0.0031) +[2024-03-29 17:08:38,157][00476] Signal inference workers to stop experience collection... (22350 times) +[2024-03-29 17:08:38,176][00497] InferenceWorker_p0-w0: stopping experience collection (22350 times) +[2024-03-29 17:08:38,371][00476] Signal inference workers to resume experience collection... (22350 times) +[2024-03-29 17:08:38,372][00497] InferenceWorker_p0-w0: resuming experience collection (22350 times) +[2024-03-29 17:08:38,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 748044288. Throughput: 0: 42137.8. Samples: 630248520. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:38,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:08:41,511][00497] Updated weights for policy 0, policy_version 45665 (0.0028) +[2024-03-29 17:08:43,839][00126] Fps is (10 sec: 45875.8, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 748273664. Throughput: 0: 41958.8. Samples: 630485160. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:43,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:08:44,990][00497] Updated weights for policy 0, policy_version 45675 (0.0028) +[2024-03-29 17:08:48,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 42209.7). Total num frames: 748453888. Throughput: 0: 41798.2. Samples: 630614920. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 17:08:48,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 17:08:50,303][00497] Updated weights for policy 0, policy_version 45685 (0.0017) +[2024-03-29 17:08:53,711][00497] Updated weights for policy 0, policy_version 45695 (0.0028) +[2024-03-29 17:08:53,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 748666880. Throughput: 0: 41941.3. Samples: 630872220. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:08:53,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 17:08:57,389][00497] Updated weights for policy 0, policy_version 45705 (0.0027) +[2024-03-29 17:08:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 748879872. Throughput: 0: 41833.9. Samples: 631102240. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:08:58,840][00126] Avg episode reward: [(0, '0.464')] +[2024-03-29 17:09:00,789][00497] Updated weights for policy 0, policy_version 45715 (0.0023) +[2024-03-29 17:09:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.1, 300 sec: 42154.1). Total num frames: 749076480. Throughput: 0: 41621.0. Samples: 631225200. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:09:03,841][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 17:09:03,862][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045721_749092864.pth... 
+[2024-03-29 17:09:04,181][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045105_739000320.pth +[2024-03-29 17:09:05,891][00497] Updated weights for policy 0, policy_version 45725 (0.0031) +[2024-03-29 17:09:08,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 749289472. Throughput: 0: 41713.4. Samples: 631500220. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:09:08,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:09:09,395][00497] Updated weights for policy 0, policy_version 45735 (0.0031) +[2024-03-29 17:09:10,554][00476] Signal inference workers to stop experience collection... (22400 times) +[2024-03-29 17:09:10,634][00497] InferenceWorker_p0-w0: stopping experience collection (22400 times) +[2024-03-29 17:09:10,721][00476] Signal inference workers to resume experience collection... (22400 times) +[2024-03-29 17:09:10,721][00497] InferenceWorker_p0-w0: resuming experience collection (22400 times) +[2024-03-29 17:09:12,911][00497] Updated weights for policy 0, policy_version 45745 (0.0034) +[2024-03-29 17:09:13,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 749518848. Throughput: 0: 41755.6. Samples: 631732380. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:09:13,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 17:09:16,395][00497] Updated weights for policy 0, policy_version 45755 (0.0025) +[2024-03-29 17:09:18,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42598.9, 300 sec: 42265.2). Total num frames: 749731840. Throughput: 0: 41515.3. Samples: 631853420. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 17:09:18,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 17:09:21,674][00497] Updated weights for policy 0, policy_version 45765 (0.0024) +[2024-03-29 17:09:23,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.1, 300 sec: 42098.5). Total num frames: 749912064. Throughput: 0: 41708.3. Samples: 632125400. Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:23,840][00126] Avg episode reward: [(0, '0.481')] +[2024-03-29 17:09:25,374][00497] Updated weights for policy 0, policy_version 45775 (0.0029) +[2024-03-29 17:09:28,595][00497] Updated weights for policy 0, policy_version 45785 (0.0024) +[2024-03-29 17:09:28,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 42098.6). Total num frames: 750141440. Throughput: 0: 41602.2. Samples: 632357260. Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:28,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:09:32,191][00497] Updated weights for policy 0, policy_version 45795 (0.0025) +[2024-03-29 17:09:33,839][00126] Fps is (10 sec: 45875.8, 60 sec: 42598.5, 300 sec: 42209.6). Total num frames: 750370816. Throughput: 0: 41456.4. Samples: 632480460. Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:33,840][00126] Avg episode reward: [(0, '0.506')] +[2024-03-29 17:09:37,196][00497] Updated weights for policy 0, policy_version 45805 (0.0032) +[2024-03-29 17:09:38,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.1, 300 sec: 42098.6). Total num frames: 750534656. Throughput: 0: 41741.0. Samples: 632750560. 
Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:38,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:09:41,247][00497] Updated weights for policy 0, policy_version 45815 (0.0029) +[2024-03-29 17:09:43,057][00476] Signal inference workers to stop experience collection... (22450 times) +[2024-03-29 17:09:43,094][00497] InferenceWorker_p0-w0: stopping experience collection (22450 times) +[2024-03-29 17:09:43,284][00476] Signal inference workers to resume experience collection... (22450 times) +[2024-03-29 17:09:43,284][00497] InferenceWorker_p0-w0: resuming experience collection (22450 times) +[2024-03-29 17:09:43,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.1, 300 sec: 42043.0). Total num frames: 750747648. Throughput: 0: 41864.4. Samples: 632986140. Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:43,840][00126] Avg episode reward: [(0, '0.634')] +[2024-03-29 17:09:44,580][00497] Updated weights for policy 0, policy_version 45825 (0.0032) +[2024-03-29 17:09:48,338][00497] Updated weights for policy 0, policy_version 45835 (0.0030) +[2024-03-29 17:09:48,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 750977024. Throughput: 0: 41573.4. Samples: 633096000. Policy #0 lag: (min: 0.0, avg: 18.9, max: 41.0) +[2024-03-29 17:09:48,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 17:09:53,134][00497] Updated weights for policy 0, policy_version 45845 (0.0017) +[2024-03-29 17:09:53,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40960.0, 300 sec: 41931.9). Total num frames: 751124480. Throughput: 0: 41405.2. Samples: 633363460. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:09:53,840][00126] Avg episode reward: [(0, '0.568')] +[2024-03-29 17:09:57,087][00497] Updated weights for policy 0, policy_version 45855 (0.0020) +[2024-03-29 17:09:58,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41233.1, 300 sec: 41987.5). Total num frames: 751353856. Throughput: 0: 41617.0. Samples: 633605140. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:09:58,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 17:10:00,501][00497] Updated weights for policy 0, policy_version 45865 (0.0017) +[2024-03-29 17:10:03,839][00126] Fps is (10 sec: 47513.9, 60 sec: 42052.4, 300 sec: 42043.0). Total num frames: 751599616. Throughput: 0: 41295.1. Samples: 633711700. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:10:03,841][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 17:10:04,123][00497] Updated weights for policy 0, policy_version 45875 (0.0023) +[2024-03-29 17:10:08,839][00126] Fps is (10 sec: 39320.9, 60 sec: 40959.9, 300 sec: 41931.9). Total num frames: 751747072. Throughput: 0: 40902.7. Samples: 633966020. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:10:08,840][00126] Avg episode reward: [(0, '0.635')] +[2024-03-29 17:10:09,297][00497] Updated weights for policy 0, policy_version 45885 (0.0023) +[2024-03-29 17:10:13,307][00497] Updated weights for policy 0, policy_version 45895 (0.0030) +[2024-03-29 17:10:13,839][00126] Fps is (10 sec: 36044.7, 60 sec: 40687.0, 300 sec: 41876.4). Total num frames: 751960064. Throughput: 0: 41484.0. Samples: 634224040. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:10:13,840][00126] Avg episode reward: [(0, '0.477')] +[2024-03-29 17:10:16,210][00476] Signal inference workers to stop experience collection... 
(22500 times) +[2024-03-29 17:10:16,238][00497] InferenceWorker_p0-w0: stopping experience collection (22500 times) +[2024-03-29 17:10:16,384][00476] Signal inference workers to resume experience collection... (22500 times) +[2024-03-29 17:10:16,385][00497] InferenceWorker_p0-w0: resuming experience collection (22500 times) +[2024-03-29 17:10:16,640][00497] Updated weights for policy 0, policy_version 45905 (0.0023) +[2024-03-29 17:10:18,839][00126] Fps is (10 sec: 44237.4, 60 sec: 40960.0, 300 sec: 41876.4). Total num frames: 752189440. Throughput: 0: 41054.2. Samples: 634327900. Policy #0 lag: (min: 0.0, avg: 22.9, max: 42.0) +[2024-03-29 17:10:18,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 17:10:20,334][00497] Updated weights for policy 0, policy_version 45915 (0.0027) +[2024-03-29 17:10:23,839][00126] Fps is (10 sec: 39320.9, 60 sec: 40686.9, 300 sec: 41820.8). Total num frames: 752353280. Throughput: 0: 40300.3. Samples: 634564080. Policy #0 lag: (min: 0.0, avg: 22.5, max: 40.0) +[2024-03-29 17:10:23,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 17:10:25,420][00497] Updated weights for policy 0, policy_version 45925 (0.0023) +[2024-03-29 17:10:28,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40413.8, 300 sec: 41820.9). Total num frames: 752566272. Throughput: 0: 41022.7. Samples: 634832160. Policy #0 lag: (min: 0.0, avg: 22.5, max: 40.0) +[2024-03-29 17:10:28,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:10:29,437][00497] Updated weights for policy 0, policy_version 45935 (0.0021) +[2024-03-29 17:10:32,561][00497] Updated weights for policy 0, policy_version 45945 (0.0025) +[2024-03-29 17:10:33,839][00126] Fps is (10 sec: 44237.3, 60 sec: 40413.8, 300 sec: 41820.8). Total num frames: 752795648. Throughput: 0: 41141.8. Samples: 634947380. Policy #0 lag: (min: 0.0, avg: 22.5, max: 40.0) +[2024-03-29 17:10:33,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 17:10:36,183][00497] Updated weights for policy 0, policy_version 45955 (0.0029) +[2024-03-29 17:10:38,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41233.1, 300 sec: 41931.9). Total num frames: 753008640. Throughput: 0: 40363.6. Samples: 635179820. Policy #0 lag: (min: 0.0, avg: 22.5, max: 40.0) +[2024-03-29 17:10:38,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 17:10:41,262][00497] Updated weights for policy 0, policy_version 45965 (0.0018) +[2024-03-29 17:10:43,839][00126] Fps is (10 sec: 39321.7, 60 sec: 40686.9, 300 sec: 41820.8). Total num frames: 753188864. Throughput: 0: 40781.7. Samples: 635440320. Policy #0 lag: (min: 0.0, avg: 22.5, max: 40.0) +[2024-03-29 17:10:43,840][00126] Avg episode reward: [(0, '0.467')] +[2024-03-29 17:10:45,404][00497] Updated weights for policy 0, policy_version 45975 (0.0019) +[2024-03-29 17:10:48,360][00476] Signal inference workers to stop experience collection... (22550 times) +[2024-03-29 17:10:48,436][00476] Signal inference workers to resume experience collection... (22550 times) +[2024-03-29 17:10:48,438][00497] InferenceWorker_p0-w0: stopping experience collection (22550 times) +[2024-03-29 17:10:48,441][00497] Updated weights for policy 0, policy_version 45985 (0.0020) +[2024-03-29 17:10:48,467][00497] InferenceWorker_p0-w0: resuming experience collection (22550 times) +[2024-03-29 17:10:48,839][00126] Fps is (10 sec: 42597.9, 60 sec: 40960.0, 300 sec: 41820.9). Total num frames: 753434624. Throughput: 0: 41199.0. Samples: 635565660. 
Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:10:48,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:10:52,061][00497] Updated weights for policy 0, policy_version 45995 (0.0023) +[2024-03-29 17:10:53,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 753631232. Throughput: 0: 40711.2. Samples: 635798020. Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:10:53,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:10:57,028][00497] Updated weights for policy 0, policy_version 46005 (0.0022) +[2024-03-29 17:10:58,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40960.0, 300 sec: 41765.3). Total num frames: 753811456. Throughput: 0: 41191.5. Samples: 636077660. Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:10:58,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 17:11:00,929][00497] Updated weights for policy 0, policy_version 46015 (0.0024) +[2024-03-29 17:11:03,839][00126] Fps is (10 sec: 40959.8, 60 sec: 40686.9, 300 sec: 41709.8). Total num frames: 754040832. Throughput: 0: 41525.7. Samples: 636196560. Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:11:03,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 17:11:04,256][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046025_754073600.pth... +[2024-03-29 17:11:04,257][00497] Updated weights for policy 0, policy_version 46025 (0.0023) +[2024-03-29 17:11:04,582][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045412_744030208.pth +[2024-03-29 17:11:07,879][00497] Updated weights for policy 0, policy_version 46035 (0.0031) +[2024-03-29 17:11:08,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 754270208. Throughput: 0: 41230.3. Samples: 636419440. Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:11:08,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 17:11:12,716][00497] Updated weights for policy 0, policy_version 46045 (0.0022) +[2024-03-29 17:11:13,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 754434048. Throughput: 0: 41335.1. Samples: 636692240. Policy #0 lag: (min: 1.0, avg: 22.8, max: 42.0) +[2024-03-29 17:11:13,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 17:11:16,891][00497] Updated weights for policy 0, policy_version 46055 (0.0022) +[2024-03-29 17:11:18,840][00126] Fps is (10 sec: 39318.2, 60 sec: 41232.4, 300 sec: 41709.7). Total num frames: 754663424. Throughput: 0: 41514.3. Samples: 636815560. Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:18,841][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:11:19,714][00476] Signal inference workers to stop experience collection... (22600 times) +[2024-03-29 17:11:19,755][00497] InferenceWorker_p0-w0: stopping experience collection (22600 times) +[2024-03-29 17:11:19,873][00476] Signal inference workers to resume experience collection... (22600 times) +[2024-03-29 17:11:19,874][00497] InferenceWorker_p0-w0: resuming experience collection (22600 times) +[2024-03-29 17:11:20,182][00497] Updated weights for policy 0, policy_version 46065 (0.0033) +[2024-03-29 17:11:23,580][00497] Updated weights for policy 0, policy_version 46075 (0.0018) +[2024-03-29 17:11:23,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42325.4, 300 sec: 41820.8). Total num frames: 754892800. Throughput: 0: 41547.9. Samples: 637049480. 
Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:23,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:11:28,397][00497] Updated weights for policy 0, policy_version 46085 (0.0022) +[2024-03-29 17:11:28,839][00126] Fps is (10 sec: 39325.4, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 755056640. Throughput: 0: 41588.5. Samples: 637311800. Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:28,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:11:32,574][00497] Updated weights for policy 0, policy_version 46095 (0.0026) +[2024-03-29 17:11:33,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 755286016. Throughput: 0: 41550.4. Samples: 637435420. Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:33,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 17:11:35,890][00497] Updated weights for policy 0, policy_version 46105 (0.0020) +[2024-03-29 17:11:38,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 755515392. Throughput: 0: 41682.7. Samples: 637673740. Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:38,840][00126] Avg episode reward: [(0, '0.502')] +[2024-03-29 17:11:39,272][00497] Updated weights for policy 0, policy_version 46115 (0.0025) +[2024-03-29 17:11:43,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 755679232. Throughput: 0: 41408.4. Samples: 637941040. Policy #0 lag: (min: 1.0, avg: 21.0, max: 44.0) +[2024-03-29 17:11:43,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:11:44,254][00497] Updated weights for policy 0, policy_version 46125 (0.0023) +[2024-03-29 17:11:48,402][00497] Updated weights for policy 0, policy_version 46135 (0.0024) +[2024-03-29 17:11:48,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40960.1, 300 sec: 41598.7). Total num frames: 755892224. Throughput: 0: 41478.3. Samples: 638063080. Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:11:48,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 17:11:51,270][00476] Signal inference workers to stop experience collection... (22650 times) +[2024-03-29 17:11:51,341][00497] InferenceWorker_p0-w0: stopping experience collection (22650 times) +[2024-03-29 17:11:51,349][00476] Signal inference workers to resume experience collection... (22650 times) +[2024-03-29 17:11:51,368][00497] InferenceWorker_p0-w0: resuming experience collection (22650 times) +[2024-03-29 17:11:51,657][00497] Updated weights for policy 0, policy_version 46145 (0.0031) +[2024-03-29 17:11:53,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 756137984. Throughput: 0: 41844.1. Samples: 638302420. Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:11:53,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 17:11:54,953][00497] Updated weights for policy 0, policy_version 46155 (0.0025) +[2024-03-29 17:11:58,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 756318208. Throughput: 0: 41590.1. Samples: 638563800. 
Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:11:58,840][00126] Avg episode reward: [(0, '0.442')] +[2024-03-29 17:11:59,803][00497] Updated weights for policy 0, policy_version 46165 (0.0020) +[2024-03-29 17:12:03,765][00497] Updated weights for policy 0, policy_version 46175 (0.0028) +[2024-03-29 17:12:03,839][00126] Fps is (10 sec: 39320.8, 60 sec: 41506.0, 300 sec: 41598.7). Total num frames: 756531200. Throughput: 0: 41927.8. Samples: 638702280. Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:12:03,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 17:12:07,184][00497] Updated weights for policy 0, policy_version 46185 (0.0022) +[2024-03-29 17:12:08,839][00126] Fps is (10 sec: 44237.2, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 756760576. Throughput: 0: 41934.8. Samples: 638936540. Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:12:08,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:12:10,489][00497] Updated weights for policy 0, policy_version 46195 (0.0019) +[2024-03-29 17:12:13,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 756940800. Throughput: 0: 41684.0. Samples: 639187580. Policy #0 lag: (min: 1.0, avg: 19.5, max: 43.0) +[2024-03-29 17:12:13,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 17:12:15,320][00497] Updated weights for policy 0, policy_version 46205 (0.0019) +[2024-03-29 17:12:18,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41506.7, 300 sec: 41598.7). Total num frames: 757153792. Throughput: 0: 41954.5. Samples: 639323380. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:18,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 17:12:19,539][00497] Updated weights for policy 0, policy_version 46215 (0.0031) +[2024-03-29 17:12:22,941][00497] Updated weights for policy 0, policy_version 46225 (0.0027) +[2024-03-29 17:12:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 757383168. Throughput: 0: 41951.1. Samples: 639561540. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:23,840][00126] Avg episode reward: [(0, '0.444')] +[2024-03-29 17:12:25,281][00476] Signal inference workers to stop experience collection... (22700 times) +[2024-03-29 17:12:25,303][00497] InferenceWorker_p0-w0: stopping experience collection (22700 times) +[2024-03-29 17:12:25,462][00476] Signal inference workers to resume experience collection... (22700 times) +[2024-03-29 17:12:25,463][00497] InferenceWorker_p0-w0: resuming experience collection (22700 times) +[2024-03-29 17:12:26,330][00497] Updated weights for policy 0, policy_version 46235 (0.0025) +[2024-03-29 17:12:28,839][00126] Fps is (10 sec: 44237.4, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 757596160. Throughput: 0: 41394.2. Samples: 639803780. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:28,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:12:31,096][00497] Updated weights for policy 0, policy_version 46245 (0.0018) +[2024-03-29 17:12:33,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.0, 300 sec: 41487.6). Total num frames: 757760000. Throughput: 0: 41738.7. Samples: 639941320. 
Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:33,840][00126] Avg episode reward: [(0, '0.568')] +[2024-03-29 17:12:35,372][00497] Updated weights for policy 0, policy_version 46255 (0.0021) +[2024-03-29 17:12:38,564][00497] Updated weights for policy 0, policy_version 46265 (0.0019) +[2024-03-29 17:12:38,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 758005760. Throughput: 0: 42013.8. Samples: 640193040. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:38,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:12:42,103][00497] Updated weights for policy 0, policy_version 46275 (0.0025) +[2024-03-29 17:12:43,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42325.2, 300 sec: 41654.2). Total num frames: 758218752. Throughput: 0: 41245.7. Samples: 640419860. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0) +[2024-03-29 17:12:43,841][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:12:46,905][00497] Updated weights for policy 0, policy_version 46285 (0.0018) +[2024-03-29 17:12:48,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 758398976. Throughput: 0: 41417.4. Samples: 640566060. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:12:48,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:12:51,001][00497] Updated weights for policy 0, policy_version 46295 (0.0025) +[2024-03-29 17:12:53,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 758628352. Throughput: 0: 41867.1. Samples: 640820560. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:12:53,841][00126] Avg episode reward: [(0, '0.662')] +[2024-03-29 17:12:54,263][00497] Updated weights for policy 0, policy_version 46305 (0.0024) +[2024-03-29 17:12:56,583][00476] Signal inference workers to stop experience collection... (22750 times) +[2024-03-29 17:12:56,655][00476] Signal inference workers to resume experience collection... (22750 times) +[2024-03-29 17:12:56,658][00497] InferenceWorker_p0-w0: stopping experience collection (22750 times) +[2024-03-29 17:12:56,686][00497] InferenceWorker_p0-w0: resuming experience collection (22750 times) +[2024-03-29 17:12:57,592][00497] Updated weights for policy 0, policy_version 46315 (0.0023) +[2024-03-29 17:12:58,839][00126] Fps is (10 sec: 45875.3, 60 sec: 42325.3, 300 sec: 41654.2). Total num frames: 758857728. Throughput: 0: 41619.5. Samples: 641060460. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:12:58,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 17:13:02,240][00497] Updated weights for policy 0, policy_version 46325 (0.0022) +[2024-03-29 17:13:03,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.2, 300 sec: 41543.1). Total num frames: 759037952. Throughput: 0: 41705.3. Samples: 641200120. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:13:03,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:13:04,095][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046329_759054336.pth... +[2024-03-29 17:13:04,412][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000045721_749092864.pth +[2024-03-29 17:13:06,336][00497] Updated weights for policy 0, policy_version 46335 (0.0021) +[2024-03-29 17:13:08,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 759250944. Throughput: 0: 42237.8. Samples: 641462240. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:13:08,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 17:13:09,692][00497] Updated weights for policy 0, policy_version 46345 (0.0023) +[2024-03-29 17:13:13,147][00497] Updated weights for policy 0, policy_version 46355 (0.0021) +[2024-03-29 17:13:13,839][00126] Fps is (10 sec: 47513.7, 60 sec: 42871.4, 300 sec: 41820.9). Total num frames: 759513088. Throughput: 0: 42087.0. Samples: 641697700. Policy #0 lag: (min: 0.0, avg: 21.4, max: 42.0) +[2024-03-29 17:13:13,840][00126] Avg episode reward: [(0, '0.451')] +[2024-03-29 17:13:17,700][00497] Updated weights for policy 0, policy_version 46365 (0.0018) +[2024-03-29 17:13:18,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 759676928. Throughput: 0: 41941.3. Samples: 641828680. Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:18,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:13:21,796][00497] Updated weights for policy 0, policy_version 46375 (0.0023) +[2024-03-29 17:13:23,839][00126] Fps is (10 sec: 37683.8, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 759889920. Throughput: 0: 42379.1. Samples: 642100100. Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:23,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:13:25,096][00497] Updated weights for policy 0, policy_version 46385 (0.0022) +[2024-03-29 17:13:28,541][00497] Updated weights for policy 0, policy_version 46395 (0.0020) +[2024-03-29 17:13:28,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 760135680. Throughput: 0: 42662.8. Samples: 642339680. Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:28,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 17:13:30,424][00476] Signal inference workers to stop experience collection... (22800 times) +[2024-03-29 17:13:30,425][00476] Signal inference workers to resume experience collection... (22800 times) +[2024-03-29 17:13:30,464][00497] InferenceWorker_p0-w0: stopping experience collection (22800 times) +[2024-03-29 17:13:30,464][00497] InferenceWorker_p0-w0: resuming experience collection (22800 times) +[2024-03-29 17:13:32,913][00497] Updated weights for policy 0, policy_version 46405 (0.0022) +[2024-03-29 17:13:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42598.4, 300 sec: 41598.7). Total num frames: 760315904. Throughput: 0: 42380.6. Samples: 642473180. Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:33,840][00126] Avg episode reward: [(0, '0.456')] +[2024-03-29 17:13:37,138][00497] Updated weights for policy 0, policy_version 46415 (0.0019) +[2024-03-29 17:13:38,839][00126] Fps is (10 sec: 39321.0, 60 sec: 42052.1, 300 sec: 41543.1). Total num frames: 760528896. Throughput: 0: 42526.0. Samples: 642734240. Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:38,842][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 17:13:40,544][00497] Updated weights for policy 0, policy_version 46425 (0.0023) +[2024-03-29 17:13:43,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42598.5, 300 sec: 41765.3). Total num frames: 760774656. Throughput: 0: 42677.0. Samples: 642980920. 
Policy #0 lag: (min: 0.0, avg: 23.7, max: 43.0) +[2024-03-29 17:13:43,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 17:13:43,928][00497] Updated weights for policy 0, policy_version 46435 (0.0020) +[2024-03-29 17:13:48,419][00497] Updated weights for policy 0, policy_version 46445 (0.0023) +[2024-03-29 17:13:48,839][00126] Fps is (10 sec: 42599.3, 60 sec: 42598.5, 300 sec: 41654.3). Total num frames: 760954880. Throughput: 0: 42359.7. Samples: 643106300. Policy #0 lag: (min: 1.0, avg: 23.5, max: 42.0) +[2024-03-29 17:13:48,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:13:52,590][00497] Updated weights for policy 0, policy_version 46455 (0.0020) +[2024-03-29 17:13:53,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42325.3, 300 sec: 41654.2). Total num frames: 761167872. Throughput: 0: 42642.6. Samples: 643381160. Policy #0 lag: (min: 1.0, avg: 23.5, max: 42.0) +[2024-03-29 17:13:53,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:13:56,035][00497] Updated weights for policy 0, policy_version 46465 (0.0025) +[2024-03-29 17:13:58,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42598.5, 300 sec: 41820.9). Total num frames: 761413632. Throughput: 0: 42733.9. Samples: 643620720. Policy #0 lag: (min: 1.0, avg: 23.5, max: 42.0) +[2024-03-29 17:13:58,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:13:59,261][00497] Updated weights for policy 0, policy_version 46475 (0.0024) +[2024-03-29 17:14:03,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42598.5, 300 sec: 41709.8). Total num frames: 761593856. Throughput: 0: 42591.1. Samples: 643745280. Policy #0 lag: (min: 1.0, avg: 23.5, max: 42.0) +[2024-03-29 17:14:03,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:14:03,861][00497] Updated weights for policy 0, policy_version 46485 (0.0019) +[2024-03-29 17:14:06,341][00476] Signal inference workers to stop experience collection... (22850 times) +[2024-03-29 17:14:06,382][00497] InferenceWorker_p0-w0: stopping experience collection (22850 times) +[2024-03-29 17:14:06,564][00476] Signal inference workers to resume experience collection... (22850 times) +[2024-03-29 17:14:06,565][00497] InferenceWorker_p0-w0: resuming experience collection (22850 times) +[2024-03-29 17:14:08,037][00497] Updated weights for policy 0, policy_version 46495 (0.0019) +[2024-03-29 17:14:08,839][00126] Fps is (10 sec: 39321.1, 60 sec: 42598.3, 300 sec: 41654.2). Total num frames: 761806848. Throughput: 0: 42499.0. Samples: 644012560. Policy #0 lag: (min: 1.0, avg: 23.5, max: 42.0) +[2024-03-29 17:14:08,840][00126] Avg episode reward: [(0, '0.457')] +[2024-03-29 17:14:11,488][00497] Updated weights for policy 0, policy_version 46505 (0.0018) +[2024-03-29 17:14:13,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42052.4, 300 sec: 41709.8). Total num frames: 762036224. Throughput: 0: 42524.5. Samples: 644253280. Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:13,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 17:14:14,827][00497] Updated weights for policy 0, policy_version 46515 (0.0022) +[2024-03-29 17:14:18,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42598.4, 300 sec: 41765.3). Total num frames: 762232832. Throughput: 0: 42197.8. Samples: 644372080. 
Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:18,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 17:14:19,424][00497] Updated weights for policy 0, policy_version 46525 (0.0027) +[2024-03-29 17:14:23,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.3, 300 sec: 41654.2). Total num frames: 762429440. Throughput: 0: 42555.7. Samples: 644649240. Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:23,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:14:23,854][00497] Updated weights for policy 0, policy_version 46536 (0.0031) +[2024-03-29 17:14:27,222][00497] Updated weights for policy 0, policy_version 46546 (0.0023) +[2024-03-29 17:14:28,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 762658816. Throughput: 0: 42327.5. Samples: 644885660. Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:28,840][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 17:14:30,767][00497] Updated weights for policy 0, policy_version 46556 (0.0026) +[2024-03-29 17:14:33,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42598.4, 300 sec: 41820.9). Total num frames: 762871808. Throughput: 0: 41992.4. Samples: 644995960. Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:33,840][00126] Avg episode reward: [(0, '0.474')] +[2024-03-29 17:14:35,662][00497] Updated weights for policy 0, policy_version 46566 (0.0022) +[2024-03-29 17:14:38,756][00476] Signal inference workers to stop experience collection... (22900 times) +[2024-03-29 17:14:38,806][00497] InferenceWorker_p0-w0: stopping experience collection (22900 times) +[2024-03-29 17:14:38,839][00126] Fps is (10 sec: 39322.2, 60 sec: 42052.4, 300 sec: 41709.8). Total num frames: 763052032. Throughput: 0: 42261.9. Samples: 645282940. Policy #0 lag: (min: 2.0, avg: 21.8, max: 42.0) +[2024-03-29 17:14:38,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:14:38,842][00476] Signal inference workers to resume experience collection... (22900 times) +[2024-03-29 17:14:38,844][00497] InferenceWorker_p0-w0: resuming experience collection (22900 times) +[2024-03-29 17:14:39,355][00497] Updated weights for policy 0, policy_version 46576 (0.0030) +[2024-03-29 17:14:42,846][00497] Updated weights for policy 0, policy_version 46586 (0.0025) +[2024-03-29 17:14:43,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 763297792. Throughput: 0: 42072.4. Samples: 645513980. Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:14:43,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 17:14:46,360][00497] Updated weights for policy 0, policy_version 46596 (0.0027) +[2024-03-29 17:14:48,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 41931.9). Total num frames: 763494400. Throughput: 0: 42048.9. Samples: 645637480. Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:14:48,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:14:50,965][00497] Updated weights for policy 0, policy_version 46606 (0.0023) +[2024-03-29 17:14:53,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 763707392. Throughput: 0: 42103.6. Samples: 645907220. 
Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:14:53,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:14:54,767][00497] Updated weights for policy 0, policy_version 46616 (0.0023) +[2024-03-29 17:14:58,136][00497] Updated weights for policy 0, policy_version 46626 (0.0023) +[2024-03-29 17:14:58,840][00126] Fps is (10 sec: 44231.3, 60 sec: 42051.4, 300 sec: 41820.7). Total num frames: 763936768. Throughput: 0: 42429.5. Samples: 646162660. Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:14:58,841][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:15:01,510][00497] Updated weights for policy 0, policy_version 46636 (0.0028) +[2024-03-29 17:15:03,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42598.4, 300 sec: 42043.0). Total num frames: 764149760. Throughput: 0: 42627.0. Samples: 646290300. Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:15:03,840][00126] Avg episode reward: [(0, '0.424')] +[2024-03-29 17:15:04,035][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046641_764166144.pth... +[2024-03-29 17:15:04,347][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046025_754073600.pth +[2024-03-29 17:15:05,825][00476] Signal inference workers to stop experience collection... (22950 times) +[2024-03-29 17:15:05,825][00476] Signal inference workers to resume experience collection... (22950 times) +[2024-03-29 17:15:05,862][00497] InferenceWorker_p0-w0: stopping experience collection (22950 times) +[2024-03-29 17:15:05,863][00497] InferenceWorker_p0-w0: resuming experience collection (22950 times) +[2024-03-29 17:15:06,147][00497] Updated weights for policy 0, policy_version 46646 (0.0019) +[2024-03-29 17:15:08,839][00126] Fps is (10 sec: 40964.9, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 764346368. Throughput: 0: 42300.0. Samples: 646552740. Policy #0 lag: (min: 1.0, avg: 21.5, max: 42.0) +[2024-03-29 17:15:08,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 17:15:09,995][00497] Updated weights for policy 0, policy_version 46656 (0.0028) +[2024-03-29 17:15:13,574][00497] Updated weights for policy 0, policy_version 46666 (0.0024) +[2024-03-29 17:15:13,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 764575744. Throughput: 0: 42397.0. Samples: 646793520. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:13,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:15:17,123][00497] Updated weights for policy 0, policy_version 46676 (0.0029) +[2024-03-29 17:15:18,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 42154.1). Total num frames: 764788736. Throughput: 0: 42792.5. Samples: 646921620. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:18,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:15:21,815][00497] Updated weights for policy 0, policy_version 46686 (0.0020) +[2024-03-29 17:15:23,839][00126] Fps is (10 sec: 39321.0, 60 sec: 42325.2, 300 sec: 42043.0). Total num frames: 764968960. Throughput: 0: 42226.5. Samples: 647183140. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:23,840][00126] Avg episode reward: [(0, '0.476')] +[2024-03-29 17:15:25,559][00497] Updated weights for policy 0, policy_version 46696 (0.0021) +[2024-03-29 17:15:28,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 765198336. Throughput: 0: 42608.9. Samples: 647431380. 
Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:28,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 17:15:29,345][00497] Updated weights for policy 0, policy_version 46706 (0.0024) +[2024-03-29 17:15:33,029][00497] Updated weights for policy 0, policy_version 46716 (0.0033) +[2024-03-29 17:15:33,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42598.3, 300 sec: 42098.5). Total num frames: 765427712. Throughput: 0: 42492.3. Samples: 647549640. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:33,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:15:37,826][00497] Updated weights for policy 0, policy_version 46726 (0.0023) +[2024-03-29 17:15:38,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 765591552. Throughput: 0: 42011.2. Samples: 647797720. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 17:15:38,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 17:15:40,257][00476] Signal inference workers to stop experience collection... (23000 times) +[2024-03-29 17:15:40,258][00476] Signal inference workers to resume experience collection... (23000 times) +[2024-03-29 17:15:40,321][00497] InferenceWorker_p0-w0: stopping experience collection (23000 times) +[2024-03-29 17:15:40,321][00497] InferenceWorker_p0-w0: resuming experience collection (23000 times) +[2024-03-29 17:15:41,514][00497] Updated weights for policy 0, policy_version 46736 (0.0021) +[2024-03-29 17:15:43,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 765804544. Throughput: 0: 41953.1. Samples: 648050500. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:15:43,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:15:45,197][00497] Updated weights for policy 0, policy_version 46746 (0.0023) +[2024-03-29 17:15:48,757][00497] Updated weights for policy 0, policy_version 46756 (0.0036) +[2024-03-29 17:15:48,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42598.4, 300 sec: 42098.5). Total num frames: 766050304. Throughput: 0: 41830.2. Samples: 648172660. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:15:48,840][00126] Avg episode reward: [(0, '0.468')] +[2024-03-29 17:15:53,260][00497] Updated weights for policy 0, policy_version 46766 (0.0018) +[2024-03-29 17:15:53,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 766230528. Throughput: 0: 41638.1. Samples: 648426460. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:15:53,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 17:15:57,089][00497] Updated weights for policy 0, policy_version 46776 (0.0023) +[2024-03-29 17:15:58,842][00126] Fps is (10 sec: 39310.4, 60 sec: 41778.0, 300 sec: 42042.6). Total num frames: 766443520. Throughput: 0: 41921.8. Samples: 648680120. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:15:58,844][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 17:16:00,932][00497] Updated weights for policy 0, policy_version 46786 (0.0023) +[2024-03-29 17:16:03,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 766672896. Throughput: 0: 41733.3. Samples: 648799620. 
Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:16:03,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:16:04,242][00497] Updated weights for policy 0, policy_version 46796 (0.0031) +[2024-03-29 17:16:08,839][00126] Fps is (10 sec: 40971.8, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 766853120. Throughput: 0: 41440.2. Samples: 649047940. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 17:16:08,841][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 17:16:08,878][00497] Updated weights for policy 0, policy_version 46806 (0.0027) +[2024-03-29 17:16:12,662][00497] Updated weights for policy 0, policy_version 46816 (0.0022) +[2024-03-29 17:16:13,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.1, 300 sec: 42043.1). Total num frames: 767066112. Throughput: 0: 41875.1. Samples: 649315760. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:13,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 17:16:14,796][00476] Signal inference workers to stop experience collection... (23050 times) +[2024-03-29 17:16:14,829][00497] InferenceWorker_p0-w0: stopping experience collection (23050 times) +[2024-03-29 17:16:15,008][00476] Signal inference workers to resume experience collection... (23050 times) +[2024-03-29 17:16:15,009][00497] InferenceWorker_p0-w0: resuming experience collection (23050 times) +[2024-03-29 17:16:16,363][00497] Updated weights for policy 0, policy_version 46826 (0.0029) +[2024-03-29 17:16:18,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 767311872. Throughput: 0: 41837.4. Samples: 649432320. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:18,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:16:19,654][00497] Updated weights for policy 0, policy_version 46836 (0.0025) +[2024-03-29 17:16:23,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 767492096. Throughput: 0: 41895.9. Samples: 649683040. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:23,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:16:24,667][00497] Updated weights for policy 0, policy_version 46846 (0.0020) +[2024-03-29 17:16:28,295][00497] Updated weights for policy 0, policy_version 46856 (0.0025) +[2024-03-29 17:16:28,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 767688704. Throughput: 0: 41983.1. Samples: 649939740. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:28,840][00126] Avg episode reward: [(0, '0.496')] +[2024-03-29 17:16:32,179][00497] Updated weights for policy 0, policy_version 46866 (0.0023) +[2024-03-29 17:16:33,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41233.1, 300 sec: 41987.5). Total num frames: 767901696. Throughput: 0: 41898.3. Samples: 650058080. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:33,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:16:35,687][00497] Updated weights for policy 0, policy_version 46876 (0.0027) +[2024-03-29 17:16:38,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 768114688. Throughput: 0: 41582.6. Samples: 650297680. 
Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0) +[2024-03-29 17:16:38,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:16:40,620][00497] Updated weights for policy 0, policy_version 46886 (0.0024) +[2024-03-29 17:16:43,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 42098.5). Total num frames: 768311296. Throughput: 0: 41705.7. Samples: 650556760. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:16:43,841][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 17:16:44,396][00497] Updated weights for policy 0, policy_version 46896 (0.0018) +[2024-03-29 17:16:44,406][00476] Signal inference workers to stop experience collection... (23100 times) +[2024-03-29 17:16:44,406][00476] Signal inference workers to resume experience collection... (23100 times) +[2024-03-29 17:16:44,446][00497] InferenceWorker_p0-w0: stopping experience collection (23100 times) +[2024-03-29 17:16:44,446][00497] InferenceWorker_p0-w0: resuming experience collection (23100 times) +[2024-03-29 17:16:47,874][00497] Updated weights for policy 0, policy_version 46906 (0.0026) +[2024-03-29 17:16:48,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41506.0, 300 sec: 42043.0). Total num frames: 768540672. Throughput: 0: 41880.7. Samples: 650684260. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:16:48,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:16:51,338][00497] Updated weights for policy 0, policy_version 46916 (0.0023) +[2024-03-29 17:16:53,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 768753664. Throughput: 0: 41491.9. Samples: 650915080. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:16:53,840][00126] Avg episode reward: [(0, '0.450')] +[2024-03-29 17:16:55,927][00497] Updated weights for policy 0, policy_version 46926 (0.0028) +[2024-03-29 17:16:58,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41781.2, 300 sec: 42098.6). Total num frames: 768950272. Throughput: 0: 42004.4. Samples: 651205960. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:16:58,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:16:59,593][00497] Updated weights for policy 0, policy_version 46936 (0.0028) +[2024-03-29 17:17:03,528][00497] Updated weights for policy 0, policy_version 46946 (0.0028) +[2024-03-29 17:17:03,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.1, 300 sec: 42098.5). Total num frames: 769179648. Throughput: 0: 41897.7. Samples: 651317720. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:17:03,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 17:17:04,123][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046948_769196032.pth... +[2024-03-29 17:17:04,459][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046329_759054336.pth +[2024-03-29 17:17:06,987][00497] Updated weights for policy 0, policy_version 46956 (0.0028) +[2024-03-29 17:17:08,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.2, 300 sec: 42209.6). Total num frames: 769392640. Throughput: 0: 41603.5. Samples: 651555200. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:17:08,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:17:11,755][00497] Updated weights for policy 0, policy_version 46966 (0.0023) +[2024-03-29 17:17:13,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 769572864. Throughput: 0: 41849.8. Samples: 651822980. 
Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:13,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 17:17:15,344][00497] Updated weights for policy 0, policy_version 46976 (0.0025) +[2024-03-29 17:17:17,673][00476] Signal inference workers to stop experience collection... (23150 times) +[2024-03-29 17:17:17,728][00497] InferenceWorker_p0-w0: stopping experience collection (23150 times) +[2024-03-29 17:17:17,841][00476] Signal inference workers to resume experience collection... (23150 times) +[2024-03-29 17:17:17,841][00497] InferenceWorker_p0-w0: resuming experience collection (23150 times) +[2024-03-29 17:17:18,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41506.1, 300 sec: 42098.6). Total num frames: 769802240. Throughput: 0: 41916.4. Samples: 651944320. Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:18,841][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:17:19,390][00497] Updated weights for policy 0, policy_version 46986 (0.0021) +[2024-03-29 17:17:22,619][00497] Updated weights for policy 0, policy_version 46996 (0.0023) +[2024-03-29 17:17:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 770015232. Throughput: 0: 41925.9. Samples: 652184340. Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:23,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 17:17:27,294][00497] Updated weights for policy 0, policy_version 47006 (0.0018) +[2024-03-29 17:17:28,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 770195456. Throughput: 0: 42129.3. Samples: 652452580. Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:28,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 17:17:31,038][00497] Updated weights for policy 0, policy_version 47016 (0.0022) +[2024-03-29 17:17:33,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 770424832. Throughput: 0: 41936.1. Samples: 652571380. Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:33,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 17:17:35,026][00497] Updated weights for policy 0, policy_version 47026 (0.0022) +[2024-03-29 17:17:38,366][00497] Updated weights for policy 0, policy_version 47036 (0.0022) +[2024-03-29 17:17:38,839][00126] Fps is (10 sec: 45875.7, 60 sec: 42325.5, 300 sec: 42154.1). Total num frames: 770654208. Throughput: 0: 42230.3. Samples: 652815440. Policy #0 lag: (min: 1.0, avg: 19.6, max: 41.0) +[2024-03-29 17:17:38,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:17:42,979][00497] Updated weights for policy 0, policy_version 47046 (0.0026) +[2024-03-29 17:17:43,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 770834432. Throughput: 0: 41628.4. Samples: 653079240. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:17:43,840][00126] Avg episode reward: [(0, '0.466')] +[2024-03-29 17:17:46,579][00497] Updated weights for policy 0, policy_version 47056 (0.0023) +[2024-03-29 17:17:48,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.3, 300 sec: 42098.5). Total num frames: 771047424. Throughput: 0: 41922.7. Samples: 653204240. 
Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:17:48,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 17:17:50,652][00497] Updated weights for policy 0, policy_version 47066 (0.0026) +[2024-03-29 17:17:52,137][00476] Signal inference workers to stop experience collection... (23200 times) +[2024-03-29 17:17:52,215][00497] InferenceWorker_p0-w0: stopping experience collection (23200 times) +[2024-03-29 17:17:52,222][00476] Signal inference workers to resume experience collection... (23200 times) +[2024-03-29 17:17:52,241][00497] InferenceWorker_p0-w0: resuming experience collection (23200 times) +[2024-03-29 17:17:53,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 771276800. Throughput: 0: 42166.2. Samples: 653452680. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:17:53,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:17:54,056][00497] Updated weights for policy 0, policy_version 47076 (0.0021) +[2024-03-29 17:17:58,732][00497] Updated weights for policy 0, policy_version 47086 (0.0018) +[2024-03-29 17:17:58,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 771457024. Throughput: 0: 41991.1. Samples: 653712580. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:17:58,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:18:02,081][00497] Updated weights for policy 0, policy_version 47096 (0.0026) +[2024-03-29 17:18:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 771686400. Throughput: 0: 42014.9. Samples: 653835000. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:18:03,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:18:06,161][00497] Updated weights for policy 0, policy_version 47106 (0.0023) +[2024-03-29 17:18:08,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.3, 300 sec: 41987.5). Total num frames: 771899392. Throughput: 0: 42173.4. Samples: 654082140. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:18:08,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:18:09,497][00497] Updated weights for policy 0, policy_version 47116 (0.0023) +[2024-03-29 17:18:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 772096000. Throughput: 0: 41912.8. Samples: 654338660. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:13,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:18:14,137][00497] Updated weights for policy 0, policy_version 47126 (0.0026) +[2024-03-29 17:18:17,630][00497] Updated weights for policy 0, policy_version 47136 (0.0018) +[2024-03-29 17:18:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 772325376. Throughput: 0: 42172.1. Samples: 654469120. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:18,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 17:18:21,747][00497] Updated weights for policy 0, policy_version 47146 (0.0024) +[2024-03-29 17:18:23,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 772521984. Throughput: 0: 42169.3. Samples: 654713060. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:23,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:18:24,425][00476] Signal inference workers to stop experience collection... 
(23250 times) +[2024-03-29 17:18:24,506][00476] Signal inference workers to resume experience collection... (23250 times) +[2024-03-29 17:18:24,507][00497] InferenceWorker_p0-w0: stopping experience collection (23250 times) +[2024-03-29 17:18:24,533][00497] InferenceWorker_p0-w0: resuming experience collection (23250 times) +[2024-03-29 17:18:25,100][00497] Updated weights for policy 0, policy_version 47156 (0.0027) +[2024-03-29 17:18:28,839][00126] Fps is (10 sec: 39320.8, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 772718592. Throughput: 0: 41811.0. Samples: 654960740. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:28,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 17:18:29,869][00497] Updated weights for policy 0, policy_version 47166 (0.0018) +[2024-03-29 17:18:33,427][00497] Updated weights for policy 0, policy_version 47176 (0.0019) +[2024-03-29 17:18:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 772947968. Throughput: 0: 42154.3. Samples: 655101180. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:33,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 17:18:37,383][00497] Updated weights for policy 0, policy_version 47186 (0.0028) +[2024-03-29 17:18:38,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 773144576. Throughput: 0: 41902.9. Samples: 655338300. Policy #0 lag: (min: 0.0, avg: 23.1, max: 41.0) +[2024-03-29 17:18:38,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:18:40,986][00497] Updated weights for policy 0, policy_version 47196 (0.0028) +[2024-03-29 17:18:43,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 773357568. Throughput: 0: 41411.5. Samples: 655576100. Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:18:43,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:18:45,627][00497] Updated weights for policy 0, policy_version 47206 (0.0025) +[2024-03-29 17:18:48,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 773570560. Throughput: 0: 41890.8. Samples: 655720080. Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:18:48,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 17:18:49,027][00497] Updated weights for policy 0, policy_version 47216 (0.0026) +[2024-03-29 17:18:52,979][00497] Updated weights for policy 0, policy_version 47226 (0.0024) +[2024-03-29 17:18:53,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 773783552. Throughput: 0: 41937.3. Samples: 655969320. Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:18:53,840][00126] Avg episode reward: [(0, '0.653')] +[2024-03-29 17:18:56,315][00497] Updated weights for policy 0, policy_version 47236 (0.0021) +[2024-03-29 17:18:58,674][00476] Signal inference workers to stop experience collection... (23300 times) +[2024-03-29 17:18:58,717][00497] InferenceWorker_p0-w0: stopping experience collection (23300 times) +[2024-03-29 17:18:58,835][00476] Signal inference workers to resume experience collection... (23300 times) +[2024-03-29 17:18:58,835][00497] InferenceWorker_p0-w0: resuming experience collection (23300 times) +[2024-03-29 17:18:58,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 773996544. Throughput: 0: 41729.5. Samples: 656216480. 
Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:18:58,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:19:01,004][00497] Updated weights for policy 0, policy_version 47246 (0.0029) +[2024-03-29 17:19:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.3, 300 sec: 41987.5). Total num frames: 774193152. Throughput: 0: 41832.4. Samples: 656351580. Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:19:03,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:19:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047253_774193152.pth... +[2024-03-29 17:19:04,170][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046641_764166144.pth +[2024-03-29 17:19:04,950][00497] Updated weights for policy 0, policy_version 47256 (0.0019) +[2024-03-29 17:19:08,606][00497] Updated weights for policy 0, policy_version 47266 (0.0018) +[2024-03-29 17:19:08,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 774406144. Throughput: 0: 42005.7. Samples: 656603320. Policy #0 lag: (min: 1.0, avg: 22.6, max: 43.0) +[2024-03-29 17:19:08,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 17:19:12,141][00497] Updated weights for policy 0, policy_version 47276 (0.0028) +[2024-03-29 17:19:13,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 774635520. Throughput: 0: 41645.9. Samples: 656834800. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:13,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:19:16,775][00497] Updated weights for policy 0, policy_version 47286 (0.0020) +[2024-03-29 17:19:18,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 774815744. Throughput: 0: 41767.1. Samples: 656980700. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:18,840][00126] Avg episode reward: [(0, '0.476')] +[2024-03-29 17:19:20,481][00497] Updated weights for policy 0, policy_version 47296 (0.0032) +[2024-03-29 17:19:23,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 775028736. Throughput: 0: 41906.9. Samples: 657224120. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:23,840][00126] Avg episode reward: [(0, '0.463')] +[2024-03-29 17:19:24,284][00497] Updated weights for policy 0, policy_version 47306 (0.0029) +[2024-03-29 17:19:27,751][00497] Updated weights for policy 0, policy_version 47316 (0.0033) +[2024-03-29 17:19:28,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 775258112. Throughput: 0: 41970.3. Samples: 657464760. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:28,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:19:32,367][00497] Updated weights for policy 0, policy_version 47326 (0.0019) +[2024-03-29 17:19:33,211][00476] Signal inference workers to stop experience collection... (23350 times) +[2024-03-29 17:19:33,246][00497] InferenceWorker_p0-w0: stopping experience collection (23350 times) +[2024-03-29 17:19:33,392][00476] Signal inference workers to resume experience collection... (23350 times) +[2024-03-29 17:19:33,393][00497] InferenceWorker_p0-w0: resuming experience collection (23350 times) +[2024-03-29 17:19:33,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.0, 300 sec: 41987.4). Total num frames: 775438336. Throughput: 0: 41844.7. Samples: 657603100. 
Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:33,840][00126] Avg episode reward: [(0, '0.454')] +[2024-03-29 17:19:36,069][00497] Updated weights for policy 0, policy_version 47336 (0.0023) +[2024-03-29 17:19:38,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41876.4). Total num frames: 775651328. Throughput: 0: 41799.4. Samples: 657850300. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:19:38,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:19:40,197][00497] Updated weights for policy 0, policy_version 47346 (0.0023) +[2024-03-29 17:19:43,550][00497] Updated weights for policy 0, policy_version 47356 (0.0022) +[2024-03-29 17:19:43,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 775897088. Throughput: 0: 41602.0. Samples: 658088580. Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:19:43,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:19:48,146][00497] Updated weights for policy 0, policy_version 47366 (0.0034) +[2024-03-29 17:19:48,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 776060928. Throughput: 0: 41665.3. Samples: 658226520. Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:19:48,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 17:19:51,936][00497] Updated weights for policy 0, policy_version 47376 (0.0025) +[2024-03-29 17:19:53,839][00126] Fps is (10 sec: 37683.9, 60 sec: 41506.1, 300 sec: 41821.0). Total num frames: 776273920. Throughput: 0: 41682.3. Samples: 658479020. Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:19:53,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 17:19:55,708][00497] Updated weights for policy 0, policy_version 47386 (0.0022) +[2024-03-29 17:19:58,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 776519680. Throughput: 0: 42211.1. Samples: 658734300. Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:19:58,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:19:58,965][00497] Updated weights for policy 0, policy_version 47396 (0.0023) +[2024-03-29 17:20:03,631][00497] Updated weights for policy 0, policy_version 47406 (0.0023) +[2024-03-29 17:20:03,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.1, 300 sec: 41876.4). Total num frames: 776699904. Throughput: 0: 41793.6. Samples: 658861420. Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:20:03,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:20:04,623][00476] Signal inference workers to stop experience collection... (23400 times) +[2024-03-29 17:20:04,625][00476] Signal inference workers to resume experience collection... (23400 times) +[2024-03-29 17:20:04,667][00497] InferenceWorker_p0-w0: stopping experience collection (23400 times) +[2024-03-29 17:20:04,667][00497] InferenceWorker_p0-w0: resuming experience collection (23400 times) +[2024-03-29 17:20:07,540][00497] Updated weights for policy 0, policy_version 47416 (0.0022) +[2024-03-29 17:20:08,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 776896512. Throughput: 0: 42125.0. Samples: 659119740. 
Policy #0 lag: (min: 0.0, avg: 21.3, max: 44.0) +[2024-03-29 17:20:08,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 17:20:11,445][00497] Updated weights for policy 0, policy_version 47426 (0.0024) +[2024-03-29 17:20:13,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 777125888. Throughput: 0: 41972.0. Samples: 659353500. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:13,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 17:20:14,781][00497] Updated weights for policy 0, policy_version 47436 (0.0027) +[2024-03-29 17:20:18,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 777306112. Throughput: 0: 41797.8. Samples: 659484000. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:18,842][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:20:19,725][00497] Updated weights for policy 0, policy_version 47446 (0.0024) +[2024-03-29 17:20:23,496][00497] Updated weights for policy 0, policy_version 47456 (0.0022) +[2024-03-29 17:20:23,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 777535488. Throughput: 0: 41737.0. Samples: 659728460. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:23,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:20:27,325][00497] Updated weights for policy 0, policy_version 47466 (0.0024) +[2024-03-29 17:20:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41506.0, 300 sec: 41765.3). Total num frames: 777748480. Throughput: 0: 41753.8. Samples: 659967500. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:28,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 17:20:30,673][00497] Updated weights for policy 0, policy_version 47476 (0.0024) +[2024-03-29 17:20:33,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.3, 300 sec: 41931.9). Total num frames: 777961472. Throughput: 0: 41404.4. Samples: 660089720. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:33,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:20:35,038][00476] Signal inference workers to stop experience collection... (23450 times) +[2024-03-29 17:20:35,076][00497] InferenceWorker_p0-w0: stopping experience collection (23450 times) +[2024-03-29 17:20:35,252][00476] Signal inference workers to resume experience collection... (23450 times) +[2024-03-29 17:20:35,253][00497] InferenceWorker_p0-w0: resuming experience collection (23450 times) +[2024-03-29 17:20:35,507][00497] Updated weights for policy 0, policy_version 47486 (0.0025) +[2024-03-29 17:20:38,839][00126] Fps is (10 sec: 39322.4, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 778141696. Throughput: 0: 41644.0. Samples: 660353000. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 17:20:38,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:20:39,467][00497] Updated weights for policy 0, policy_version 47496 (0.0026) +[2024-03-29 17:20:43,290][00497] Updated weights for policy 0, policy_version 47506 (0.0025) +[2024-03-29 17:20:43,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 778371072. Throughput: 0: 41252.8. Samples: 660590680. 
Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:20:43,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:20:46,707][00497] Updated weights for policy 0, policy_version 47516 (0.0020) +[2024-03-29 17:20:48,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.1, 300 sec: 41820.9). Total num frames: 778567680. Throughput: 0: 40929.4. Samples: 660703240. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:20:48,842][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 17:20:51,559][00497] Updated weights for policy 0, policy_version 47526 (0.0032) +[2024-03-29 17:20:53,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41506.1, 300 sec: 41765.7). Total num frames: 778764288. Throughput: 0: 41144.8. Samples: 660971260. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:20:53,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:20:55,337][00497] Updated weights for policy 0, policy_version 47536 (0.0028) +[2024-03-29 17:20:58,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40959.9, 300 sec: 41709.8). Total num frames: 778977280. Throughput: 0: 41209.7. Samples: 661207940. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:20:58,840][00126] Avg episode reward: [(0, '0.667')] +[2024-03-29 17:20:59,039][00497] Updated weights for policy 0, policy_version 47546 (0.0020) +[2024-03-29 17:21:02,421][00497] Updated weights for policy 0, policy_version 47556 (0.0022) +[2024-03-29 17:21:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41506.2, 300 sec: 41820.8). Total num frames: 779190272. Throughput: 0: 41008.4. Samples: 661329380. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:21:03,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 17:21:03,864][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047558_779190272.pth... +[2024-03-29 17:21:04,216][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000046948_769196032.pth +[2024-03-29 17:21:07,038][00476] Signal inference workers to stop experience collection... (23500 times) +[2024-03-29 17:21:07,114][00476] Signal inference workers to resume experience collection... (23500 times) +[2024-03-29 17:21:07,117][00497] InferenceWorker_p0-w0: stopping experience collection (23500 times) +[2024-03-29 17:21:07,144][00497] InferenceWorker_p0-w0: resuming experience collection (23500 times) +[2024-03-29 17:21:07,375][00497] Updated weights for policy 0, policy_version 47566 (0.0034) +[2024-03-29 17:21:08,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 779386880. Throughput: 0: 41680.8. Samples: 661604100. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:21:08,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 17:21:11,280][00497] Updated weights for policy 0, policy_version 47576 (0.0021) +[2024-03-29 17:21:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 779616256. Throughput: 0: 41450.4. Samples: 661832760. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:13,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 17:21:14,797][00497] Updated weights for policy 0, policy_version 47586 (0.0025) +[2024-03-29 17:21:18,310][00497] Updated weights for policy 0, policy_version 47596 (0.0024) +[2024-03-29 17:21:18,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 779829248. Throughput: 0: 41304.9. Samples: 661948440. 
Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:18,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 17:21:23,219][00497] Updated weights for policy 0, policy_version 47606 (0.0021) +[2024-03-29 17:21:23,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.0, 300 sec: 41765.3). Total num frames: 780009472. Throughput: 0: 41470.1. Samples: 662219160. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:23,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 17:21:27,041][00497] Updated weights for policy 0, policy_version 47616 (0.0027) +[2024-03-29 17:21:28,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 780222464. Throughput: 0: 41484.4. Samples: 662457480. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:28,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:21:30,683][00497] Updated weights for policy 0, policy_version 47626 (0.0023) +[2024-03-29 17:21:33,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 780451840. Throughput: 0: 41660.0. Samples: 662577940. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:33,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:21:33,957][00497] Updated weights for policy 0, policy_version 47636 (0.0023) +[2024-03-29 17:21:38,242][00476] Signal inference workers to stop experience collection... (23550 times) +[2024-03-29 17:21:38,310][00497] InferenceWorker_p0-w0: stopping experience collection (23550 times) +[2024-03-29 17:21:38,313][00476] Signal inference workers to resume experience collection... (23550 times) +[2024-03-29 17:21:38,335][00497] InferenceWorker_p0-w0: resuming experience collection (23550 times) +[2024-03-29 17:21:38,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 780615680. Throughput: 0: 41461.4. Samples: 662837020. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0) +[2024-03-29 17:21:38,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:21:38,926][00497] Updated weights for policy 0, policy_version 47646 (0.0018) +[2024-03-29 17:21:42,431][00497] Updated weights for policy 0, policy_version 47656 (0.0023) +[2024-03-29 17:21:43,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41506.3, 300 sec: 41765.3). Total num frames: 780861440. Throughput: 0: 41996.6. Samples: 663097780. Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:21:43,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:21:45,950][00497] Updated weights for policy 0, policy_version 47666 (0.0025) +[2024-03-29 17:21:48,839][00126] Fps is (10 sec: 47513.4, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 781090816. Throughput: 0: 41991.2. Samples: 663218980. Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:21:48,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 17:21:49,506][00497] Updated weights for policy 0, policy_version 47676 (0.0019) +[2024-03-29 17:21:53,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 781254656. Throughput: 0: 41523.2. Samples: 663472640. 
Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:21:53,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 17:21:54,339][00497] Updated weights for policy 0, policy_version 47686 (0.0026) +[2024-03-29 17:21:58,238][00497] Updated weights for policy 0, policy_version 47696 (0.0022) +[2024-03-29 17:21:58,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 781484032. Throughput: 0: 42119.1. Samples: 663728120. Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:21:58,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 17:22:01,426][00497] Updated weights for policy 0, policy_version 47706 (0.0025) +[2024-03-29 17:22:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 781713408. Throughput: 0: 42151.6. Samples: 663845260. Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:03,840][00126] Avg episode reward: [(0, '0.509')] +[2024-03-29 17:22:04,990][00497] Updated weights for policy 0, policy_version 47716 (0.0024) +[2024-03-29 17:22:08,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 781893632. Throughput: 0: 41809.8. Samples: 664100600. Policy #0 lag: (min: 1.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:08,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:22:09,992][00497] Updated weights for policy 0, policy_version 47726 (0.0028) +[2024-03-29 17:22:10,987][00476] Signal inference workers to stop experience collection... (23600 times) +[2024-03-29 17:22:11,010][00497] InferenceWorker_p0-w0: stopping experience collection (23600 times) +[2024-03-29 17:22:11,203][00476] Signal inference workers to resume experience collection... (23600 times) +[2024-03-29 17:22:11,204][00497] InferenceWorker_p0-w0: resuming experience collection (23600 times) +[2024-03-29 17:22:13,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 782090240. Throughput: 0: 42097.8. Samples: 664351880. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:13,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:22:14,050][00497] Updated weights for policy 0, policy_version 47736 (0.0024) +[2024-03-29 17:22:17,407][00497] Updated weights for policy 0, policy_version 47746 (0.0021) +[2024-03-29 17:22:18,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 782336000. Throughput: 0: 41824.4. Samples: 664460040. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:18,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:22:20,927][00497] Updated weights for policy 0, policy_version 47756 (0.0020) +[2024-03-29 17:22:23,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.4, 300 sec: 41820.9). Total num frames: 782532608. Throughput: 0: 41686.6. Samples: 664712920. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:23,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 17:22:26,064][00497] Updated weights for policy 0, policy_version 47766 (0.0028) +[2024-03-29 17:22:28,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 782712832. Throughput: 0: 41586.6. Samples: 664969180. 
Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:28,841][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:22:29,794][00497] Updated weights for policy 0, policy_version 47776 (0.0034) +[2024-03-29 17:22:33,077][00497] Updated weights for policy 0, policy_version 47786 (0.0019) +[2024-03-29 17:22:33,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 782942208. Throughput: 0: 41599.6. Samples: 665090960. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:33,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:22:36,606][00497] Updated weights for policy 0, policy_version 47796 (0.0029) +[2024-03-29 17:22:38,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.2, 300 sec: 41765.3). Total num frames: 783155200. Throughput: 0: 41271.9. Samples: 665329880. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:22:38,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 17:22:42,005][00497] Updated weights for policy 0, policy_version 47806 (0.0020) +[2024-03-29 17:22:43,222][00476] Signal inference workers to stop experience collection... (23650 times) +[2024-03-29 17:22:43,251][00497] InferenceWorker_p0-w0: stopping experience collection (23650 times) +[2024-03-29 17:22:43,414][00476] Signal inference workers to resume experience collection... (23650 times) +[2024-03-29 17:22:43,415][00497] InferenceWorker_p0-w0: resuming experience collection (23650 times) +[2024-03-29 17:22:43,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 783351808. Throughput: 0: 41444.5. Samples: 665593120. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:43,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 17:22:45,593][00497] Updated weights for policy 0, policy_version 47816 (0.0022) +[2024-03-29 17:22:48,665][00497] Updated weights for policy 0, policy_version 47826 (0.0022) +[2024-03-29 17:22:48,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 783581184. Throughput: 0: 41636.8. Samples: 665718920. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:48,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 17:22:52,501][00497] Updated weights for policy 0, policy_version 47836 (0.0018) +[2024-03-29 17:22:53,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 783777792. Throughput: 0: 41117.0. Samples: 665950860. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:53,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:22:57,575][00497] Updated weights for policy 0, policy_version 47846 (0.0018) +[2024-03-29 17:22:58,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 783974400. Throughput: 0: 41775.7. Samples: 666231780. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:22:58,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:23:01,209][00497] Updated weights for policy 0, policy_version 47856 (0.0023) +[2024-03-29 17:23:03,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 784203776. Throughput: 0: 42132.1. Samples: 666355980. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:23:03,841][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:23:04,016][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047865_784220160.pth... 
+[2024-03-29 17:23:04,388][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047253_774193152.pth +[2024-03-29 17:23:04,729][00497] Updated weights for policy 0, policy_version 47866 (0.0026) +[2024-03-29 17:23:08,117][00497] Updated weights for policy 0, policy_version 47876 (0.0025) +[2024-03-29 17:23:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.4, 300 sec: 41765.3). Total num frames: 784416768. Throughput: 0: 41422.2. Samples: 666576920. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 17:23:08,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:23:13,301][00497] Updated weights for policy 0, policy_version 47886 (0.0019) +[2024-03-29 17:23:13,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.2, 300 sec: 41543.1). Total num frames: 784580608. Throughput: 0: 41927.1. Samples: 666855900. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:13,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 17:23:14,528][00476] Signal inference workers to stop experience collection... (23700 times) +[2024-03-29 17:23:14,600][00497] InferenceWorker_p0-w0: stopping experience collection (23700 times) +[2024-03-29 17:23:14,694][00476] Signal inference workers to resume experience collection... (23700 times) +[2024-03-29 17:23:14,695][00497] InferenceWorker_p0-w0: resuming experience collection (23700 times) +[2024-03-29 17:23:17,137][00497] Updated weights for policy 0, policy_version 47896 (0.0022) +[2024-03-29 17:23:18,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 784809984. Throughput: 0: 41845.7. Samples: 666974020. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:18,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:23:20,154][00497] Updated weights for policy 0, policy_version 47906 (0.0024) +[2024-03-29 17:23:23,839][00126] Fps is (10 sec: 45875.4, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 785039360. Throughput: 0: 41495.2. Samples: 667197160. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:23,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 17:23:24,007][00497] Updated weights for policy 0, policy_version 47916 (0.0033) +[2024-03-29 17:23:28,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.0, 300 sec: 41543.1). Total num frames: 785203200. Throughput: 0: 41821.2. Samples: 667475080. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:28,841][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 17:23:29,093][00497] Updated weights for policy 0, policy_version 47926 (0.0018) +[2024-03-29 17:23:33,079][00497] Updated weights for policy 0, policy_version 47936 (0.0019) +[2024-03-29 17:23:33,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 785416192. Throughput: 0: 41446.6. Samples: 667584020. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:33,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 17:23:36,073][00497] Updated weights for policy 0, policy_version 47946 (0.0035) +[2024-03-29 17:23:38,839][00126] Fps is (10 sec: 45876.0, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 785661952. Throughput: 0: 41732.1. Samples: 667828800. 
Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:23:38,841][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:23:39,804][00497] Updated weights for policy 0, policy_version 47956 (0.0019) +[2024-03-29 17:23:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 785825792. Throughput: 0: 41108.3. Samples: 668081660. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:23:43,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:23:44,833][00497] Updated weights for policy 0, policy_version 47966 (0.0027) +[2024-03-29 17:23:46,309][00476] Signal inference workers to stop experience collection... (23750 times) +[2024-03-29 17:23:46,340][00497] InferenceWorker_p0-w0: stopping experience collection (23750 times) +[2024-03-29 17:23:46,509][00476] Signal inference workers to resume experience collection... (23750 times) +[2024-03-29 17:23:46,509][00497] InferenceWorker_p0-w0: resuming experience collection (23750 times) +[2024-03-29 17:23:48,708][00497] Updated weights for policy 0, policy_version 47976 (0.0030) +[2024-03-29 17:23:48,839][00126] Fps is (10 sec: 37683.0, 60 sec: 40960.0, 300 sec: 41543.2). Total num frames: 786038784. Throughput: 0: 41285.3. Samples: 668213820. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:23:48,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 17:23:51,965][00497] Updated weights for policy 0, policy_version 47986 (0.0031) +[2024-03-29 17:23:53,839][00126] Fps is (10 sec: 47514.2, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 786300928. Throughput: 0: 41618.7. Samples: 668449760. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:23:53,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:23:55,491][00497] Updated weights for policy 0, policy_version 47996 (0.0033) +[2024-03-29 17:23:58,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 786481152. Throughput: 0: 41160.9. Samples: 668708140. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:23:58,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 17:24:00,269][00497] Updated weights for policy 0, policy_version 48006 (0.0026) +[2024-03-29 17:24:03,839][00126] Fps is (10 sec: 34405.8, 60 sec: 40686.8, 300 sec: 41487.6). Total num frames: 786644992. Throughput: 0: 41583.9. Samples: 668845300. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:24:03,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 17:24:04,513][00497] Updated weights for policy 0, policy_version 48016 (0.0028) +[2024-03-29 17:24:07,699][00497] Updated weights for policy 0, policy_version 48026 (0.0027) +[2024-03-29 17:24:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 786907136. Throughput: 0: 41776.4. Samples: 669077100. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:24:08,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:24:11,551][00497] Updated weights for policy 0, policy_version 48036 (0.0019) +[2024-03-29 17:24:13,839][00126] Fps is (10 sec: 44237.6, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 787087360. Throughput: 0: 40806.8. Samples: 669311380. 
Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:13,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:24:16,246][00497] Updated weights for policy 0, policy_version 48046 (0.0030) +[2024-03-29 17:24:18,299][00476] Signal inference workers to stop experience collection... (23800 times) +[2024-03-29 17:24:18,362][00497] InferenceWorker_p0-w0: stopping experience collection (23800 times) +[2024-03-29 17:24:18,464][00476] Signal inference workers to resume experience collection... (23800 times) +[2024-03-29 17:24:18,464][00497] InferenceWorker_p0-w0: resuming experience collection (23800 times) +[2024-03-29 17:24:18,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 787300352. Throughput: 0: 41565.0. Samples: 669454440. Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:18,840][00126] Avg episode reward: [(0, '0.505')] +[2024-03-29 17:24:20,530][00497] Updated weights for policy 0, policy_version 48056 (0.0023) +[2024-03-29 17:24:23,431][00497] Updated weights for policy 0, policy_version 48066 (0.0020) +[2024-03-29 17:24:23,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41506.0, 300 sec: 41598.7). Total num frames: 787529728. Throughput: 0: 41619.4. Samples: 669701680. Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:23,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:24:27,175][00497] Updated weights for policy 0, policy_version 48076 (0.0030) +[2024-03-29 17:24:28,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.4, 300 sec: 41654.3). Total num frames: 787726336. Throughput: 0: 41214.3. Samples: 669936300. Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:28,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 17:24:31,780][00497] Updated weights for policy 0, policy_version 48086 (0.0021) +[2024-03-29 17:24:33,839][00126] Fps is (10 sec: 37683.8, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 787906560. Throughput: 0: 41643.6. Samples: 670087780. Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:33,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 17:24:36,259][00497] Updated weights for policy 0, policy_version 48096 (0.0023) +[2024-03-29 17:24:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41233.0, 300 sec: 41487.6). Total num frames: 788135936. Throughput: 0: 41960.4. Samples: 670337980. Policy #0 lag: (min: 1.0, avg: 22.8, max: 40.0) +[2024-03-29 17:24:38,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:24:39,338][00497] Updated weights for policy 0, policy_version 48106 (0.0024) +[2024-03-29 17:24:43,173][00497] Updated weights for policy 0, policy_version 48116 (0.0020) +[2024-03-29 17:24:43,839][00126] Fps is (10 sec: 45874.1, 60 sec: 42325.3, 300 sec: 41709.7). Total num frames: 788365312. Throughput: 0: 41092.7. Samples: 670557320. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:24:43,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 17:24:47,691][00497] Updated weights for policy 0, policy_version 48126 (0.0024) +[2024-03-29 17:24:48,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41506.1, 300 sec: 41543.1). Total num frames: 788529152. Throughput: 0: 41082.2. Samples: 670694000. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:24:48,840][00126] Avg episode reward: [(0, '0.429')] +[2024-03-29 17:24:51,122][00476] Signal inference workers to stop experience collection... 
(23850 times) +[2024-03-29 17:24:51,153][00497] InferenceWorker_p0-w0: stopping experience collection (23850 times) +[2024-03-29 17:24:51,309][00476] Signal inference workers to resume experience collection... (23850 times) +[2024-03-29 17:24:51,310][00497] InferenceWorker_p0-w0: resuming experience collection (23850 times) +[2024-03-29 17:24:52,095][00497] Updated weights for policy 0, policy_version 48136 (0.0027) +[2024-03-29 17:24:53,839][00126] Fps is (10 sec: 37684.0, 60 sec: 40686.9, 300 sec: 41432.1). Total num frames: 788742144. Throughput: 0: 41912.0. Samples: 670963140. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:24:53,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:24:55,218][00497] Updated weights for policy 0, policy_version 48146 (0.0028) +[2024-03-29 17:24:58,787][00497] Updated weights for policy 0, policy_version 48156 (0.0022) +[2024-03-29 17:24:58,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 788987904. Throughput: 0: 41706.1. Samples: 671188160. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:24:58,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 17:25:03,287][00497] Updated weights for policy 0, policy_version 48166 (0.0026) +[2024-03-29 17:25:03,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 789168128. Throughput: 0: 41264.4. Samples: 671311340. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:25:03,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 17:25:03,861][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048167_789168128.pth... +[2024-03-29 17:25:04,190][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047558_779190272.pth +[2024-03-29 17:25:07,876][00497] Updated weights for policy 0, policy_version 48176 (0.0026) +[2024-03-29 17:25:08,839][00126] Fps is (10 sec: 37683.7, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 789364736. Throughput: 0: 41847.7. Samples: 671584820. Policy #0 lag: (min: 0.0, avg: 22.2, max: 40.0) +[2024-03-29 17:25:08,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 17:25:10,922][00497] Updated weights for policy 0, policy_version 48186 (0.0028) +[2024-03-29 17:25:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.2, 300 sec: 41654.3). Total num frames: 789594112. Throughput: 0: 41502.2. Samples: 671803900. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:13,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 17:25:14,570][00497] Updated weights for policy 0, policy_version 48196 (0.0024) +[2024-03-29 17:25:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 789790720. Throughput: 0: 41071.1. Samples: 671935980. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:18,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 17:25:19,055][00497] Updated weights for policy 0, policy_version 48206 (0.0029) +[2024-03-29 17:25:23,657][00497] Updated weights for policy 0, policy_version 48216 (0.0021) +[2024-03-29 17:25:23,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40687.0, 300 sec: 41432.1). Total num frames: 789970944. Throughput: 0: 41264.0. Samples: 672194860. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:23,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:25:24,709][00476] Signal inference workers to stop experience collection... 
(23900 times) +[2024-03-29 17:25:24,732][00497] InferenceWorker_p0-w0: stopping experience collection (23900 times) +[2024-03-29 17:25:24,923][00476] Signal inference workers to resume experience collection... (23900 times) +[2024-03-29 17:25:24,924][00497] InferenceWorker_p0-w0: resuming experience collection (23900 times) +[2024-03-29 17:25:26,570][00497] Updated weights for policy 0, policy_version 48226 (0.0025) +[2024-03-29 17:25:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 790233088. Throughput: 0: 41488.7. Samples: 672424300. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:28,840][00126] Avg episode reward: [(0, '0.632')] +[2024-03-29 17:25:30,308][00497] Updated weights for policy 0, policy_version 48236 (0.0024) +[2024-03-29 17:25:33,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 790413312. Throughput: 0: 41357.4. Samples: 672555080. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:33,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 17:25:34,739][00497] Updated weights for policy 0, policy_version 48246 (0.0021) +[2024-03-29 17:25:38,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 790609920. Throughput: 0: 41316.0. Samples: 672822360. Policy #0 lag: (min: 1.0, avg: 23.4, max: 45.0) +[2024-03-29 17:25:38,841][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 17:25:39,388][00497] Updated weights for policy 0, policy_version 48256 (0.0023) +[2024-03-29 17:25:42,401][00497] Updated weights for policy 0, policy_version 48266 (0.0020) +[2024-03-29 17:25:43,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 790855680. Throughput: 0: 41540.4. Samples: 673057480. Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:25:43,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 17:25:45,879][00497] Updated weights for policy 0, policy_version 48276 (0.0018) +[2024-03-29 17:25:48,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 791052288. Throughput: 0: 41491.5. Samples: 673178460. Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:25:48,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 17:25:50,636][00497] Updated weights for policy 0, policy_version 48286 (0.0019) +[2024-03-29 17:25:53,839][00126] Fps is (10 sec: 37683.7, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 791232512. Throughput: 0: 41628.0. Samples: 673458080. Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:25:53,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 17:25:55,115][00497] Updated weights for policy 0, policy_version 48296 (0.0025) +[2024-03-29 17:25:55,545][00476] Signal inference workers to stop experience collection... (23950 times) +[2024-03-29 17:25:55,589][00497] InferenceWorker_p0-w0: stopping experience collection (23950 times) +[2024-03-29 17:25:55,743][00476] Signal inference workers to resume experience collection... (23950 times) +[2024-03-29 17:25:55,744][00497] InferenceWorker_p0-w0: resuming experience collection (23950 times) +[2024-03-29 17:25:58,045][00497] Updated weights for policy 0, policy_version 48306 (0.0021) +[2024-03-29 17:25:58,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 791478272. Throughput: 0: 41939.9. Samples: 673691200. 
Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:25:58,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:26:01,587][00497] Updated weights for policy 0, policy_version 48316 (0.0020) +[2024-03-29 17:26:03,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 791691264. Throughput: 0: 41715.9. Samples: 673813200. Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:26:03,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:26:06,297][00497] Updated weights for policy 0, policy_version 48326 (0.0018) +[2024-03-29 17:26:08,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 791887872. Throughput: 0: 41810.7. Samples: 674076340. Policy #0 lag: (min: 2.0, avg: 21.6, max: 42.0) +[2024-03-29 17:26:08,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:26:10,812][00497] Updated weights for policy 0, policy_version 48336 (0.0026) +[2024-03-29 17:26:13,788][00497] Updated weights for policy 0, policy_version 48346 (0.0032) +[2024-03-29 17:26:13,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 792100864. Throughput: 0: 41942.9. Samples: 674311740. Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:13,840][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 17:26:17,561][00497] Updated weights for policy 0, policy_version 48356 (0.0026) +[2024-03-29 17:26:18,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 792297472. Throughput: 0: 41408.0. Samples: 674418440. Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:18,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 17:26:22,227][00497] Updated weights for policy 0, policy_version 48366 (0.0028) +[2024-03-29 17:26:23,839][00126] Fps is (10 sec: 39321.3, 60 sec: 42052.1, 300 sec: 41598.7). Total num frames: 792494080. Throughput: 0: 41474.4. Samples: 674688720. Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:23,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 17:26:27,004][00497] Updated weights for policy 0, policy_version 48376 (0.0020) +[2024-03-29 17:26:27,370][00476] Signal inference workers to stop experience collection... (24000 times) +[2024-03-29 17:26:27,407][00497] InferenceWorker_p0-w0: stopping experience collection (24000 times) +[2024-03-29 17:26:27,595][00476] Signal inference workers to resume experience collection... (24000 times) +[2024-03-29 17:26:27,596][00497] InferenceWorker_p0-w0: resuming experience collection (24000 times) +[2024-03-29 17:26:28,839][00126] Fps is (10 sec: 39322.2, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 792690688. Throughput: 0: 41826.8. Samples: 674939680. Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:28,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:26:29,830][00497] Updated weights for policy 0, policy_version 48386 (0.0026) +[2024-03-29 17:26:33,519][00497] Updated weights for policy 0, policy_version 48396 (0.0024) +[2024-03-29 17:26:33,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 792936448. Throughput: 0: 41492.8. Samples: 675045640. 
Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:33,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 17:26:38,192][00497] Updated weights for policy 0, policy_version 48406 (0.0026) +[2024-03-29 17:26:38,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 793100288. Throughput: 0: 41072.5. Samples: 675306340. Policy #0 lag: (min: 2.0, avg: 21.1, max: 43.0) +[2024-03-29 17:26:38,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 17:26:42,839][00497] Updated weights for policy 0, policy_version 48416 (0.0028) +[2024-03-29 17:26:43,839][00126] Fps is (10 sec: 36045.4, 60 sec: 40687.0, 300 sec: 41376.5). Total num frames: 793296896. Throughput: 0: 41861.4. Samples: 675574960. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:26:43,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 17:26:45,705][00497] Updated weights for policy 0, policy_version 48426 (0.0028) +[2024-03-29 17:26:48,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 793542656. Throughput: 0: 41194.3. Samples: 675666940. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:26:48,842][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:26:49,265][00497] Updated weights for policy 0, policy_version 48436 (0.0026) +[2024-03-29 17:26:53,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 793722880. Throughput: 0: 40989.3. Samples: 675920860. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:26:53,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:26:54,029][00497] Updated weights for policy 0, policy_version 48446 (0.0020) +[2024-03-29 17:26:57,442][00476] Signal inference workers to stop experience collection... (24050 times) +[2024-03-29 17:26:57,495][00497] InferenceWorker_p0-w0: stopping experience collection (24050 times) +[2024-03-29 17:26:57,531][00476] Signal inference workers to resume experience collection... (24050 times) +[2024-03-29 17:26:57,533][00497] InferenceWorker_p0-w0: resuming experience collection (24050 times) +[2024-03-29 17:26:58,477][00497] Updated weights for policy 0, policy_version 48456 (0.0030) +[2024-03-29 17:26:58,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40686.9, 300 sec: 41376.5). Total num frames: 793919488. Throughput: 0: 41933.9. Samples: 676198760. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:26:58,840][00126] Avg episode reward: [(0, '0.500')] +[2024-03-29 17:27:01,608][00497] Updated weights for policy 0, policy_version 48466 (0.0029) +[2024-03-29 17:27:03,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 794181632. Throughput: 0: 41652.1. Samples: 676292780. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:27:03,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:27:03,858][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048473_794181632.pth... +[2024-03-29 17:27:04,195][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000047865_784220160.pth +[2024-03-29 17:27:05,120][00497] Updated weights for policy 0, policy_version 48476 (0.0025) +[2024-03-29 17:27:08,839][00126] Fps is (10 sec: 42598.8, 60 sec: 40960.0, 300 sec: 41543.2). Total num frames: 794345472. Throughput: 0: 41252.2. Samples: 676545060. 
Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:27:08,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:27:09,572][00497] Updated weights for policy 0, policy_version 48486 (0.0022) +[2024-03-29 17:27:13,839][00126] Fps is (10 sec: 36044.9, 60 sec: 40687.1, 300 sec: 41376.6). Total num frames: 794542080. Throughput: 0: 41803.5. Samples: 676820840. Policy #0 lag: (min: 2.0, avg: 18.2, max: 42.0) +[2024-03-29 17:27:13,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:27:14,156][00497] Updated weights for policy 0, policy_version 48496 (0.0022) +[2024-03-29 17:27:17,284][00497] Updated weights for policy 0, policy_version 48506 (0.0028) +[2024-03-29 17:27:18,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 794804224. Throughput: 0: 41745.9. Samples: 676924200. Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:18,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 17:27:20,897][00497] Updated weights for policy 0, policy_version 48516 (0.0025) +[2024-03-29 17:27:23,839][00126] Fps is (10 sec: 44236.1, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 794984448. Throughput: 0: 41339.8. Samples: 677166640. Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:23,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:27:25,483][00497] Updated weights for policy 0, policy_version 48526 (0.0022) +[2024-03-29 17:27:25,709][00476] Signal inference workers to stop experience collection... (24100 times) +[2024-03-29 17:27:25,786][00476] Signal inference workers to resume experience collection... (24100 times) +[2024-03-29 17:27:25,787][00497] InferenceWorker_p0-w0: stopping experience collection (24100 times) +[2024-03-29 17:27:25,814][00497] InferenceWorker_p0-w0: resuming experience collection (24100 times) +[2024-03-29 17:27:28,839][00126] Fps is (10 sec: 36044.8, 60 sec: 41233.0, 300 sec: 41432.1). Total num frames: 795164672. Throughput: 0: 41205.7. Samples: 677429220. Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:28,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:27:29,957][00497] Updated weights for policy 0, policy_version 48536 (0.0024) +[2024-03-29 17:27:32,992][00497] Updated weights for policy 0, policy_version 48546 (0.0034) +[2024-03-29 17:27:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 795410432. Throughput: 0: 42126.1. Samples: 677562620. Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:33,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 17:27:36,523][00497] Updated weights for policy 0, policy_version 48556 (0.0023) +[2024-03-29 17:27:38,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 795623424. Throughput: 0: 41505.3. Samples: 677788600. Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:38,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:27:41,243][00497] Updated weights for policy 0, policy_version 48566 (0.0023) +[2024-03-29 17:27:43,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.2, 300 sec: 41487.6). Total num frames: 795820032. Throughput: 0: 41102.2. Samples: 678048360. 
Policy #0 lag: (min: 0.0, avg: 17.0, max: 42.0) +[2024-03-29 17:27:43,840][00126] Avg episode reward: [(0, '0.500')] +[2024-03-29 17:27:45,818][00497] Updated weights for policy 0, policy_version 48576 (0.0024) +[2024-03-29 17:27:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 796016640. Throughput: 0: 42185.4. Samples: 678191120. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:27:48,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:27:48,897][00497] Updated weights for policy 0, policy_version 48586 (0.0033) +[2024-03-29 17:27:52,449][00497] Updated weights for policy 0, policy_version 48596 (0.0026) +[2024-03-29 17:27:53,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.2, 300 sec: 41654.2). Total num frames: 796262400. Throughput: 0: 41597.6. Samples: 678416960. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:27:53,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:27:57,029][00497] Updated weights for policy 0, policy_version 48606 (0.0024) +[2024-03-29 17:27:57,062][00476] Signal inference workers to stop experience collection... (24150 times) +[2024-03-29 17:27:57,100][00497] InferenceWorker_p0-w0: stopping experience collection (24150 times) +[2024-03-29 17:27:57,254][00476] Signal inference workers to resume experience collection... (24150 times) +[2024-03-29 17:27:57,254][00497] InferenceWorker_p0-w0: resuming experience collection (24150 times) +[2024-03-29 17:27:58,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 41543.2). Total num frames: 796459008. Throughput: 0: 41267.6. Samples: 678677880. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:27:58,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:28:01,210][00497] Updated weights for policy 0, policy_version 48616 (0.0018) +[2024-03-29 17:28:03,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 796655616. Throughput: 0: 42070.2. Samples: 678817360. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:28:03,840][00126] Avg episode reward: [(0, '0.427')] +[2024-03-29 17:28:04,669][00497] Updated weights for policy 0, policy_version 48626 (0.0034) +[2024-03-29 17:28:08,395][00497] Updated weights for policy 0, policy_version 48636 (0.0024) +[2024-03-29 17:28:08,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 796868608. Throughput: 0: 41255.6. Samples: 679023140. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:28:08,840][00126] Avg episode reward: [(0, '0.644')] +[2024-03-29 17:28:12,943][00497] Updated weights for policy 0, policy_version 48646 (0.0025) +[2024-03-29 17:28:13,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 797048832. Throughput: 0: 41409.7. Samples: 679292660. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0) +[2024-03-29 17:28:13,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 17:28:17,236][00497] Updated weights for policy 0, policy_version 48656 (0.0025) +[2024-03-29 17:28:18,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40687.0, 300 sec: 41376.5). Total num frames: 797245440. Throughput: 0: 41345.5. Samples: 679423160. 
Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:18,840][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 17:28:20,647][00497] Updated weights for policy 0, policy_version 48666 (0.0022) +[2024-03-29 17:28:23,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 797474816. Throughput: 0: 41267.5. Samples: 679645640. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:23,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 17:28:24,380][00497] Updated weights for policy 0, policy_version 48676 (0.0031) +[2024-03-29 17:28:25,340][00476] Signal inference workers to stop experience collection... (24200 times) +[2024-03-29 17:28:25,365][00497] InferenceWorker_p0-w0: stopping experience collection (24200 times) +[2024-03-29 17:28:25,526][00476] Signal inference workers to resume experience collection... (24200 times) +[2024-03-29 17:28:25,527][00497] InferenceWorker_p0-w0: resuming experience collection (24200 times) +[2024-03-29 17:28:28,727][00497] Updated weights for policy 0, policy_version 48686 (0.0017) +[2024-03-29 17:28:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 797671424. Throughput: 0: 41529.0. Samples: 679917160. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:28,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 17:28:33,056][00497] Updated weights for policy 0, policy_version 48696 (0.0024) +[2024-03-29 17:28:33,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40687.0, 300 sec: 41321.0). Total num frames: 797851648. Throughput: 0: 40959.5. Samples: 680034300. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:33,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 17:28:36,452][00497] Updated weights for policy 0, policy_version 48706 (0.0024) +[2024-03-29 17:28:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.1, 300 sec: 41654.3). Total num frames: 798113792. Throughput: 0: 40983.7. Samples: 680261220. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:38,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:28:40,185][00497] Updated weights for policy 0, policy_version 48716 (0.0026) +[2024-03-29 17:28:43,839][00126] Fps is (10 sec: 42597.7, 60 sec: 40959.9, 300 sec: 41487.6). Total num frames: 798277632. Throughput: 0: 40945.2. Samples: 680520420. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 17:28:43,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:28:44,695][00497] Updated weights for policy 0, policy_version 48726 (0.0028) +[2024-03-29 17:28:48,839][00126] Fps is (10 sec: 36044.8, 60 sec: 40960.0, 300 sec: 41265.5). Total num frames: 798474240. Throughput: 0: 40732.9. Samples: 680650340. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:28:48,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 17:28:48,932][00497] Updated weights for policy 0, policy_version 48736 (0.0018) +[2024-03-29 17:28:52,222][00497] Updated weights for policy 0, policy_version 48746 (0.0021) +[2024-03-29 17:28:53,839][00126] Fps is (10 sec: 44237.6, 60 sec: 40960.1, 300 sec: 41487.6). Total num frames: 798720000. Throughput: 0: 41715.6. Samples: 680900340. 
Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:28:53,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:28:56,088][00497] Updated weights for policy 0, policy_version 48756 (0.0026) +[2024-03-29 17:28:56,930][00476] Signal inference workers to stop experience collection... (24250 times) +[2024-03-29 17:28:56,934][00476] Signal inference workers to resume experience collection... (24250 times) +[2024-03-29 17:28:56,982][00497] InferenceWorker_p0-w0: stopping experience collection (24250 times) +[2024-03-29 17:28:56,982][00497] InferenceWorker_p0-w0: resuming experience collection (24250 times) +[2024-03-29 17:28:58,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 798932992. Throughput: 0: 41125.0. Samples: 681143280. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:28:58,841][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 17:29:00,376][00497] Updated weights for policy 0, policy_version 48766 (0.0019) +[2024-03-29 17:29:03,839][00126] Fps is (10 sec: 37682.4, 60 sec: 40686.8, 300 sec: 41321.0). Total num frames: 799096832. Throughput: 0: 41291.8. Samples: 681281300. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:29:03,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 17:29:03,873][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048774_799113216.pth... +[2024-03-29 17:29:04,184][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048167_789168128.pth +[2024-03-29 17:29:04,720][00497] Updated weights for policy 0, policy_version 48776 (0.0031) +[2024-03-29 17:29:08,063][00497] Updated weights for policy 0, policy_version 48786 (0.0026) +[2024-03-29 17:29:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 799342592. Throughput: 0: 41612.5. Samples: 681518200. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:29:08,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 17:29:12,012][00497] Updated weights for policy 0, policy_version 48796 (0.0018) +[2024-03-29 17:29:13,839][00126] Fps is (10 sec: 44237.7, 60 sec: 41506.2, 300 sec: 41487.6). Total num frames: 799539200. Throughput: 0: 41002.7. Samples: 681762280. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:29:13,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:29:16,127][00497] Updated weights for policy 0, policy_version 48806 (0.0032) +[2024-03-29 17:29:18,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.1, 300 sec: 41376.6). Total num frames: 799735808. Throughput: 0: 41544.9. Samples: 681903820. Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:18,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:29:20,411][00497] Updated weights for policy 0, policy_version 48816 (0.0023) +[2024-03-29 17:29:23,553][00497] Updated weights for policy 0, policy_version 48826 (0.0030) +[2024-03-29 17:29:23,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 799965184. Throughput: 0: 42152.4. Samples: 682158080. Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:23,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:29:27,536][00497] Updated weights for policy 0, policy_version 48836 (0.0020) +[2024-03-29 17:29:28,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 800194560. Throughput: 0: 41801.4. Samples: 682401480. 
Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:28,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:29:31,528][00497] Updated weights for policy 0, policy_version 48846 (0.0019) +[2024-03-29 17:29:32,342][00476] Signal inference workers to stop experience collection... (24300 times) +[2024-03-29 17:29:32,342][00476] Signal inference workers to resume experience collection... (24300 times) +[2024-03-29 17:29:32,391][00497] InferenceWorker_p0-w0: stopping experience collection (24300 times) +[2024-03-29 17:29:32,391][00497] InferenceWorker_p0-w0: resuming experience collection (24300 times) +[2024-03-29 17:29:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.3, 300 sec: 41543.2). Total num frames: 800391168. Throughput: 0: 41884.0. Samples: 682535120. Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:33,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 17:29:35,934][00497] Updated weights for policy 0, policy_version 48856 (0.0023) +[2024-03-29 17:29:38,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.0, 300 sec: 41487.6). Total num frames: 800604160. Throughput: 0: 42046.9. Samples: 682792460. Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:38,841][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:29:39,054][00497] Updated weights for policy 0, policy_version 48866 (0.0022) +[2024-03-29 17:29:43,112][00497] Updated weights for policy 0, policy_version 48876 (0.0023) +[2024-03-29 17:29:43,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.4, 300 sec: 41598.7). Total num frames: 800800768. Throughput: 0: 41883.1. Samples: 683028020. Policy #0 lag: (min: 1.0, avg: 20.8, max: 43.0) +[2024-03-29 17:29:43,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:29:47,310][00497] Updated weights for policy 0, policy_version 48886 (0.0030) +[2024-03-29 17:29:48,839][00126] Fps is (10 sec: 40960.9, 60 sec: 42325.3, 300 sec: 41598.7). Total num frames: 801013760. Throughput: 0: 41637.1. Samples: 683154960. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:29:48,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 17:29:51,675][00497] Updated weights for policy 0, policy_version 48896 (0.0020) +[2024-03-29 17:29:53,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 801210368. Throughput: 0: 42122.6. Samples: 683413720. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:29:53,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:29:54,941][00497] Updated weights for policy 0, policy_version 48906 (0.0019) +[2024-03-29 17:29:58,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 801423360. Throughput: 0: 41961.3. Samples: 683650540. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:29:58,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 17:29:58,910][00497] Updated weights for policy 0, policy_version 48916 (0.0020) +[2024-03-29 17:30:03,041][00497] Updated weights for policy 0, policy_version 48926 (0.0032) +[2024-03-29 17:30:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.4, 300 sec: 41543.2). Total num frames: 801619968. Throughput: 0: 41531.5. Samples: 683772740. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:03,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:30:04,412][00476] Signal inference workers to stop experience collection... 
(24350 times) +[2024-03-29 17:30:04,454][00497] InferenceWorker_p0-w0: stopping experience collection (24350 times) +[2024-03-29 17:30:04,580][00476] Signal inference workers to resume experience collection... (24350 times) +[2024-03-29 17:30:04,580][00497] InferenceWorker_p0-w0: resuming experience collection (24350 times) +[2024-03-29 17:30:07,430][00497] Updated weights for policy 0, policy_version 48936 (0.0033) +[2024-03-29 17:30:08,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 801832960. Throughput: 0: 41896.0. Samples: 684043400. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:08,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:30:10,617][00497] Updated weights for policy 0, policy_version 48946 (0.0026) +[2024-03-29 17:30:13,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.2, 300 sec: 41543.1). Total num frames: 802045952. Throughput: 0: 41450.7. Samples: 684266760. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:13,840][00126] Avg episode reward: [(0, '0.450')] +[2024-03-29 17:30:14,756][00497] Updated weights for policy 0, policy_version 48956 (0.0032) +[2024-03-29 17:30:18,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 802242560. Throughput: 0: 41371.5. Samples: 684396840. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:18,840][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 17:30:18,881][00497] Updated weights for policy 0, policy_version 48966 (0.0023) +[2024-03-29 17:30:23,136][00497] Updated weights for policy 0, policy_version 48976 (0.0019) +[2024-03-29 17:30:23,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41233.1, 300 sec: 41376.5). Total num frames: 802439168. Throughput: 0: 41561.9. Samples: 684662740. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:23,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 17:30:26,574][00497] Updated weights for policy 0, policy_version 48986 (0.0037) +[2024-03-29 17:30:28,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 802684928. Throughput: 0: 41433.7. Samples: 684892540. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:28,840][00126] Avg episode reward: [(0, '0.471')] +[2024-03-29 17:30:30,709][00497] Updated weights for policy 0, policy_version 48996 (0.0022) +[2024-03-29 17:30:33,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 802865152. Throughput: 0: 41361.1. Samples: 685016220. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:33,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:30:34,549][00476] Signal inference workers to stop experience collection... (24400 times) +[2024-03-29 17:30:34,588][00497] InferenceWorker_p0-w0: stopping experience collection (24400 times) +[2024-03-29 17:30:34,777][00476] Signal inference workers to resume experience collection... (24400 times) +[2024-03-29 17:30:34,777][00497] InferenceWorker_p0-w0: resuming experience collection (24400 times) +[2024-03-29 17:30:34,780][00497] Updated weights for policy 0, policy_version 49006 (0.0029) +[2024-03-29 17:30:38,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40960.1, 300 sec: 41376.6). Total num frames: 803061760. Throughput: 0: 41121.3. Samples: 685264180. 
Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:38,841][00126] Avg episode reward: [(0, '0.473')] +[2024-03-29 17:30:38,998][00497] Updated weights for policy 0, policy_version 49016 (0.0028) +[2024-03-29 17:30:42,498][00497] Updated weights for policy 0, policy_version 49026 (0.0027) +[2024-03-29 17:30:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.0, 300 sec: 41487.6). Total num frames: 803291136. Throughput: 0: 41205.7. Samples: 685504800. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:43,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:30:46,430][00497] Updated weights for policy 0, policy_version 49036 (0.0019) +[2024-03-29 17:30:48,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41506.0, 300 sec: 41598.7). Total num frames: 803504128. Throughput: 0: 41528.3. Samples: 685641520. Policy #0 lag: (min: 1.0, avg: 20.3, max: 42.0) +[2024-03-29 17:30:48,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 17:30:50,366][00497] Updated weights for policy 0, policy_version 49046 (0.0018) +[2024-03-29 17:30:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41233.0, 300 sec: 41376.5). Total num frames: 803684352. Throughput: 0: 41252.7. Samples: 685899780. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:53,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 17:30:54,552][00497] Updated weights for policy 0, policy_version 49056 (0.0019) +[2024-03-29 17:30:58,326][00497] Updated weights for policy 0, policy_version 49066 (0.0031) +[2024-03-29 17:30:58,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 803913728. Throughput: 0: 41578.3. Samples: 686137780. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:30:58,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 17:31:02,272][00497] Updated weights for policy 0, policy_version 49076 (0.0032) +[2024-03-29 17:31:03,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 804110336. Throughput: 0: 41328.5. Samples: 686256620. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:31:03,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:31:04,244][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049081_804143104.pth... +[2024-03-29 17:31:04,575][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048473_794181632.pth +[2024-03-29 17:31:05,722][00476] Signal inference workers to stop experience collection... (24450 times) +[2024-03-29 17:31:05,822][00497] InferenceWorker_p0-w0: stopping experience collection (24450 times) +[2024-03-29 17:31:05,960][00476] Signal inference workers to resume experience collection... (24450 times) +[2024-03-29 17:31:05,960][00497] InferenceWorker_p0-w0: resuming experience collection (24450 times) +[2024-03-29 17:31:06,265][00497] Updated weights for policy 0, policy_version 49086 (0.0019) +[2024-03-29 17:31:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 804323328. Throughput: 0: 41121.4. Samples: 686513200. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:31:08,841][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:31:10,444][00497] Updated weights for policy 0, policy_version 49096 (0.0033) +[2024-03-29 17:31:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 804519936. Throughput: 0: 41375.6. Samples: 686754440. 
Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:31:13,840][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 17:31:14,187][00497] Updated weights for policy 0, policy_version 49106 (0.0023) +[2024-03-29 17:31:18,343][00497] Updated weights for policy 0, policy_version 49116 (0.0028) +[2024-03-29 17:31:18,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 804732928. Throughput: 0: 41274.2. Samples: 686873560. Policy #0 lag: (min: 1.0, avg: 19.8, max: 41.0) +[2024-03-29 17:31:18,840][00126] Avg episode reward: [(0, '0.501')] +[2024-03-29 17:31:22,202][00497] Updated weights for policy 0, policy_version 49126 (0.0021) +[2024-03-29 17:31:23,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 804929536. Throughput: 0: 41428.0. Samples: 687128440. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:23,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 17:31:26,319][00497] Updated weights for policy 0, policy_version 49136 (0.0020) +[2024-03-29 17:31:28,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40960.0, 300 sec: 41376.5). Total num frames: 805142528. Throughput: 0: 41803.5. Samples: 687385960. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:28,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 17:31:29,776][00497] Updated weights for policy 0, policy_version 49146 (0.0026) +[2024-03-29 17:31:33,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 805355520. Throughput: 0: 41413.5. Samples: 687505120. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:33,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 17:31:33,957][00497] Updated weights for policy 0, policy_version 49156 (0.0020) +[2024-03-29 17:31:37,783][00497] Updated weights for policy 0, policy_version 49166 (0.0021) +[2024-03-29 17:31:38,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 805568512. Throughput: 0: 41355.3. Samples: 687760760. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:38,841][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:31:41,740][00476] Signal inference workers to stop experience collection... (24500 times) +[2024-03-29 17:31:41,742][00476] Signal inference workers to resume experience collection... (24500 times) +[2024-03-29 17:31:41,767][00497] InferenceWorker_p0-w0: stopping experience collection (24500 times) +[2024-03-29 17:31:41,767][00497] InferenceWorker_p0-w0: resuming experience collection (24500 times) +[2024-03-29 17:31:41,999][00497] Updated weights for policy 0, policy_version 49176 (0.0023) +[2024-03-29 17:31:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 805765120. Throughput: 0: 41802.6. Samples: 688018900. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:43,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:31:45,731][00497] Updated weights for policy 0, policy_version 49186 (0.0034) +[2024-03-29 17:31:48,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 805994496. Throughput: 0: 41394.2. Samples: 688119360. 
Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 17:31:48,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:31:49,823][00497] Updated weights for policy 0, policy_version 49196 (0.0025) +[2024-03-29 17:31:53,581][00497] Updated weights for policy 0, policy_version 49206 (0.0019) +[2024-03-29 17:31:53,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 806191104. Throughput: 0: 41507.1. Samples: 688381020. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:31:53,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 17:31:57,496][00497] Updated weights for policy 0, policy_version 49216 (0.0028) +[2024-03-29 17:31:58,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 806404096. Throughput: 0: 42103.9. Samples: 688649120. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:31:58,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 17:32:01,269][00497] Updated weights for policy 0, policy_version 49226 (0.0030) +[2024-03-29 17:32:03,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 806633472. Throughput: 0: 42008.5. Samples: 688763940. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:32:03,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 17:32:05,293][00497] Updated weights for policy 0, policy_version 49236 (0.0026) +[2024-03-29 17:32:08,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 806813696. Throughput: 0: 41847.6. Samples: 689011580. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:32:08,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 17:32:09,177][00497] Updated weights for policy 0, policy_version 49246 (0.0022) +[2024-03-29 17:32:13,398][00497] Updated weights for policy 0, policy_version 49256 (0.0023) +[2024-03-29 17:32:13,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 807026688. Throughput: 0: 41907.2. Samples: 689271780. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:32:13,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 17:32:15,096][00476] Signal inference workers to stop experience collection... (24550 times) +[2024-03-29 17:32:15,137][00497] InferenceWorker_p0-w0: stopping experience collection (24550 times) +[2024-03-29 17:32:15,256][00476] Signal inference workers to resume experience collection... (24550 times) +[2024-03-29 17:32:15,256][00497] InferenceWorker_p0-w0: resuming experience collection (24550 times) +[2024-03-29 17:32:16,914][00497] Updated weights for policy 0, policy_version 49266 (0.0026) +[2024-03-29 17:32:18,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.3, 300 sec: 41543.2). Total num frames: 807239680. Throughput: 0: 42023.6. Samples: 689396180. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 17:32:18,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 17:32:21,016][00497] Updated weights for policy 0, policy_version 49276 (0.0020) +[2024-03-29 17:32:23,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 807452672. Throughput: 0: 41871.0. Samples: 689644960. 
Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:23,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:32:24,672][00497] Updated weights for policy 0, policy_version 49286 (0.0026) +[2024-03-29 17:32:28,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 807649280. Throughput: 0: 41830.6. Samples: 689901280. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:28,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:32:28,939][00497] Updated weights for policy 0, policy_version 49296 (0.0024) +[2024-03-29 17:32:32,386][00497] Updated weights for policy 0, policy_version 49306 (0.0026) +[2024-03-29 17:32:33,840][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.1, 300 sec: 41543.1). Total num frames: 807878656. Throughput: 0: 42423.4. Samples: 690028420. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:33,841][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 17:32:36,669][00497] Updated weights for policy 0, policy_version 49316 (0.0029) +[2024-03-29 17:32:38,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 808075264. Throughput: 0: 41964.0. Samples: 690269400. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:38,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:32:40,579][00497] Updated weights for policy 0, policy_version 49326 (0.0033) +[2024-03-29 17:32:43,839][00126] Fps is (10 sec: 39322.8, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 808271872. Throughput: 0: 41655.6. Samples: 690523620. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:43,841][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:32:44,856][00497] Updated weights for policy 0, policy_version 49336 (0.0019) +[2024-03-29 17:32:48,374][00497] Updated weights for policy 0, policy_version 49346 (0.0021) +[2024-03-29 17:32:48,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 808501248. Throughput: 0: 41883.5. Samples: 690648700. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 17:32:48,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:32:50,193][00476] Signal inference workers to stop experience collection... (24600 times) +[2024-03-29 17:32:50,233][00497] InferenceWorker_p0-w0: stopping experience collection (24600 times) +[2024-03-29 17:32:50,415][00476] Signal inference workers to resume experience collection... (24600 times) +[2024-03-29 17:32:50,415][00497] InferenceWorker_p0-w0: resuming experience collection (24600 times) +[2024-03-29 17:32:52,405][00497] Updated weights for policy 0, policy_version 49356 (0.0036) +[2024-03-29 17:32:53,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 808697856. Throughput: 0: 41633.8. Samples: 690885100. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:32:53,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:32:56,333][00497] Updated weights for policy 0, policy_version 49366 (0.0023) +[2024-03-29 17:32:58,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 808910848. Throughput: 0: 41744.9. Samples: 691150300. 
Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:32:58,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:33:00,245][00497] Updated weights for policy 0, policy_version 49376 (0.0019) +[2024-03-29 17:33:03,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 809123840. Throughput: 0: 41944.8. Samples: 691283700. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:33:03,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 17:33:03,907][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049386_809140224.pth... +[2024-03-29 17:33:03,926][00497] Updated weights for policy 0, policy_version 49386 (0.0022) +[2024-03-29 17:33:04,235][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000048774_799113216.pth +[2024-03-29 17:33:07,755][00497] Updated weights for policy 0, policy_version 49396 (0.0029) +[2024-03-29 17:33:08,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 41654.3). Total num frames: 809336832. Throughput: 0: 41817.9. Samples: 691526760. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:33:08,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:33:11,798][00497] Updated weights for policy 0, policy_version 49406 (0.0022) +[2024-03-29 17:33:13,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 809549824. Throughput: 0: 41771.6. Samples: 691781000. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:33:13,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:33:15,583][00497] Updated weights for policy 0, policy_version 49416 (0.0018) +[2024-03-29 17:33:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 809762816. Throughput: 0: 42017.2. Samples: 691919180. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:33:18,841][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 17:33:19,355][00497] Updated weights for policy 0, policy_version 49426 (0.0020) +[2024-03-29 17:33:23,046][00497] Updated weights for policy 0, policy_version 49436 (0.0020) +[2024-03-29 17:33:23,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 809992192. Throughput: 0: 42116.8. Samples: 692164660. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 17:33:23,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 17:33:26,999][00476] Signal inference workers to stop experience collection... (24650 times) +[2024-03-29 17:33:27,001][00476] Signal inference workers to resume experience collection... (24650 times) +[2024-03-29 17:33:27,020][00497] Updated weights for policy 0, policy_version 49446 (0.0021) +[2024-03-29 17:33:27,041][00497] InferenceWorker_p0-w0: stopping experience collection (24650 times) +[2024-03-29 17:33:27,041][00497] InferenceWorker_p0-w0: resuming experience collection (24650 times) +[2024-03-29 17:33:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.4, 300 sec: 41820.9). Total num frames: 810188800. Throughput: 0: 42240.5. Samples: 692424440. Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:28,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 17:33:31,051][00497] Updated weights for policy 0, policy_version 49456 (0.0023) +[2024-03-29 17:33:33,839][00126] Fps is (10 sec: 42599.2, 60 sec: 42325.6, 300 sec: 41709.8). Total num frames: 810418176. Throughput: 0: 42493.1. Samples: 692560880. 
Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:33,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:33:34,440][00497] Updated weights for policy 0, policy_version 49466 (0.0019) +[2024-03-29 17:33:38,236][00497] Updated weights for policy 0, policy_version 49476 (0.0027) +[2024-03-29 17:33:38,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42871.5, 300 sec: 41932.0). Total num frames: 810647552. Throughput: 0: 42841.8. Samples: 692812980. Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:38,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:33:42,237][00497] Updated weights for policy 0, policy_version 49486 (0.0020) +[2024-03-29 17:33:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42598.4, 300 sec: 41876.4). Total num frames: 810827776. Throughput: 0: 42579.1. Samples: 693066360. Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:43,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:33:46,562][00497] Updated weights for policy 0, policy_version 49496 (0.0023) +[2024-03-29 17:33:48,839][00126] Fps is (10 sec: 39321.0, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 811040768. Throughput: 0: 42520.4. Samples: 693197120. Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:48,842][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:33:50,030][00497] Updated weights for policy 0, policy_version 49506 (0.0021) +[2024-03-29 17:33:53,449][00497] Updated weights for policy 0, policy_version 49516 (0.0027) +[2024-03-29 17:33:53,839][00126] Fps is (10 sec: 45875.0, 60 sec: 43144.5, 300 sec: 41876.4). Total num frames: 811286528. Throughput: 0: 42792.9. Samples: 693452440. Policy #0 lag: (min: 1.0, avg: 20.6, max: 41.0) +[2024-03-29 17:33:53,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 17:33:57,603][00497] Updated weights for policy 0, policy_version 49526 (0.0018) +[2024-03-29 17:33:58,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42871.4, 300 sec: 41987.5). Total num frames: 811483136. Throughput: 0: 42861.7. Samples: 693709780. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:33:58,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 17:34:01,657][00497] Updated weights for policy 0, policy_version 49536 (0.0022) +[2024-03-29 17:34:03,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42598.4, 300 sec: 41820.9). Total num frames: 811679744. Throughput: 0: 42724.9. Samples: 693841800. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:34:03,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:34:04,919][00476] Signal inference workers to stop experience collection... (24700 times) +[2024-03-29 17:34:04,955][00497] InferenceWorker_p0-w0: stopping experience collection (24700 times) +[2024-03-29 17:34:05,144][00476] Signal inference workers to resume experience collection... (24700 times) +[2024-03-29 17:34:05,144][00497] InferenceWorker_p0-w0: resuming experience collection (24700 times) +[2024-03-29 17:34:05,398][00497] Updated weights for policy 0, policy_version 49546 (0.0022) +[2024-03-29 17:34:08,839][00126] Fps is (10 sec: 44237.1, 60 sec: 43144.5, 300 sec: 41987.5). Total num frames: 811925504. Throughput: 0: 42751.7. Samples: 694088480. 
Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:34:08,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 17:34:08,845][00497] Updated weights for policy 0, policy_version 49556 (0.0021) +[2024-03-29 17:34:12,950][00497] Updated weights for policy 0, policy_version 49566 (0.0022) +[2024-03-29 17:34:13,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42598.3, 300 sec: 41931.9). Total num frames: 812105728. Throughput: 0: 42619.8. Samples: 694342340. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:34:13,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:34:16,904][00497] Updated weights for policy 0, policy_version 49576 (0.0023) +[2024-03-29 17:34:18,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42598.4, 300 sec: 41876.4). Total num frames: 812318720. Throughput: 0: 42637.7. Samples: 694479580. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:34:18,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:34:20,615][00497] Updated weights for policy 0, policy_version 49586 (0.0020) +[2024-03-29 17:34:23,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42598.5, 300 sec: 41876.4). Total num frames: 812548096. Throughput: 0: 42674.2. Samples: 694733320. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0) +[2024-03-29 17:34:23,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:34:24,297][00497] Updated weights for policy 0, policy_version 49596 (0.0023) +[2024-03-29 17:34:28,200][00497] Updated weights for policy 0, policy_version 49606 (0.0023) +[2024-03-29 17:34:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42871.4, 300 sec: 41931.9). Total num frames: 812761088. Throughput: 0: 42495.5. Samples: 694978660. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:28,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:34:32,361][00497] Updated weights for policy 0, policy_version 49616 (0.0023) +[2024-03-29 17:34:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42052.2, 300 sec: 41820.9). Total num frames: 812941312. Throughput: 0: 42671.6. Samples: 695117340. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:33,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 17:34:35,921][00497] Updated weights for policy 0, policy_version 49626 (0.0022) +[2024-03-29 17:34:38,404][00476] Signal inference workers to stop experience collection... (24750 times) +[2024-03-29 17:34:38,476][00476] Signal inference workers to resume experience collection... (24750 times) +[2024-03-29 17:34:38,476][00497] InferenceWorker_p0-w0: stopping experience collection (24750 times) +[2024-03-29 17:34:38,506][00497] InferenceWorker_p0-w0: resuming experience collection (24750 times) +[2024-03-29 17:34:38,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42325.2, 300 sec: 41987.5). Total num frames: 813187072. Throughput: 0: 42710.1. Samples: 695374400. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:38,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:34:39,671][00497] Updated weights for policy 0, policy_version 49636 (0.0024) +[2024-03-29 17:34:43,724][00497] Updated weights for policy 0, policy_version 49646 (0.0025) +[2024-03-29 17:34:43,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42871.5, 300 sec: 41987.5). Total num frames: 813400064. Throughput: 0: 42365.8. Samples: 695616240. 
Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:43,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:34:48,032][00497] Updated weights for policy 0, policy_version 49656 (0.0027) +[2024-03-29 17:34:48,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42598.5, 300 sec: 41987.5). Total num frames: 813596672. Throughput: 0: 42398.2. Samples: 695749720. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:48,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 17:34:51,633][00497] Updated weights for policy 0, policy_version 49666 (0.0029) +[2024-03-29 17:34:53,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 813809664. Throughput: 0: 42427.9. Samples: 695997740. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:53,840][00126] Avg episode reward: [(0, '0.603')] +[2024-03-29 17:34:55,335][00497] Updated weights for policy 0, policy_version 49676 (0.0020) +[2024-03-29 17:34:58,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42598.4, 300 sec: 42098.5). Total num frames: 814039040. Throughput: 0: 42245.0. Samples: 696243360. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 17:34:58,842][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 17:34:59,385][00497] Updated weights for policy 0, policy_version 49686 (0.0018) +[2024-03-29 17:35:03,544][00497] Updated weights for policy 0, policy_version 49696 (0.0017) +[2024-03-29 17:35:03,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 814219264. Throughput: 0: 42302.2. Samples: 696383180. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:03,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 17:35:04,191][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049698_814252032.pth... +[2024-03-29 17:35:04,504][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049081_804143104.pth +[2024-03-29 17:35:07,228][00497] Updated weights for policy 0, policy_version 49706 (0.0018) +[2024-03-29 17:35:08,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 814448640. Throughput: 0: 42442.1. Samples: 696643220. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:08,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 17:35:11,036][00497] Updated weights for policy 0, policy_version 49717 (0.0026) +[2024-03-29 17:35:13,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42598.4, 300 sec: 42098.5). Total num frames: 814661632. Throughput: 0: 42356.3. Samples: 696884700. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:13,840][00126] Avg episode reward: [(0, '0.568')] +[2024-03-29 17:35:15,272][00497] Updated weights for policy 0, policy_version 49727 (0.0020) +[2024-03-29 17:35:18,810][00476] Signal inference workers to stop experience collection... (24800 times) +[2024-03-29 17:35:18,811][00476] Signal inference workers to resume experience collection... (24800 times) +[2024-03-29 17:35:18,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42598.4, 300 sec: 42154.1). Total num frames: 814874624. Throughput: 0: 42260.8. Samples: 697019080. 
Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:18,840][00126] Avg episode reward: [(0, '0.580')] +[2024-03-29 17:35:18,846][00497] InferenceWorker_p0-w0: stopping experience collection (24800 times) +[2024-03-29 17:35:18,846][00497] InferenceWorker_p0-w0: resuming experience collection (24800 times) +[2024-03-29 17:35:19,123][00497] Updated weights for policy 0, policy_version 49737 (0.0017) +[2024-03-29 17:35:22,621][00497] Updated weights for policy 0, policy_version 49747 (0.0024) +[2024-03-29 17:35:23,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 815087616. Throughput: 0: 42574.3. Samples: 697290240. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:23,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 17:35:26,237][00497] Updated weights for policy 0, policy_version 49757 (0.0024) +[2024-03-29 17:35:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 815316992. Throughput: 0: 42502.2. Samples: 697528840. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 17:35:28,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 17:35:30,637][00497] Updated weights for policy 0, policy_version 49767 (0.0021) +[2024-03-29 17:35:33,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42598.4, 300 sec: 42154.1). Total num frames: 815497216. Throughput: 0: 42436.0. Samples: 697659340. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:33,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 17:35:34,600][00497] Updated weights for policy 0, policy_version 49777 (0.0025) +[2024-03-29 17:35:38,191][00497] Updated weights for policy 0, policy_version 49787 (0.0019) +[2024-03-29 17:35:38,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 815726592. Throughput: 0: 42658.2. Samples: 697917360. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:38,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:35:42,306][00497] Updated weights for policy 0, policy_version 49797 (0.0025) +[2024-03-29 17:35:43,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 815939584. Throughput: 0: 42513.4. Samples: 698156460. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:43,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:35:46,366][00497] Updated weights for policy 0, policy_version 49807 (0.0023) +[2024-03-29 17:35:48,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42325.3, 300 sec: 42209.7). Total num frames: 816136192. Throughput: 0: 42047.6. Samples: 698275320. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:48,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 17:35:50,444][00497] Updated weights for policy 0, policy_version 49817 (0.0021) +[2024-03-29 17:35:53,016][00476] Signal inference workers to stop experience collection... (24850 times) +[2024-03-29 17:35:53,037][00497] InferenceWorker_p0-w0: stopping experience collection (24850 times) +[2024-03-29 17:35:53,196][00476] Signal inference workers to resume experience collection... (24850 times) +[2024-03-29 17:35:53,196][00497] InferenceWorker_p0-w0: resuming experience collection (24850 times) +[2024-03-29 17:35:53,839][00126] Fps is (10 sec: 40959.3, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 816349184. Throughput: 0: 42063.5. Samples: 698536080. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:53,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:35:54,169][00497] Updated weights for policy 0, policy_version 49827 (0.0023) +[2024-03-29 17:35:57,819][00497] Updated weights for policy 0, policy_version 49837 (0.0028) +[2024-03-29 17:35:58,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 816562176. Throughput: 0: 42043.6. Samples: 698776660. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 17:35:58,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:36:01,921][00497] Updated weights for policy 0, policy_version 49847 (0.0024) +[2024-03-29 17:36:03,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 816775168. Throughput: 0: 41786.7. Samples: 698899480. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:03,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:36:06,104][00497] Updated weights for policy 0, policy_version 49857 (0.0022) +[2024-03-29 17:36:08,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.3, 300 sec: 42154.1). Total num frames: 816955392. Throughput: 0: 41705.4. Samples: 699166980. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:08,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:36:09,973][00497] Updated weights for policy 0, policy_version 49867 (0.0024) +[2024-03-29 17:36:13,540][00497] Updated weights for policy 0, policy_version 49877 (0.0018) +[2024-03-29 17:36:13,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.5, 300 sec: 42265.2). Total num frames: 817201152. Throughput: 0: 41885.4. Samples: 699413680. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:13,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:36:17,594][00497] Updated weights for policy 0, policy_version 49887 (0.0027) +[2024-03-29 17:36:18,839][00126] Fps is (10 sec: 44235.9, 60 sec: 42052.2, 300 sec: 42265.1). Total num frames: 817397760. Throughput: 0: 41565.2. Samples: 699529780. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:18,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:36:21,551][00497] Updated weights for policy 0, policy_version 49897 (0.0027) +[2024-03-29 17:36:23,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 42209.6). Total num frames: 817594368. Throughput: 0: 41844.5. Samples: 699800360. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:23,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 17:36:25,556][00497] Updated weights for policy 0, policy_version 49907 (0.0018) +[2024-03-29 17:36:26,919][00476] Signal inference workers to stop experience collection... (24900 times) +[2024-03-29 17:36:26,998][00476] Signal inference workers to resume experience collection... (24900 times) +[2024-03-29 17:36:27,000][00497] InferenceWorker_p0-w0: stopping experience collection (24900 times) +[2024-03-29 17:36:27,025][00497] InferenceWorker_p0-w0: resuming experience collection (24900 times) +[2024-03-29 17:36:28,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41779.3, 300 sec: 42265.2). Total num frames: 817823744. Throughput: 0: 42144.5. Samples: 700052960. 
Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0) +[2024-03-29 17:36:28,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 17:36:29,086][00497] Updated weights for policy 0, policy_version 49917 (0.0019) +[2024-03-29 17:36:32,860][00497] Updated weights for policy 0, policy_version 49927 (0.0027) +[2024-03-29 17:36:33,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42325.2, 300 sec: 42265.1). Total num frames: 818036736. Throughput: 0: 42079.4. Samples: 700168900. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:33,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 17:36:37,026][00497] Updated weights for policy 0, policy_version 49937 (0.0021) +[2024-03-29 17:36:38,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.4, 300 sec: 42320.7). Total num frames: 818249728. Throughput: 0: 42301.5. Samples: 700439640. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:38,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:36:40,814][00497] Updated weights for policy 0, policy_version 49947 (0.0023) +[2024-03-29 17:36:43,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 42265.2). Total num frames: 818462720. Throughput: 0: 42531.5. Samples: 700690580. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:43,841][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 17:36:44,656][00497] Updated weights for policy 0, policy_version 49957 (0.0021) +[2024-03-29 17:36:48,555][00497] Updated weights for policy 0, policy_version 49967 (0.0029) +[2024-03-29 17:36:48,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 818659328. Throughput: 0: 42557.4. Samples: 700814560. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:48,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 17:36:52,715][00497] Updated weights for policy 0, policy_version 49977 (0.0018) +[2024-03-29 17:36:53,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.3, 300 sec: 42209.6). Total num frames: 818855936. Throughput: 0: 42294.6. Samples: 701070240. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:53,840][00126] Avg episode reward: [(0, '0.421')] +[2024-03-29 17:36:56,577][00497] Updated weights for policy 0, policy_version 49987 (0.0018) +[2024-03-29 17:36:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 819085312. Throughput: 0: 42372.9. Samples: 701320460. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:36:58,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 17:37:00,203][00497] Updated weights for policy 0, policy_version 49997 (0.0020) +[2024-03-29 17:37:02,091][00476] Signal inference workers to stop experience collection... (24950 times) +[2024-03-29 17:37:02,129][00497] InferenceWorker_p0-w0: stopping experience collection (24950 times) +[2024-03-29 17:37:02,320][00476] Signal inference workers to resume experience collection... (24950 times) +[2024-03-29 17:37:02,320][00497] InferenceWorker_p0-w0: resuming experience collection (24950 times) +[2024-03-29 17:37:03,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42052.2, 300 sec: 42320.7). Total num frames: 819298304. Throughput: 0: 42332.8. Samples: 701434760. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 17:37:03,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 17:37:03,862][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050006_819298304.pth... 
+[2024-03-29 17:37:04,174][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049386_809140224.pth +[2024-03-29 17:37:04,541][00497] Updated weights for policy 0, policy_version 50007 (0.0022) +[2024-03-29 17:37:08,419][00497] Updated weights for policy 0, policy_version 50017 (0.0026) +[2024-03-29 17:37:08,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42052.2, 300 sec: 42209.6). Total num frames: 819478528. Throughput: 0: 41802.8. Samples: 701681480. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:08,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 17:37:12,616][00497] Updated weights for policy 0, policy_version 50027 (0.0018) +[2024-03-29 17:37:13,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41506.1, 300 sec: 42209.6). Total num frames: 819691520. Throughput: 0: 41794.6. Samples: 701933720. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:13,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 17:37:15,939][00497] Updated weights for policy 0, policy_version 50037 (0.0022) +[2024-03-29 17:37:18,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 819937280. Throughput: 0: 41957.8. Samples: 702057000. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:18,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:37:20,009][00497] Updated weights for policy 0, policy_version 50047 (0.0027) +[2024-03-29 17:37:23,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.4, 300 sec: 42265.2). Total num frames: 820117504. Throughput: 0: 41531.5. Samples: 702308560. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:23,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:37:24,039][00497] Updated weights for policy 0, policy_version 50057 (0.0032) +[2024-03-29 17:37:28,041][00497] Updated weights for policy 0, policy_version 50067 (0.0018) +[2024-03-29 17:37:28,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41779.2, 300 sec: 42209.7). Total num frames: 820330496. Throughput: 0: 41866.9. Samples: 702574580. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:28,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:37:31,454][00497] Updated weights for policy 0, policy_version 50077 (0.0024) +[2024-03-29 17:37:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.4, 300 sec: 42320.7). Total num frames: 820559872. Throughput: 0: 41795.5. Samples: 702695360. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 17:37:33,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 17:37:35,473][00497] Updated weights for policy 0, policy_version 50087 (0.0021) +[2024-03-29 17:37:38,839][00126] Fps is (10 sec: 42597.6, 60 sec: 41779.1, 300 sec: 42320.7). Total num frames: 820756480. Throughput: 0: 41584.8. Samples: 702941560. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:37:38,842][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 17:37:39,568][00497] Updated weights for policy 0, policy_version 50097 (0.0024) +[2024-03-29 17:37:40,662][00476] Signal inference workers to stop experience collection... (25000 times) +[2024-03-29 17:37:40,663][00476] Signal inference workers to resume experience collection... 
(25000 times) +[2024-03-29 17:37:40,703][00497] InferenceWorker_p0-w0: stopping experience collection (25000 times) +[2024-03-29 17:37:40,704][00497] InferenceWorker_p0-w0: resuming experience collection (25000 times) +[2024-03-29 17:37:43,600][00497] Updated weights for policy 0, policy_version 50107 (0.0022) +[2024-03-29 17:37:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 820953088. Throughput: 0: 41716.9. Samples: 703197720. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:37:43,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 17:37:47,162][00497] Updated weights for policy 0, policy_version 50117 (0.0027) +[2024-03-29 17:37:48,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41779.2, 300 sec: 42265.2). Total num frames: 821166080. Throughput: 0: 41830.0. Samples: 703317100. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:37:48,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 17:37:51,301][00497] Updated weights for policy 0, policy_version 50127 (0.0036) +[2024-03-29 17:37:53,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 821379072. Throughput: 0: 41759.6. Samples: 703560660. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:37:53,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:37:55,540][00497] Updated weights for policy 0, policy_version 50137 (0.0019) +[2024-03-29 17:37:58,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41233.0, 300 sec: 42154.1). Total num frames: 821559296. Throughput: 0: 41973.7. Samples: 703822540. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:37:58,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 17:37:59,564][00497] Updated weights for policy 0, policy_version 50147 (0.0019) +[2024-03-29 17:38:02,995][00497] Updated weights for policy 0, policy_version 50157 (0.0025) +[2024-03-29 17:38:03,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.3, 300 sec: 42209.6). Total num frames: 821788672. Throughput: 0: 41761.9. Samples: 703936280. Policy #0 lag: (min: 0.0, avg: 21.0, max: 41.0) +[2024-03-29 17:38:03,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:38:06,635][00497] Updated weights for policy 0, policy_version 50167 (0.0024) +[2024-03-29 17:38:08,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 822001664. Throughput: 0: 41654.7. Samples: 704183020. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:08,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:38:11,312][00497] Updated weights for policy 0, policy_version 50177 (0.0022) +[2024-03-29 17:38:13,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41779.1, 300 sec: 42154.1). Total num frames: 822198272. Throughput: 0: 41500.3. Samples: 704442100. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:13,840][00126] Avg episode reward: [(0, '0.478')] +[2024-03-29 17:38:15,349][00497] Updated weights for policy 0, policy_version 50187 (0.0026) +[2024-03-29 17:38:17,649][00476] Signal inference workers to stop experience collection... (25050 times) +[2024-03-29 17:38:17,701][00497] InferenceWorker_p0-w0: stopping experience collection (25050 times) +[2024-03-29 17:38:17,734][00476] Signal inference workers to resume experience collection... 
(25050 times) +[2024-03-29 17:38:17,737][00497] InferenceWorker_p0-w0: resuming experience collection (25050 times) +[2024-03-29 17:38:18,652][00497] Updated weights for policy 0, policy_version 50197 (0.0020) +[2024-03-29 17:38:18,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.1, 300 sec: 42154.1). Total num frames: 822427648. Throughput: 0: 41638.1. Samples: 704569080. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:18,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 17:38:22,273][00497] Updated weights for policy 0, policy_version 50207 (0.0030) +[2024-03-29 17:38:23,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 822624256. Throughput: 0: 41488.1. Samples: 704808520. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:23,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 17:38:26,658][00497] Updated weights for policy 0, policy_version 50217 (0.0028) +[2024-03-29 17:38:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41779.0, 300 sec: 42098.5). Total num frames: 822837248. Throughput: 0: 41700.3. Samples: 705074240. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:28,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:38:30,815][00497] Updated weights for policy 0, policy_version 50227 (0.0019) +[2024-03-29 17:38:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 823050240. Throughput: 0: 41938.7. Samples: 705204340. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:33,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:38:34,284][00497] Updated weights for policy 0, policy_version 50237 (0.0021) +[2024-03-29 17:38:37,880][00497] Updated weights for policy 0, policy_version 50247 (0.0032) +[2024-03-29 17:38:38,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 823263232. Throughput: 0: 41714.6. Samples: 705437820. Policy #0 lag: (min: 0.0, avg: 20.0, max: 40.0) +[2024-03-29 17:38:38,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 17:38:42,273][00497] Updated weights for policy 0, policy_version 50257 (0.0028) +[2024-03-29 17:38:43,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 823459840. Throughput: 0: 41875.1. Samples: 705706920. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:38:43,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 17:38:46,224][00497] Updated weights for policy 0, policy_version 50267 (0.0021) +[2024-03-29 17:38:48,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 823689216. Throughput: 0: 42272.0. Samples: 705838520. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:38:48,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 17:38:49,832][00497] Updated weights for policy 0, policy_version 50277 (0.0032) +[2024-03-29 17:38:51,585][00476] Signal inference workers to stop experience collection... (25100 times) +[2024-03-29 17:38:51,656][00497] InferenceWorker_p0-w0: stopping experience collection (25100 times) +[2024-03-29 17:38:51,673][00476] Signal inference workers to resume experience collection... 
(25100 times) +[2024-03-29 17:38:51,686][00497] InferenceWorker_p0-w0: resuming experience collection (25100 times) +[2024-03-29 17:38:53,481][00497] Updated weights for policy 0, policy_version 50287 (0.0027) +[2024-03-29 17:38:53,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.1, 300 sec: 42098.5). Total num frames: 823902208. Throughput: 0: 41954.1. Samples: 706070960. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:38:53,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:38:57,864][00497] Updated weights for policy 0, policy_version 50297 (0.0023) +[2024-03-29 17:38:58,839][00126] Fps is (10 sec: 40959.4, 60 sec: 42325.3, 300 sec: 42098.5). Total num frames: 824098816. Throughput: 0: 42207.2. Samples: 706341420. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:38:58,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:39:01,737][00497] Updated weights for policy 0, policy_version 50307 (0.0021) +[2024-03-29 17:39:03,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 824328192. Throughput: 0: 42384.1. Samples: 706476360. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:39:03,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 17:39:03,857][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050313_824328192.pth... +[2024-03-29 17:39:04,165][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000049698_814252032.pth +[2024-03-29 17:39:05,264][00497] Updated weights for policy 0, policy_version 50317 (0.0021) +[2024-03-29 17:39:08,630][00497] Updated weights for policy 0, policy_version 50327 (0.0023) +[2024-03-29 17:39:08,839][00126] Fps is (10 sec: 45875.7, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 824557568. Throughput: 0: 42355.5. Samples: 706714520. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 17:39:08,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 17:39:13,132][00497] Updated weights for policy 0, policy_version 50337 (0.0023) +[2024-03-29 17:39:13,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42325.5, 300 sec: 42098.5). Total num frames: 824737792. Throughput: 0: 42307.3. Samples: 706978060. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:13,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 17:39:17,284][00497] Updated weights for policy 0, policy_version 50347 (0.0025) +[2024-03-29 17:39:18,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42052.4, 300 sec: 42043.0). Total num frames: 824950784. Throughput: 0: 42423.6. Samples: 707113400. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:18,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 17:39:20,850][00497] Updated weights for policy 0, policy_version 50357 (0.0023) +[2024-03-29 17:39:23,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42871.4, 300 sec: 42154.1). Total num frames: 825196544. Throughput: 0: 42597.7. Samples: 707354720. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:23,841][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:39:24,106][00497] Updated weights for policy 0, policy_version 50367 (0.0025) +[2024-03-29 17:39:27,942][00476] Signal inference workers to stop experience collection... (25150 times) +[2024-03-29 17:39:27,967][00497] InferenceWorker_p0-w0: stopping experience collection (25150 times) +[2024-03-29 17:39:28,123][00476] Signal inference workers to resume experience collection... 
(25150 times) +[2024-03-29 17:39:28,124][00497] InferenceWorker_p0-w0: resuming experience collection (25150 times) +[2024-03-29 17:39:28,420][00497] Updated weights for policy 0, policy_version 50377 (0.0023) +[2024-03-29 17:39:28,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 825376768. Throughput: 0: 42515.9. Samples: 707620140. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:28,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 17:39:32,524][00497] Updated weights for policy 0, policy_version 50387 (0.0029) +[2024-03-29 17:39:33,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 825589760. Throughput: 0: 42487.9. Samples: 707750480. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:33,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:39:36,059][00497] Updated weights for policy 0, policy_version 50397 (0.0031) +[2024-03-29 17:39:38,839][00126] Fps is (10 sec: 45876.1, 60 sec: 42871.6, 300 sec: 42154.1). Total num frames: 825835520. Throughput: 0: 42721.5. Samples: 707993420. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 17:39:38,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 17:39:39,489][00497] Updated weights for policy 0, policy_version 50407 (0.0023) +[2024-03-29 17:39:43,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 825999360. Throughput: 0: 42229.5. Samples: 708241740. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:39:43,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:39:44,175][00497] Updated weights for policy 0, policy_version 50417 (0.0019) +[2024-03-29 17:39:48,153][00497] Updated weights for policy 0, policy_version 50427 (0.0025) +[2024-03-29 17:39:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.3, 300 sec: 42098.6). Total num frames: 826228736. Throughput: 0: 42305.8. Samples: 708380120. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:39:48,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 17:39:51,699][00497] Updated weights for policy 0, policy_version 50437 (0.0036) +[2024-03-29 17:39:53,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42598.5, 300 sec: 42098.6). Total num frames: 826458112. Throughput: 0: 42475.6. Samples: 708625920. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:39:53,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:39:55,048][00497] Updated weights for policy 0, policy_version 50447 (0.0017) +[2024-03-29 17:39:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42598.5, 300 sec: 42154.1). Total num frames: 826654720. Throughput: 0: 42150.7. Samples: 708874840. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:39:58,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 17:39:59,823][00497] Updated weights for policy 0, policy_version 50457 (0.0027) +[2024-03-29 17:40:00,250][00476] Signal inference workers to stop experience collection... (25200 times) +[2024-03-29 17:40:00,292][00497] InferenceWorker_p0-w0: stopping experience collection (25200 times) +[2024-03-29 17:40:00,447][00476] Signal inference workers to resume experience collection... 
(25200 times) +[2024-03-29 17:40:00,447][00497] InferenceWorker_p0-w0: resuming experience collection (25200 times) +[2024-03-29 17:40:03,543][00497] Updated weights for policy 0, policy_version 50467 (0.0022) +[2024-03-29 17:40:03,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 826851328. Throughput: 0: 42078.2. Samples: 709006920. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:40:03,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:40:07,120][00497] Updated weights for policy 0, policy_version 50477 (0.0018) +[2024-03-29 17:40:08,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 827097088. Throughput: 0: 42502.8. Samples: 709267340. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:40:08,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 17:40:10,377][00497] Updated weights for policy 0, policy_version 50487 (0.0029) +[2024-03-29 17:40:13,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42598.4, 300 sec: 42098.6). Total num frames: 827293696. Throughput: 0: 42150.4. Samples: 709516900. Policy #0 lag: (min: 1.0, avg: 20.8, max: 41.0) +[2024-03-29 17:40:13,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 17:40:15,053][00497] Updated weights for policy 0, policy_version 50497 (0.0025) +[2024-03-29 17:40:18,839][00126] Fps is (10 sec: 39321.9, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 827490304. Throughput: 0: 42044.1. Samples: 709642460. Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:18,841][00126] Avg episode reward: [(0, '0.465')] +[2024-03-29 17:40:19,087][00497] Updated weights for policy 0, policy_version 50507 (0.0025) +[2024-03-29 17:40:22,668][00497] Updated weights for policy 0, policy_version 50517 (0.0020) +[2024-03-29 17:40:23,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.4, 300 sec: 42043.0). Total num frames: 827719680. Throughput: 0: 42429.3. Samples: 709902740. Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:23,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 17:40:25,957][00497] Updated weights for policy 0, policy_version 50527 (0.0017) +[2024-03-29 17:40:28,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42598.5, 300 sec: 42154.1). Total num frames: 827932672. Throughput: 0: 42526.6. Samples: 710155440. Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:28,840][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 17:40:30,117][00476] Signal inference workers to stop experience collection... (25250 times) +[2024-03-29 17:40:30,194][00476] Signal inference workers to resume experience collection... (25250 times) +[2024-03-29 17:40:30,196][00497] InferenceWorker_p0-w0: stopping experience collection (25250 times) +[2024-03-29 17:40:30,220][00497] InferenceWorker_p0-w0: resuming experience collection (25250 times) +[2024-03-29 17:40:30,505][00497] Updated weights for policy 0, policy_version 50537 (0.0020) +[2024-03-29 17:40:33,839][00126] Fps is (10 sec: 40959.3, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 828129280. Throughput: 0: 42289.2. Samples: 710283140. 
Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:33,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 17:40:34,521][00497] Updated weights for policy 0, policy_version 50547 (0.0020) +[2024-03-29 17:40:38,085][00497] Updated weights for policy 0, policy_version 50557 (0.0027) +[2024-03-29 17:40:38,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42052.1, 300 sec: 42098.5). Total num frames: 828358656. Throughput: 0: 42523.3. Samples: 710539480. Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:38,840][00126] Avg episode reward: [(0, '0.507')] +[2024-03-29 17:40:41,523][00497] Updated weights for policy 0, policy_version 50567 (0.0025) +[2024-03-29 17:40:43,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42871.4, 300 sec: 42154.1). Total num frames: 828571648. Throughput: 0: 42630.1. Samples: 710793200. Policy #0 lag: (min: 0.0, avg: 21.0, max: 43.0) +[2024-03-29 17:40:43,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:40:45,902][00497] Updated weights for policy 0, policy_version 50577 (0.0023) +[2024-03-29 17:40:48,839][00126] Fps is (10 sec: 39322.5, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 828751872. Throughput: 0: 42563.6. Samples: 710922280. Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:40:48,841][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 17:40:49,813][00497] Updated weights for policy 0, policy_version 50587 (0.0020) +[2024-03-29 17:40:53,696][00497] Updated weights for policy 0, policy_version 50597 (0.0024) +[2024-03-29 17:40:53,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 828981248. Throughput: 0: 42304.5. Samples: 711171040. Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:40:53,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:40:57,104][00497] Updated weights for policy 0, policy_version 50607 (0.0023) +[2024-03-29 17:40:58,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42325.2, 300 sec: 42098.5). Total num frames: 829194240. Throughput: 0: 42295.0. Samples: 711420180. Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:40:58,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 17:41:01,858][00497] Updated weights for policy 0, policy_version 50617 (0.0032) +[2024-03-29 17:41:03,283][00476] Signal inference workers to stop experience collection... (25300 times) +[2024-03-29 17:41:03,327][00497] InferenceWorker_p0-w0: stopping experience collection (25300 times) +[2024-03-29 17:41:03,363][00476] Signal inference workers to resume experience collection... (25300 times) +[2024-03-29 17:41:03,369][00497] InferenceWorker_p0-w0: resuming experience collection (25300 times) +[2024-03-29 17:41:03,839][00126] Fps is (10 sec: 40959.2, 60 sec: 42325.2, 300 sec: 42154.1). Total num frames: 829390848. Throughput: 0: 42221.1. Samples: 711542420. Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:41:03,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 17:41:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050622_829390848.pth... +[2024-03-29 17:41:04,220][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050006_819298304.pth +[2024-03-29 17:41:05,736][00497] Updated weights for policy 0, policy_version 50627 (0.0020) +[2024-03-29 17:41:08,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.3, 300 sec: 42043.0). Total num frames: 829603840. Throughput: 0: 42284.0. Samples: 711805520. 
Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:41:08,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 17:41:09,204][00497] Updated weights for policy 0, policy_version 50637 (0.0032) +[2024-03-29 17:41:12,692][00497] Updated weights for policy 0, policy_version 50647 (0.0024) +[2024-03-29 17:41:13,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 829800448. Throughput: 0: 41714.2. Samples: 712032580. Policy #0 lag: (min: 0.0, avg: 19.9, max: 40.0) +[2024-03-29 17:41:13,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:41:17,320][00497] Updated weights for policy 0, policy_version 50657 (0.0027) +[2024-03-29 17:41:18,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 830013440. Throughput: 0: 41959.7. Samples: 712171320. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:18,840][00126] Avg episode reward: [(0, '0.625')] +[2024-03-29 17:41:21,204][00497] Updated weights for policy 0, policy_version 50667 (0.0023) +[2024-03-29 17:41:23,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 830242816. Throughput: 0: 42106.4. Samples: 712434260. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:23,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:41:24,568][00497] Updated weights for policy 0, policy_version 50677 (0.0023) +[2024-03-29 17:41:28,264][00497] Updated weights for policy 0, policy_version 50687 (0.0020) +[2024-03-29 17:41:28,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 830455808. Throughput: 0: 41826.3. Samples: 712675380. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:28,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:41:32,635][00497] Updated weights for policy 0, policy_version 50697 (0.0017) +[2024-03-29 17:41:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.4, 300 sec: 42098.5). Total num frames: 830668800. Throughput: 0: 41952.9. Samples: 712810160. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:33,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 17:41:36,568][00497] Updated weights for policy 0, policy_version 50707 (0.0023) +[2024-03-29 17:41:38,322][00476] Signal inference workers to stop experience collection... (25350 times) +[2024-03-29 17:41:38,399][00497] InferenceWorker_p0-w0: stopping experience collection (25350 times) +[2024-03-29 17:41:38,493][00476] Signal inference workers to resume experience collection... (25350 times) +[2024-03-29 17:41:38,493][00497] InferenceWorker_p0-w0: resuming experience collection (25350 times) +[2024-03-29 17:41:38,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 830881792. Throughput: 0: 42270.5. Samples: 713073220. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:38,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:41:39,930][00497] Updated weights for policy 0, policy_version 50717 (0.0023) +[2024-03-29 17:41:43,647][00497] Updated weights for policy 0, policy_version 50727 (0.0023) +[2024-03-29 17:41:43,839][00126] Fps is (10 sec: 44236.0, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 831111168. Throughput: 0: 42015.5. Samples: 713310880. 
Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:43,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:41:48,084][00497] Updated weights for policy 0, policy_version 50737 (0.0028) +[2024-03-29 17:41:48,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42325.2, 300 sec: 42154.1). Total num frames: 831291392. Throughput: 0: 42304.9. Samples: 713446140. Policy #0 lag: (min: 1.0, avg: 21.0, max: 42.0) +[2024-03-29 17:41:48,840][00126] Avg episode reward: [(0, '0.634')] +[2024-03-29 17:41:51,976][00497] Updated weights for policy 0, policy_version 50747 (0.0025) +[2024-03-29 17:41:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.1, 300 sec: 42098.5). Total num frames: 831504384. Throughput: 0: 42286.0. Samples: 713708400. Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:41:53,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 17:41:55,371][00497] Updated weights for policy 0, policy_version 50757 (0.0026) +[2024-03-29 17:41:58,839][00126] Fps is (10 sec: 45876.0, 60 sec: 42598.5, 300 sec: 42209.7). Total num frames: 831750144. Throughput: 0: 42833.0. Samples: 713960060. Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:41:58,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:41:58,871][00497] Updated weights for policy 0, policy_version 50767 (0.0021) +[2024-03-29 17:42:03,507][00497] Updated weights for policy 0, policy_version 50777 (0.0021) +[2024-03-29 17:42:03,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 42265.1). Total num frames: 831946752. Throughput: 0: 42730.9. Samples: 714094220. Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:42:03,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 17:42:07,316][00497] Updated weights for policy 0, policy_version 50787 (0.0025) +[2024-03-29 17:42:08,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 832143360. Throughput: 0: 42842.7. Samples: 714362180. Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:42:08,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:42:09,508][00476] Signal inference workers to stop experience collection... (25400 times) +[2024-03-29 17:42:09,562][00497] InferenceWorker_p0-w0: stopping experience collection (25400 times) +[2024-03-29 17:42:09,664][00476] Signal inference workers to resume experience collection... (25400 times) +[2024-03-29 17:42:09,664][00497] InferenceWorker_p0-w0: resuming experience collection (25400 times) +[2024-03-29 17:42:10,554][00497] Updated weights for policy 0, policy_version 50797 (0.0026) +[2024-03-29 17:42:13,839][00126] Fps is (10 sec: 44237.7, 60 sec: 43144.6, 300 sec: 42209.6). Total num frames: 832389120. Throughput: 0: 42672.0. Samples: 714595620. Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:42:13,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 17:42:14,628][00497] Updated weights for policy 0, policy_version 50807 (0.0039) +[2024-03-29 17:42:18,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 832569344. Throughput: 0: 42532.0. Samples: 714724100. 
Policy #0 lag: (min: 0.0, avg: 18.6, max: 40.0) +[2024-03-29 17:42:18,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:42:18,888][00497] Updated weights for policy 0, policy_version 50817 (0.0018) +[2024-03-29 17:42:22,850][00497] Updated weights for policy 0, policy_version 50827 (0.0023) +[2024-03-29 17:42:23,839][00126] Fps is (10 sec: 37682.8, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 832765952. Throughput: 0: 42594.7. Samples: 714989980. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:23,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 17:42:26,281][00497] Updated weights for policy 0, policy_version 50837 (0.0021) +[2024-03-29 17:42:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 833011712. Throughput: 0: 42585.5. Samples: 715227220. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:28,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 17:42:30,049][00497] Updated weights for policy 0, policy_version 50847 (0.0028) +[2024-03-29 17:42:33,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 833208320. Throughput: 0: 42536.6. Samples: 715360280. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:33,841][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 17:42:34,406][00497] Updated weights for policy 0, policy_version 50857 (0.0023) +[2024-03-29 17:42:38,498][00497] Updated weights for policy 0, policy_version 50867 (0.0025) +[2024-03-29 17:42:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42325.4, 300 sec: 42265.2). Total num frames: 833421312. Throughput: 0: 42719.7. Samples: 715630780. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:38,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:42:42,006][00476] Signal inference workers to stop experience collection... (25450 times) +[2024-03-29 17:42:42,007][00476] Signal inference workers to resume experience collection... (25450 times) +[2024-03-29 17:42:42,008][00497] Updated weights for policy 0, policy_version 50877 (0.0026) +[2024-03-29 17:42:42,052][00497] InferenceWorker_p0-w0: stopping experience collection (25450 times) +[2024-03-29 17:42:42,052][00497] InferenceWorker_p0-w0: resuming experience collection (25450 times) +[2024-03-29 17:42:43,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 833650688. Throughput: 0: 42195.9. Samples: 715858880. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:43,840][00126] Avg episode reward: [(0, '0.629')] +[2024-03-29 17:42:45,779][00497] Updated weights for policy 0, policy_version 50887 (0.0026) +[2024-03-29 17:42:48,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42598.5, 300 sec: 42265.2). Total num frames: 833847296. Throughput: 0: 41978.0. Samples: 715983220. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:48,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:42:50,056][00497] Updated weights for policy 0, policy_version 50897 (0.0026) +[2024-03-29 17:42:53,839][00126] Fps is (10 sec: 37682.9, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 834027520. Throughput: 0: 41795.8. Samples: 716243000. 
Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0) +[2024-03-29 17:42:53,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 17:42:54,296][00497] Updated weights for policy 0, policy_version 50907 (0.0023) +[2024-03-29 17:42:57,594][00497] Updated weights for policy 0, policy_version 50917 (0.0024) +[2024-03-29 17:42:58,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42052.2, 300 sec: 42320.7). Total num frames: 834273280. Throughput: 0: 42036.3. Samples: 716487260. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:42:58,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 17:43:01,342][00497] Updated weights for policy 0, policy_version 50927 (0.0022) +[2024-03-29 17:43:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.3, 300 sec: 42265.1). Total num frames: 834469888. Throughput: 0: 42034.5. Samples: 716615660. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:43:03,842][00126] Avg episode reward: [(0, '0.443')] +[2024-03-29 17:43:04,079][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050933_834486272.pth... +[2024-03-29 17:43:04,395][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050313_824328192.pth +[2024-03-29 17:43:05,763][00497] Updated weights for policy 0, policy_version 50937 (0.0025) +[2024-03-29 17:43:08,839][00126] Fps is (10 sec: 37683.8, 60 sec: 41779.2, 300 sec: 42209.7). Total num frames: 834650112. Throughput: 0: 41893.9. Samples: 716875200. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:43:08,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 17:43:09,873][00497] Updated weights for policy 0, policy_version 50947 (0.0027) +[2024-03-29 17:43:13,079][00497] Updated weights for policy 0, policy_version 50957 (0.0020) +[2024-03-29 17:43:13,083][00476] Signal inference workers to stop experience collection... (25500 times) +[2024-03-29 17:43:13,084][00476] Signal inference workers to resume experience collection... (25500 times) +[2024-03-29 17:43:13,131][00497] InferenceWorker_p0-w0: stopping experience collection (25500 times) +[2024-03-29 17:43:13,131][00497] InferenceWorker_p0-w0: resuming experience collection (25500 times) +[2024-03-29 17:43:13,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 834912256. Throughput: 0: 42078.6. Samples: 717120760. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:43:13,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 17:43:16,987][00497] Updated weights for policy 0, policy_version 50967 (0.0025) +[2024-03-29 17:43:18,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 835108864. Throughput: 0: 41964.0. Samples: 717248660. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:43:18,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:43:20,923][00497] Updated weights for policy 0, policy_version 50977 (0.0024) +[2024-03-29 17:43:23,839][00126] Fps is (10 sec: 39321.4, 60 sec: 42325.4, 300 sec: 42265.2). Total num frames: 835305472. Throughput: 0: 42008.0. Samples: 717521140. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:43:23,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:43:24,907][00497] Updated weights for policy 0, policy_version 50987 (0.0030) +[2024-03-29 17:43:28,279][00497] Updated weights for policy 0, policy_version 50997 (0.0023) +[2024-03-29 17:43:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 835551232. Throughput: 0: 42344.1. Samples: 717764360. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:28,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 17:43:32,252][00497] Updated weights for policy 0, policy_version 51007 (0.0026) +[2024-03-29 17:43:33,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42325.2, 300 sec: 42320.7). Total num frames: 835747840. Throughput: 0: 42279.3. Samples: 717885800. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:33,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 17:43:36,285][00497] Updated weights for policy 0, policy_version 51017 (0.0029) +[2024-03-29 17:43:38,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 835944448. Throughput: 0: 42678.4. Samples: 718163520. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:38,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 17:43:40,313][00497] Updated weights for policy 0, policy_version 51027 (0.0019) +[2024-03-29 17:43:43,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 836173824. Throughput: 0: 42769.4. Samples: 718411880. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:43,840][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 17:43:43,854][00497] Updated weights for policy 0, policy_version 51037 (0.0027) +[2024-03-29 17:43:47,239][00476] Signal inference workers to stop experience collection... (25550 times) +[2024-03-29 17:43:47,240][00476] Signal inference workers to resume experience collection... (25550 times) +[2024-03-29 17:43:47,289][00497] InferenceWorker_p0-w0: stopping experience collection (25550 times) +[2024-03-29 17:43:47,289][00497] InferenceWorker_p0-w0: resuming experience collection (25550 times) +[2024-03-29 17:43:47,547][00497] Updated weights for policy 0, policy_version 51047 (0.0027) +[2024-03-29 17:43:48,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 836386816. Throughput: 0: 42289.5. Samples: 718518680. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:48,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 17:43:51,971][00497] Updated weights for policy 0, policy_version 51057 (0.0030) +[2024-03-29 17:43:53,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42598.5, 300 sec: 42320.7). Total num frames: 836583424. Throughput: 0: 42575.5. Samples: 718791100. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0) +[2024-03-29 17:43:53,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 17:43:56,110][00497] Updated weights for policy 0, policy_version 51067 (0.0020) +[2024-03-29 17:43:58,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.4, 300 sec: 42320.7). Total num frames: 836812800. Throughput: 0: 42980.5. Samples: 719054880. 
Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:43:58,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 17:43:59,225][00497] Updated weights for policy 0, policy_version 51077 (0.0024) +[2024-03-29 17:44:02,974][00497] Updated weights for policy 0, policy_version 51087 (0.0021) +[2024-03-29 17:44:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42598.5, 300 sec: 42265.2). Total num frames: 837025792. Throughput: 0: 42299.1. Samples: 719152120. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:03,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:44:07,388][00497] Updated weights for policy 0, policy_version 51097 (0.0018) +[2024-03-29 17:44:08,839][00126] Fps is (10 sec: 42598.4, 60 sec: 43144.5, 300 sec: 42376.2). Total num frames: 837238784. Throughput: 0: 42474.3. Samples: 719432480. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:08,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 17:44:11,445][00497] Updated weights for policy 0, policy_version 51107 (0.0022) +[2024-03-29 17:44:13,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42325.3, 300 sec: 42376.2). Total num frames: 837451776. Throughput: 0: 42821.3. Samples: 719691320. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:13,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 17:44:14,635][00497] Updated weights for policy 0, policy_version 51117 (0.0018) +[2024-03-29 17:44:18,436][00497] Updated weights for policy 0, policy_version 51127 (0.0018) +[2024-03-29 17:44:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42598.4, 300 sec: 42265.2). Total num frames: 837664768. Throughput: 0: 42314.0. Samples: 719789920. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:18,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 17:44:21,017][00476] Signal inference workers to stop experience collection... (25600 times) +[2024-03-29 17:44:21,092][00476] Signal inference workers to resume experience collection... (25600 times) +[2024-03-29 17:44:21,090][00497] InferenceWorker_p0-w0: stopping experience collection (25600 times) +[2024-03-29 17:44:21,118][00497] InferenceWorker_p0-w0: resuming experience collection (25600 times) +[2024-03-29 17:44:22,702][00497] Updated weights for policy 0, policy_version 51137 (0.0022) +[2024-03-29 17:44:23,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42871.5, 300 sec: 42376.3). Total num frames: 837877760. Throughput: 0: 42433.3. Samples: 720073020. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:23,840][00126] Avg episode reward: [(0, '0.639')] +[2024-03-29 17:44:26,583][00497] Updated weights for policy 0, policy_version 51147 (0.0025) +[2024-03-29 17:44:28,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 42320.7). Total num frames: 838074368. Throughput: 0: 42582.7. Samples: 720328100. Policy #0 lag: (min: 2.0, avg: 20.9, max: 43.0) +[2024-03-29 17:44:28,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:44:30,150][00497] Updated weights for policy 0, policy_version 51157 (0.0022) +[2024-03-29 17:44:33,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42598.5, 300 sec: 42265.2). Total num frames: 838303744. Throughput: 0: 42514.7. Samples: 720431840. 
Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:33,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 17:44:33,961][00497] Updated weights for policy 0, policy_version 51167 (0.0017) +[2024-03-29 17:44:38,514][00497] Updated weights for policy 0, policy_version 51177 (0.0022) +[2024-03-29 17:44:38,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42325.3, 300 sec: 42320.7). Total num frames: 838483968. Throughput: 0: 42501.3. Samples: 720703660. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:38,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 17:44:42,411][00497] Updated weights for policy 0, policy_version 51187 (0.0029) +[2024-03-29 17:44:43,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41779.2, 300 sec: 42209.6). Total num frames: 838680576. Throughput: 0: 42063.1. Samples: 720947720. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:43,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 17:44:45,966][00497] Updated weights for policy 0, policy_version 51197 (0.0024) +[2024-03-29 17:44:48,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42598.4, 300 sec: 42320.7). Total num frames: 838942720. Throughput: 0: 42347.1. Samples: 721057740. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:48,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 17:44:49,666][00497] Updated weights for policy 0, policy_version 51207 (0.0026) +[2024-03-29 17:44:53,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 839106560. Throughput: 0: 42075.5. Samples: 721325880. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:53,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 17:44:54,318][00497] Updated weights for policy 0, policy_version 51217 (0.0023) +[2024-03-29 17:44:54,681][00476] Signal inference workers to stop experience collection... (25650 times) +[2024-03-29 17:44:54,714][00497] InferenceWorker_p0-w0: stopping experience collection (25650 times) +[2024-03-29 17:44:54,899][00476] Signal inference workers to resume experience collection... (25650 times) +[2024-03-29 17:44:54,899][00497] InferenceWorker_p0-w0: resuming experience collection (25650 times) +[2024-03-29 17:44:58,092][00497] Updated weights for policy 0, policy_version 51227 (0.0021) +[2024-03-29 17:44:58,839][00126] Fps is (10 sec: 37683.1, 60 sec: 41779.1, 300 sec: 42265.2). Total num frames: 839319552. Throughput: 0: 41948.0. Samples: 721578980. Policy #0 lag: (min: 0.0, avg: 21.9, max: 41.0) +[2024-03-29 17:44:58,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 17:45:01,670][00497] Updated weights for policy 0, policy_version 51237 (0.0031) +[2024-03-29 17:45:03,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 839565312. Throughput: 0: 42331.1. Samples: 721694820. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:03,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 17:45:04,090][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051244_839581696.pth... +[2024-03-29 17:45:04,426][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050622_829390848.pth +[2024-03-29 17:45:05,652][00497] Updated weights for policy 0, policy_version 51247 (0.0032) +[2024-03-29 17:45:08,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 42154.1). Total num frames: 839729152. Throughput: 0: 41375.1. Samples: 721934900. 
Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:08,841][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 17:45:10,017][00497] Updated weights for policy 0, policy_version 51257 (0.0022) +[2024-03-29 17:45:13,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 839942144. Throughput: 0: 41492.0. Samples: 722195240. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:13,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 17:45:14,068][00497] Updated weights for policy 0, policy_version 51267 (0.0019) +[2024-03-29 17:45:17,439][00497] Updated weights for policy 0, policy_version 51277 (0.0023) +[2024-03-29 17:45:18,839][00126] Fps is (10 sec: 47513.3, 60 sec: 42325.2, 300 sec: 42320.7). Total num frames: 840204288. Throughput: 0: 42029.7. Samples: 722323180. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:18,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:45:21,290][00497] Updated weights for policy 0, policy_version 51287 (0.0021) +[2024-03-29 17:45:23,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41233.0, 300 sec: 42098.5). Total num frames: 840351744. Throughput: 0: 41222.1. Samples: 722558660. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:23,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:45:25,737][00497] Updated weights for policy 0, policy_version 51297 (0.0023) +[2024-03-29 17:45:28,839][00126] Fps is (10 sec: 37683.4, 60 sec: 41779.1, 300 sec: 42209.6). Total num frames: 840581120. Throughput: 0: 41777.7. Samples: 722827720. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:28,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:45:29,249][00476] Signal inference workers to stop experience collection... (25700 times) +[2024-03-29 17:45:29,304][00497] InferenceWorker_p0-w0: stopping experience collection (25700 times) +[2024-03-29 17:45:29,338][00476] Signal inference workers to resume experience collection... (25700 times) +[2024-03-29 17:45:29,339][00497] InferenceWorker_p0-w0: resuming experience collection (25700 times) +[2024-03-29 17:45:29,672][00497] Updated weights for policy 0, policy_version 51307 (0.0025) +[2024-03-29 17:45:33,069][00497] Updated weights for policy 0, policy_version 51317 (0.0028) +[2024-03-29 17:45:33,839][00126] Fps is (10 sec: 45876.0, 60 sec: 41779.2, 300 sec: 42209.7). Total num frames: 840810496. Throughput: 0: 42115.6. Samples: 722952940. Policy #0 lag: (min: 3.0, avg: 23.6, max: 42.0) +[2024-03-29 17:45:33,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:45:36,862][00497] Updated weights for policy 0, policy_version 51327 (0.0022) +[2024-03-29 17:45:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41779.2, 300 sec: 42098.5). Total num frames: 840990720. Throughput: 0: 41296.8. Samples: 723184240. Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:45:38,842][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 17:45:41,454][00497] Updated weights for policy 0, policy_version 51337 (0.0027) +[2024-03-29 17:45:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 841203712. Throughput: 0: 41798.3. Samples: 723459900. 
Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:45:43,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 17:45:45,031][00497] Updated weights for policy 0, policy_version 51347 (0.0018) +[2024-03-29 17:45:48,697][00497] Updated weights for policy 0, policy_version 51357 (0.0017) +[2024-03-29 17:45:48,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 841433088. Throughput: 0: 42018.2. Samples: 723585640. Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:45:48,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 17:45:52,478][00497] Updated weights for policy 0, policy_version 51367 (0.0029) +[2024-03-29 17:45:53,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 841646080. Throughput: 0: 41793.4. Samples: 723815600. Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:45:53,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 17:45:56,920][00497] Updated weights for policy 0, policy_version 51377 (0.0021) +[2024-03-29 17:45:58,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.3, 300 sec: 42154.1). Total num frames: 841826304. Throughput: 0: 42116.0. Samples: 724090460. Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:45:58,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 17:45:59,489][00476] Signal inference workers to stop experience collection... (25750 times) +[2024-03-29 17:45:59,489][00476] Signal inference workers to resume experience collection... (25750 times) +[2024-03-29 17:45:59,533][00497] InferenceWorker_p0-w0: stopping experience collection (25750 times) +[2024-03-29 17:45:59,533][00497] InferenceWorker_p0-w0: resuming experience collection (25750 times) +[2024-03-29 17:46:00,681][00497] Updated weights for policy 0, policy_version 51387 (0.0020) +[2024-03-29 17:46:03,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.1, 300 sec: 42265.1). Total num frames: 842072064. Throughput: 0: 42240.9. Samples: 724224020. Policy #0 lag: (min: 1.0, avg: 23.3, max: 40.0) +[2024-03-29 17:46:03,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:46:04,063][00497] Updated weights for policy 0, policy_version 51397 (0.0037) +[2024-03-29 17:46:07,821][00497] Updated weights for policy 0, policy_version 51407 (0.0023) +[2024-03-29 17:46:08,839][00126] Fps is (10 sec: 47513.0, 60 sec: 42871.4, 300 sec: 42376.2). Total num frames: 842301440. Throughput: 0: 42001.4. Samples: 724448720. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:08,840][00126] Avg episode reward: [(0, '0.436')] +[2024-03-29 17:46:12,372][00497] Updated weights for policy 0, policy_version 51417 (0.0025) +[2024-03-29 17:46:13,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 842465280. Throughput: 0: 42023.2. Samples: 724718760. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:13,840][00126] Avg episode reward: [(0, '0.500')] +[2024-03-29 17:46:16,524][00497] Updated weights for policy 0, policy_version 51427 (0.0024) +[2024-03-29 17:46:18,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41506.2, 300 sec: 42209.6). Total num frames: 842694656. Throughput: 0: 42241.8. Samples: 724853820. 
Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:18,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:46:19,637][00497] Updated weights for policy 0, policy_version 51437 (0.0035) +[2024-03-29 17:46:23,482][00497] Updated weights for policy 0, policy_version 51447 (0.0029) +[2024-03-29 17:46:23,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42598.5, 300 sec: 42209.6). Total num frames: 842907648. Throughput: 0: 41853.5. Samples: 725067640. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:23,840][00126] Avg episode reward: [(0, '0.507')] +[2024-03-29 17:46:28,112][00497] Updated weights for policy 0, policy_version 51457 (0.0024) +[2024-03-29 17:46:28,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 843104256. Throughput: 0: 41968.0. Samples: 725348460. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:28,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:46:32,092][00497] Updated weights for policy 0, policy_version 51467 (0.0020) +[2024-03-29 17:46:33,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 843317248. Throughput: 0: 41981.8. Samples: 725474820. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:33,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 17:46:34,633][00476] Signal inference workers to stop experience collection... (25800 times) +[2024-03-29 17:46:34,705][00497] InferenceWorker_p0-w0: stopping experience collection (25800 times) +[2024-03-29 17:46:34,707][00476] Signal inference workers to resume experience collection... (25800 times) +[2024-03-29 17:46:34,730][00497] InferenceWorker_p0-w0: resuming experience collection (25800 times) +[2024-03-29 17:46:35,261][00497] Updated weights for policy 0, policy_version 51477 (0.0026) +[2024-03-29 17:46:38,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42598.5, 300 sec: 42154.1). Total num frames: 843546624. Throughput: 0: 41912.5. Samples: 725701660. Policy #0 lag: (min: 0.0, avg: 24.3, max: 42.0) +[2024-03-29 17:46:38,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 17:46:39,068][00497] Updated weights for policy 0, policy_version 51487 (0.0018) +[2024-03-29 17:46:43,707][00497] Updated weights for policy 0, policy_version 51497 (0.0022) +[2024-03-29 17:46:43,841][00126] Fps is (10 sec: 40952.5, 60 sec: 42051.0, 300 sec: 42153.9). Total num frames: 843726848. Throughput: 0: 42133.8. Samples: 725986560. Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:46:43,841][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 17:46:47,646][00497] Updated weights for policy 0, policy_version 51507 (0.0027) +[2024-03-29 17:46:48,839][00126] Fps is (10 sec: 39321.0, 60 sec: 41779.1, 300 sec: 42154.1). Total num frames: 843939840. Throughput: 0: 41780.9. Samples: 726104160. Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:46:48,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 17:46:50,969][00497] Updated weights for policy 0, policy_version 51517 (0.0026) +[2024-03-29 17:46:53,839][00126] Fps is (10 sec: 45882.7, 60 sec: 42325.2, 300 sec: 42154.1). Total num frames: 844185600. Throughput: 0: 41891.5. Samples: 726333840. 
Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:46:53,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:46:54,768][00497] Updated weights for policy 0, policy_version 51527 (0.0033) +[2024-03-29 17:46:58,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 844333056. Throughput: 0: 41736.0. Samples: 726596880. Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:46:58,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 17:46:59,499][00497] Updated weights for policy 0, policy_version 51537 (0.0026) +[2024-03-29 17:47:03,604][00497] Updated weights for policy 0, policy_version 51547 (0.0021) +[2024-03-29 17:47:03,839][00126] Fps is (10 sec: 36045.2, 60 sec: 41233.1, 300 sec: 42043.0). Total num frames: 844546048. Throughput: 0: 41728.8. Samples: 726731620. Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:47:03,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 17:47:03,886][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051548_844562432.pth... +[2024-03-29 17:47:04,197][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000050933_834486272.pth +[2024-03-29 17:47:06,043][00476] Signal inference workers to stop experience collection... (25850 times) +[2024-03-29 17:47:06,077][00497] InferenceWorker_p0-w0: stopping experience collection (25850 times) +[2024-03-29 17:47:06,264][00476] Signal inference workers to resume experience collection... (25850 times) +[2024-03-29 17:47:06,265][00497] InferenceWorker_p0-w0: resuming experience collection (25850 times) +[2024-03-29 17:47:06,770][00497] Updated weights for policy 0, policy_version 51557 (0.0024) +[2024-03-29 17:47:08,839][00126] Fps is (10 sec: 47513.3, 60 sec: 41779.3, 300 sec: 42098.5). Total num frames: 844808192. Throughput: 0: 41976.9. Samples: 726956600. Policy #0 lag: (min: 1.0, avg: 22.9, max: 41.0) +[2024-03-29 17:47:08,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 17:47:10,384][00497] Updated weights for policy 0, policy_version 51567 (0.0028) +[2024-03-29 17:47:13,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.0, 300 sec: 41987.4). Total num frames: 844955648. Throughput: 0: 41507.0. Samples: 727216280. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:13,842][00126] Avg episode reward: [(0, '0.640')] +[2024-03-29 17:47:15,140][00497] Updated weights for policy 0, policy_version 51577 (0.0023) +[2024-03-29 17:47:18,839][00126] Fps is (10 sec: 36044.9, 60 sec: 41233.1, 300 sec: 42043.0). Total num frames: 845168640. Throughput: 0: 41495.5. Samples: 727342120. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:18,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 17:47:19,455][00497] Updated weights for policy 0, policy_version 51587 (0.0019) +[2024-03-29 17:47:22,562][00497] Updated weights for policy 0, policy_version 51597 (0.0027) +[2024-03-29 17:47:23,839][00126] Fps is (10 sec: 47514.3, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 845430784. Throughput: 0: 42044.4. Samples: 727593660. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:23,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:47:26,099][00497] Updated weights for policy 0, policy_version 51607 (0.0019) +[2024-03-29 17:47:28,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 845594624. Throughput: 0: 41192.8. Samples: 727840160. 
Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:28,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 17:47:30,810][00497] Updated weights for policy 0, policy_version 51617 (0.0025) +[2024-03-29 17:47:33,839][00126] Fps is (10 sec: 37682.6, 60 sec: 41506.0, 300 sec: 41987.5). Total num frames: 845807616. Throughput: 0: 41483.1. Samples: 727970900. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:33,840][00126] Avg episode reward: [(0, '0.506')] +[2024-03-29 17:47:34,878][00497] Updated weights for policy 0, policy_version 51627 (0.0018) +[2024-03-29 17:47:37,659][00476] Signal inference workers to stop experience collection... (25900 times) +[2024-03-29 17:47:37,684][00497] InferenceWorker_p0-w0: stopping experience collection (25900 times) +[2024-03-29 17:47:37,883][00476] Signal inference workers to resume experience collection... (25900 times) +[2024-03-29 17:47:37,883][00497] InferenceWorker_p0-w0: resuming experience collection (25900 times) +[2024-03-29 17:47:38,197][00497] Updated weights for policy 0, policy_version 51637 (0.0025) +[2024-03-29 17:47:38,839][00126] Fps is (10 sec: 45874.5, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 846053376. Throughput: 0: 42021.4. Samples: 728224800. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:38,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 17:47:41,884][00497] Updated weights for policy 0, policy_version 51647 (0.0021) +[2024-03-29 17:47:43,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41780.5, 300 sec: 41987.5). Total num frames: 846233600. Throughput: 0: 41370.2. Samples: 728458540. Policy #0 lag: (min: 0.0, avg: 22.0, max: 42.0) +[2024-03-29 17:47:43,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 17:47:46,527][00497] Updated weights for policy 0, policy_version 51657 (0.0025) +[2024-03-29 17:47:48,839][00126] Fps is (10 sec: 37683.0, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 846430208. Throughput: 0: 41515.0. Samples: 728599800. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:47:48,842][00126] Avg episode reward: [(0, '0.653')] +[2024-03-29 17:47:50,699][00497] Updated weights for policy 0, policy_version 51667 (0.0027) +[2024-03-29 17:47:53,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 846675968. Throughput: 0: 42271.5. Samples: 728858820. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:47:53,840][00126] Avg episode reward: [(0, '0.619')] +[2024-03-29 17:47:53,840][00497] Updated weights for policy 0, policy_version 51677 (0.0030) +[2024-03-29 17:47:57,571][00497] Updated weights for policy 0, policy_version 51687 (0.0021) +[2024-03-29 17:47:58,839][00126] Fps is (10 sec: 44237.4, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 846872576. Throughput: 0: 41325.0. Samples: 729075900. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:47:58,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 17:48:02,261][00497] Updated weights for policy 0, policy_version 51697 (0.0019) +[2024-03-29 17:48:03,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 847052800. Throughput: 0: 41824.4. Samples: 729224220. 
Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:48:03,840][00126] Avg episode reward: [(0, '0.633')] +[2024-03-29 17:48:06,591][00497] Updated weights for policy 0, policy_version 51707 (0.0022) +[2024-03-29 17:48:08,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41233.1, 300 sec: 41931.9). Total num frames: 847282176. Throughput: 0: 42061.4. Samples: 729486420. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:48:08,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:48:09,456][00497] Updated weights for policy 0, policy_version 51717 (0.0026) +[2024-03-29 17:48:10,267][00476] Signal inference workers to stop experience collection... (25950 times) +[2024-03-29 17:48:10,338][00497] InferenceWorker_p0-w0: stopping experience collection (25950 times) +[2024-03-29 17:48:10,341][00476] Signal inference workers to resume experience collection... (25950 times) +[2024-03-29 17:48:10,365][00497] InferenceWorker_p0-w0: resuming experience collection (25950 times) +[2024-03-29 17:48:13,248][00497] Updated weights for policy 0, policy_version 51727 (0.0018) +[2024-03-29 17:48:13,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42598.4, 300 sec: 42043.0). Total num frames: 847511552. Throughput: 0: 41438.5. Samples: 729704900. Policy #0 lag: (min: 0.0, avg: 19.1, max: 41.0) +[2024-03-29 17:48:13,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:48:17,874][00497] Updated weights for policy 0, policy_version 51737 (0.0019) +[2024-03-29 17:48:18,839][00126] Fps is (10 sec: 40959.3, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 847691776. Throughput: 0: 41797.3. Samples: 729851780. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:18,840][00126] Avg episode reward: [(0, '0.646')] +[2024-03-29 17:48:22,176][00497] Updated weights for policy 0, policy_version 51747 (0.0019) +[2024-03-29 17:48:23,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.1, 300 sec: 41876.4). Total num frames: 847904768. Throughput: 0: 41958.3. Samples: 730112920. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:23,841][00126] Avg episode reward: [(0, '0.450')] +[2024-03-29 17:48:25,191][00497] Updated weights for policy 0, policy_version 51757 (0.0022) +[2024-03-29 17:48:28,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 848134144. Throughput: 0: 41617.7. Samples: 730331340. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:28,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:48:29,015][00497] Updated weights for policy 0, policy_version 51767 (0.0030) +[2024-03-29 17:48:33,587][00497] Updated weights for policy 0, policy_version 51777 (0.0026) +[2024-03-29 17:48:33,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 848314368. Throughput: 0: 41716.9. Samples: 730477060. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:33,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 17:48:37,657][00497] Updated weights for policy 0, policy_version 51787 (0.0017) +[2024-03-29 17:48:38,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41233.1, 300 sec: 41876.4). Total num frames: 848527360. Throughput: 0: 41850.7. Samples: 730742100. 
Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:38,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:48:40,814][00497] Updated weights for policy 0, policy_version 51797 (0.0026) +[2024-03-29 17:48:43,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42325.2, 300 sec: 41987.5). Total num frames: 848773120. Throughput: 0: 41927.0. Samples: 730962620. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:43,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 17:48:44,557][00497] Updated weights for policy 0, policy_version 51807 (0.0026) +[2024-03-29 17:48:47,705][00476] Signal inference workers to stop experience collection... (26000 times) +[2024-03-29 17:48:47,774][00497] InferenceWorker_p0-w0: stopping experience collection (26000 times) +[2024-03-29 17:48:47,778][00476] Signal inference workers to resume experience collection... (26000 times) +[2024-03-29 17:48:47,803][00497] InferenceWorker_p0-w0: resuming experience collection (26000 times) +[2024-03-29 17:48:48,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 848936960. Throughput: 0: 41547.9. Samples: 731093880. Policy #0 lag: (min: 1.0, avg: 19.9, max: 41.0) +[2024-03-29 17:48:48,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 17:48:49,301][00497] Updated weights for policy 0, policy_version 51817 (0.0032) +[2024-03-29 17:48:53,376][00497] Updated weights for policy 0, policy_version 51827 (0.0026) +[2024-03-29 17:48:53,839][00126] Fps is (10 sec: 37683.9, 60 sec: 41233.2, 300 sec: 41820.9). Total num frames: 849149952. Throughput: 0: 41720.5. Samples: 731363840. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:48:53,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:48:56,376][00497] Updated weights for policy 0, policy_version 51837 (0.0024) +[2024-03-29 17:48:58,839][00126] Fps is (10 sec: 47514.2, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 849412096. Throughput: 0: 42044.9. Samples: 731596920. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:48:58,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:48:59,947][00497] Updated weights for policy 0, policy_version 51847 (0.0022) +[2024-03-29 17:49:03,839][00126] Fps is (10 sec: 42597.4, 60 sec: 42052.2, 300 sec: 41820.8). Total num frames: 849575936. Throughput: 0: 41705.8. Samples: 731728540. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:49:03,841][00126] Avg episode reward: [(0, '0.612')] +[2024-03-29 17:49:03,917][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051855_849592320.pth... +[2024-03-29 17:49:04,232][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051244_839581696.pth +[2024-03-29 17:49:04,811][00497] Updated weights for policy 0, policy_version 51857 (0.0023) +[2024-03-29 17:49:08,839][00126] Fps is (10 sec: 36045.1, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 849772544. Throughput: 0: 41877.4. Samples: 731997400. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:49:08,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:49:09,104][00497] Updated weights for policy 0, policy_version 51867 (0.0027) +[2024-03-29 17:49:12,075][00497] Updated weights for policy 0, policy_version 51877 (0.0028) +[2024-03-29 17:49:13,839][00126] Fps is (10 sec: 44237.7, 60 sec: 41779.3, 300 sec: 41876.4). Total num frames: 850018304. Throughput: 0: 42084.5. Samples: 732225140. 
Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:49:13,840][00126] Avg episode reward: [(0, '0.417')] +[2024-03-29 17:49:15,550][00497] Updated weights for policy 0, policy_version 51887 (0.0031) +[2024-03-29 17:49:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.3, 300 sec: 41765.3). Total num frames: 850198528. Throughput: 0: 41554.4. Samples: 732347000. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 17:49:18,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:49:20,727][00497] Updated weights for policy 0, policy_version 51897 (0.0024) +[2024-03-29 17:49:21,571][00476] Signal inference workers to stop experience collection... (26050 times) +[2024-03-29 17:49:21,603][00497] InferenceWorker_p0-w0: stopping experience collection (26050 times) +[2024-03-29 17:49:21,756][00476] Signal inference workers to resume experience collection... (26050 times) +[2024-03-29 17:49:21,757][00497] InferenceWorker_p0-w0: resuming experience collection (26050 times) +[2024-03-29 17:49:23,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 850411520. Throughput: 0: 41521.4. Samples: 732610560. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:23,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:49:24,912][00497] Updated weights for policy 0, policy_version 51907 (0.0020) +[2024-03-29 17:49:27,915][00497] Updated weights for policy 0, policy_version 51917 (0.0027) +[2024-03-29 17:49:28,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 850640896. Throughput: 0: 41744.6. Samples: 732841120. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:28,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:49:31,460][00497] Updated weights for policy 0, policy_version 51927 (0.0018) +[2024-03-29 17:49:33,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 850821120. Throughput: 0: 41557.0. Samples: 732963940. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:33,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:49:36,260][00497] Updated weights for policy 0, policy_version 51937 (0.0018) +[2024-03-29 17:49:38,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41779.3, 300 sec: 41876.4). Total num frames: 851034112. Throughput: 0: 41799.1. Samples: 733244800. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:38,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:49:40,590][00497] Updated weights for policy 0, policy_version 51947 (0.0023) +[2024-03-29 17:49:43,668][00497] Updated weights for policy 0, policy_version 51957 (0.0028) +[2024-03-29 17:49:43,839][00126] Fps is (10 sec: 44237.3, 60 sec: 41506.3, 300 sec: 41765.3). Total num frames: 851263488. Throughput: 0: 41746.3. Samples: 733475500. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:43,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:49:47,235][00497] Updated weights for policy 0, policy_version 51967 (0.0027) +[2024-03-29 17:49:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.4, 300 sec: 41876.4). Total num frames: 851460096. Throughput: 0: 41341.1. Samples: 733588880. 
Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:48,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 17:49:51,997][00497] Updated weights for policy 0, policy_version 51977 (0.0019) +[2024-03-29 17:49:53,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 851656704. Throughput: 0: 41468.0. Samples: 733863460. Policy #0 lag: (min: 0.0, avg: 19.9, max: 41.0) +[2024-03-29 17:49:53,840][00126] Avg episode reward: [(0, '0.633')] +[2024-03-29 17:49:56,320][00497] Updated weights for policy 0, policy_version 51987 (0.0018) +[2024-03-29 17:49:57,783][00476] Signal inference workers to stop experience collection... (26100 times) +[2024-03-29 17:49:57,823][00497] InferenceWorker_p0-w0: stopping experience collection (26100 times) +[2024-03-29 17:49:58,013][00476] Signal inference workers to resume experience collection... (26100 times) +[2024-03-29 17:49:58,014][00497] InferenceWorker_p0-w0: resuming experience collection (26100 times) +[2024-03-29 17:49:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 851886080. Throughput: 0: 41841.8. Samples: 734108020. Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:49:58,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:49:59,609][00497] Updated weights for policy 0, policy_version 51997 (0.0026) +[2024-03-29 17:50:02,769][00497] Updated weights for policy 0, policy_version 52007 (0.0031) +[2024-03-29 17:50:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 852115456. Throughput: 0: 41673.7. Samples: 734222320. Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:50:03,840][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 17:50:07,738][00497] Updated weights for policy 0, policy_version 52017 (0.0017) +[2024-03-29 17:50:08,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 852279296. Throughput: 0: 41816.0. Samples: 734492280. Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:50:08,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 17:50:12,146][00497] Updated weights for policy 0, policy_version 52027 (0.0021) +[2024-03-29 17:50:13,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.0, 300 sec: 41654.3). Total num frames: 852492288. Throughput: 0: 42493.8. Samples: 734753340. Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:50:13,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 17:50:15,103][00497] Updated weights for policy 0, policy_version 52037 (0.0028) +[2024-03-29 17:50:18,476][00497] Updated weights for policy 0, policy_version 52047 (0.0024) +[2024-03-29 17:50:18,839][00126] Fps is (10 sec: 45874.2, 60 sec: 42325.2, 300 sec: 41987.5). Total num frames: 852738048. Throughput: 0: 41876.8. Samples: 734848400. Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:50:18,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:50:23,130][00497] Updated weights for policy 0, policy_version 52057 (0.0025) +[2024-03-29 17:50:23,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41779.1, 300 sec: 41820.8). Total num frames: 852918272. Throughput: 0: 41759.4. Samples: 735123980. 
Policy #0 lag: (min: 0.0, avg: 19.9, max: 42.0) +[2024-03-29 17:50:23,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:50:27,549][00497] Updated weights for policy 0, policy_version 52067 (0.0027) +[2024-03-29 17:50:28,839][00126] Fps is (10 sec: 39322.4, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 853131264. Throughput: 0: 42548.4. Samples: 735390180. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:28,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 17:50:30,326][00476] Signal inference workers to stop experience collection... (26150 times) +[2024-03-29 17:50:30,369][00497] InferenceWorker_p0-w0: stopping experience collection (26150 times) +[2024-03-29 17:50:30,485][00476] Signal inference workers to resume experience collection... (26150 times) +[2024-03-29 17:50:30,485][00497] InferenceWorker_p0-w0: resuming experience collection (26150 times) +[2024-03-29 17:50:30,488][00497] Updated weights for policy 0, policy_version 52077 (0.0023) +[2024-03-29 17:50:33,839][00126] Fps is (10 sec: 45875.8, 60 sec: 42598.4, 300 sec: 41987.5). Total num frames: 853377024. Throughput: 0: 42040.8. Samples: 735480720. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:33,841][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:50:34,265][00497] Updated weights for policy 0, policy_version 52087 (0.0023) +[2024-03-29 17:50:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 853540864. Throughput: 0: 41705.3. Samples: 735740200. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:38,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:50:38,992][00497] Updated weights for policy 0, policy_version 52097 (0.0020) +[2024-03-29 17:50:43,555][00497] Updated weights for policy 0, policy_version 52107 (0.0030) +[2024-03-29 17:50:43,839][00126] Fps is (10 sec: 34406.5, 60 sec: 40960.0, 300 sec: 41654.2). Total num frames: 853721088. Throughput: 0: 42049.8. Samples: 736000260. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:43,840][00126] Avg episode reward: [(0, '0.478')] +[2024-03-29 17:50:46,635][00497] Updated weights for policy 0, policy_version 52117 (0.0021) +[2024-03-29 17:50:48,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 853983232. Throughput: 0: 41735.2. Samples: 736100400. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:48,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 17:50:49,887][00497] Updated weights for policy 0, policy_version 52127 (0.0032) +[2024-03-29 17:50:53,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 854147072. Throughput: 0: 41469.3. Samples: 736358400. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:53,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 17:50:54,809][00497] Updated weights for policy 0, policy_version 52137 (0.0019) +[2024-03-29 17:50:58,839][00126] Fps is (10 sec: 37682.4, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 854360064. Throughput: 0: 41499.5. Samples: 736620820. 
Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 17:50:58,841][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:50:59,166][00497] Updated weights for policy 0, policy_version 52147 (0.0022) +[2024-03-29 17:51:02,428][00497] Updated weights for policy 0, policy_version 52157 (0.0029) +[2024-03-29 17:51:03,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 854589440. Throughput: 0: 42111.3. Samples: 736743400. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:03,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:51:04,178][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052162_854622208.pth... +[2024-03-29 17:51:04,515][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051548_844562432.pth +[2024-03-29 17:51:05,856][00497] Updated weights for policy 0, policy_version 52167 (0.0024) +[2024-03-29 17:51:05,867][00476] Signal inference workers to stop experience collection... (26200 times) +[2024-03-29 17:51:05,868][00476] Signal inference workers to resume experience collection... (26200 times) +[2024-03-29 17:51:05,908][00497] InferenceWorker_p0-w0: stopping experience collection (26200 times) +[2024-03-29 17:51:05,908][00497] InferenceWorker_p0-w0: resuming experience collection (26200 times) +[2024-03-29 17:51:08,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 854786048. Throughput: 0: 41049.1. Samples: 736971180. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:08,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 17:51:10,888][00497] Updated weights for policy 0, policy_version 52177 (0.0019) +[2024-03-29 17:51:13,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 854982656. Throughput: 0: 40719.4. Samples: 737222560. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:13,840][00126] Avg episode reward: [(0, '0.649')] +[2024-03-29 17:51:15,342][00497] Updated weights for policy 0, policy_version 52187 (0.0019) +[2024-03-29 17:51:18,363][00497] Updated weights for policy 0, policy_version 52197 (0.0025) +[2024-03-29 17:51:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.2, 300 sec: 41709.8). Total num frames: 855212032. Throughput: 0: 41712.9. Samples: 737357800. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:18,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 17:51:21,604][00497] Updated weights for policy 0, policy_version 52207 (0.0027) +[2024-03-29 17:51:23,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 855408640. Throughput: 0: 40941.8. Samples: 737582580. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:23,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 17:51:26,621][00497] Updated weights for policy 0, policy_version 52217 (0.0022) +[2024-03-29 17:51:28,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 855605248. Throughput: 0: 41176.9. Samples: 737853220. Policy #0 lag: (min: 1.0, avg: 19.1, max: 42.0) +[2024-03-29 17:51:28,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 17:51:30,938][00497] Updated weights for policy 0, policy_version 52227 (0.0024) +[2024-03-29 17:51:33,839][00126] Fps is (10 sec: 42597.7, 60 sec: 40959.9, 300 sec: 41654.2). Total num frames: 855834624. Throughput: 0: 41796.7. Samples: 737981260. 
Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:33,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 17:51:34,086][00497] Updated weights for policy 0, policy_version 52237 (0.0024) +[2024-03-29 17:51:37,334][00497] Updated weights for policy 0, policy_version 52247 (0.0023) +[2024-03-29 17:51:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.2, 300 sec: 41765.6). Total num frames: 856047616. Throughput: 0: 40844.9. Samples: 738196420. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:38,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 17:51:42,587][00497] Updated weights for policy 0, policy_version 52257 (0.0032) +[2024-03-29 17:51:43,180][00476] Signal inference workers to stop experience collection... (26250 times) +[2024-03-29 17:51:43,211][00497] InferenceWorker_p0-w0: stopping experience collection (26250 times) +[2024-03-29 17:51:43,392][00476] Signal inference workers to resume experience collection... (26250 times) +[2024-03-29 17:51:43,393][00497] InferenceWorker_p0-w0: resuming experience collection (26250 times) +[2024-03-29 17:51:43,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 856227840. Throughput: 0: 41016.9. Samples: 738466580. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:43,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 17:51:46,680][00497] Updated weights for policy 0, policy_version 52267 (0.0023) +[2024-03-29 17:51:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40959.9, 300 sec: 41543.2). Total num frames: 856440832. Throughput: 0: 41233.8. Samples: 738598920. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:48,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 17:51:49,825][00497] Updated weights for policy 0, policy_version 52277 (0.0027) +[2024-03-29 17:51:53,150][00497] Updated weights for policy 0, policy_version 52287 (0.0024) +[2024-03-29 17:51:53,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 856686592. Throughput: 0: 41459.5. Samples: 738836860. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:53,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 17:51:58,310][00497] Updated weights for policy 0, policy_version 52297 (0.0029) +[2024-03-29 17:51:58,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 856850432. Throughput: 0: 41614.8. Samples: 739095220. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:51:58,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 17:52:02,599][00497] Updated weights for policy 0, policy_version 52307 (0.0021) +[2024-03-29 17:52:03,839][00126] Fps is (10 sec: 36044.9, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 857047040. Throughput: 0: 41332.4. Samples: 739217760. Policy #0 lag: (min: 1.0, avg: 20.6, max: 43.0) +[2024-03-29 17:52:03,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 17:52:05,756][00497] Updated weights for policy 0, policy_version 52317 (0.0028) +[2024-03-29 17:52:08,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 857309184. Throughput: 0: 41590.7. Samples: 739454160. 
Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:08,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:52:08,960][00497] Updated weights for policy 0, policy_version 52327 (0.0035) +[2024-03-29 17:52:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 857473024. Throughput: 0: 41492.4. Samples: 739720380. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:13,841][00126] Avg episode reward: [(0, '0.496')] +[2024-03-29 17:52:14,122][00497] Updated weights for policy 0, policy_version 52337 (0.0023) +[2024-03-29 17:52:17,058][00476] Signal inference workers to stop experience collection... (26300 times) +[2024-03-29 17:52:17,079][00497] InferenceWorker_p0-w0: stopping experience collection (26300 times) +[2024-03-29 17:52:17,246][00476] Signal inference workers to resume experience collection... (26300 times) +[2024-03-29 17:52:17,247][00497] InferenceWorker_p0-w0: resuming experience collection (26300 times) +[2024-03-29 17:52:18,391][00497] Updated weights for policy 0, policy_version 52347 (0.0028) +[2024-03-29 17:52:18,839][00126] Fps is (10 sec: 36044.8, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 857669632. Throughput: 0: 41147.3. Samples: 739832880. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:18,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 17:52:21,588][00497] Updated weights for policy 0, policy_version 52357 (0.0026) +[2024-03-29 17:52:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 857915392. Throughput: 0: 41866.2. Samples: 740080400. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:23,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:52:24,850][00497] Updated weights for policy 0, policy_version 52367 (0.0030) +[2024-03-29 17:52:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41654.3). Total num frames: 858095616. Throughput: 0: 41650.3. Samples: 740340840. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:28,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 17:52:29,857][00497] Updated weights for policy 0, policy_version 52377 (0.0027) +[2024-03-29 17:52:33,836][00497] Updated weights for policy 0, policy_version 52387 (0.0018) +[2024-03-29 17:52:33,839][00126] Fps is (10 sec: 39320.9, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 858308608. Throughput: 0: 41486.1. Samples: 740465800. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:33,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 17:52:37,034][00497] Updated weights for policy 0, policy_version 52397 (0.0019) +[2024-03-29 17:52:38,839][00126] Fps is (10 sec: 45874.7, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 858554368. Throughput: 0: 41785.7. Samples: 740717220. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0) +[2024-03-29 17:52:38,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 17:52:40,365][00497] Updated weights for policy 0, policy_version 52407 (0.0020) +[2024-03-29 17:52:43,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 858718208. Throughput: 0: 41692.4. Samples: 740971380. 
Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:52:43,841][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 17:52:45,311][00497] Updated weights for policy 0, policy_version 52417 (0.0018) +[2024-03-29 17:52:48,839][00126] Fps is (10 sec: 37683.7, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 858931200. Throughput: 0: 41761.3. Samples: 741097020. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:52:48,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:52:49,417][00497] Updated weights for policy 0, policy_version 52427 (0.0019) +[2024-03-29 17:52:50,679][00476] Signal inference workers to stop experience collection... (26350 times) +[2024-03-29 17:52:50,716][00497] InferenceWorker_p0-w0: stopping experience collection (26350 times) +[2024-03-29 17:52:50,904][00476] Signal inference workers to resume experience collection... (26350 times) +[2024-03-29 17:52:50,904][00497] InferenceWorker_p0-w0: resuming experience collection (26350 times) +[2024-03-29 17:52:52,700][00497] Updated weights for policy 0, policy_version 52437 (0.0024) +[2024-03-29 17:52:53,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 859160576. Throughput: 0: 41999.5. Samples: 741344140. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:52:53,840][00126] Avg episode reward: [(0, '0.405')] +[2024-03-29 17:52:55,928][00497] Updated weights for policy 0, policy_version 52447 (0.0023) +[2024-03-29 17:52:58,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 859357184. Throughput: 0: 41708.4. Samples: 741597260. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:52:58,840][00126] Avg episode reward: [(0, '0.534')] +[2024-03-29 17:53:00,904][00497] Updated weights for policy 0, policy_version 52457 (0.0017) +[2024-03-29 17:53:03,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 859553792. Throughput: 0: 42111.0. Samples: 741727880. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:53:03,841][00126] Avg episode reward: [(0, '0.643')] +[2024-03-29 17:53:04,205][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052465_859586560.pth... +[2024-03-29 17:53:04,665][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000051855_849592320.pth +[2024-03-29 17:53:05,247][00497] Updated weights for policy 0, policy_version 52467 (0.0030) +[2024-03-29 17:53:08,238][00497] Updated weights for policy 0, policy_version 52477 (0.0033) +[2024-03-29 17:53:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 859799552. Throughput: 0: 42234.1. Samples: 741980940. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 17:53:08,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:53:11,728][00497] Updated weights for policy 0, policy_version 52487 (0.0029) +[2024-03-29 17:53:13,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 859979776. Throughput: 0: 41540.3. Samples: 742210160. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:13,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:53:16,795][00497] Updated weights for policy 0, policy_version 52497 (0.0030) +[2024-03-29 17:53:18,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 860192768. Throughput: 0: 42009.9. Samples: 742356240. 
Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:18,841][00126] Avg episode reward: [(0, '0.464')] +[2024-03-29 17:53:20,784][00497] Updated weights for policy 0, policy_version 52507 (0.0021) +[2024-03-29 17:53:22,857][00476] Signal inference workers to stop experience collection... (26400 times) +[2024-03-29 17:53:22,858][00476] Signal inference workers to resume experience collection... (26400 times) +[2024-03-29 17:53:22,895][00497] InferenceWorker_p0-w0: stopping experience collection (26400 times) +[2024-03-29 17:53:22,896][00497] InferenceWorker_p0-w0: resuming experience collection (26400 times) +[2024-03-29 17:53:23,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 860405760. Throughput: 0: 41934.7. Samples: 742604280. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:23,840][00126] Avg episode reward: [(0, '0.637')] +[2024-03-29 17:53:24,447][00497] Updated weights for policy 0, policy_version 52517 (0.0033) +[2024-03-29 17:53:27,731][00497] Updated weights for policy 0, policy_version 52527 (0.0021) +[2024-03-29 17:53:28,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 860635136. Throughput: 0: 41122.7. Samples: 742821900. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:28,840][00126] Avg episode reward: [(0, '0.477')] +[2024-03-29 17:53:32,721][00497] Updated weights for policy 0, policy_version 52537 (0.0018) +[2024-03-29 17:53:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 860815360. Throughput: 0: 41657.7. Samples: 742971620. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:33,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 17:53:36,639][00497] Updated weights for policy 0, policy_version 52547 (0.0023) +[2024-03-29 17:53:38,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 861028352. Throughput: 0: 41697.4. Samples: 743220520. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:38,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:53:40,112][00497] Updated weights for policy 0, policy_version 52557 (0.0019) +[2024-03-29 17:53:43,333][00497] Updated weights for policy 0, policy_version 52567 (0.0029) +[2024-03-29 17:53:43,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42598.3, 300 sec: 41820.9). Total num frames: 861274112. Throughput: 0: 41149.2. Samples: 743448980. Policy #0 lag: (min: 0.0, avg: 24.4, max: 41.0) +[2024-03-29 17:53:43,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:53:48,402][00497] Updated weights for policy 0, policy_version 52577 (0.0021) +[2024-03-29 17:53:48,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 861437952. Throughput: 0: 41564.1. Samples: 743598260. Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:53:48,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 17:53:52,182][00497] Updated weights for policy 0, policy_version 52587 (0.0026) +[2024-03-29 17:53:53,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 861650944. Throughput: 0: 41713.3. Samples: 743858040. 
Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:53:53,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:53:55,516][00497] Updated weights for policy 0, policy_version 52597 (0.0025) +[2024-03-29 17:53:56,322][00476] Signal inference workers to stop experience collection... (26450 times) +[2024-03-29 17:53:56,376][00497] InferenceWorker_p0-w0: stopping experience collection (26450 times) +[2024-03-29 17:53:56,413][00476] Signal inference workers to resume experience collection... (26450 times) +[2024-03-29 17:53:56,415][00497] InferenceWorker_p0-w0: resuming experience collection (26450 times) +[2024-03-29 17:53:58,712][00497] Updated weights for policy 0, policy_version 52607 (0.0023) +[2024-03-29 17:53:58,839][00126] Fps is (10 sec: 47512.8, 60 sec: 42598.3, 300 sec: 41820.9). Total num frames: 861913088. Throughput: 0: 41688.0. Samples: 744086120. Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:53:58,840][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 17:54:03,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 862060544. Throughput: 0: 41629.3. Samples: 744229560. Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:54:03,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 17:54:03,883][00497] Updated weights for policy 0, policy_version 52617 (0.0021) +[2024-03-29 17:54:07,861][00497] Updated weights for policy 0, policy_version 52627 (0.0018) +[2024-03-29 17:54:08,839][00126] Fps is (10 sec: 36044.8, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 862273536. Throughput: 0: 41992.8. Samples: 744493960. Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:54:08,840][00126] Avg episode reward: [(0, '0.453')] +[2024-03-29 17:54:11,252][00497] Updated weights for policy 0, policy_version 52637 (0.0025) +[2024-03-29 17:54:13,839][00126] Fps is (10 sec: 45874.6, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 862519296. Throughput: 0: 42102.9. Samples: 744716540. Policy #0 lag: (min: 0.0, avg: 22.3, max: 41.0) +[2024-03-29 17:54:13,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 17:54:14,632][00497] Updated weights for policy 0, policy_version 52647 (0.0028) +[2024-03-29 17:54:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 862683136. Throughput: 0: 41503.9. Samples: 744839300. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:18,841][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 17:54:19,816][00497] Updated weights for policy 0, policy_version 52657 (0.0029) +[2024-03-29 17:54:23,839][00126] Fps is (10 sec: 37683.4, 60 sec: 41506.1, 300 sec: 41543.1). Total num frames: 862896128. Throughput: 0: 42020.8. Samples: 745111460. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:23,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 17:54:24,012][00497] Updated weights for policy 0, policy_version 52668 (0.0029) +[2024-03-29 17:54:27,213][00497] Updated weights for policy 0, policy_version 52678 (0.0023) +[2024-03-29 17:54:28,677][00476] Signal inference workers to stop experience collection... (26500 times) +[2024-03-29 17:54:28,716][00497] InferenceWorker_p0-w0: stopping experience collection (26500 times) +[2024-03-29 17:54:28,839][00126] Fps is (10 sec: 45875.6, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 863141888. Throughput: 0: 42266.3. Samples: 745350960. 
Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:28,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 17:54:28,891][00476] Signal inference workers to resume experience collection... (26500 times) +[2024-03-29 17:54:28,891][00497] InferenceWorker_p0-w0: resuming experience collection (26500 times) +[2024-03-29 17:54:30,629][00497] Updated weights for policy 0, policy_version 52688 (0.0034) +[2024-03-29 17:54:33,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 863305728. Throughput: 0: 41514.6. Samples: 745466420. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:33,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:54:35,696][00497] Updated weights for policy 0, policy_version 52698 (0.0024) +[2024-03-29 17:54:38,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 863535104. Throughput: 0: 41766.0. Samples: 745737500. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:38,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 17:54:39,635][00497] Updated weights for policy 0, policy_version 52708 (0.0018) +[2024-03-29 17:54:42,979][00497] Updated weights for policy 0, policy_version 52718 (0.0019) +[2024-03-29 17:54:43,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 863764480. Throughput: 0: 42027.7. Samples: 745977360. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:43,840][00126] Avg episode reward: [(0, '0.456')] +[2024-03-29 17:54:46,290][00497] Updated weights for policy 0, policy_version 52728 (0.0025) +[2024-03-29 17:54:48,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 863944704. Throughput: 0: 41318.7. Samples: 746088900. Policy #0 lag: (min: 1.0, avg: 19.0, max: 41.0) +[2024-03-29 17:54:48,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 17:54:51,503][00497] Updated weights for policy 0, policy_version 52738 (0.0023) +[2024-03-29 17:54:53,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 864141312. Throughput: 0: 41577.0. Samples: 746364920. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:54:53,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 17:54:55,416][00497] Updated weights for policy 0, policy_version 52748 (0.0025) +[2024-03-29 17:54:58,727][00497] Updated weights for policy 0, policy_version 52758 (0.0020) +[2024-03-29 17:54:58,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 864387072. Throughput: 0: 41874.7. Samples: 746600900. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:54:58,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:55:01,977][00497] Updated weights for policy 0, policy_version 52768 (0.0021) +[2024-03-29 17:55:03,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 864583680. Throughput: 0: 41622.7. Samples: 746712320. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:55:03,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 17:55:03,866][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052770_864583680.pth... +[2024-03-29 17:55:04,223][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052162_854622208.pth +[2024-03-29 17:55:04,913][00476] Signal inference workers to stop experience collection... 
(26550 times) +[2024-03-29 17:55:04,913][00476] Signal inference workers to resume experience collection... (26550 times) +[2024-03-29 17:55:04,948][00497] InferenceWorker_p0-w0: stopping experience collection (26550 times) +[2024-03-29 17:55:04,948][00497] InferenceWorker_p0-w0: resuming experience collection (26550 times) +[2024-03-29 17:55:07,287][00497] Updated weights for policy 0, policy_version 52778 (0.0019) +[2024-03-29 17:55:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 864780288. Throughput: 0: 41828.4. Samples: 746993740. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:55:08,841][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 17:55:11,189][00497] Updated weights for policy 0, policy_version 52788 (0.0028) +[2024-03-29 17:55:13,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41233.2, 300 sec: 41543.2). Total num frames: 864993280. Throughput: 0: 41519.1. Samples: 747219320. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:55:13,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 17:55:14,602][00497] Updated weights for policy 0, policy_version 52798 (0.0023) +[2024-03-29 17:55:18,111][00497] Updated weights for policy 0, policy_version 52808 (0.0023) +[2024-03-29 17:55:18,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 865206272. Throughput: 0: 41645.7. Samples: 747340480. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:55:18,840][00126] Avg episode reward: [(0, '0.470')] +[2024-03-29 17:55:23,202][00497] Updated weights for policy 0, policy_version 52818 (0.0018) +[2024-03-29 17:55:23,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 865386496. Throughput: 0: 41497.7. Samples: 747604900. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0) +[2024-03-29 17:55:23,840][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 17:55:27,018][00497] Updated weights for policy 0, policy_version 52828 (0.0018) +[2024-03-29 17:55:28,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41233.0, 300 sec: 41487.6). Total num frames: 865615872. Throughput: 0: 41644.3. Samples: 747851360. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:28,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 17:55:30,431][00497] Updated weights for policy 0, policy_version 52838 (0.0023) +[2024-03-29 17:55:33,839][00126] Fps is (10 sec: 45875.0, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 865845248. Throughput: 0: 41721.3. Samples: 747966360. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:33,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 17:55:33,964][00497] Updated weights for policy 0, policy_version 52848 (0.0025) +[2024-03-29 17:55:36,715][00476] Signal inference workers to stop experience collection... (26600 times) +[2024-03-29 17:55:36,718][00476] Signal inference workers to resume experience collection... (26600 times) +[2024-03-29 17:55:36,758][00497] InferenceWorker_p0-w0: stopping experience collection (26600 times) +[2024-03-29 17:55:36,758][00497] InferenceWorker_p0-w0: resuming experience collection (26600 times) +[2024-03-29 17:55:38,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 866009088. Throughput: 0: 41487.1. Samples: 748231840. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:38,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 17:55:38,886][00497] Updated weights for policy 0, policy_version 52858 (0.0022) +[2024-03-29 17:55:42,630][00497] Updated weights for policy 0, policy_version 52868 (0.0032) +[2024-03-29 17:55:43,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40959.9, 300 sec: 41487.6). Total num frames: 866222080. Throughput: 0: 41497.0. Samples: 748468260. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:43,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 17:55:46,402][00497] Updated weights for policy 0, policy_version 52878 (0.0023) +[2024-03-29 17:55:48,839][00126] Fps is (10 sec: 47514.0, 60 sec: 42325.4, 300 sec: 41820.9). Total num frames: 866484224. Throughput: 0: 41485.0. Samples: 748579140. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:48,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 17:55:49,969][00497] Updated weights for policy 0, policy_version 52888 (0.0025) +[2024-03-29 17:55:53,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 866615296. Throughput: 0: 41146.8. Samples: 748845340. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 17:55:53,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 17:55:54,874][00497] Updated weights for policy 0, policy_version 52898 (0.0023) +[2024-03-29 17:55:58,615][00497] Updated weights for policy 0, policy_version 52908 (0.0026) +[2024-03-29 17:55:58,839][00126] Fps is (10 sec: 36044.8, 60 sec: 40960.1, 300 sec: 41543.2). Total num frames: 866844672. Throughput: 0: 41675.6. Samples: 749094720. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:55:58,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 17:56:02,184][00497] Updated weights for policy 0, policy_version 52918 (0.0025) +[2024-03-29 17:56:03,839][00126] Fps is (10 sec: 47513.6, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 867090432. Throughput: 0: 41564.6. Samples: 749210880. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:03,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 17:56:04,360][00476] Signal inference workers to stop experience collection... (26650 times) +[2024-03-29 17:56:04,439][00497] InferenceWorker_p0-w0: stopping experience collection (26650 times) +[2024-03-29 17:56:04,532][00476] Signal inference workers to resume experience collection... (26650 times) +[2024-03-29 17:56:04,533][00497] InferenceWorker_p0-w0: resuming experience collection (26650 times) +[2024-03-29 17:56:05,410][00497] Updated weights for policy 0, policy_version 52928 (0.0024) +[2024-03-29 17:56:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 867270656. Throughput: 0: 41454.7. Samples: 749470360. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:08,841][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 17:56:10,280][00497] Updated weights for policy 0, policy_version 52938 (0.0024) +[2024-03-29 17:56:13,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 867483648. Throughput: 0: 41924.5. Samples: 749737960. 
Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:13,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:56:13,986][00497] Updated weights for policy 0, policy_version 52948 (0.0025) +[2024-03-29 17:56:17,284][00497] Updated weights for policy 0, policy_version 52958 (0.0023) +[2024-03-29 17:56:18,839][00126] Fps is (10 sec: 45875.7, 60 sec: 42052.4, 300 sec: 41765.3). Total num frames: 867729408. Throughput: 0: 42109.9. Samples: 749861300. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:18,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 17:56:20,803][00497] Updated weights for policy 0, policy_version 52968 (0.0026) +[2024-03-29 17:56:23,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 867893248. Throughput: 0: 41647.7. Samples: 750105980. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:23,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 17:56:25,754][00497] Updated weights for policy 0, policy_version 52978 (0.0022) +[2024-03-29 17:56:28,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41779.3, 300 sec: 41654.3). Total num frames: 868122624. Throughput: 0: 42414.3. Samples: 750376900. Policy #0 lag: (min: 0.0, avg: 20.3, max: 42.0) +[2024-03-29 17:56:28,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 17:56:29,391][00497] Updated weights for policy 0, policy_version 52988 (0.0019) +[2024-03-29 17:56:32,851][00497] Updated weights for policy 0, policy_version 52998 (0.0023) +[2024-03-29 17:56:33,839][00126] Fps is (10 sec: 47513.5, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 868368384. Throughput: 0: 42628.9. Samples: 750497440. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:33,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 17:56:36,413][00497] Updated weights for policy 0, policy_version 53008 (0.0022) +[2024-03-29 17:56:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 868532224. Throughput: 0: 42036.0. Samples: 750736960. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:38,841][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 17:56:40,054][00476] Signal inference workers to stop experience collection... (26700 times) +[2024-03-29 17:56:40,054][00476] Signal inference workers to resume experience collection... (26700 times) +[2024-03-29 17:56:40,079][00497] InferenceWorker_p0-w0: stopping experience collection (26700 times) +[2024-03-29 17:56:40,102][00497] InferenceWorker_p0-w0: resuming experience collection (26700 times) +[2024-03-29 17:56:41,269][00497] Updated weights for policy 0, policy_version 53018 (0.0026) +[2024-03-29 17:56:43,839][00126] Fps is (10 sec: 37683.0, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 868745216. Throughput: 0: 42620.4. Samples: 751012640. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:43,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 17:56:44,961][00497] Updated weights for policy 0, policy_version 53028 (0.0020) +[2024-03-29 17:56:48,509][00497] Updated weights for policy 0, policy_version 53038 (0.0020) +[2024-03-29 17:56:48,839][00126] Fps is (10 sec: 45874.9, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 868990976. Throughput: 0: 42662.6. Samples: 751130700. 
Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:48,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 17:56:51,933][00497] Updated weights for policy 0, policy_version 53048 (0.0023) +[2024-03-29 17:56:53,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42598.4, 300 sec: 41765.3). Total num frames: 869171200. Throughput: 0: 42081.8. Samples: 751364040. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:53,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 17:56:57,105][00497] Updated weights for policy 0, policy_version 53058 (0.0026) +[2024-03-29 17:56:58,839][00126] Fps is (10 sec: 37683.1, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 869367808. Throughput: 0: 42037.8. Samples: 751629660. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 17:56:58,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 17:57:00,727][00497] Updated weights for policy 0, policy_version 53068 (0.0024) +[2024-03-29 17:57:03,839][00126] Fps is (10 sec: 44236.3, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 869613568. Throughput: 0: 42162.9. Samples: 751758640. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:03,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 17:57:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053077_869613568.pth... +[2024-03-29 17:57:04,166][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052465_859586560.pth +[2024-03-29 17:57:04,432][00497] Updated weights for policy 0, policy_version 53078 (0.0024) +[2024-03-29 17:57:07,804][00497] Updated weights for policy 0, policy_version 53088 (0.0027) +[2024-03-29 17:57:08,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.3, 300 sec: 41820.8). Total num frames: 869810176. Throughput: 0: 41479.9. Samples: 751972580. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:08,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:57:12,834][00476] Signal inference workers to stop experience collection... (26750 times) +[2024-03-29 17:57:12,877][00497] InferenceWorker_p0-w0: stopping experience collection (26750 times) +[2024-03-29 17:57:12,994][00476] Signal inference workers to resume experience collection... (26750 times) +[2024-03-29 17:57:12,995][00497] InferenceWorker_p0-w0: resuming experience collection (26750 times) +[2024-03-29 17:57:13,000][00497] Updated weights for policy 0, policy_version 53098 (0.0024) +[2024-03-29 17:57:13,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.3, 300 sec: 41820.8). Total num frames: 870006784. Throughput: 0: 41824.0. Samples: 752258980. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:13,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 17:57:16,734][00497] Updated weights for policy 0, policy_version 53108 (0.0022) +[2024-03-29 17:57:18,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41506.0, 300 sec: 41709.8). Total num frames: 870219776. Throughput: 0: 41701.6. Samples: 752374020. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:18,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 17:57:20,204][00497] Updated weights for policy 0, policy_version 53118 (0.0019) +[2024-03-29 17:57:23,494][00497] Updated weights for policy 0, policy_version 53128 (0.0027) +[2024-03-29 17:57:23,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42598.3, 300 sec: 41876.4). Total num frames: 870449152. Throughput: 0: 41285.3. Samples: 752594800. 
Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:23,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 17:57:28,839][00126] Fps is (10 sec: 37683.8, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 870596608. Throughput: 0: 41332.1. Samples: 752872580. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:28,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 17:57:28,918][00497] Updated weights for policy 0, policy_version 53138 (0.0033) +[2024-03-29 17:57:32,482][00497] Updated weights for policy 0, policy_version 53148 (0.0024) +[2024-03-29 17:57:33,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40960.0, 300 sec: 41598.7). Total num frames: 870825984. Throughput: 0: 41377.4. Samples: 752992680. Policy #0 lag: (min: 1.0, avg: 21.7, max: 41.0) +[2024-03-29 17:57:33,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 17:57:36,130][00497] Updated weights for policy 0, policy_version 53158 (0.0019) +[2024-03-29 17:57:38,839][00126] Fps is (10 sec: 47512.8, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 871071744. Throughput: 0: 41093.7. Samples: 753213260. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:57:38,840][00126] Avg episode reward: [(0, '0.633')] +[2024-03-29 17:57:39,598][00497] Updated weights for policy 0, policy_version 53168 (0.0023) +[2024-03-29 17:57:43,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 871235584. Throughput: 0: 41250.4. Samples: 753485920. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:57:43,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 17:57:45,130][00497] Updated weights for policy 0, policy_version 53178 (0.0020) +[2024-03-29 17:57:45,387][00476] Signal inference workers to stop experience collection... (26800 times) +[2024-03-29 17:57:45,408][00497] InferenceWorker_p0-w0: stopping experience collection (26800 times) +[2024-03-29 17:57:45,604][00476] Signal inference workers to resume experience collection... (26800 times) +[2024-03-29 17:57:45,604][00497] InferenceWorker_p0-w0: resuming experience collection (26800 times) +[2024-03-29 17:57:48,446][00497] Updated weights for policy 0, policy_version 53188 (0.0026) +[2024-03-29 17:57:48,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40960.0, 300 sec: 41654.2). Total num frames: 871448576. Throughput: 0: 40932.5. Samples: 753600600. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:57:48,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 17:57:52,084][00497] Updated weights for policy 0, policy_version 53198 (0.0021) +[2024-03-29 17:57:53,839][00126] Fps is (10 sec: 45874.2, 60 sec: 42052.1, 300 sec: 41820.8). Total num frames: 871694336. Throughput: 0: 41821.6. Samples: 753854560. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:57:53,840][00126] Avg episode reward: [(0, '0.635')] +[2024-03-29 17:57:55,451][00497] Updated weights for policy 0, policy_version 53208 (0.0024) +[2024-03-29 17:57:58,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41233.2, 300 sec: 41654.3). Total num frames: 871841792. Throughput: 0: 41112.1. Samples: 754109020. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:57:58,840][00126] Avg episode reward: [(0, '0.580')] +[2024-03-29 17:58:00,884][00497] Updated weights for policy 0, policy_version 53218 (0.0028) +[2024-03-29 17:58:03,839][00126] Fps is (10 sec: 36045.6, 60 sec: 40687.0, 300 sec: 41543.2). Total num frames: 872054784. Throughput: 0: 41467.7. Samples: 754240060. 
Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:58:03,840][00126] Avg episode reward: [(0, '0.634')] +[2024-03-29 17:58:04,343][00497] Updated weights for policy 0, policy_version 53228 (0.0025) +[2024-03-29 17:58:08,212][00497] Updated weights for policy 0, policy_version 53238 (0.0029) +[2024-03-29 17:58:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 872284160. Throughput: 0: 41441.4. Samples: 754459660. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 17:58:08,840][00126] Avg episode reward: [(0, '0.654')] +[2024-03-29 17:58:11,812][00497] Updated weights for policy 0, policy_version 53248 (0.0032) +[2024-03-29 17:58:13,839][00126] Fps is (10 sec: 40959.4, 60 sec: 40959.9, 300 sec: 41598.7). Total num frames: 872464384. Throughput: 0: 40596.7. Samples: 754699440. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:13,840][00126] Avg episode reward: [(0, '0.509')] +[2024-03-29 17:58:16,801][00497] Updated weights for policy 0, policy_version 53258 (0.0025) +[2024-03-29 17:58:17,951][00476] Signal inference workers to stop experience collection... (26850 times) +[2024-03-29 17:58:17,954][00476] Signal inference workers to resume experience collection... (26850 times) +[2024-03-29 17:58:17,996][00497] InferenceWorker_p0-w0: stopping experience collection (26850 times) +[2024-03-29 17:58:17,996][00497] InferenceWorker_p0-w0: resuming experience collection (26850 times) +[2024-03-29 17:58:18,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40687.0, 300 sec: 41543.2). Total num frames: 872660992. Throughput: 0: 41030.7. Samples: 754839060. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:18,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 17:58:20,397][00497] Updated weights for policy 0, policy_version 53268 (0.0026) +[2024-03-29 17:58:23,839][00126] Fps is (10 sec: 42598.6, 60 sec: 40686.9, 300 sec: 41543.1). Total num frames: 872890368. Throughput: 0: 41328.0. Samples: 755073020. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:23,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 17:58:24,097][00497] Updated weights for policy 0, policy_version 53278 (0.0022) +[2024-03-29 17:58:27,517][00497] Updated weights for policy 0, policy_version 53288 (0.0027) +[2024-03-29 17:58:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 873103360. Throughput: 0: 40778.2. Samples: 755320940. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:28,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 17:58:32,409][00497] Updated weights for policy 0, policy_version 53298 (0.0023) +[2024-03-29 17:58:33,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 873299968. Throughput: 0: 41375.5. Samples: 755462500. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:33,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 17:58:36,126][00497] Updated weights for policy 0, policy_version 53308 (0.0029) +[2024-03-29 17:58:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 40687.0, 300 sec: 41487.6). Total num frames: 873512960. Throughput: 0: 41102.4. Samples: 755704160. 
Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:38,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 17:58:39,745][00497] Updated weights for policy 0, policy_version 53318 (0.0021) +[2024-03-29 17:58:43,412][00497] Updated weights for policy 0, policy_version 53328 (0.0024) +[2024-03-29 17:58:43,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 873725952. Throughput: 0: 40597.2. Samples: 755935900. Policy #0 lag: (min: 0.0, avg: 24.5, max: 44.0) +[2024-03-29 17:58:43,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:58:48,144][00497] Updated weights for policy 0, policy_version 53338 (0.0027) +[2024-03-29 17:58:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40960.0, 300 sec: 41543.2). Total num frames: 873906176. Throughput: 0: 41037.7. Samples: 756086760. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:58:48,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 17:58:51,903][00497] Updated weights for policy 0, policy_version 53348 (0.0025) +[2024-03-29 17:58:52,555][00476] Signal inference workers to stop experience collection... (26900 times) +[2024-03-29 17:58:52,585][00497] InferenceWorker_p0-w0: stopping experience collection (26900 times) +[2024-03-29 17:58:52,741][00476] Signal inference workers to resume experience collection... (26900 times) +[2024-03-29 17:58:52,741][00497] InferenceWorker_p0-w0: resuming experience collection (26900 times) +[2024-03-29 17:58:53,839][00126] Fps is (10 sec: 42598.5, 60 sec: 40960.1, 300 sec: 41487.6). Total num frames: 874151936. Throughput: 0: 41494.6. Samples: 756326920. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:58:53,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 17:58:55,546][00497] Updated weights for policy 0, policy_version 53358 (0.0032) +[2024-03-29 17:58:58,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42052.1, 300 sec: 41709.8). Total num frames: 874364928. Throughput: 0: 41425.4. Samples: 756563580. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:58:58,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 17:58:58,942][00497] Updated weights for policy 0, policy_version 53368 (0.0019) +[2024-03-29 17:59:03,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.0, 300 sec: 41543.2). Total num frames: 874528768. Throughput: 0: 41540.0. Samples: 756708360. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:59:03,841][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 17:59:03,878][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053378_874545152.pth... +[2024-03-29 17:59:03,891][00497] Updated weights for policy 0, policy_version 53378 (0.0018) +[2024-03-29 17:59:04,204][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000052770_864583680.pth +[2024-03-29 17:59:07,677][00497] Updated weights for policy 0, policy_version 53388 (0.0026) +[2024-03-29 17:59:08,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 874774528. Throughput: 0: 42126.8. Samples: 756968720. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:59:08,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 17:59:11,467][00497] Updated weights for policy 0, policy_version 53398 (0.0028) +[2024-03-29 17:59:13,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 874987520. Throughput: 0: 41279.0. Samples: 757178500. 
Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 17:59:13,840][00126] Avg episode reward: [(0, '0.629')] +[2024-03-29 17:59:14,801][00497] Updated weights for policy 0, policy_version 53408 (0.0031) +[2024-03-29 17:59:18,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 875151360. Throughput: 0: 41196.4. Samples: 757316340. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:18,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 17:59:19,999][00497] Updated weights for policy 0, policy_version 53418 (0.0017) +[2024-03-29 17:59:23,640][00497] Updated weights for policy 0, policy_version 53428 (0.0021) +[2024-03-29 17:59:23,839][00126] Fps is (10 sec: 37683.4, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 875364352. Throughput: 0: 41782.1. Samples: 757584360. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:23,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 17:59:26,007][00476] Signal inference workers to stop experience collection... (26950 times) +[2024-03-29 17:59:26,050][00497] InferenceWorker_p0-w0: stopping experience collection (26950 times) +[2024-03-29 17:59:26,221][00476] Signal inference workers to resume experience collection... (26950 times) +[2024-03-29 17:59:26,222][00497] InferenceWorker_p0-w0: resuming experience collection (26950 times) +[2024-03-29 17:59:27,160][00497] Updated weights for policy 0, policy_version 53438 (0.0027) +[2024-03-29 17:59:28,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 875610112. Throughput: 0: 41594.7. Samples: 757807660. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:28,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 17:59:30,612][00497] Updated weights for policy 0, policy_version 53448 (0.0024) +[2024-03-29 17:59:33,840][00126] Fps is (10 sec: 40958.8, 60 sec: 41232.8, 300 sec: 41487.6). Total num frames: 875773952. Throughput: 0: 40997.0. Samples: 757931640. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:33,842][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 17:59:35,669][00497] Updated weights for policy 0, policy_version 53458 (0.0019) +[2024-03-29 17:59:38,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 875986944. Throughput: 0: 41745.8. Samples: 758205480. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:38,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 17:59:39,279][00497] Updated weights for policy 0, policy_version 53468 (0.0027) +[2024-03-29 17:59:42,875][00497] Updated weights for policy 0, policy_version 53478 (0.0032) +[2024-03-29 17:59:43,839][00126] Fps is (10 sec: 45876.4, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 876232704. Throughput: 0: 41646.7. Samples: 758437680. Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:43,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 17:59:46,377][00497] Updated weights for policy 0, policy_version 53488 (0.0020) +[2024-03-29 17:59:48,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 876396544. Throughput: 0: 41105.9. Samples: 758558120. 
Policy #0 lag: (min: 0.0, avg: 18.5, max: 40.0) +[2024-03-29 17:59:48,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 17:59:51,275][00497] Updated weights for policy 0, policy_version 53498 (0.0020) +[2024-03-29 17:59:53,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 876625920. Throughput: 0: 41356.9. Samples: 758829780. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 17:59:53,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 17:59:54,981][00497] Updated weights for policy 0, policy_version 53508 (0.0019) +[2024-03-29 17:59:58,568][00497] Updated weights for policy 0, policy_version 53518 (0.0021) +[2024-03-29 17:59:58,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 876855296. Throughput: 0: 41973.5. Samples: 759067300. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 17:59:58,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 18:00:01,922][00497] Updated weights for policy 0, policy_version 53528 (0.0024) +[2024-03-29 18:00:03,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 877051904. Throughput: 0: 41711.6. Samples: 759193360. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 18:00:03,840][00126] Avg episode reward: [(0, '0.445')] +[2024-03-29 18:00:04,101][00476] Signal inference workers to stop experience collection... (27000 times) +[2024-03-29 18:00:04,175][00476] Signal inference workers to resume experience collection... (27000 times) +[2024-03-29 18:00:04,177][00497] InferenceWorker_p0-w0: stopping experience collection (27000 times) +[2024-03-29 18:00:04,200][00497] InferenceWorker_p0-w0: resuming experience collection (27000 times) +[2024-03-29 18:00:06,790][00497] Updated weights for policy 0, policy_version 53538 (0.0019) +[2024-03-29 18:00:08,839][00126] Fps is (10 sec: 39320.8, 60 sec: 41232.9, 300 sec: 41543.1). Total num frames: 877248512. Throughput: 0: 41781.7. Samples: 759464540. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 18:00:08,842][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:00:10,666][00497] Updated weights for policy 0, policy_version 53548 (0.0023) +[2024-03-29 18:00:13,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 877477888. Throughput: 0: 42036.4. Samples: 759699300. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 18:00:13,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 18:00:14,379][00497] Updated weights for policy 0, policy_version 53558 (0.0032) +[2024-03-29 18:00:17,628][00497] Updated weights for policy 0, policy_version 53568 (0.0028) +[2024-03-29 18:00:18,839][00126] Fps is (10 sec: 44237.6, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 877690880. Throughput: 0: 41882.1. Samples: 759816320. Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 18:00:18,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 18:00:22,492][00497] Updated weights for policy 0, policy_version 53578 (0.0020) +[2024-03-29 18:00:23,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 41598.7). Total num frames: 877887488. Throughput: 0: 41895.8. Samples: 760090800. 
Policy #0 lag: (min: 0.0, avg: 19.1, max: 40.0) +[2024-03-29 18:00:23,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:00:26,063][00497] Updated weights for policy 0, policy_version 53588 (0.0032) +[2024-03-29 18:00:28,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 878116864. Throughput: 0: 42219.7. Samples: 760337560. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:28,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 18:00:29,610][00497] Updated weights for policy 0, policy_version 53598 (0.0034) +[2024-03-29 18:00:32,974][00497] Updated weights for policy 0, policy_version 53608 (0.0031) +[2024-03-29 18:00:33,839][00126] Fps is (10 sec: 45875.9, 60 sec: 42871.7, 300 sec: 41820.9). Total num frames: 878346240. Throughput: 0: 42180.8. Samples: 760456260. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:33,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 18:00:37,967][00497] Updated weights for policy 0, policy_version 53618 (0.0019) +[2024-03-29 18:00:38,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.3, 300 sec: 41654.3). Total num frames: 878510080. Throughput: 0: 42193.4. Samples: 760728480. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:38,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 18:00:40,430][00476] Signal inference workers to stop experience collection... (27050 times) +[2024-03-29 18:00:40,471][00497] InferenceWorker_p0-w0: stopping experience collection (27050 times) +[2024-03-29 18:00:40,631][00476] Signal inference workers to resume experience collection... (27050 times) +[2024-03-29 18:00:40,631][00497] InferenceWorker_p0-w0: resuming experience collection (27050 times) +[2024-03-29 18:00:41,716][00497] Updated weights for policy 0, policy_version 53628 (0.0021) +[2024-03-29 18:00:43,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.3, 300 sec: 41543.2). Total num frames: 878739456. Throughput: 0: 41955.5. Samples: 760955300. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:43,840][00126] Avg episode reward: [(0, '0.485')] +[2024-03-29 18:00:45,257][00497] Updated weights for policy 0, policy_version 53638 (0.0023) +[2024-03-29 18:00:48,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42598.3, 300 sec: 41820.8). Total num frames: 878952448. Throughput: 0: 41929.7. Samples: 761080200. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:48,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 18:00:48,888][00497] Updated weights for policy 0, policy_version 53648 (0.0025) +[2024-03-29 18:00:53,631][00497] Updated weights for policy 0, policy_version 53658 (0.0017) +[2024-03-29 18:00:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 879132672. Throughput: 0: 41785.9. Samples: 761344900. Policy #0 lag: (min: 0.0, avg: 19.0, max: 43.0) +[2024-03-29 18:00:53,840][00126] Avg episode reward: [(0, '0.654')] +[2024-03-29 18:00:57,278][00497] Updated weights for policy 0, policy_version 53668 (0.0022) +[2024-03-29 18:00:58,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.1, 300 sec: 41654.2). Total num frames: 879378432. Throughput: 0: 42091.4. Samples: 761593420. 
Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:00:58,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 18:01:00,865][00497] Updated weights for policy 0, policy_version 53678 (0.0023) +[2024-03-29 18:01:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 879591424. Throughput: 0: 42415.5. Samples: 761725020. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:03,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 18:01:04,059][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053687_879607808.pth... +[2024-03-29 18:01:04,371][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053077_869613568.pth +[2024-03-29 18:01:04,641][00497] Updated weights for policy 0, policy_version 53688 (0.0030) +[2024-03-29 18:01:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 879771648. Throughput: 0: 41800.0. Samples: 761971800. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:08,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 18:01:09,264][00497] Updated weights for policy 0, policy_version 53698 (0.0031) +[2024-03-29 18:01:12,759][00497] Updated weights for policy 0, policy_version 53708 (0.0018) +[2024-03-29 18:01:13,452][00476] Signal inference workers to stop experience collection... (27100 times) +[2024-03-29 18:01:13,514][00497] InferenceWorker_p0-w0: stopping experience collection (27100 times) +[2024-03-29 18:01:13,642][00476] Signal inference workers to resume experience collection... (27100 times) +[2024-03-29 18:01:13,643][00497] InferenceWorker_p0-w0: resuming experience collection (27100 times) +[2024-03-29 18:01:13,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 880001024. Throughput: 0: 42096.5. Samples: 762231900. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:13,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 18:01:16,503][00497] Updated weights for policy 0, policy_version 53718 (0.0020) +[2024-03-29 18:01:18,839][00126] Fps is (10 sec: 45875.3, 60 sec: 42325.2, 300 sec: 41820.8). Total num frames: 880230400. Throughput: 0: 42207.4. Samples: 762355600. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:18,841][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:01:19,981][00497] Updated weights for policy 0, policy_version 53728 (0.0018) +[2024-03-29 18:01:23,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.4, 300 sec: 41654.2). Total num frames: 880410624. Throughput: 0: 41933.3. Samples: 762615480. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:23,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 18:01:24,600][00497] Updated weights for policy 0, policy_version 53738 (0.0016) +[2024-03-29 18:01:28,150][00497] Updated weights for policy 0, policy_version 53748 (0.0018) +[2024-03-29 18:01:28,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 880640000. Throughput: 0: 42776.4. Samples: 762880240. Policy #0 lag: (min: 1.0, avg: 21.5, max: 43.0) +[2024-03-29 18:01:28,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:01:31,860][00497] Updated weights for policy 0, policy_version 53758 (0.0023) +[2024-03-29 18:01:33,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 880852992. Throughput: 0: 42514.2. Samples: 762993340. 
Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:33,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:01:35,574][00497] Updated weights for policy 0, policy_version 53768 (0.0026) +[2024-03-29 18:01:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 881049600. Throughput: 0: 42176.5. Samples: 763242840. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:38,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 18:01:40,110][00497] Updated weights for policy 0, policy_version 53778 (0.0018) +[2024-03-29 18:01:43,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 881246208. Throughput: 0: 42339.8. Samples: 763498700. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:43,841][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 18:01:43,887][00497] Updated weights for policy 0, policy_version 53788 (0.0019) +[2024-03-29 18:01:47,545][00497] Updated weights for policy 0, policy_version 53798 (0.0028) +[2024-03-29 18:01:48,368][00476] Signal inference workers to stop experience collection... (27150 times) +[2024-03-29 18:01:48,413][00497] InferenceWorker_p0-w0: stopping experience collection (27150 times) +[2024-03-29 18:01:48,448][00476] Signal inference workers to resume experience collection... (27150 times) +[2024-03-29 18:01:48,450][00497] InferenceWorker_p0-w0: resuming experience collection (27150 times) +[2024-03-29 18:01:48,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 881491968. Throughput: 0: 42084.5. Samples: 763618820. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:48,840][00126] Avg episode reward: [(0, '0.580')] +[2024-03-29 18:01:51,131][00497] Updated weights for policy 0, policy_version 53808 (0.0024) +[2024-03-29 18:01:53,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 41765.3). Total num frames: 881688576. Throughput: 0: 42157.1. Samples: 763868860. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:53,840][00126] Avg episode reward: [(0, '0.471')] +[2024-03-29 18:01:55,548][00497] Updated weights for policy 0, policy_version 53818 (0.0020) +[2024-03-29 18:01:58,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 881885184. Throughput: 0: 42323.5. Samples: 764136460. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:01:58,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 18:01:59,403][00497] Updated weights for policy 0, policy_version 53828 (0.0023) +[2024-03-29 18:02:02,935][00497] Updated weights for policy 0, policy_version 53838 (0.0019) +[2024-03-29 18:02:03,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 882114560. Throughput: 0: 42062.8. Samples: 764248420. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0) +[2024-03-29 18:02:03,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 18:02:06,673][00497] Updated weights for policy 0, policy_version 53848 (0.0020) +[2024-03-29 18:02:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.5, 300 sec: 41765.3). Total num frames: 882327552. Throughput: 0: 41903.5. Samples: 764501140. 
Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:08,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 18:02:11,219][00497] Updated weights for policy 0, policy_version 53858 (0.0030) +[2024-03-29 18:02:13,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 882507776. Throughput: 0: 42153.7. Samples: 764777160. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:13,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:02:14,772][00497] Updated weights for policy 0, policy_version 53868 (0.0024) +[2024-03-29 18:02:18,485][00497] Updated weights for policy 0, policy_version 53878 (0.0020) +[2024-03-29 18:02:18,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 882737152. Throughput: 0: 42064.0. Samples: 764886220. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:18,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:02:19,289][00476] Signal inference workers to stop experience collection... (27200 times) +[2024-03-29 18:02:19,326][00497] InferenceWorker_p0-w0: stopping experience collection (27200 times) +[2024-03-29 18:02:19,513][00476] Signal inference workers to resume experience collection... (27200 times) +[2024-03-29 18:02:19,514][00497] InferenceWorker_p0-w0: resuming experience collection (27200 times) +[2024-03-29 18:02:22,387][00497] Updated weights for policy 0, policy_version 53888 (0.0023) +[2024-03-29 18:02:23,840][00126] Fps is (10 sec: 44235.9, 60 sec: 42325.1, 300 sec: 41876.3). Total num frames: 882950144. Throughput: 0: 41922.8. Samples: 765129380. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:23,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 18:02:26,952][00497] Updated weights for policy 0, policy_version 53898 (0.0022) +[2024-03-29 18:02:28,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 883130368. Throughput: 0: 42283.5. Samples: 765401460. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:28,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 18:02:30,327][00497] Updated weights for policy 0, policy_version 53908 (0.0029) +[2024-03-29 18:02:33,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 883376128. Throughput: 0: 42332.7. Samples: 765523800. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:33,840][00126] Avg episode reward: [(0, '0.458')] +[2024-03-29 18:02:34,023][00497] Updated weights for policy 0, policy_version 53918 (0.0030) +[2024-03-29 18:02:37,788][00497] Updated weights for policy 0, policy_version 53928 (0.0024) +[2024-03-29 18:02:38,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42325.2, 300 sec: 41876.4). Total num frames: 883589120. Throughput: 0: 42199.8. Samples: 765767860. Policy #0 lag: (min: 0.0, avg: 22.4, max: 42.0) +[2024-03-29 18:02:38,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:02:42,179][00497] Updated weights for policy 0, policy_version 53938 (0.0019) +[2024-03-29 18:02:43,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42325.3, 300 sec: 41820.9). Total num frames: 883785728. Throughput: 0: 42367.1. Samples: 766042980. 
Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:02:43,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 18:02:45,741][00497] Updated weights for policy 0, policy_version 53948 (0.0025) +[2024-03-29 18:02:48,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.2, 300 sec: 41820.9). Total num frames: 884031488. Throughput: 0: 42519.9. Samples: 766161820. Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:02:48,842][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 18:02:49,503][00497] Updated weights for policy 0, policy_version 53958 (0.0024) +[2024-03-29 18:02:52,095][00476] Signal inference workers to stop experience collection... (27250 times) +[2024-03-29 18:02:52,141][00497] InferenceWorker_p0-w0: stopping experience collection (27250 times) +[2024-03-29 18:02:52,174][00476] Signal inference workers to resume experience collection... (27250 times) +[2024-03-29 18:02:52,177][00497] InferenceWorker_p0-w0: resuming experience collection (27250 times) +[2024-03-29 18:02:53,067][00497] Updated weights for policy 0, policy_version 53968 (0.0023) +[2024-03-29 18:02:53,839][00126] Fps is (10 sec: 45874.6, 60 sec: 42598.3, 300 sec: 42043.0). Total num frames: 884244480. Throughput: 0: 42387.0. Samples: 766408560. Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:02:53,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:02:57,281][00497] Updated weights for policy 0, policy_version 53978 (0.0017) +[2024-03-29 18:02:58,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42598.4, 300 sec: 41987.5). Total num frames: 884441088. Throughput: 0: 42464.1. Samples: 766688040. Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:02:58,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:03:00,869][00497] Updated weights for policy 0, policy_version 53988 (0.0021) +[2024-03-29 18:03:03,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42871.4, 300 sec: 42043.0). Total num frames: 884686848. Throughput: 0: 42576.9. Samples: 766802180. Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:03:03,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:03:03,861][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053997_884686848.pth... +[2024-03-29 18:03:04,161][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053378_874545152.pth +[2024-03-29 18:03:04,778][00497] Updated weights for policy 0, policy_version 53998 (0.0021) +[2024-03-29 18:03:08,688][00497] Updated weights for policy 0, policy_version 54008 (0.0020) +[2024-03-29 18:03:08,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 884867072. Throughput: 0: 42536.3. Samples: 767043500. Policy #0 lag: (min: 2.0, avg: 21.4, max: 42.0) +[2024-03-29 18:03:08,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 18:03:12,943][00497] Updated weights for policy 0, policy_version 54018 (0.0028) +[2024-03-29 18:03:13,839][00126] Fps is (10 sec: 36044.9, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 885047296. Throughput: 0: 42400.9. Samples: 767309500. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:13,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:03:16,695][00497] Updated weights for policy 0, policy_version 54028 (0.0028) +[2024-03-29 18:03:18,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 885276672. Throughput: 0: 42481.4. Samples: 767435460. 
Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:18,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 18:03:20,381][00497] Updated weights for policy 0, policy_version 54038 (0.0024) +[2024-03-29 18:03:23,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.6, 300 sec: 41987.5). Total num frames: 885489664. Throughput: 0: 42114.0. Samples: 767662980. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:23,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:03:24,368][00497] Updated weights for policy 0, policy_version 54048 (0.0024) +[2024-03-29 18:03:27,752][00476] Signal inference workers to stop experience collection... (27300 times) +[2024-03-29 18:03:27,800][00497] InferenceWorker_p0-w0: stopping experience collection (27300 times) +[2024-03-29 18:03:27,931][00476] Signal inference workers to resume experience collection... (27300 times) +[2024-03-29 18:03:27,932][00497] InferenceWorker_p0-w0: resuming experience collection (27300 times) +[2024-03-29 18:03:28,810][00497] Updated weights for policy 0, policy_version 54058 (0.0019) +[2024-03-29 18:03:28,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42598.4, 300 sec: 41987.5). Total num frames: 885686272. Throughput: 0: 41844.0. Samples: 767925960. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:28,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 18:03:32,529][00497] Updated weights for policy 0, policy_version 54068 (0.0022) +[2024-03-29 18:03:33,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 885899264. Throughput: 0: 42304.5. Samples: 768065520. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:33,840][00126] Avg episode reward: [(0, '0.645')] +[2024-03-29 18:03:36,489][00497] Updated weights for policy 0, policy_version 54078 (0.0020) +[2024-03-29 18:03:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 886095872. Throughput: 0: 41821.9. Samples: 768290540. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:38,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:03:40,236][00497] Updated weights for policy 0, policy_version 54088 (0.0024) +[2024-03-29 18:03:43,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 886308864. Throughput: 0: 41239.2. Samples: 768543800. Policy #0 lag: (min: 0.0, avg: 19.3, max: 42.0) +[2024-03-29 18:03:43,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 18:03:44,576][00497] Updated weights for policy 0, policy_version 54098 (0.0019) +[2024-03-29 18:03:48,488][00497] Updated weights for policy 0, policy_version 54108 (0.0032) +[2024-03-29 18:03:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 41931.9). Total num frames: 886521856. Throughput: 0: 41574.7. Samples: 768673040. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:03:48,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:03:52,518][00497] Updated weights for policy 0, policy_version 54118 (0.0042) +[2024-03-29 18:03:53,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.3, 300 sec: 41932.0). Total num frames: 886734848. Throughput: 0: 41363.1. Samples: 768904840. 
Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:03:53,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 18:03:56,085][00497] Updated weights for policy 0, policy_version 54128 (0.0024) +[2024-03-29 18:03:58,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 886931456. Throughput: 0: 41184.0. Samples: 769162780. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:03:58,840][00126] Avg episode reward: [(0, '0.640')] +[2024-03-29 18:04:00,242][00497] Updated weights for policy 0, policy_version 54138 (0.0022) +[2024-03-29 18:04:03,088][00476] Signal inference workers to stop experience collection... (27350 times) +[2024-03-29 18:04:03,118][00497] InferenceWorker_p0-w0: stopping experience collection (27350 times) +[2024-03-29 18:04:03,304][00476] Signal inference workers to resume experience collection... (27350 times) +[2024-03-29 18:04:03,305][00497] InferenceWorker_p0-w0: resuming experience collection (27350 times) +[2024-03-29 18:04:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40960.0, 300 sec: 41931.9). Total num frames: 887144448. Throughput: 0: 41469.8. Samples: 769301600. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:04:03,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:04:04,120][00497] Updated weights for policy 0, policy_version 54148 (0.0021) +[2024-03-29 18:04:07,743][00497] Updated weights for policy 0, policy_version 54158 (0.0029) +[2024-03-29 18:04:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 887373824. Throughput: 0: 41524.4. Samples: 769531580. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:04:08,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 18:04:11,812][00497] Updated weights for policy 0, policy_version 54168 (0.0019) +[2024-03-29 18:04:13,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 887570432. Throughput: 0: 41385.7. Samples: 769788320. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:04:13,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 18:04:15,991][00497] Updated weights for policy 0, policy_version 54178 (0.0021) +[2024-03-29 18:04:18,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41233.0, 300 sec: 41987.5). Total num frames: 887750656. Throughput: 0: 41458.2. Samples: 769931140. Policy #0 lag: (min: 1.0, avg: 18.0, max: 41.0) +[2024-03-29 18:04:18,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:04:19,832][00497] Updated weights for policy 0, policy_version 54188 (0.0028) +[2024-03-29 18:04:23,428][00497] Updated weights for policy 0, policy_version 54198 (0.0023) +[2024-03-29 18:04:23,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.0, 300 sec: 41931.9). Total num frames: 887980032. Throughput: 0: 41735.9. Samples: 770168660. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:23,840][00126] Avg episode reward: [(0, '0.467')] +[2024-03-29 18:04:27,573][00497] Updated weights for policy 0, policy_version 54208 (0.0031) +[2024-03-29 18:04:28,839][00126] Fps is (10 sec: 44237.5, 60 sec: 41779.3, 300 sec: 42098.6). Total num frames: 888193024. Throughput: 0: 41408.9. Samples: 770407200. 
Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:28,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:04:31,811][00497] Updated weights for policy 0, policy_version 54218 (0.0019) +[2024-03-29 18:04:33,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40959.9, 300 sec: 41931.9). Total num frames: 888356864. Throughput: 0: 41558.6. Samples: 770543180. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:33,841][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:04:35,577][00497] Updated weights for policy 0, policy_version 54228 (0.0020) +[2024-03-29 18:04:35,689][00476] Signal inference workers to stop experience collection... (27400 times) +[2024-03-29 18:04:35,756][00497] InferenceWorker_p0-w0: stopping experience collection (27400 times) +[2024-03-29 18:04:35,850][00476] Signal inference workers to resume experience collection... (27400 times) +[2024-03-29 18:04:35,851][00497] InferenceWorker_p0-w0: resuming experience collection (27400 times) +[2024-03-29 18:04:38,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 888619008. Throughput: 0: 41896.0. Samples: 770790160. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:38,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 18:04:39,078][00497] Updated weights for policy 0, policy_version 54238 (0.0020) +[2024-03-29 18:04:43,248][00497] Updated weights for policy 0, policy_version 54248 (0.0022) +[2024-03-29 18:04:43,839][00126] Fps is (10 sec: 45875.7, 60 sec: 41779.1, 300 sec: 42098.5). Total num frames: 888815616. Throughput: 0: 41648.0. Samples: 771036940. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:43,840][00126] Avg episode reward: [(0, '0.672')] +[2024-03-29 18:04:47,533][00497] Updated weights for policy 0, policy_version 54258 (0.0019) +[2024-03-29 18:04:48,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 889012224. Throughput: 0: 41572.0. Samples: 771172340. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:48,840][00126] Avg episode reward: [(0, '0.647')] +[2024-03-29 18:04:51,307][00497] Updated weights for policy 0, policy_version 54268 (0.0022) +[2024-03-29 18:04:53,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.1, 300 sec: 41987.4). Total num frames: 889241600. Throughput: 0: 41772.3. Samples: 771411340. Policy #0 lag: (min: 0.0, avg: 21.1, max: 42.0) +[2024-03-29 18:04:53,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 18:04:54,840][00497] Updated weights for policy 0, policy_version 54278 (0.0019) +[2024-03-29 18:04:58,784][00497] Updated weights for policy 0, policy_version 54288 (0.0019) +[2024-03-29 18:04:58,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 889454592. Throughput: 0: 41870.3. Samples: 771672480. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:04:58,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:05:03,143][00497] Updated weights for policy 0, policy_version 54298 (0.0020) +[2024-03-29 18:05:03,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 889651200. Throughput: 0: 41611.6. Samples: 771803660. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:03,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 18:05:04,117][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054301_889667584.pth... 
+[2024-03-29 18:05:04,452][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053687_879607808.pth +[2024-03-29 18:05:06,891][00497] Updated weights for policy 0, policy_version 54308 (0.0030) +[2024-03-29 18:05:08,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 889864192. Throughput: 0: 41706.8. Samples: 772045460. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:08,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 18:05:10,807][00497] Updated weights for policy 0, policy_version 54318 (0.0019) +[2024-03-29 18:05:13,408][00476] Signal inference workers to stop experience collection... (27450 times) +[2024-03-29 18:05:13,408][00476] Signal inference workers to resume experience collection... (27450 times) +[2024-03-29 18:05:13,444][00497] InferenceWorker_p0-w0: stopping experience collection (27450 times) +[2024-03-29 18:05:13,444][00497] InferenceWorker_p0-w0: resuming experience collection (27450 times) +[2024-03-29 18:05:13,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 890060800. Throughput: 0: 42088.7. Samples: 772301200. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:13,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 18:05:14,604][00497] Updated weights for policy 0, policy_version 54328 (0.0030) +[2024-03-29 18:05:18,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41779.3, 300 sec: 41932.0). Total num frames: 890257408. Throughput: 0: 41811.7. Samples: 772424700. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:18,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 18:05:18,892][00497] Updated weights for policy 0, policy_version 54338 (0.0023) +[2024-03-29 18:05:22,563][00497] Updated weights for policy 0, policy_version 54348 (0.0025) +[2024-03-29 18:05:23,839][00126] Fps is (10 sec: 44237.7, 60 sec: 42052.4, 300 sec: 41987.5). Total num frames: 890503168. Throughput: 0: 42081.3. Samples: 772683820. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:23,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 18:05:26,180][00497] Updated weights for policy 0, policy_version 54358 (0.0023) +[2024-03-29 18:05:28,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41506.1, 300 sec: 41820.8). Total num frames: 890683392. Throughput: 0: 41852.9. Samples: 772920320. Policy #0 lag: (min: 0.0, avg: 21.8, max: 42.0) +[2024-03-29 18:05:28,842][00126] Avg episode reward: [(0, '0.647')] +[2024-03-29 18:05:30,260][00497] Updated weights for policy 0, policy_version 54368 (0.0021) +[2024-03-29 18:05:33,839][00126] Fps is (10 sec: 39321.1, 60 sec: 42325.4, 300 sec: 41987.4). Total num frames: 890896384. Throughput: 0: 41687.4. Samples: 773048280. Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:33,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:05:34,474][00497] Updated weights for policy 0, policy_version 54378 (0.0021) +[2024-03-29 18:05:38,177][00497] Updated weights for policy 0, policy_version 54388 (0.0020) +[2024-03-29 18:05:38,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41779.1, 300 sec: 41987.4). Total num frames: 891125760. Throughput: 0: 42471.5. Samples: 773322560. 
Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:38,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:05:41,954][00497] Updated weights for policy 0, policy_version 54398 (0.0031) +[2024-03-29 18:05:42,954][00476] Signal inference workers to stop experience collection... (27500 times) +[2024-03-29 18:05:42,954][00476] Signal inference workers to resume experience collection... (27500 times) +[2024-03-29 18:05:42,998][00497] InferenceWorker_p0-w0: stopping experience collection (27500 times) +[2024-03-29 18:05:42,999][00497] InferenceWorker_p0-w0: resuming experience collection (27500 times) +[2024-03-29 18:05:43,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 891322368. Throughput: 0: 41796.3. Samples: 773553320. Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:43,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 18:05:45,782][00497] Updated weights for policy 0, policy_version 54408 (0.0024) +[2024-03-29 18:05:48,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 891535360. Throughput: 0: 41817.0. Samples: 773685420. Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:48,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:05:49,969][00497] Updated weights for policy 0, policy_version 54418 (0.0026) +[2024-03-29 18:05:53,715][00497] Updated weights for policy 0, policy_version 54428 (0.0034) +[2024-03-29 18:05:53,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41779.3, 300 sec: 41932.0). Total num frames: 891748352. Throughput: 0: 42506.7. Samples: 773958260. Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:53,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 18:05:57,415][00497] Updated weights for policy 0, policy_version 54438 (0.0026) +[2024-03-29 18:05:58,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 891961344. Throughput: 0: 41853.1. Samples: 774184580. Policy #0 lag: (min: 0.0, avg: 20.9, max: 42.0) +[2024-03-29 18:05:58,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 18:06:01,295][00497] Updated weights for policy 0, policy_version 54448 (0.0022) +[2024-03-29 18:06:03,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 892157952. Throughput: 0: 42194.1. Samples: 774323440. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:03,841][00126] Avg episode reward: [(0, '0.428')] +[2024-03-29 18:06:05,447][00497] Updated weights for policy 0, policy_version 54458 (0.0026) +[2024-03-29 18:06:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 892370944. Throughput: 0: 42166.2. Samples: 774581300. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:08,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 18:06:09,296][00497] Updated weights for policy 0, policy_version 54468 (0.0020) +[2024-03-29 18:06:12,779][00497] Updated weights for policy 0, policy_version 54478 (0.0024) +[2024-03-29 18:06:13,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42325.5, 300 sec: 41932.0). Total num frames: 892600320. Throughput: 0: 42292.1. Samples: 774823460. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:13,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 18:06:15,243][00476] Signal inference workers to stop experience collection... 
(27550 times) +[2024-03-29 18:06:15,244][00476] Signal inference workers to resume experience collection... (27550 times) +[2024-03-29 18:06:15,291][00497] InferenceWorker_p0-w0: stopping experience collection (27550 times) +[2024-03-29 18:06:15,292][00497] InferenceWorker_p0-w0: resuming experience collection (27550 times) +[2024-03-29 18:06:16,845][00497] Updated weights for policy 0, policy_version 54488 (0.0028) +[2024-03-29 18:06:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 892796928. Throughput: 0: 42412.2. Samples: 774956820. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:18,840][00126] Avg episode reward: [(0, '0.676')] +[2024-03-29 18:06:21,001][00497] Updated weights for policy 0, policy_version 54498 (0.0017) +[2024-03-29 18:06:23,839][00126] Fps is (10 sec: 39320.9, 60 sec: 41506.0, 300 sec: 41876.4). Total num frames: 892993536. Throughput: 0: 41813.4. Samples: 775204160. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:23,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 18:06:25,065][00497] Updated weights for policy 0, policy_version 54508 (0.0022) +[2024-03-29 18:06:28,471][00497] Updated weights for policy 0, policy_version 54518 (0.0018) +[2024-03-29 18:06:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.4, 300 sec: 41931.9). Total num frames: 893222912. Throughput: 0: 42006.8. Samples: 775443620. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:28,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 18:06:32,582][00497] Updated weights for policy 0, policy_version 54528 (0.0025) +[2024-03-29 18:06:33,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 41987.4). Total num frames: 893435904. Throughput: 0: 42088.8. Samples: 775579420. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:06:33,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 18:06:36,704][00497] Updated weights for policy 0, policy_version 54538 (0.0018) +[2024-03-29 18:06:38,839][00126] Fps is (10 sec: 39320.9, 60 sec: 41506.2, 300 sec: 41931.9). Total num frames: 893616128. Throughput: 0: 41747.8. Samples: 775836920. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:06:38,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 18:06:40,632][00497] Updated weights for policy 0, policy_version 54548 (0.0026) +[2024-03-29 18:06:43,839][00126] Fps is (10 sec: 42599.3, 60 sec: 42325.5, 300 sec: 41931.9). Total num frames: 893861888. Throughput: 0: 41789.8. Samples: 776065120. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:06:43,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:06:44,235][00497] Updated weights for policy 0, policy_version 54558 (0.0028) +[2024-03-29 18:06:48,377][00497] Updated weights for policy 0, policy_version 54568 (0.0022) +[2024-03-29 18:06:48,596][00476] Signal inference workers to stop experience collection... (27600 times) +[2024-03-29 18:06:48,638][00497] InferenceWorker_p0-w0: stopping experience collection (27600 times) +[2024-03-29 18:06:48,676][00476] Signal inference workers to resume experience collection... (27600 times) +[2024-03-29 18:06:48,678][00497] InferenceWorker_p0-w0: resuming experience collection (27600 times) +[2024-03-29 18:06:48,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 894058496. Throughput: 0: 41548.9. Samples: 776193140. 
Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:06:48,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 18:06:52,618][00497] Updated weights for policy 0, policy_version 54578 (0.0019) +[2024-03-29 18:06:53,839][00126] Fps is (10 sec: 39320.9, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 894255104. Throughput: 0: 41772.8. Samples: 776461080. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:06:53,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:06:56,462][00497] Updated weights for policy 0, policy_version 54588 (0.0026) +[2024-03-29 18:06:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 894484480. Throughput: 0: 41588.8. Samples: 776694960. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:06:58,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:06:59,910][00497] Updated weights for policy 0, policy_version 54598 (0.0022) +[2024-03-29 18:07:03,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 894681088. Throughput: 0: 41443.0. Samples: 776821760. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:07:03,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 18:07:04,051][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054608_894697472.pth... +[2024-03-29 18:07:04,064][00497] Updated weights for policy 0, policy_version 54608 (0.0027) +[2024-03-29 18:07:04,365][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000053997_884686848.pth +[2024-03-29 18:07:08,363][00497] Updated weights for policy 0, policy_version 54618 (0.0022) +[2024-03-29 18:07:08,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 894877696. Throughput: 0: 41837.1. Samples: 777086820. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0) +[2024-03-29 18:07:08,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 18:07:12,199][00497] Updated weights for policy 0, policy_version 54628 (0.0028) +[2024-03-29 18:07:13,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 895107072. Throughput: 0: 41688.8. Samples: 777319620. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:13,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 18:07:15,660][00497] Updated weights for policy 0, policy_version 54638 (0.0022) +[2024-03-29 18:07:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 895287296. Throughput: 0: 41285.5. Samples: 777437260. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:18,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 18:07:19,990][00497] Updated weights for policy 0, policy_version 54648 (0.0024) +[2024-03-29 18:07:23,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41506.3, 300 sec: 41876.4). Total num frames: 895483904. Throughput: 0: 41418.8. Samples: 777700760. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:23,840][00126] Avg episode reward: [(0, '0.632')] +[2024-03-29 18:07:24,263][00497] Updated weights for policy 0, policy_version 54658 (0.0027) +[2024-03-29 18:07:26,714][00476] Signal inference workers to stop experience collection... (27650 times) +[2024-03-29 18:07:26,715][00476] Signal inference workers to resume experience collection... 
(27650 times) +[2024-03-29 18:07:26,737][00497] InferenceWorker_p0-w0: stopping experience collection (27650 times) +[2024-03-29 18:07:26,738][00497] InferenceWorker_p0-w0: resuming experience collection (27650 times) +[2024-03-29 18:07:28,227][00497] Updated weights for policy 0, policy_version 54668 (0.0022) +[2024-03-29 18:07:28,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 895713280. Throughput: 0: 41805.7. Samples: 777946380. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:28,840][00126] Avg episode reward: [(0, '0.501')] +[2024-03-29 18:07:31,406][00497] Updated weights for policy 0, policy_version 54678 (0.0028) +[2024-03-29 18:07:33,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41506.3, 300 sec: 41820.9). Total num frames: 895926272. Throughput: 0: 41462.8. Samples: 778058960. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:33,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 18:07:35,819][00497] Updated weights for policy 0, policy_version 54688 (0.0027) +[2024-03-29 18:07:38,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 896106496. Throughput: 0: 41075.6. Samples: 778309480. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:38,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:07:40,116][00497] Updated weights for policy 0, policy_version 54698 (0.0019) +[2024-03-29 18:07:43,839][00126] Fps is (10 sec: 39320.9, 60 sec: 40959.9, 300 sec: 41654.2). Total num frames: 896319488. Throughput: 0: 41759.9. Samples: 778574160. Policy #0 lag: (min: 1.0, avg: 19.8, max: 42.0) +[2024-03-29 18:07:43,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:07:44,034][00497] Updated weights for policy 0, policy_version 54708 (0.0021) +[2024-03-29 18:07:47,169][00497] Updated weights for policy 0, policy_version 54718 (0.0022) +[2024-03-29 18:07:48,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 896548864. Throughput: 0: 41270.4. Samples: 778678920. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:07:48,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:07:51,470][00497] Updated weights for policy 0, policy_version 54728 (0.0025) +[2024-03-29 18:07:53,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 896745472. Throughput: 0: 41027.1. Samples: 778933040. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:07:53,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:07:56,015][00497] Updated weights for policy 0, policy_version 54738 (0.0018) +[2024-03-29 18:07:58,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40687.0, 300 sec: 41487.6). Total num frames: 896925696. Throughput: 0: 41763.6. Samples: 779198980. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:07:58,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 18:07:59,177][00476] Signal inference workers to stop experience collection... (27700 times) +[2024-03-29 18:07:59,215][00497] InferenceWorker_p0-w0: stopping experience collection (27700 times) +[2024-03-29 18:07:59,405][00476] Signal inference workers to resume experience collection... 
(27700 times) +[2024-03-29 18:07:59,406][00497] InferenceWorker_p0-w0: resuming experience collection (27700 times) +[2024-03-29 18:07:59,746][00497] Updated weights for policy 0, policy_version 54748 (0.0028) +[2024-03-29 18:08:02,915][00497] Updated weights for policy 0, policy_version 54758 (0.0029) +[2024-03-29 18:08:03,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 897187840. Throughput: 0: 41488.8. Samples: 779304260. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:08:03,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:08:07,307][00497] Updated weights for policy 0, policy_version 54768 (0.0031) +[2024-03-29 18:08:08,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 897351680. Throughput: 0: 41204.4. Samples: 779554960. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:08:08,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:08:11,859][00497] Updated weights for policy 0, policy_version 54778 (0.0021) +[2024-03-29 18:08:13,839][00126] Fps is (10 sec: 36044.4, 60 sec: 40686.8, 300 sec: 41598.7). Total num frames: 897548288. Throughput: 0: 41493.2. Samples: 779813580. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:08:13,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 18:08:15,547][00497] Updated weights for policy 0, policy_version 54788 (0.0019) +[2024-03-29 18:08:18,775][00497] Updated weights for policy 0, policy_version 54798 (0.0020) +[2024-03-29 18:08:18,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 897810432. Throughput: 0: 41597.7. Samples: 779930860. Policy #0 lag: (min: 0.0, avg: 21.9, max: 42.0) +[2024-03-29 18:08:18,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 18:08:23,194][00497] Updated weights for policy 0, policy_version 54808 (0.0019) +[2024-03-29 18:08:23,839][00126] Fps is (10 sec: 45876.0, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 898007040. Throughput: 0: 41622.3. Samples: 780182480. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:23,840][00126] Avg episode reward: [(0, '0.674')] +[2024-03-29 18:08:27,300][00497] Updated weights for policy 0, policy_version 54818 (0.0026) +[2024-03-29 18:08:28,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 898187264. Throughput: 0: 41559.2. Samples: 780444320. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:28,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 18:08:31,118][00497] Updated weights for policy 0, policy_version 54828 (0.0024) +[2024-03-29 18:08:31,705][00476] Signal inference workers to stop experience collection... (27750 times) +[2024-03-29 18:08:31,742][00497] InferenceWorker_p0-w0: stopping experience collection (27750 times) +[2024-03-29 18:08:31,933][00476] Signal inference workers to resume experience collection... (27750 times) +[2024-03-29 18:08:31,933][00497] InferenceWorker_p0-w0: resuming experience collection (27750 times) +[2024-03-29 18:08:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 898433024. Throughput: 0: 42072.8. Samples: 780572200. 
Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:33,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 18:08:34,384][00497] Updated weights for policy 0, policy_version 54838 (0.0025) +[2024-03-29 18:08:38,581][00497] Updated weights for policy 0, policy_version 54848 (0.0027) +[2024-03-29 18:08:38,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 898629632. Throughput: 0: 41805.2. Samples: 780814280. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:38,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 18:08:42,737][00497] Updated weights for policy 0, policy_version 54858 (0.0027) +[2024-03-29 18:08:43,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 898826240. Throughput: 0: 41780.9. Samples: 781079120. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:43,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:08:46,696][00497] Updated weights for policy 0, policy_version 54868 (0.0028) +[2024-03-29 18:08:48,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.2, 300 sec: 41820.9). Total num frames: 899072000. Throughput: 0: 42309.0. Samples: 781208160. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:48,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 18:08:49,852][00497] Updated weights for policy 0, policy_version 54878 (0.0020) +[2024-03-29 18:08:53,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 899268608. Throughput: 0: 41882.3. Samples: 781439660. Policy #0 lag: (min: 0.0, avg: 22.0, max: 41.0) +[2024-03-29 18:08:53,841][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:08:54,174][00497] Updated weights for policy 0, policy_version 54888 (0.0022) +[2024-03-29 18:08:58,504][00497] Updated weights for policy 0, policy_version 54898 (0.0023) +[2024-03-29 18:08:58,839][00126] Fps is (10 sec: 37682.9, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 899448832. Throughput: 0: 42053.9. Samples: 781706000. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:08:58,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:09:02,312][00497] Updated weights for policy 0, policy_version 54908 (0.0020) +[2024-03-29 18:09:03,839][00126] Fps is (10 sec: 40959.2, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 899678208. Throughput: 0: 42437.3. Samples: 781840540. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:03,840][00126] Avg episode reward: [(0, '0.479')] +[2024-03-29 18:09:04,269][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054914_899710976.pth... +[2024-03-29 18:09:04,612][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054301_889667584.pth +[2024-03-29 18:09:05,772][00497] Updated weights for policy 0, policy_version 54918 (0.0026) +[2024-03-29 18:09:08,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 899874816. Throughput: 0: 41655.9. Samples: 782057000. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:08,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 18:09:09,541][00476] Signal inference workers to stop experience collection... (27800 times) +[2024-03-29 18:09:09,612][00497] InferenceWorker_p0-w0: stopping experience collection (27800 times) +[2024-03-29 18:09:09,619][00476] Signal inference workers to resume experience collection... 
(27800 times) +[2024-03-29 18:09:09,643][00497] InferenceWorker_p0-w0: resuming experience collection (27800 times) +[2024-03-29 18:09:09,929][00497] Updated weights for policy 0, policy_version 54928 (0.0024) +[2024-03-29 18:09:13,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 900071424. Throughput: 0: 41789.6. Samples: 782324860. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:13,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 18:09:14,359][00497] Updated weights for policy 0, policy_version 54938 (0.0029) +[2024-03-29 18:09:18,206][00497] Updated weights for policy 0, policy_version 54948 (0.0028) +[2024-03-29 18:09:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 900300800. Throughput: 0: 41800.0. Samples: 782453200. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:18,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 18:09:21,672][00497] Updated weights for policy 0, policy_version 54958 (0.0024) +[2024-03-29 18:09:23,839][00126] Fps is (10 sec: 44237.8, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 900513792. Throughput: 0: 41347.7. Samples: 782674920. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:23,841][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:09:25,909][00497] Updated weights for policy 0, policy_version 54968 (0.0028) +[2024-03-29 18:09:28,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41820.9). Total num frames: 900694016. Throughput: 0: 41357.2. Samples: 782940200. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 18:09:28,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 18:09:30,108][00497] Updated weights for policy 0, policy_version 54978 (0.0023) +[2024-03-29 18:09:33,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40960.0, 300 sec: 41598.7). Total num frames: 900890624. Throughput: 0: 41228.9. Samples: 783063460. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:33,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 18:09:34,176][00497] Updated weights for policy 0, policy_version 54988 (0.0020) +[2024-03-29 18:09:37,380][00497] Updated weights for policy 0, policy_version 54998 (0.0019) +[2024-03-29 18:09:38,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 901136384. Throughput: 0: 41353.6. Samples: 783300580. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:38,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 18:09:41,611][00497] Updated weights for policy 0, policy_version 55008 (0.0023) +[2024-03-29 18:09:43,841][00126] Fps is (10 sec: 42590.1, 60 sec: 41504.8, 300 sec: 41709.5). Total num frames: 901316608. Throughput: 0: 41267.6. Samples: 783563120. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:43,842][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 18:09:45,807][00497] Updated weights for policy 0, policy_version 55018 (0.0030) +[2024-03-29 18:09:46,237][00476] Signal inference workers to stop experience collection... (27850 times) +[2024-03-29 18:09:46,237][00476] Signal inference workers to resume experience collection... 
(27850 times) +[2024-03-29 18:09:46,271][00497] InferenceWorker_p0-w0: stopping experience collection (27850 times) +[2024-03-29 18:09:46,271][00497] InferenceWorker_p0-w0: resuming experience collection (27850 times) +[2024-03-29 18:09:48,839][00126] Fps is (10 sec: 39322.2, 60 sec: 40960.0, 300 sec: 41654.3). Total num frames: 901529600. Throughput: 0: 41255.7. Samples: 783697040. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:48,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:09:49,748][00497] Updated weights for policy 0, policy_version 55028 (0.0026) +[2024-03-29 18:09:52,919][00497] Updated weights for policy 0, policy_version 55038 (0.0032) +[2024-03-29 18:09:53,839][00126] Fps is (10 sec: 47522.8, 60 sec: 42052.2, 300 sec: 41820.9). Total num frames: 901791744. Throughput: 0: 41822.3. Samples: 783939000. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:53,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 18:09:56,997][00497] Updated weights for policy 0, policy_version 55048 (0.0019) +[2024-03-29 18:09:58,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 901955584. Throughput: 0: 41527.2. Samples: 784193580. Policy #0 lag: (min: 1.0, avg: 20.0, max: 41.0) +[2024-03-29 18:09:58,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 18:10:01,582][00497] Updated weights for policy 0, policy_version 55058 (0.0029) +[2024-03-29 18:10:03,839][00126] Fps is (10 sec: 36044.8, 60 sec: 41233.2, 300 sec: 41654.2). Total num frames: 902152192. Throughput: 0: 41550.7. Samples: 784322980. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:03,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 18:10:05,453][00497] Updated weights for policy 0, policy_version 55068 (0.0022) +[2024-03-29 18:10:08,599][00497] Updated weights for policy 0, policy_version 55078 (0.0021) +[2024-03-29 18:10:08,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 902397952. Throughput: 0: 42160.0. Samples: 784572120. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:08,840][00126] Avg episode reward: [(0, '0.638')] +[2024-03-29 18:10:12,897][00497] Updated weights for policy 0, policy_version 55088 (0.0028) +[2024-03-29 18:10:13,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.4, 300 sec: 41820.9). Total num frames: 902594560. Throughput: 0: 41656.2. Samples: 784814720. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:13,840][00126] Avg episode reward: [(0, '0.635')] +[2024-03-29 18:10:17,108][00497] Updated weights for policy 0, policy_version 55098 (0.0034) +[2024-03-29 18:10:18,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 902774784. Throughput: 0: 42002.6. Samples: 784953580. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:18,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:10:21,024][00497] Updated weights for policy 0, policy_version 55108 (0.0018) +[2024-03-29 18:10:21,389][00476] Signal inference workers to stop experience collection... (27900 times) +[2024-03-29 18:10:21,447][00497] InferenceWorker_p0-w0: stopping experience collection (27900 times) +[2024-03-29 18:10:21,479][00476] Signal inference workers to resume experience collection... 
(27900 times) +[2024-03-29 18:10:21,481][00497] InferenceWorker_p0-w0: resuming experience collection (27900 times) +[2024-03-29 18:10:23,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 903036928. Throughput: 0: 42182.3. Samples: 785198780. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:23,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 18:10:24,372][00497] Updated weights for policy 0, policy_version 55118 (0.0033) +[2024-03-29 18:10:28,619][00497] Updated weights for policy 0, policy_version 55128 (0.0024) +[2024-03-29 18:10:28,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.4, 300 sec: 41765.3). Total num frames: 903217152. Throughput: 0: 41848.0. Samples: 785446200. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:28,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 18:10:32,618][00497] Updated weights for policy 0, policy_version 55138 (0.0022) +[2024-03-29 18:10:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 903430144. Throughput: 0: 41850.2. Samples: 785580300. Policy #0 lag: (min: 0.0, avg: 19.0, max: 40.0) +[2024-03-29 18:10:33,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:10:36,537][00497] Updated weights for policy 0, policy_version 55148 (0.0024) +[2024-03-29 18:10:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 903659520. Throughput: 0: 42292.8. Samples: 785842180. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:10:38,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 18:10:39,693][00497] Updated weights for policy 0, policy_version 55158 (0.0030) +[2024-03-29 18:10:43,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42326.6, 300 sec: 41765.3). Total num frames: 903856128. Throughput: 0: 42165.2. Samples: 786091020. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:10:43,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 18:10:43,942][00497] Updated weights for policy 0, policy_version 55168 (0.0027) +[2024-03-29 18:10:47,843][00497] Updated weights for policy 0, policy_version 55178 (0.0023) +[2024-03-29 18:10:48,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42325.2, 300 sec: 41765.3). Total num frames: 904069120. Throughput: 0: 42229.7. Samples: 786223320. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:10:48,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 18:10:51,895][00497] Updated weights for policy 0, policy_version 55188 (0.0018) +[2024-03-29 18:10:53,146][00476] Signal inference workers to stop experience collection... (27950 times) +[2024-03-29 18:10:53,176][00497] InferenceWorker_p0-w0: stopping experience collection (27950 times) +[2024-03-29 18:10:53,342][00476] Signal inference workers to resume experience collection... (27950 times) +[2024-03-29 18:10:53,343][00497] InferenceWorker_p0-w0: resuming experience collection (27950 times) +[2024-03-29 18:10:53,839][00126] Fps is (10 sec: 44237.5, 60 sec: 41779.2, 300 sec: 41820.8). Total num frames: 904298496. Throughput: 0: 42441.7. Samples: 786482000. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:10:53,840][00126] Avg episode reward: [(0, '0.509')] +[2024-03-29 18:10:55,009][00497] Updated weights for policy 0, policy_version 55198 (0.0025) +[2024-03-29 18:10:58,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42598.5, 300 sec: 41876.4). Total num frames: 904511488. Throughput: 0: 42459.1. Samples: 786725380. 
Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:10:58,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 18:10:59,293][00497] Updated weights for policy 0, policy_version 55208 (0.0018) +[2024-03-29 18:11:03,252][00497] Updated weights for policy 0, policy_version 55218 (0.0027) +[2024-03-29 18:11:03,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42598.3, 300 sec: 41820.8). Total num frames: 904708096. Throughput: 0: 42346.2. Samples: 786859160. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:11:03,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 18:11:04,003][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055220_904724480.pth... +[2024-03-29 18:11:04,380][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054608_894697472.pth +[2024-03-29 18:11:07,246][00497] Updated weights for policy 0, policy_version 55228 (0.0028) +[2024-03-29 18:11:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 904921088. Throughput: 0: 42817.8. Samples: 787125580. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0) +[2024-03-29 18:11:08,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 18:11:10,598][00497] Updated weights for policy 0, policy_version 55238 (0.0018) +[2024-03-29 18:11:13,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.2, 300 sec: 41820.8). Total num frames: 905134080. Throughput: 0: 42450.1. Samples: 787356460. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:13,841][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:11:14,949][00497] Updated weights for policy 0, policy_version 55248 (0.0031) +[2024-03-29 18:11:18,802][00497] Updated weights for policy 0, policy_version 55258 (0.0019) +[2024-03-29 18:11:18,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42871.5, 300 sec: 41876.4). Total num frames: 905347072. Throughput: 0: 42355.5. Samples: 787486300. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:18,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 18:11:22,820][00497] Updated weights for policy 0, policy_version 55268 (0.0018) +[2024-03-29 18:11:23,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 41820.8). Total num frames: 905560064. Throughput: 0: 42443.0. Samples: 787752120. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:23,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:11:26,174][00497] Updated weights for policy 0, policy_version 55278 (0.0033) +[2024-03-29 18:11:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 905756672. Throughput: 0: 42136.2. Samples: 787987140. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:28,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 18:11:28,909][00476] Signal inference workers to stop experience collection... (28000 times) +[2024-03-29 18:11:28,910][00476] Signal inference workers to resume experience collection... (28000 times) +[2024-03-29 18:11:28,955][00497] InferenceWorker_p0-w0: stopping experience collection (28000 times) +[2024-03-29 18:11:28,956][00497] InferenceWorker_p0-w0: resuming experience collection (28000 times) +[2024-03-29 18:11:30,575][00497] Updated weights for policy 0, policy_version 55288 (0.0022) +[2024-03-29 18:11:33,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.2, 300 sec: 41876.4). Total num frames: 905969664. Throughput: 0: 42166.6. Samples: 788120820. 
Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:33,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:11:34,303][00497] Updated weights for policy 0, policy_version 55298 (0.0024) +[2024-03-29 18:11:38,352][00497] Updated weights for policy 0, policy_version 55308 (0.0026) +[2024-03-29 18:11:38,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 906182656. Throughput: 0: 42388.0. Samples: 788389460. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:38,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:11:41,456][00497] Updated weights for policy 0, policy_version 55318 (0.0023) +[2024-03-29 18:11:43,839][00126] Fps is (10 sec: 44237.6, 60 sec: 42598.5, 300 sec: 41876.4). Total num frames: 906412032. Throughput: 0: 42270.7. Samples: 788627560. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:11:43,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 18:11:46,044][00497] Updated weights for policy 0, policy_version 55328 (0.0019) +[2024-03-29 18:11:48,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42325.4, 300 sec: 41876.4). Total num frames: 906608640. Throughput: 0: 42420.5. Samples: 788768080. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:11:48,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 18:11:49,815][00497] Updated weights for policy 0, policy_version 55338 (0.0023) +[2024-03-29 18:11:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 906805248. Throughput: 0: 42174.2. Samples: 789023420. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:11:53,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 18:11:54,005][00497] Updated weights for policy 0, policy_version 55348 (0.0022) +[2024-03-29 18:11:57,070][00497] Updated weights for policy 0, policy_version 55358 (0.0019) +[2024-03-29 18:11:57,344][00476] Signal inference workers to stop experience collection... (28050 times) +[2024-03-29 18:11:57,379][00497] InferenceWorker_p0-w0: stopping experience collection (28050 times) +[2024-03-29 18:11:57,569][00476] Signal inference workers to resume experience collection... (28050 times) +[2024-03-29 18:11:57,570][00497] InferenceWorker_p0-w0: resuming experience collection (28050 times) +[2024-03-29 18:11:58,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.2, 300 sec: 41931.9). Total num frames: 907051008. Throughput: 0: 41952.0. Samples: 789244300. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:11:58,841][00126] Avg episode reward: [(0, '0.645')] +[2024-03-29 18:12:01,547][00497] Updated weights for policy 0, policy_version 55368 (0.0024) +[2024-03-29 18:12:03,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.4, 300 sec: 41931.9). Total num frames: 907247616. Throughput: 0: 42375.5. Samples: 789393200. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:12:03,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 18:12:05,359][00497] Updated weights for policy 0, policy_version 55378 (0.0025) +[2024-03-29 18:12:08,839][00126] Fps is (10 sec: 39322.4, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 907444224. Throughput: 0: 42379.3. Samples: 789659180. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:12:08,841][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:12:09,275][00497] Updated weights for policy 0, policy_version 55388 (0.0020) +[2024-03-29 18:12:12,555][00497] Updated weights for policy 0, policy_version 55398 (0.0028) +[2024-03-29 18:12:13,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42598.4, 300 sec: 42043.0). Total num frames: 907689984. Throughput: 0: 42078.1. Samples: 789880660. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:12:13,840][00126] Avg episode reward: [(0, '0.500')] +[2024-03-29 18:12:17,227][00497] Updated weights for policy 0, policy_version 55408 (0.0023) +[2024-03-29 18:12:18,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 907870208. Throughput: 0: 42375.6. Samples: 790027720. Policy #0 lag: (min: 0.0, avg: 21.4, max: 43.0) +[2024-03-29 18:12:18,840][00126] Avg episode reward: [(0, '0.625')] +[2024-03-29 18:12:20,947][00497] Updated weights for policy 0, policy_version 55418 (0.0030) +[2024-03-29 18:12:23,839][00126] Fps is (10 sec: 39322.2, 60 sec: 42052.4, 300 sec: 41931.9). Total num frames: 908083200. Throughput: 0: 42122.2. Samples: 790284960. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:23,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 18:12:24,954][00497] Updated weights for policy 0, policy_version 55428 (0.0020) +[2024-03-29 18:12:28,160][00497] Updated weights for policy 0, policy_version 55438 (0.0020) +[2024-03-29 18:12:28,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42871.4, 300 sec: 42043.0). Total num frames: 908328960. Throughput: 0: 42096.4. Samples: 790521900. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:28,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 18:12:32,853][00497] Updated weights for policy 0, policy_version 55448 (0.0021) +[2024-03-29 18:12:33,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42325.3, 300 sec: 42043.0). Total num frames: 908509184. Throughput: 0: 41928.8. Samples: 790654880. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:33,840][00126] Avg episode reward: [(0, '0.659')] +[2024-03-29 18:12:36,523][00497] Updated weights for policy 0, policy_version 55458 (0.0019) +[2024-03-29 18:12:38,815][00476] Signal inference workers to stop experience collection... (28100 times) +[2024-03-29 18:12:38,816][00476] Signal inference workers to resume experience collection... (28100 times) +[2024-03-29 18:12:38,839][00126] Fps is (10 sec: 37682.9, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 908705792. Throughput: 0: 41716.3. Samples: 790900660. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:38,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:12:38,854][00497] InferenceWorker_p0-w0: stopping experience collection (28100 times) +[2024-03-29 18:12:38,854][00497] InferenceWorker_p0-w0: resuming experience collection (28100 times) +[2024-03-29 18:12:40,766][00497] Updated weights for policy 0, policy_version 55468 (0.0023) +[2024-03-29 18:12:43,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 908935168. Throughput: 0: 42184.1. Samples: 791142580. 
Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:43,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:12:43,902][00497] Updated weights for policy 0, policy_version 55478 (0.0027) +[2024-03-29 18:12:48,405][00497] Updated weights for policy 0, policy_version 55488 (0.0025) +[2024-03-29 18:12:48,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 909131776. Throughput: 0: 41761.8. Samples: 791272480. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:48,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:12:52,061][00497] Updated weights for policy 0, policy_version 55498 (0.0023) +[2024-03-29 18:12:53,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 909328384. Throughput: 0: 41544.8. Samples: 791528700. Policy #0 lag: (min: 0.0, avg: 20.2, max: 42.0) +[2024-03-29 18:12:53,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:12:56,311][00497] Updated weights for policy 0, policy_version 55508 (0.0028) +[2024-03-29 18:12:58,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 909574144. Throughput: 0: 42316.1. Samples: 791784880. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:12:58,840][00126] Avg episode reward: [(0, '0.509')] +[2024-03-29 18:12:59,407][00497] Updated weights for policy 0, policy_version 55518 (0.0019) +[2024-03-29 18:13:03,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 909754368. Throughput: 0: 41551.2. Samples: 791897520. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:03,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:13:03,892][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055528_909770752.pth... +[2024-03-29 18:13:03,917][00497] Updated weights for policy 0, policy_version 55528 (0.0022) +[2024-03-29 18:13:04,197][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000054914_899710976.pth +[2024-03-29 18:13:07,930][00497] Updated weights for policy 0, policy_version 55538 (0.0019) +[2024-03-29 18:13:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.2, 300 sec: 42098.6). Total num frames: 909967360. Throughput: 0: 41630.2. Samples: 792158320. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:08,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:13:12,087][00497] Updated weights for policy 0, policy_version 55548 (0.0024) +[2024-03-29 18:13:13,660][00476] Signal inference workers to stop experience collection... (28150 times) +[2024-03-29 18:13:13,695][00497] InferenceWorker_p0-w0: stopping experience collection (28150 times) +[2024-03-29 18:13:13,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.3, 300 sec: 41932.0). Total num frames: 910180352. Throughput: 0: 41996.9. Samples: 792411760. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:13,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:13:13,886][00476] Signal inference workers to resume experience collection... (28150 times) +[2024-03-29 18:13:13,887][00497] InferenceWorker_p0-w0: resuming experience collection (28150 times) +[2024-03-29 18:13:15,186][00497] Updated weights for policy 0, policy_version 55558 (0.0026) +[2024-03-29 18:13:18,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 910376960. Throughput: 0: 41337.5. Samples: 792515060. 
Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:18,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 18:13:19,824][00497] Updated weights for policy 0, policy_version 55568 (0.0024) +[2024-03-29 18:13:23,571][00497] Updated weights for policy 0, policy_version 55578 (0.0019) +[2024-03-29 18:13:23,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 910589952. Throughput: 0: 41801.8. Samples: 792781740. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:23,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:13:27,859][00497] Updated weights for policy 0, policy_version 55588 (0.0022) +[2024-03-29 18:13:28,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41233.1, 300 sec: 41931.9). Total num frames: 910802944. Throughput: 0: 42396.1. Samples: 793050400. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0) +[2024-03-29 18:13:28,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:13:30,791][00497] Updated weights for policy 0, policy_version 55598 (0.0037) +[2024-03-29 18:13:33,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 911015936. Throughput: 0: 41866.6. Samples: 793156480. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:33,841][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:13:35,259][00497] Updated weights for policy 0, policy_version 55608 (0.0027) +[2024-03-29 18:13:38,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.4, 300 sec: 42043.0). Total num frames: 911228928. Throughput: 0: 42045.0. Samples: 793420720. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:38,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 18:13:38,941][00497] Updated weights for policy 0, policy_version 55618 (0.0023) +[2024-03-29 18:13:43,167][00497] Updated weights for policy 0, policy_version 55628 (0.0027) +[2024-03-29 18:13:43,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 911441920. Throughput: 0: 42313.4. Samples: 793688980. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:43,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 18:13:46,172][00497] Updated weights for policy 0, policy_version 55638 (0.0034) +[2024-03-29 18:13:48,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 911654912. Throughput: 0: 42038.2. Samples: 793789240. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:48,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 18:13:49,089][00476] Signal inference workers to stop experience collection... (28200 times) +[2024-03-29 18:13:49,128][00497] InferenceWorker_p0-w0: stopping experience collection (28200 times) +[2024-03-29 18:13:49,176][00476] Signal inference workers to resume experience collection... (28200 times) +[2024-03-29 18:13:49,177][00497] InferenceWorker_p0-w0: resuming experience collection (28200 times) +[2024-03-29 18:13:50,553][00497] Updated weights for policy 0, policy_version 55648 (0.0021) +[2024-03-29 18:13:53,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.4, 300 sec: 42098.6). Total num frames: 911867904. Throughput: 0: 42276.5. Samples: 794060760. 
Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:53,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:13:54,311][00497] Updated weights for policy 0, policy_version 55658 (0.0019) +[2024-03-29 18:13:58,428][00497] Updated weights for policy 0, policy_version 55668 (0.0028) +[2024-03-29 18:13:58,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 912064512. Throughput: 0: 42477.7. Samples: 794323260. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:13:58,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 18:14:01,469][00497] Updated weights for policy 0, policy_version 55678 (0.0036) +[2024-03-29 18:14:03,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42598.3, 300 sec: 42154.1). Total num frames: 912310272. Throughput: 0: 42631.9. Samples: 794433500. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 18:14:03,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:14:05,991][00497] Updated weights for policy 0, policy_version 55688 (0.0022) +[2024-03-29 18:14:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 912506880. Throughput: 0: 42628.5. Samples: 794700020. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:08,840][00126] Avg episode reward: [(0, '0.509')] +[2024-03-29 18:14:09,937][00497] Updated weights for policy 0, policy_version 55698 (0.0020) +[2024-03-29 18:14:13,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 912703488. Throughput: 0: 42320.8. Samples: 794954840. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:13,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 18:14:14,222][00497] Updated weights for policy 0, policy_version 55708 (0.0028) +[2024-03-29 18:14:17,199][00497] Updated weights for policy 0, policy_version 55718 (0.0023) +[2024-03-29 18:14:18,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42598.3, 300 sec: 42098.5). Total num frames: 912932864. Throughput: 0: 42410.2. Samples: 795064940. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:18,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 18:14:21,666][00497] Updated weights for policy 0, policy_version 55728 (0.0027) +[2024-03-29 18:14:23,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42598.5, 300 sec: 42209.7). Total num frames: 913145856. Throughput: 0: 42442.7. Samples: 795330640. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:23,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:14:25,572][00497] Updated weights for policy 0, policy_version 55738 (0.0033) +[2024-03-29 18:14:25,582][00476] Signal inference workers to stop experience collection... (28250 times) +[2024-03-29 18:14:25,583][00476] Signal inference workers to resume experience collection... (28250 times) +[2024-03-29 18:14:25,622][00497] InferenceWorker_p0-w0: stopping experience collection (28250 times) +[2024-03-29 18:14:25,622][00497] InferenceWorker_p0-w0: resuming experience collection (28250 times) +[2024-03-29 18:14:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 913342464. Throughput: 0: 42160.0. Samples: 795586180. 
Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:28,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 18:14:29,774][00497] Updated weights for policy 0, policy_version 55748 (0.0024) +[2024-03-29 18:14:32,606][00497] Updated weights for policy 0, policy_version 55758 (0.0025) +[2024-03-29 18:14:33,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42598.4, 300 sec: 42154.1). Total num frames: 913571840. Throughput: 0: 42707.1. Samples: 795711060. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:33,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:14:36,924][00497] Updated weights for policy 0, policy_version 55768 (0.0019) +[2024-03-29 18:14:38,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42598.3, 300 sec: 42265.4). Total num frames: 913784832. Throughput: 0: 42325.3. Samples: 795965400. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0) +[2024-03-29 18:14:38,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 18:14:40,898][00497] Updated weights for policy 0, policy_version 55778 (0.0018) +[2024-03-29 18:14:43,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42325.4, 300 sec: 42209.6). Total num frames: 913981440. Throughput: 0: 42275.2. Samples: 796225640. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:14:43,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 18:14:45,319][00497] Updated weights for policy 0, policy_version 55788 (0.0023) +[2024-03-29 18:14:48,247][00497] Updated weights for policy 0, policy_version 55798 (0.0029) +[2024-03-29 18:14:48,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42598.4, 300 sec: 42098.5). Total num frames: 914210816. Throughput: 0: 42641.4. Samples: 796352360. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:14:48,840][00126] Avg episode reward: [(0, '0.655')] +[2024-03-29 18:14:52,647][00497] Updated weights for policy 0, policy_version 55808 (0.0018) +[2024-03-29 18:14:53,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 914391040. Throughput: 0: 41986.2. Samples: 796589400. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:14:53,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 18:14:56,527][00497] Updated weights for policy 0, policy_version 55818 (0.0018) +[2024-03-29 18:14:58,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42325.3, 300 sec: 42209.6). Total num frames: 914604032. Throughput: 0: 42131.1. Samples: 796850740. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:14:58,840][00126] Avg episode reward: [(0, '0.680')] +[2024-03-29 18:15:00,953][00497] Updated weights for policy 0, policy_version 55828 (0.0023) +[2024-03-29 18:15:02,492][00476] Signal inference workers to stop experience collection... (28300 times) +[2024-03-29 18:15:02,524][00497] InferenceWorker_p0-w0: stopping experience collection (28300 times) +[2024-03-29 18:15:02,712][00476] Signal inference workers to resume experience collection... (28300 times) +[2024-03-29 18:15:02,712][00497] InferenceWorker_p0-w0: resuming experience collection (28300 times) +[2024-03-29 18:15:03,839][00126] Fps is (10 sec: 44236.3, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 914833408. Throughput: 0: 42566.7. Samples: 796980440. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:15:03,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 18:15:03,863][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055838_914849792.pth... 
+[2024-03-29 18:15:03,876][00497] Updated weights for policy 0, policy_version 55838 (0.0023) +[2024-03-29 18:15:04,200][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055220_904724480.pth +[2024-03-29 18:15:08,450][00497] Updated weights for policy 0, policy_version 55848 (0.0023) +[2024-03-29 18:15:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 915030016. Throughput: 0: 41911.4. Samples: 797216660. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:15:08,841][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 18:15:12,274][00497] Updated weights for policy 0, policy_version 55858 (0.0022) +[2024-03-29 18:15:13,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 915243008. Throughput: 0: 41973.7. Samples: 797475000. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 18:15:13,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 18:15:16,640][00497] Updated weights for policy 0, policy_version 55868 (0.0019) +[2024-03-29 18:15:18,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 915456000. Throughput: 0: 41863.1. Samples: 797594900. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:18,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 18:15:19,737][00497] Updated weights for policy 0, policy_version 55878 (0.0023) +[2024-03-29 18:15:23,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41506.0, 300 sec: 42098.5). Total num frames: 915636224. Throughput: 0: 41721.7. Samples: 797842880. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:23,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 18:15:24,301][00497] Updated weights for policy 0, policy_version 55888 (0.0022) +[2024-03-29 18:15:28,028][00497] Updated weights for policy 0, policy_version 55898 (0.0020) +[2024-03-29 18:15:28,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.3, 300 sec: 42154.1). Total num frames: 915865600. Throughput: 0: 41561.7. Samples: 798095920. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:28,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 18:15:31,971][00497] Updated weights for policy 0, policy_version 55908 (0.0033) +[2024-03-29 18:15:33,839][00126] Fps is (10 sec: 44237.3, 60 sec: 41779.2, 300 sec: 42098.5). Total num frames: 916078592. Throughput: 0: 41661.8. Samples: 798227140. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:33,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:15:35,303][00497] Updated weights for policy 0, policy_version 55918 (0.0022) +[2024-03-29 18:15:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 42098.6). Total num frames: 916275200. Throughput: 0: 41725.4. Samples: 798467040. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:38,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 18:15:39,842][00497] Updated weights for policy 0, policy_version 55928 (0.0031) +[2024-03-29 18:15:40,042][00476] Signal inference workers to stop experience collection... (28350 times) +[2024-03-29 18:15:40,067][00497] InferenceWorker_p0-w0: stopping experience collection (28350 times) +[2024-03-29 18:15:40,220][00476] Signal inference workers to resume experience collection... 
(28350 times) +[2024-03-29 18:15:40,220][00497] InferenceWorker_p0-w0: resuming experience collection (28350 times) +[2024-03-29 18:15:43,630][00497] Updated weights for policy 0, policy_version 55938 (0.0025) +[2024-03-29 18:15:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.1, 300 sec: 42098.6). Total num frames: 916488192. Throughput: 0: 41670.2. Samples: 798725900. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:43,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 18:15:47,604][00497] Updated weights for policy 0, policy_version 55948 (0.0018) +[2024-03-29 18:15:48,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 916701184. Throughput: 0: 41728.6. Samples: 798858220. Policy #0 lag: (min: 1.0, avg: 20.9, max: 41.0) +[2024-03-29 18:15:48,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:15:50,882][00497] Updated weights for policy 0, policy_version 55958 (0.0020) +[2024-03-29 18:15:53,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 916914176. Throughput: 0: 41831.2. Samples: 799099060. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:15:53,840][00126] Avg episode reward: [(0, '0.471')] +[2024-03-29 18:15:55,265][00497] Updated weights for policy 0, policy_version 55968 (0.0023) +[2024-03-29 18:15:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 917127168. Throughput: 0: 41958.3. Samples: 799363120. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:15:58,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 18:15:58,868][00497] Updated weights for policy 0, policy_version 55978 (0.0022) +[2024-03-29 18:16:02,883][00497] Updated weights for policy 0, policy_version 55988 (0.0027) +[2024-03-29 18:16:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.2, 300 sec: 42098.5). Total num frames: 917340160. Throughput: 0: 42347.5. Samples: 799500540. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:16:03,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 18:16:06,371][00497] Updated weights for policy 0, policy_version 55998 (0.0033) +[2024-03-29 18:16:08,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 917553152. Throughput: 0: 42116.5. Samples: 799738120. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:16:08,840][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 18:16:10,640][00497] Updated weights for policy 0, policy_version 56008 (0.0022) +[2024-03-29 18:16:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.3, 300 sec: 42098.5). Total num frames: 917766144. Throughput: 0: 42418.6. Samples: 800004760. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:16:13,840][00126] Avg episode reward: [(0, '0.657')] +[2024-03-29 18:16:14,226][00497] Updated weights for policy 0, policy_version 56018 (0.0031) +[2024-03-29 18:16:14,549][00476] Signal inference workers to stop experience collection... (28400 times) +[2024-03-29 18:16:14,580][00497] InferenceWorker_p0-w0: stopping experience collection (28400 times) +[2024-03-29 18:16:14,750][00476] Signal inference workers to resume experience collection... 
(28400 times) +[2024-03-29 18:16:14,750][00497] InferenceWorker_p0-w0: resuming experience collection (28400 times) +[2024-03-29 18:16:18,261][00497] Updated weights for policy 0, policy_version 56028 (0.0024) +[2024-03-29 18:16:18,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 917962752. Throughput: 0: 42421.8. Samples: 800136120. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:16:18,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 18:16:21,642][00497] Updated weights for policy 0, policy_version 56038 (0.0034) +[2024-03-29 18:16:23,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42598.5, 300 sec: 42154.1). Total num frames: 918192128. Throughput: 0: 42366.2. Samples: 800373520. Policy #0 lag: (min: 0.0, avg: 23.3, max: 43.0) +[2024-03-29 18:16:23,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:16:26,124][00497] Updated weights for policy 0, policy_version 56048 (0.0020) +[2024-03-29 18:16:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 918405120. Throughput: 0: 42637.3. Samples: 800644580. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:28,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 18:16:29,676][00497] Updated weights for policy 0, policy_version 56058 (0.0019) +[2024-03-29 18:16:33,507][00497] Updated weights for policy 0, policy_version 56068 (0.0026) +[2024-03-29 18:16:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 918618112. Throughput: 0: 42620.3. Samples: 800776140. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:33,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 18:16:36,807][00497] Updated weights for policy 0, policy_version 56078 (0.0025) +[2024-03-29 18:16:38,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42871.4, 300 sec: 42154.1). Total num frames: 918847488. Throughput: 0: 42588.9. Samples: 801015560. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:38,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 18:16:41,477][00497] Updated weights for policy 0, policy_version 56088 (0.0018) +[2024-03-29 18:16:43,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42325.4, 300 sec: 42098.6). Total num frames: 919027712. Throughput: 0: 42540.4. Samples: 801277440. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:43,840][00126] Avg episode reward: [(0, '0.634')] +[2024-03-29 18:16:45,202][00497] Updated weights for policy 0, policy_version 56098 (0.0024) +[2024-03-29 18:16:48,095][00476] Signal inference workers to stop experience collection... (28450 times) +[2024-03-29 18:16:48,139][00497] InferenceWorker_p0-w0: stopping experience collection (28450 times) +[2024-03-29 18:16:48,177][00476] Signal inference workers to resume experience collection... (28450 times) +[2024-03-29 18:16:48,179][00497] InferenceWorker_p0-w0: resuming experience collection (28450 times) +[2024-03-29 18:16:48,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 919257088. Throughput: 0: 42174.4. Samples: 801398380. 
Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:48,840][00126] Avg episode reward: [(0, '0.633')] +[2024-03-29 18:16:49,050][00497] Updated weights for policy 0, policy_version 56108 (0.0027) +[2024-03-29 18:16:52,550][00497] Updated weights for policy 0, policy_version 56118 (0.0019) +[2024-03-29 18:16:53,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42598.4, 300 sec: 42098.6). Total num frames: 919470080. Throughput: 0: 42557.8. Samples: 801653220. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:53,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:16:57,183][00497] Updated weights for policy 0, policy_version 56128 (0.0022) +[2024-03-29 18:16:58,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.3, 300 sec: 42098.6). Total num frames: 919666688. Throughput: 0: 42445.0. Samples: 801914780. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:16:58,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 18:17:00,795][00497] Updated weights for policy 0, policy_version 56138 (0.0023) +[2024-03-29 18:17:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 919896064. Throughput: 0: 42320.3. Samples: 802040540. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:03,842][00126] Avg episode reward: [(0, '0.465')] +[2024-03-29 18:17:04,002][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056147_919912448.pth... +[2024-03-29 18:17:04,313][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055528_909770752.pth +[2024-03-29 18:17:04,652][00497] Updated weights for policy 0, policy_version 56148 (0.0026) +[2024-03-29 18:17:07,976][00497] Updated weights for policy 0, policy_version 56158 (0.0030) +[2024-03-29 18:17:08,840][00126] Fps is (10 sec: 45872.7, 60 sec: 42871.2, 300 sec: 42154.0). Total num frames: 920125440. Throughput: 0: 42704.5. Samples: 802295240. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:08,840][00126] Avg episode reward: [(0, '0.522')] +[2024-03-29 18:17:12,611][00497] Updated weights for policy 0, policy_version 56168 (0.0021) +[2024-03-29 18:17:13,839][00126] Fps is (10 sec: 40960.8, 60 sec: 42325.4, 300 sec: 42154.1). Total num frames: 920305664. Throughput: 0: 42406.3. Samples: 802552860. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:13,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 18:17:16,186][00497] Updated weights for policy 0, policy_version 56178 (0.0022) +[2024-03-29 18:17:18,839][00126] Fps is (10 sec: 40962.0, 60 sec: 42871.5, 300 sec: 42209.6). Total num frames: 920535040. Throughput: 0: 42366.3. Samples: 802682620. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:18,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:17:19,715][00497] Updated weights for policy 0, policy_version 56188 (0.0018) +[2024-03-29 18:17:23,280][00497] Updated weights for policy 0, policy_version 56198 (0.0021) +[2024-03-29 18:17:23,839][00126] Fps is (10 sec: 45875.3, 60 sec: 42871.5, 300 sec: 42154.1). Total num frames: 920764416. Throughput: 0: 42786.3. Samples: 802940940. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:23,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:17:27,812][00497] Updated weights for policy 0, policy_version 56208 (0.0030) +[2024-03-29 18:17:27,814][00476] Signal inference workers to stop experience collection... 
(28500 times) +[2024-03-29 18:17:27,815][00476] Signal inference workers to resume experience collection... (28500 times) +[2024-03-29 18:17:27,858][00497] InferenceWorker_p0-w0: stopping experience collection (28500 times) +[2024-03-29 18:17:27,859][00497] InferenceWorker_p0-w0: resuming experience collection (28500 times) +[2024-03-29 18:17:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42598.4, 300 sec: 42209.6). Total num frames: 920961024. Throughput: 0: 42687.1. Samples: 803198360. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:28,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 18:17:31,517][00497] Updated weights for policy 0, policy_version 56218 (0.0019) +[2024-03-29 18:17:33,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42598.4, 300 sec: 42265.2). Total num frames: 921174016. Throughput: 0: 42675.0. Samples: 803318760. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:17:33,841][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 18:17:35,385][00497] Updated weights for policy 0, policy_version 56228 (0.0026) +[2024-03-29 18:17:38,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.4, 300 sec: 42209.6). Total num frames: 921387008. Throughput: 0: 42605.0. Samples: 803570440. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:17:38,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 18:17:39,023][00497] Updated weights for policy 0, policy_version 56238 (0.0023) +[2024-03-29 18:17:43,632][00497] Updated weights for policy 0, policy_version 56248 (0.0020) +[2024-03-29 18:17:43,839][00126] Fps is (10 sec: 39321.4, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 921567232. Throughput: 0: 42266.1. Samples: 803816760. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:17:43,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 18:17:47,297][00497] Updated weights for policy 0, policy_version 56258 (0.0022) +[2024-03-29 18:17:48,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42325.3, 300 sec: 42265.2). Total num frames: 921796608. Throughput: 0: 42372.2. Samples: 803947280. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:17:48,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:17:50,994][00497] Updated weights for policy 0, policy_version 56268 (0.0022) +[2024-03-29 18:17:53,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 921993216. Throughput: 0: 42037.3. Samples: 804186900. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:17:53,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 18:17:54,863][00497] Updated weights for policy 0, policy_version 56278 (0.0025) +[2024-03-29 18:17:58,839][00126] Fps is (10 sec: 39321.6, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 922189824. Throughput: 0: 41855.1. Samples: 804436340. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:17:58,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 18:17:59,218][00497] Updated weights for policy 0, policy_version 56288 (0.0023) +[2024-03-29 18:18:02,918][00497] Updated weights for policy 0, policy_version 56298 (0.0022) +[2024-03-29 18:18:03,441][00476] Signal inference workers to stop experience collection... (28550 times) +[2024-03-29 18:18:03,473][00497] InferenceWorker_p0-w0: stopping experience collection (28550 times) +[2024-03-29 18:18:03,636][00476] Signal inference workers to resume experience collection... 
(28550 times) +[2024-03-29 18:18:03,637][00497] InferenceWorker_p0-w0: resuming experience collection (28550 times) +[2024-03-29 18:18:03,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 922419200. Throughput: 0: 41836.7. Samples: 804565280. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:18:03,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 18:18:06,568][00497] Updated weights for policy 0, policy_version 56308 (0.0037) +[2024-03-29 18:18:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.4, 300 sec: 42154.1). Total num frames: 922615808. Throughput: 0: 41605.7. Samples: 804813200. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0) +[2024-03-29 18:18:08,842][00126] Avg episode reward: [(0, '0.618')] +[2024-03-29 18:18:10,595][00497] Updated weights for policy 0, policy_version 56318 (0.0022) +[2024-03-29 18:18:13,839][00126] Fps is (10 sec: 40960.8, 60 sec: 42052.3, 300 sec: 42209.6). Total num frames: 922828800. Throughput: 0: 41505.0. Samples: 805066080. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:13,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:18:14,906][00497] Updated weights for policy 0, policy_version 56328 (0.0026) +[2024-03-29 18:18:18,420][00497] Updated weights for policy 0, policy_version 56338 (0.0027) +[2024-03-29 18:18:18,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.2, 300 sec: 42209.6). Total num frames: 923041792. Throughput: 0: 41721.0. Samples: 805196200. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:18,840][00126] Avg episode reward: [(0, '0.636')] +[2024-03-29 18:18:22,049][00497] Updated weights for policy 0, policy_version 56348 (0.0031) +[2024-03-29 18:18:23,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.1, 300 sec: 42209.6). Total num frames: 923254784. Throughput: 0: 41832.8. Samples: 805452920. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:23,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 18:18:25,905][00497] Updated weights for policy 0, policy_version 56358 (0.0020) +[2024-03-29 18:18:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42052.3, 300 sec: 42265.2). Total num frames: 923484160. Throughput: 0: 42023.2. Samples: 805707800. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:28,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 18:18:30,312][00497] Updated weights for policy 0, policy_version 56368 (0.0030) +[2024-03-29 18:18:33,838][00497] Updated weights for policy 0, policy_version 56378 (0.0024) +[2024-03-29 18:18:33,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.2, 300 sec: 42265.1). Total num frames: 923697152. Throughput: 0: 41867.0. Samples: 805831300. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:33,840][00126] Avg episode reward: [(0, '0.435')] +[2024-03-29 18:18:37,174][00476] Signal inference workers to stop experience collection... (28600 times) +[2024-03-29 18:18:37,209][00497] InferenceWorker_p0-w0: stopping experience collection (28600 times) +[2024-03-29 18:18:37,398][00476] Signal inference workers to resume experience collection... (28600 times) +[2024-03-29 18:18:37,399][00497] InferenceWorker_p0-w0: resuming experience collection (28600 times) +[2024-03-29 18:18:37,657][00497] Updated weights for policy 0, policy_version 56388 (0.0020) +[2024-03-29 18:18:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 42209.6). Total num frames: 923893760. Throughput: 0: 42438.7. Samples: 806096640. 
Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:38,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:18:41,697][00497] Updated weights for policy 0, policy_version 56398 (0.0033) +[2024-03-29 18:18:43,839][00126] Fps is (10 sec: 40960.8, 60 sec: 42325.5, 300 sec: 42209.6). Total num frames: 924106752. Throughput: 0: 42132.5. Samples: 806332300. Policy #0 lag: (min: 0.0, avg: 21.1, max: 41.0) +[2024-03-29 18:18:43,840][00126] Avg episode reward: [(0, '0.463')] +[2024-03-29 18:18:45,873][00497] Updated weights for policy 0, policy_version 56408 (0.0025) +[2024-03-29 18:18:48,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 42154.1). Total num frames: 924303360. Throughput: 0: 42163.7. Samples: 806462640. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:18:48,840][00126] Avg episode reward: [(0, '0.603')] +[2024-03-29 18:18:49,437][00497] Updated weights for policy 0, policy_version 56418 (0.0027) +[2024-03-29 18:18:53,248][00497] Updated weights for policy 0, policy_version 56428 (0.0024) +[2024-03-29 18:18:53,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42598.5, 300 sec: 42320.7). Total num frames: 924549120. Throughput: 0: 42424.1. Samples: 806722280. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:18:53,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:18:57,111][00497] Updated weights for policy 0, policy_version 56438 (0.0030) +[2024-03-29 18:18:58,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42325.3, 300 sec: 42098.6). Total num frames: 924729344. Throughput: 0: 42049.7. Samples: 806958320. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:18:58,840][00126] Avg episode reward: [(0, '0.519')] +[2024-03-29 18:19:01,451][00497] Updated weights for policy 0, policy_version 56448 (0.0023) +[2024-03-29 18:19:03,839][00126] Fps is (10 sec: 39320.7, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 924942336. Throughput: 0: 42220.7. Samples: 807096140. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:19:03,840][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 18:19:04,049][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056455_924958720.pth... +[2024-03-29 18:19:04,389][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000055838_914849792.pth +[2024-03-29 18:19:05,223][00497] Updated weights for policy 0, policy_version 56458 (0.0027) +[2024-03-29 18:19:07,818][00476] Signal inference workers to stop experience collection... (28650 times) +[2024-03-29 18:19:07,892][00497] InferenceWorker_p0-w0: stopping experience collection (28650 times) +[2024-03-29 18:19:07,894][00476] Signal inference workers to resume experience collection... (28650 times) +[2024-03-29 18:19:07,916][00497] InferenceWorker_p0-w0: resuming experience collection (28650 times) +[2024-03-29 18:19:08,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.4, 300 sec: 42209.6). Total num frames: 925155328. Throughput: 0: 42067.7. Samples: 807345960. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:19:08,840][00126] Avg episode reward: [(0, '0.515')] +[2024-03-29 18:19:08,926][00497] Updated weights for policy 0, policy_version 56468 (0.0025) +[2024-03-29 18:19:12,884][00497] Updated weights for policy 0, policy_version 56478 (0.0021) +[2024-03-29 18:19:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.2, 300 sec: 42154.1). Total num frames: 925368320. Throughput: 0: 41390.6. Samples: 807570380. 
Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:19:13,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:19:17,639][00497] Updated weights for policy 0, policy_version 56488 (0.0032) +[2024-03-29 18:19:18,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 925548544. Throughput: 0: 41609.8. Samples: 807703740. Policy #0 lag: (min: 1.0, avg: 20.5, max: 42.0) +[2024-03-29 18:19:18,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:19:21,389][00497] Updated weights for policy 0, policy_version 56498 (0.0032) +[2024-03-29 18:19:23,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.2, 300 sec: 42098.5). Total num frames: 925761536. Throughput: 0: 41306.6. Samples: 807955440. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:23,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 18:19:24,998][00497] Updated weights for policy 0, policy_version 56508 (0.0020) +[2024-03-29 18:19:28,834][00497] Updated weights for policy 0, policy_version 56518 (0.0024) +[2024-03-29 18:19:28,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41779.2, 300 sec: 42098.6). Total num frames: 925990912. Throughput: 0: 41617.7. Samples: 808205100. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:28,840][00126] Avg episode reward: [(0, '0.619')] +[2024-03-29 18:19:33,359][00497] Updated weights for policy 0, policy_version 56528 (0.0025) +[2024-03-29 18:19:33,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41233.1, 300 sec: 41987.5). Total num frames: 926171136. Throughput: 0: 41430.2. Samples: 808327000. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:33,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:19:37,080][00497] Updated weights for policy 0, policy_version 56538 (0.0025) +[2024-03-29 18:19:38,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 926384128. Throughput: 0: 41282.5. Samples: 808580000. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:38,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:19:40,813][00497] Updated weights for policy 0, policy_version 56548 (0.0021) +[2024-03-29 18:19:40,836][00476] Signal inference workers to stop experience collection... (28700 times) +[2024-03-29 18:19:40,870][00497] InferenceWorker_p0-w0: stopping experience collection (28700 times) +[2024-03-29 18:19:41,059][00476] Signal inference workers to resume experience collection... (28700 times) +[2024-03-29 18:19:41,060][00497] InferenceWorker_p0-w0: resuming experience collection (28700 times) +[2024-03-29 18:19:43,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.0, 300 sec: 41987.5). Total num frames: 926597120. Throughput: 0: 41629.2. Samples: 808831640. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:43,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 18:19:44,578][00497] Updated weights for policy 0, policy_version 56558 (0.0027) +[2024-03-29 18:19:48,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 926793728. Throughput: 0: 41165.6. Samples: 808948580. 
Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:48,841][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:19:49,090][00497] Updated weights for policy 0, policy_version 56568 (0.0020) +[2024-03-29 18:19:52,840][00497] Updated weights for policy 0, policy_version 56578 (0.0021) +[2024-03-29 18:19:53,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41233.0, 300 sec: 42098.6). Total num frames: 927023104. Throughput: 0: 41397.7. Samples: 809208860. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0) +[2024-03-29 18:19:53,840][00126] Avg episode reward: [(0, '0.648')] +[2024-03-29 18:19:56,307][00497] Updated weights for policy 0, policy_version 56588 (0.0031) +[2024-03-29 18:19:58,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 927219712. Throughput: 0: 41844.1. Samples: 809453360. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:19:58,840][00126] Avg episode reward: [(0, '0.641')] +[2024-03-29 18:20:00,223][00497] Updated weights for policy 0, policy_version 56598 (0.0019) +[2024-03-29 18:20:03,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 927432704. Throughput: 0: 41491.0. Samples: 809570840. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:03,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 18:20:04,763][00497] Updated weights for policy 0, policy_version 56608 (0.0026) +[2024-03-29 18:20:08,409][00497] Updated weights for policy 0, policy_version 56618 (0.0028) +[2024-03-29 18:20:08,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 42043.0). Total num frames: 927645696. Throughput: 0: 41944.9. Samples: 809842960. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:08,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 18:20:11,481][00476] Signal inference workers to stop experience collection... (28750 times) +[2024-03-29 18:20:11,504][00497] InferenceWorker_p0-w0: stopping experience collection (28750 times) +[2024-03-29 18:20:11,680][00476] Signal inference workers to resume experience collection... (28750 times) +[2024-03-29 18:20:11,681][00497] InferenceWorker_p0-w0: resuming experience collection (28750 times) +[2024-03-29 18:20:11,685][00497] Updated weights for policy 0, policy_version 56628 (0.0027) +[2024-03-29 18:20:13,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 927858688. Throughput: 0: 41877.3. Samples: 810089580. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:13,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 18:20:15,709][00497] Updated weights for policy 0, policy_version 56638 (0.0034) +[2024-03-29 18:20:18,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42052.2, 300 sec: 42154.1). Total num frames: 928071680. Throughput: 0: 41791.0. Samples: 810207600. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:18,842][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:20:20,195][00497] Updated weights for policy 0, policy_version 56648 (0.0019) +[2024-03-29 18:20:23,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 928268288. Throughput: 0: 42205.3. Samples: 810479240. 
Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:23,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 18:20:23,979][00497] Updated weights for policy 0, policy_version 56658 (0.0025) +[2024-03-29 18:20:27,335][00497] Updated weights for policy 0, policy_version 56668 (0.0020) +[2024-03-29 18:20:28,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 928481280. Throughput: 0: 41822.4. Samples: 810713640. Policy #0 lag: (min: 2.0, avg: 21.7, max: 41.0) +[2024-03-29 18:20:28,840][00126] Avg episode reward: [(0, '0.479')] +[2024-03-29 18:20:31,311][00497] Updated weights for policy 0, policy_version 56678 (0.0018) +[2024-03-29 18:20:33,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 928694272. Throughput: 0: 42168.9. Samples: 810846180. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:33,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:20:35,503][00497] Updated weights for policy 0, policy_version 56688 (0.0020) +[2024-03-29 18:20:38,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.3, 300 sec: 42043.0). Total num frames: 928890880. Throughput: 0: 42157.8. Samples: 811105960. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:38,840][00126] Avg episode reward: [(0, '0.484')] +[2024-03-29 18:20:39,453][00497] Updated weights for policy 0, policy_version 56698 (0.0033) +[2024-03-29 18:20:42,861][00497] Updated weights for policy 0, policy_version 56708 (0.0024) +[2024-03-29 18:20:43,839][00126] Fps is (10 sec: 44235.8, 60 sec: 42325.3, 300 sec: 42154.1). Total num frames: 929136640. Throughput: 0: 42180.3. Samples: 811351480. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:43,840][00126] Avg episode reward: [(0, '0.580')] +[2024-03-29 18:20:45,850][00476] Signal inference workers to stop experience collection... (28800 times) +[2024-03-29 18:20:45,914][00497] InferenceWorker_p0-w0: stopping experience collection (28800 times) +[2024-03-29 18:20:45,922][00476] Signal inference workers to resume experience collection... (28800 times) +[2024-03-29 18:20:45,941][00497] InferenceWorker_p0-w0: resuming experience collection (28800 times) +[2024-03-29 18:20:46,753][00497] Updated weights for policy 0, policy_version 56718 (0.0031) +[2024-03-29 18:20:48,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.3, 300 sec: 42098.5). Total num frames: 929333248. Throughput: 0: 42584.5. Samples: 811487140. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:48,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:20:50,895][00497] Updated weights for policy 0, policy_version 56728 (0.0019) +[2024-03-29 18:20:53,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 929529856. Throughput: 0: 42092.0. Samples: 811737100. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:53,841][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:20:54,993][00497] Updated weights for policy 0, policy_version 56738 (0.0021) +[2024-03-29 18:20:58,485][00497] Updated weights for policy 0, policy_version 56748 (0.0021) +[2024-03-29 18:20:58,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42598.4, 300 sec: 42154.1). Total num frames: 929775616. Throughput: 0: 42265.0. Samples: 811991500. 
Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:20:58,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 18:21:02,489][00497] Updated weights for policy 0, policy_version 56758 (0.0025) +[2024-03-29 18:21:03,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 929955840. Throughput: 0: 42412.0. Samples: 812116140. Policy #0 lag: (min: 1.0, avg: 22.8, max: 43.0) +[2024-03-29 18:21:03,840][00126] Avg episode reward: [(0, '0.568')] +[2024-03-29 18:21:04,172][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056762_929988608.pth... +[2024-03-29 18:21:04,485][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056147_919912448.pth +[2024-03-29 18:21:06,717][00497] Updated weights for policy 0, policy_version 56768 (0.0026) +[2024-03-29 18:21:08,839][00126] Fps is (10 sec: 39321.0, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 930168832. Throughput: 0: 41903.5. Samples: 812364900. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:08,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:21:10,931][00497] Updated weights for policy 0, policy_version 56778 (0.0027) +[2024-03-29 18:21:13,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42052.3, 300 sec: 42098.6). Total num frames: 930381824. Throughput: 0: 42122.6. Samples: 812609160. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:13,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 18:21:14,198][00497] Updated weights for policy 0, policy_version 56788 (0.0035) +[2024-03-29 18:21:18,285][00476] Signal inference workers to stop experience collection... (28850 times) +[2024-03-29 18:21:18,327][00497] InferenceWorker_p0-w0: stopping experience collection (28850 times) +[2024-03-29 18:21:18,362][00476] Signal inference workers to resume experience collection... (28850 times) +[2024-03-29 18:21:18,365][00497] InferenceWorker_p0-w0: resuming experience collection (28850 times) +[2024-03-29 18:21:18,368][00497] Updated weights for policy 0, policy_version 56798 (0.0028) +[2024-03-29 18:21:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 930594816. Throughput: 0: 41985.6. Samples: 812735540. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:18,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:21:22,478][00497] Updated weights for policy 0, policy_version 56808 (0.0022) +[2024-03-29 18:21:23,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 930791424. Throughput: 0: 41716.0. Samples: 812983180. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:23,840][00126] Avg episode reward: [(0, '0.645')] +[2024-03-29 18:21:26,650][00497] Updated weights for policy 0, policy_version 56818 (0.0023) +[2024-03-29 18:21:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 931004416. Throughput: 0: 41825.9. Samples: 813233640. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:28,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:21:29,895][00497] Updated weights for policy 0, policy_version 56828 (0.0024) +[2024-03-29 18:21:33,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 931217408. Throughput: 0: 41560.5. Samples: 813357360. 
Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:33,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:21:34,041][00497] Updated weights for policy 0, policy_version 56838 (0.0024) +[2024-03-29 18:21:38,253][00497] Updated weights for policy 0, policy_version 56848 (0.0022) +[2024-03-29 18:21:38,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 931414016. Throughput: 0: 41408.9. Samples: 813600500. Policy #0 lag: (min: 0.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:38,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 18:21:42,281][00497] Updated weights for policy 0, policy_version 56858 (0.0027) +[2024-03-29 18:21:43,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41506.2, 300 sec: 41931.9). Total num frames: 931627008. Throughput: 0: 41520.8. Samples: 813859940. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:43,840][00126] Avg episode reward: [(0, '0.505')] +[2024-03-29 18:21:45,724][00497] Updated weights for policy 0, policy_version 56868 (0.0022) +[2024-03-29 18:21:48,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.2, 300 sec: 41876.4). Total num frames: 931823616. Throughput: 0: 41567.3. Samples: 813986660. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:48,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 18:21:49,668][00497] Updated weights for policy 0, policy_version 56878 (0.0020) +[2024-03-29 18:21:53,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 932036608. Throughput: 0: 41449.4. Samples: 814230120. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:53,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 18:21:54,145][00497] Updated weights for policy 0, policy_version 56888 (0.0020) +[2024-03-29 18:21:58,002][00476] Signal inference workers to stop experience collection... (28900 times) +[2024-03-29 18:21:58,003][00476] Signal inference workers to resume experience collection... (28900 times) +[2024-03-29 18:21:58,017][00497] Updated weights for policy 0, policy_version 56898 (0.0020) +[2024-03-29 18:21:58,037][00497] InferenceWorker_p0-w0: stopping experience collection (28900 times) +[2024-03-29 18:21:58,038][00497] InferenceWorker_p0-w0: resuming experience collection (28900 times) +[2024-03-29 18:21:58,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41233.0, 300 sec: 41876.4). Total num frames: 932249600. Throughput: 0: 41740.8. Samples: 814487500. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:21:58,840][00126] Avg episode reward: [(0, '0.638')] +[2024-03-29 18:22:01,275][00497] Updated weights for policy 0, policy_version 56908 (0.0021) +[2024-03-29 18:22:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41765.4). Total num frames: 932446208. Throughput: 0: 41772.0. Samples: 814615280. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:22:03,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 18:22:05,400][00497] Updated weights for policy 0, policy_version 56918 (0.0024) +[2024-03-29 18:22:08,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 932675584. Throughput: 0: 41742.7. Samples: 814861600. 
Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:22:08,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 18:22:09,798][00497] Updated weights for policy 0, policy_version 56928 (0.0024) +[2024-03-29 18:22:13,551][00497] Updated weights for policy 0, policy_version 56938 (0.0023) +[2024-03-29 18:22:13,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 932872192. Throughput: 0: 41923.7. Samples: 815120200. Policy #0 lag: (min: 2.0, avg: 20.7, max: 42.0) +[2024-03-29 18:22:13,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 18:22:17,013][00497] Updated weights for policy 0, policy_version 56948 (0.0023) +[2024-03-29 18:22:18,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 933085184. Throughput: 0: 41810.2. Samples: 815238820. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:18,840][00126] Avg episode reward: [(0, '0.645')] +[2024-03-29 18:22:21,135][00497] Updated weights for policy 0, policy_version 56958 (0.0024) +[2024-03-29 18:22:23,839][00126] Fps is (10 sec: 44235.8, 60 sec: 42052.1, 300 sec: 41876.4). Total num frames: 933314560. Throughput: 0: 41858.5. Samples: 815484140. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:23,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 18:22:25,779][00497] Updated weights for policy 0, policy_version 56968 (0.0025) +[2024-03-29 18:22:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 933494784. Throughput: 0: 41787.7. Samples: 815740380. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:28,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:22:29,083][00476] Signal inference workers to stop experience collection... (28950 times) +[2024-03-29 18:22:29,137][00497] InferenceWorker_p0-w0: stopping experience collection (28950 times) +[2024-03-29 18:22:29,250][00476] Signal inference workers to resume experience collection... (28950 times) +[2024-03-29 18:22:29,251][00497] InferenceWorker_p0-w0: resuming experience collection (28950 times) +[2024-03-29 18:22:29,254][00497] Updated weights for policy 0, policy_version 56978 (0.0023) +[2024-03-29 18:22:32,914][00497] Updated weights for policy 0, policy_version 56988 (0.0031) +[2024-03-29 18:22:33,839][00126] Fps is (10 sec: 40960.9, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 933724160. Throughput: 0: 41797.7. Samples: 815867560. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:33,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 18:22:36,742][00497] Updated weights for policy 0, policy_version 56998 (0.0022) +[2024-03-29 18:22:38,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.3, 300 sec: 41932.0). Total num frames: 933937152. Throughput: 0: 41866.4. Samples: 816114100. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:38,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 18:22:41,107][00497] Updated weights for policy 0, policy_version 57008 (0.0020) +[2024-03-29 18:22:43,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 934133760. Throughput: 0: 41950.8. Samples: 816375280. 
Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:43,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:22:44,754][00497] Updated weights for policy 0, policy_version 57018 (0.0024) +[2024-03-29 18:22:48,477][00497] Updated weights for policy 0, policy_version 57028 (0.0025) +[2024-03-29 18:22:48,839][00126] Fps is (10 sec: 42598.1, 60 sec: 42325.3, 300 sec: 41931.9). Total num frames: 934363136. Throughput: 0: 41870.8. Samples: 816499460. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:48,840][00126] Avg episode reward: [(0, '0.636')] +[2024-03-29 18:22:52,349][00497] Updated weights for policy 0, policy_version 57038 (0.0021) +[2024-03-29 18:22:53,839][00126] Fps is (10 sec: 42597.4, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 934559744. Throughput: 0: 41836.7. Samples: 816744260. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 18:22:53,841][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:22:56,665][00497] Updated weights for policy 0, policy_version 57048 (0.0022) +[2024-03-29 18:22:58,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41820.9). Total num frames: 934756352. Throughput: 0: 41919.4. Samples: 817006580. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0) +[2024-03-29 18:22:58,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 18:23:00,420][00497] Updated weights for policy 0, policy_version 57058 (0.0028) +[2024-03-29 18:23:03,839][00126] Fps is (10 sec: 42599.3, 60 sec: 42325.4, 300 sec: 41931.9). Total num frames: 934985728. Throughput: 0: 41774.7. Samples: 817118680. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0) +[2024-03-29 18:23:03,841][00126] Avg episode reward: [(0, '0.637')] +[2024-03-29 18:23:03,860][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057067_934985728.pth... +[2024-03-29 18:23:04,206][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056455_924958720.pth +[2024-03-29 18:23:04,476][00497] Updated weights for policy 0, policy_version 57068 (0.0019) +[2024-03-29 18:23:04,500][00476] Signal inference workers to stop experience collection... (29000 times) +[2024-03-29 18:23:04,535][00497] InferenceWorker_p0-w0: stopping experience collection (29000 times) +[2024-03-29 18:23:04,725][00476] Signal inference workers to resume experience collection... (29000 times) +[2024-03-29 18:23:04,726][00497] InferenceWorker_p0-w0: resuming experience collection (29000 times) +[2024-03-29 18:23:08,232][00497] Updated weights for policy 0, policy_version 57078 (0.0028) +[2024-03-29 18:23:08,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.1, 300 sec: 41876.4). Total num frames: 935182336. Throughput: 0: 42024.1. Samples: 817375220. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0) +[2024-03-29 18:23:08,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:23:12,534][00497] Updated weights for policy 0, policy_version 57088 (0.0023) +[2024-03-29 18:23:13,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41820.8). Total num frames: 935378944. Throughput: 0: 41848.3. Samples: 817623560. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0) +[2024-03-29 18:23:13,840][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 18:23:16,131][00497] Updated weights for policy 0, policy_version 57098 (0.0029) +[2024-03-29 18:23:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 935591936. Throughput: 0: 41365.7. Samples: 817729020. 
Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 18:23:18,840][00126] Avg episode reward: [(0, '0.567')]
+[2024-03-29 18:23:20,225][00497] Updated weights for policy 0, policy_version 57108 (0.0026)
+[2024-03-29 18:23:23,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 935804928. Throughput: 0: 41672.3. Samples: 817989360. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 18:23:23,840][00126] Avg episode reward: [(0, '0.562')]
+[2024-03-29 18:23:24,125][00497] Updated weights for policy 0, policy_version 57118 (0.0027)
+[2024-03-29 18:23:28,103][00497] Updated weights for policy 0, policy_version 57128 (0.0029)
+[2024-03-29 18:23:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 936001536. Throughput: 0: 41707.0. Samples: 818252100. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 18:23:28,840][00126] Avg episode reward: [(0, '0.654')]
+[2024-03-29 18:23:31,741][00497] Updated weights for policy 0, policy_version 57138 (0.0019)
+[2024-03-29 18:23:33,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.1, 300 sec: 41820.8). Total num frames: 936230912. Throughput: 0: 41527.9. Samples: 818368220. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:33,840][00126] Avg episode reward: [(0, '0.637')]
+[2024-03-29 18:23:33,895][00476] Signal inference workers to stop experience collection... (29050 times)
+[2024-03-29 18:23:33,928][00497] InferenceWorker_p0-w0: stopping experience collection (29050 times)
+[2024-03-29 18:23:34,111][00476] Signal inference workers to resume experience collection... (29050 times)
+[2024-03-29 18:23:34,111][00497] InferenceWorker_p0-w0: resuming experience collection (29050 times)
+[2024-03-29 18:23:35,833][00497] Updated weights for policy 0, policy_version 57148 (0.0020)
+[2024-03-29 18:23:38,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 936427520. Throughput: 0: 41701.5. Samples: 818620820. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:38,841][00126] Avg episode reward: [(0, '0.532')]
+[2024-03-29 18:23:39,667][00497] Updated weights for policy 0, policy_version 57158 (0.0023)
+[2024-03-29 18:23:43,713][00497] Updated weights for policy 0, policy_version 57168 (0.0024)
+[2024-03-29 18:23:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.1, 300 sec: 41820.8). Total num frames: 936640512. Throughput: 0: 41482.7. Samples: 818873300. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:43,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 18:23:47,287][00497] Updated weights for policy 0, policy_version 57178 (0.0025)
+[2024-03-29 18:23:48,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 936869888. Throughput: 0: 41877.6. Samples: 819003180. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:48,840][00126] Avg episode reward: [(0, '0.629')]
+[2024-03-29 18:23:51,259][00497] Updated weights for policy 0, policy_version 57188 (0.0025)
+[2024-03-29 18:23:53,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 937066496. Throughput: 0: 41884.1. Samples: 819260000. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:53,840][00126] Avg episode reward: [(0, '0.527')]
+[2024-03-29 18:23:55,136][00497] Updated weights for policy 0, policy_version 57198 (0.0021)
+[2024-03-29 18:23:58,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 937279488. Throughput: 0: 41800.9. Samples: 819504600. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:23:58,840][00126] Avg episode reward: [(0, '0.611')]
+[2024-03-29 18:23:59,135][00497] Updated weights for policy 0, policy_version 57208 (0.0023)
+[2024-03-29 18:24:02,968][00497] Updated weights for policy 0, policy_version 57218 (0.0027)
+[2024-03-29 18:24:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.1, 300 sec: 41820.8). Total num frames: 937492480. Throughput: 0: 42381.8. Samples: 819636200. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:24:03,840][00126] Avg episode reward: [(0, '0.543')]
+[2024-03-29 18:24:06,841][00497] Updated weights for policy 0, policy_version 57228 (0.0024)
+[2024-03-29 18:24:08,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 937689088. Throughput: 0: 42005.9. Samples: 819879620. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:08,840][00126] Avg episode reward: [(0, '0.520')]
+[2024-03-29 18:24:10,705][00476] Signal inference workers to stop experience collection... (29100 times)
+[2024-03-29 18:24:10,780][00497] InferenceWorker_p0-w0: stopping experience collection (29100 times)
+[2024-03-29 18:24:10,876][00476] Signal inference workers to resume experience collection... (29100 times)
+[2024-03-29 18:24:10,876][00497] InferenceWorker_p0-w0: resuming experience collection (29100 times)
+[2024-03-29 18:24:10,879][00497] Updated weights for policy 0, policy_version 57238 (0.0020)
+[2024-03-29 18:24:13,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.4, 300 sec: 41931.9). Total num frames: 937918464. Throughput: 0: 41803.1. Samples: 820133240. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:13,840][00126] Avg episode reward: [(0, '0.525')]
+[2024-03-29 18:24:14,618][00497] Updated weights for policy 0, policy_version 57248 (0.0017)
+[2024-03-29 18:24:18,409][00497] Updated weights for policy 0, policy_version 57258 (0.0027)
+[2024-03-29 18:24:18,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 938115072. Throughput: 0: 42253.9. Samples: 820269640. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:18,840][00126] Avg episode reward: [(0, '0.552')]
+[2024-03-29 18:24:22,271][00497] Updated weights for policy 0, policy_version 57268 (0.0019)
+[2024-03-29 18:24:23,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 938311680. Throughput: 0: 42133.7. Samples: 820516840. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:23,840][00126] Avg episode reward: [(0, '0.551')]
+[2024-03-29 18:24:26,207][00497] Updated weights for policy 0, policy_version 57278 (0.0019)
+[2024-03-29 18:24:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42598.4, 300 sec: 41987.5). Total num frames: 938557440. Throughput: 0: 42102.3. Samples: 820767900. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:28,840][00126] Avg episode reward: [(0, '0.637')]
+[2024-03-29 18:24:30,003][00497] Updated weights for policy 0, policy_version 57288 (0.0021)
+[2024-03-29 18:24:33,839][00126] Fps is (10 sec: 44237.5, 60 sec: 42052.4, 300 sec: 41931.9). Total num frames: 938754048. Throughput: 0: 42433.5. Samples: 820912680. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:33,840][00126] Avg episode reward: [(0, '0.600')]
+[2024-03-29 18:24:34,005][00497] Updated weights for policy 0, policy_version 57298 (0.0029)
+[2024-03-29 18:24:37,942][00497] Updated weights for policy 0, policy_version 57308 (0.0025)
+[2024-03-29 18:24:38,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42325.3, 300 sec: 41931.9). Total num frames: 938967040. Throughput: 0: 41874.6. Samples: 821144360. Policy #0 lag: (min: 0.0, avg: 21.8, max: 44.0)
+[2024-03-29 18:24:38,840][00126] Avg episode reward: [(0, '0.537')]
+[2024-03-29 18:24:41,708][00497] Updated weights for policy 0, policy_version 57318 (0.0027)
+[2024-03-29 18:24:43,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42598.5, 300 sec: 42043.0). Total num frames: 939196416. Throughput: 0: 42169.0. Samples: 821402200. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:24:43,840][00126] Avg episode reward: [(0, '0.551')]
+[2024-03-29 18:24:45,553][00497] Updated weights for policy 0, policy_version 57328 (0.0019)
+[2024-03-29 18:24:46,300][00476] Signal inference workers to stop experience collection... (29150 times)
+[2024-03-29 18:24:46,340][00497] InferenceWorker_p0-w0: stopping experience collection (29150 times)
+[2024-03-29 18:24:46,376][00476] Signal inference workers to resume experience collection... (29150 times)
+[2024-03-29 18:24:46,378][00497] InferenceWorker_p0-w0: resuming experience collection (29150 times)
+[2024-03-29 18:24:48,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.4, 300 sec: 41931.9). Total num frames: 939393024. Throughput: 0: 42398.8. Samples: 821544140. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:24:48,840][00126] Avg episode reward: [(0, '0.614')]
+[2024-03-29 18:24:49,230][00497] Updated weights for policy 0, policy_version 57338 (0.0023)
+[2024-03-29 18:24:53,384][00497] Updated weights for policy 0, policy_version 57348 (0.0018)
+[2024-03-29 18:24:53,839][00126] Fps is (10 sec: 40960.4, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 939606016. Throughput: 0: 42506.7. Samples: 821792420. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:24:53,840][00126] Avg episode reward: [(0, '0.625')]
+[2024-03-29 18:24:57,032][00497] Updated weights for policy 0, policy_version 57358 (0.0024)
+[2024-03-29 18:24:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.5, 300 sec: 41987.5). Total num frames: 939819008. Throughput: 0: 42469.0. Samples: 822044340. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:24:58,840][00126] Avg episode reward: [(0, '0.519')]
+[2024-03-29 18:25:00,856][00497] Updated weights for policy 0, policy_version 57368 (0.0023)
+[2024-03-29 18:25:03,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 940032000. Throughput: 0: 42358.7. Samples: 822175780. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:25:03,840][00126] Avg episode reward: [(0, '0.618')]
+[2024-03-29 18:25:04,087][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057376_940048384.pth...
+[2024-03-29 18:25:04,416][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000056762_929988608.pth
+[2024-03-29 18:25:04,972][00497] Updated weights for policy 0, policy_version 57378 (0.0019)
+[2024-03-29 18:25:08,743][00497] Updated weights for policy 0, policy_version 57388 (0.0023)
+[2024-03-29 18:25:08,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42598.3, 300 sec: 41987.5). Total num frames: 940244992. Throughput: 0: 42422.3. Samples: 822425840. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:25:08,840][00126] Avg episode reward: [(0, '0.607')]
+[2024-03-29 18:25:12,525][00497] Updated weights for policy 0, policy_version 57398 (0.0025)
+[2024-03-29 18:25:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 940457984. Throughput: 0: 42572.1. Samples: 822683640. Policy #0 lag: (min: 2.0, avg: 21.3, max: 43.0)
+[2024-03-29 18:25:13,840][00126] Avg episode reward: [(0, '0.519')]
+[2024-03-29 18:25:15,552][00476] Signal inference workers to stop experience collection... (29200 times)
+[2024-03-29 18:25:15,624][00497] InferenceWorker_p0-w0: stopping experience collection (29200 times)
+[2024-03-29 18:25:15,634][00476] Signal inference workers to resume experience collection... (29200 times)
+[2024-03-29 18:25:15,654][00497] InferenceWorker_p0-w0: resuming experience collection (29200 times)
+[2024-03-29 18:25:16,202][00497] Updated weights for policy 0, policy_version 57408 (0.0028)
+[2024-03-29 18:25:18,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 940654592. Throughput: 0: 42070.1. Samples: 822805840. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:18,842][00126] Avg episode reward: [(0, '0.612')]
+[2024-03-29 18:25:20,273][00497] Updated weights for policy 0, policy_version 57418 (0.0019)
+[2024-03-29 18:25:23,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42871.5, 300 sec: 42043.0). Total num frames: 940883968. Throughput: 0: 42558.1. Samples: 823059480. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:23,840][00126] Avg episode reward: [(0, '0.502')]
+[2024-03-29 18:25:24,196][00497] Updated weights for policy 0, policy_version 57428 (0.0024)
+[2024-03-29 18:25:28,220][00497] Updated weights for policy 0, policy_version 57438 (0.0021)
+[2024-03-29 18:25:28,839][00126] Fps is (10 sec: 44237.6, 60 sec: 42325.5, 300 sec: 42043.0). Total num frames: 941096960. Throughput: 0: 42465.9. Samples: 823313160. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:28,840][00126] Avg episode reward: [(0, '0.443')]
+[2024-03-29 18:25:31,840][00497] Updated weights for policy 0, policy_version 57448 (0.0018)
+[2024-03-29 18:25:33,839][00126] Fps is (10 sec: 39322.1, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 941277184. Throughput: 0: 42124.4. Samples: 823439740. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:33,840][00126] Avg episode reward: [(0, '0.617')]
+[2024-03-29 18:25:36,039][00497] Updated weights for policy 0, policy_version 57458 (0.0023)
+[2024-03-29 18:25:38,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42598.5, 300 sec: 41987.5). Total num frames: 941522944. Throughput: 0: 42209.8. Samples: 823691860. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:38,840][00126] Avg episode reward: [(0, '0.580')]
+[2024-03-29 18:25:39,972][00497] Updated weights for policy 0, policy_version 57468 (0.0025)
+[2024-03-29 18:25:43,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 941703168. Throughput: 0: 42084.0. Samples: 823938120. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:43,840][00126] Avg episode reward: [(0, '0.547')]
+[2024-03-29 18:25:43,908][00497] Updated weights for policy 0, policy_version 57478 (0.0017)
+[2024-03-29 18:25:47,647][00497] Updated weights for policy 0, policy_version 57488 (0.0024)
+[2024-03-29 18:25:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 941916160. Throughput: 0: 41876.9. Samples: 824060240. Policy #0 lag: (min: 0.0, avg: 19.6, max: 41.0)
+[2024-03-29 18:25:48,840][00126] Avg episode reward: [(0, '0.568')]
+[2024-03-29 18:25:50,667][00476] Signal inference workers to stop experience collection... (29250 times)
+[2024-03-29 18:25:50,670][00476] Signal inference workers to resume experience collection... (29250 times)
+[2024-03-29 18:25:50,716][00497] InferenceWorker_p0-w0: stopping experience collection (29250 times)
+[2024-03-29 18:25:50,716][00497] InferenceWorker_p0-w0: resuming experience collection (29250 times)
+[2024-03-29 18:25:51,574][00497] Updated weights for policy 0, policy_version 57498 (0.0027)
+[2024-03-29 18:25:53,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42325.2, 300 sec: 41931.9). Total num frames: 942145536. Throughput: 0: 42257.3. Samples: 824327420. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:25:53,842][00126] Avg episode reward: [(0, '0.576')]
+[2024-03-29 18:25:55,470][00497] Updated weights for policy 0, policy_version 57508 (0.0018)
+[2024-03-29 18:25:58,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 942342144. Throughput: 0: 42252.0. Samples: 824584980. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:25:58,840][00126] Avg episode reward: [(0, '0.560')]
+[2024-03-29 18:25:59,254][00497] Updated weights for policy 0, policy_version 57518 (0.0023)
+[2024-03-29 18:26:03,147][00497] Updated weights for policy 0, policy_version 57528 (0.0023)
+[2024-03-29 18:26:03,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 942538752. Throughput: 0: 42037.8. Samples: 824697540. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:03,840][00126] Avg episode reward: [(0, '0.639')]
+[2024-03-29 18:26:07,205][00497] Updated weights for policy 0, policy_version 57538 (0.0020)
+[2024-03-29 18:26:08,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.4, 300 sec: 41987.5). Total num frames: 942768128. Throughput: 0: 42154.8. Samples: 824956440. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:08,840][00126] Avg episode reward: [(0, '0.536')]
+[2024-03-29 18:26:10,971][00497] Updated weights for policy 0, policy_version 57548 (0.0028)
+[2024-03-29 18:26:13,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41779.2, 300 sec: 41932.0). Total num frames: 942964736. Throughput: 0: 42000.4. Samples: 825203180. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:13,840][00126] Avg episode reward: [(0, '0.621')]
+[2024-03-29 18:26:14,970][00497] Updated weights for policy 0, policy_version 57558 (0.0026)
+[2024-03-29 18:26:16,829][00476] Signal inference workers to stop experience collection... (29300 times)
+[2024-03-29 18:26:16,870][00497] InferenceWorker_p0-w0: stopping experience collection (29300 times)
+[2024-03-29 18:26:16,910][00476] Signal inference workers to resume experience collection... (29300 times)
+[2024-03-29 18:26:16,912][00497] InferenceWorker_p0-w0: resuming experience collection (29300 times)
+[2024-03-29 18:26:18,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 943177728. Throughput: 0: 41980.8. Samples: 825328880. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:18,840][00126] Avg episode reward: [(0, '0.554')]
+[2024-03-29 18:26:19,065][00497] Updated weights for policy 0, policy_version 57568 (0.0026)
+[2024-03-29 18:26:22,895][00497] Updated weights for policy 0, policy_version 57578 (0.0030)
+[2024-03-29 18:26:23,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.3, 300 sec: 41987.5). Total num frames: 943390720. Throughput: 0: 42040.4. Samples: 825583680. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:23,840][00126] Avg episode reward: [(0, '0.582')]
+[2024-03-29 18:26:26,353][00497] Updated weights for policy 0, policy_version 57588 (0.0022)
+[2024-03-29 18:26:28,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 943603712. Throughput: 0: 42258.2. Samples: 825839740. Policy #0 lag: (min: 2.0, avg: 21.3, max: 42.0)
+[2024-03-29 18:26:28,840][00126] Avg episode reward: [(0, '0.578')]
+[2024-03-29 18:26:30,291][00497] Updated weights for policy 0, policy_version 57598 (0.0023)
+[2024-03-29 18:26:33,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 943816704. Throughput: 0: 42126.3. Samples: 825955920. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:33,840][00126] Avg episode reward: [(0, '0.561')]
+[2024-03-29 18:26:34,521][00497] Updated weights for policy 0, policy_version 57608 (0.0019)
+[2024-03-29 18:26:38,571][00497] Updated weights for policy 0, policy_version 57618 (0.0024)
+[2024-03-29 18:26:38,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 944013312. Throughput: 0: 41699.3. Samples: 826203880. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:38,840][00126] Avg episode reward: [(0, '0.458')]
+[2024-03-29 18:26:42,125][00497] Updated weights for policy 0, policy_version 57628 (0.0022)
+[2024-03-29 18:26:43,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 944226304. Throughput: 0: 41859.5. Samples: 826468660. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:43,840][00126] Avg episode reward: [(0, '0.594')]
+[2024-03-29 18:26:46,093][00497] Updated weights for policy 0, policy_version 57638 (0.0021)
+[2024-03-29 18:26:48,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.2, 300 sec: 42043.0). Total num frames: 944439296. Throughput: 0: 41908.5. Samples: 826583420. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:48,840][00126] Avg episode reward: [(0, '0.574')]
+[2024-03-29 18:26:50,589][00497] Updated weights for policy 0, policy_version 57648 (0.0028)
+[2024-03-29 18:26:51,629][00476] Signal inference workers to stop experience collection... (29350 times)
+[2024-03-29 18:26:51,670][00497] InferenceWorker_p0-w0: stopping experience collection (29350 times)
+[2024-03-29 18:26:51,864][00476] Signal inference workers to resume experience collection... (29350 times)
+[2024-03-29 18:26:51,865][00497] InferenceWorker_p0-w0: resuming experience collection (29350 times)
+[2024-03-29 18:26:53,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 944635904. Throughput: 0: 41804.4. Samples: 826837640. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:53,840][00126] Avg episode reward: [(0, '0.596')]
+[2024-03-29 18:26:54,252][00497] Updated weights for policy 0, policy_version 57658 (0.0030)
+[2024-03-29 18:26:58,162][00497] Updated weights for policy 0, policy_version 57668 (0.0024)
+[2024-03-29 18:26:58,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 944848896. Throughput: 0: 41767.6. Samples: 827082720. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:26:58,840][00126] Avg episode reward: [(0, '0.644')]
+[2024-03-29 18:27:01,897][00497] Updated weights for policy 0, policy_version 57678 (0.0019)
+[2024-03-29 18:27:03,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42052.2, 300 sec: 41987.4). Total num frames: 945061888. Throughput: 0: 41874.7. Samples: 827213240. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0)
+[2024-03-29 18:27:03,840][00126] Avg episode reward: [(0, '0.637')]
+[2024-03-29 18:27:04,058][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057683_945078272.pth...
+[2024-03-29 18:27:04,388][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057067_934985728.pth
+[2024-03-29 18:27:06,664][00497] Updated weights for policy 0, policy_version 57688 (0.0020)
+[2024-03-29 18:27:08,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 945274880. Throughput: 0: 41858.1. Samples: 827467300. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:08,841][00126] Avg episode reward: [(0, '0.590')]
+[2024-03-29 18:27:10,381][00497] Updated weights for policy 0, policy_version 57698 (0.0035)
+[2024-03-29 18:27:13,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 945471488. Throughput: 0: 41340.0. Samples: 827700040. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:13,840][00126] Avg episode reward: [(0, '0.653')]
+[2024-03-29 18:27:13,895][00497] Updated weights for policy 0, policy_version 57708 (0.0029)
+[2024-03-29 18:27:17,690][00497] Updated weights for policy 0, policy_version 57718 (0.0024)
+[2024-03-29 18:27:18,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41779.3, 300 sec: 41932.0). Total num frames: 945684480. Throughput: 0: 41821.7. Samples: 827837900. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:18,840][00126] Avg episode reward: [(0, '0.598')]
+[2024-03-29 18:27:22,332][00497] Updated weights for policy 0, policy_version 57728 (0.0020)
+[2024-03-29 18:27:23,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.1, 300 sec: 42043.0). Total num frames: 945897472. Throughput: 0: 41800.3. Samples: 828084900.
Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:23,840][00126] Avg episode reward: [(0, '0.555')]
+[2024-03-29 18:27:26,022][00497] Updated weights for policy 0, policy_version 57738 (0.0022)
+[2024-03-29 18:27:26,689][00476] Signal inference workers to stop experience collection... (29400 times)
+[2024-03-29 18:27:26,766][00497] InferenceWorker_p0-w0: stopping experience collection (29400 times)
+[2024-03-29 18:27:26,776][00476] Signal inference workers to resume experience collection... (29400 times)
+[2024-03-29 18:27:26,795][00497] InferenceWorker_p0-w0: resuming experience collection (29400 times)
+[2024-03-29 18:27:28,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 946094080. Throughput: 0: 41049.3. Samples: 828315880. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:28,840][00126] Avg episode reward: [(0, '0.571')]
+[2024-03-29 18:27:29,735][00497] Updated weights for policy 0, policy_version 57748 (0.0027)
+[2024-03-29 18:27:33,425][00497] Updated weights for policy 0, policy_version 57758 (0.0024)
+[2024-03-29 18:27:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.1, 300 sec: 41987.4). Total num frames: 946323456. Throughput: 0: 41692.4. Samples: 828459580. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:33,840][00126] Avg episode reward: [(0, '0.581')]
+[2024-03-29 18:27:37,767][00497] Updated weights for policy 0, policy_version 57768 (0.0030)
+[2024-03-29 18:27:38,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 946503680. Throughput: 0: 41520.0. Samples: 828706040. Policy #0 lag: (min: 1.0, avg: 21.1, max: 41.0)
+[2024-03-29 18:27:38,840][00126] Avg episode reward: [(0, '0.588')]
+[2024-03-29 18:27:41,596][00497] Updated weights for policy 0, policy_version 57778 (0.0028)
+[2024-03-29 18:27:43,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 946733056. Throughput: 0: 41667.5. Samples: 828957760. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:27:43,840][00126] Avg episode reward: [(0, '0.555')]
+[2024-03-29 18:27:45,680][00497] Updated weights for policy 0, policy_version 57788 (0.0023)
+[2024-03-29 18:27:48,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 946929664. Throughput: 0: 41560.9. Samples: 829083480. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:27:48,840][00126] Avg episode reward: [(0, '0.601')]
+[2024-03-29 18:27:49,224][00497] Updated weights for policy 0, policy_version 57798 (0.0025)
+[2024-03-29 18:27:53,502][00497] Updated weights for policy 0, policy_version 57808 (0.0022)
+[2024-03-29 18:27:53,839][00126] Fps is (10 sec: 39320.8, 60 sec: 41506.0, 300 sec: 41931.9). Total num frames: 947126272. Throughput: 0: 41431.1. Samples: 829331700. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:27:53,840][00126] Avg episode reward: [(0, '0.531')]
+[2024-03-29 18:27:57,125][00497] Updated weights for policy 0, policy_version 57818 (0.0029)
+[2024-03-29 18:27:57,922][00476] Signal inference workers to stop experience collection... (29450 times)
+[2024-03-29 18:27:57,959][00497] InferenceWorker_p0-w0: stopping experience collection (29450 times)
+[2024-03-29 18:27:58,146][00476] Signal inference workers to resume experience collection... (29450 times)
+[2024-03-29 18:27:58,147][00497] InferenceWorker_p0-w0: resuming experience collection (29450 times)
+[2024-03-29 18:27:58,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 947372032. Throughput: 0: 42172.9. Samples: 829597820. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:27:58,840][00126] Avg episode reward: [(0, '0.521')]
+[2024-03-29 18:28:00,852][00497] Updated weights for policy 0, policy_version 57828 (0.0024)
+[2024-03-29 18:28:03,839][00126] Fps is (10 sec: 44237.9, 60 sec: 41779.3, 300 sec: 41987.5). Total num frames: 947568640. Throughput: 0: 41709.9. Samples: 829714840. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:28:03,840][00126] Avg episode reward: [(0, '0.599')]
+[2024-03-29 18:28:04,662][00497] Updated weights for policy 0, policy_version 57838 (0.0027)
+[2024-03-29 18:28:08,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41506.2, 300 sec: 41987.5). Total num frames: 947765248. Throughput: 0: 41809.0. Samples: 829966300. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:28:08,841][00126] Avg episode reward: [(0, '0.658')]
+[2024-03-29 18:28:09,055][00497] Updated weights for policy 0, policy_version 57848 (0.0023)
+[2024-03-29 18:28:12,612][00497] Updated weights for policy 0, policy_version 57858 (0.0027)
+[2024-03-29 18:28:13,839][00126] Fps is (10 sec: 44235.9, 60 sec: 42325.3, 300 sec: 42098.5). Total num frames: 948011008. Throughput: 0: 42658.6. Samples: 830235520. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 18:28:13,840][00126] Avg episode reward: [(0, '0.612')]
+[2024-03-29 18:28:16,273][00497] Updated weights for policy 0, policy_version 57868 (0.0020)
+[2024-03-29 18:28:18,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 948191232. Throughput: 0: 42034.7. Samples: 830351140. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:18,840][00126] Avg episode reward: [(0, '0.640')]
+[2024-03-29 18:28:20,205][00497] Updated weights for policy 0, policy_version 57878 (0.0025)
+[2024-03-29 18:28:23,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.2, 300 sec: 42098.5). Total num frames: 948420608. Throughput: 0: 42125.7. Samples: 830601700. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:23,840][00126] Avg episode reward: [(0, '0.577')]
+[2024-03-29 18:28:24,640][00497] Updated weights for policy 0, policy_version 57888 (0.0022)
+[2024-03-29 18:28:28,392][00497] Updated weights for policy 0, policy_version 57898 (0.0029)
+[2024-03-29 18:28:28,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 948617216. Throughput: 0: 42136.8. Samples: 830853920. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:28,840][00126] Avg episode reward: [(0, '0.562')]
+[2024-03-29 18:28:29,871][00476] Signal inference workers to stop experience collection... (29500 times)
+[2024-03-29 18:28:29,953][00476] Signal inference workers to resume experience collection... (29500 times)
+[2024-03-29 18:28:29,954][00497] InferenceWorker_p0-w0: stopping experience collection (29500 times)
+[2024-03-29 18:28:29,985][00497] InferenceWorker_p0-w0: resuming experience collection (29500 times)
+[2024-03-29 18:28:32,169][00497] Updated weights for policy 0, policy_version 57908 (0.0022)
+[2024-03-29 18:28:33,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41506.1, 300 sec: 41987.5). Total num frames: 948813824. Throughput: 0: 41844.9. Samples: 830966500. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:33,840][00126] Avg episode reward: [(0, '0.623')]
+[2024-03-29 18:28:36,023][00497] Updated weights for policy 0, policy_version 57918 (0.0019)
+[2024-03-29 18:28:38,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42598.4, 300 sec: 42098.6). Total num frames: 949059584. Throughput: 0: 42288.1. Samples: 831234660. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:38,842][00126] Avg episode reward: [(0, '0.593')]
+[2024-03-29 18:28:40,221][00497] Updated weights for policy 0, policy_version 57928 (0.0033)
+[2024-03-29 18:28:43,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 949239808. Throughput: 0: 41966.2. Samples: 831486300. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:43,840][00126] Avg episode reward: [(0, '0.417')]
+[2024-03-29 18:28:44,024][00497] Updated weights for policy 0, policy_version 57938 (0.0022)
+[2024-03-29 18:28:47,868][00497] Updated weights for policy 0, policy_version 57948 (0.0026)
+[2024-03-29 18:28:48,839][00126] Fps is (10 sec: 36045.2, 60 sec: 41506.2, 300 sec: 41876.4). Total num frames: 949420032. Throughput: 0: 41768.8. Samples: 831594440. Policy #0 lag: (min: 0.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:28:48,840][00126] Avg episode reward: [(0, '0.455')]
+[2024-03-29 18:28:51,857][00497] Updated weights for policy 0, policy_version 57958 (0.0019)
+[2024-03-29 18:28:53,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42325.5, 300 sec: 41987.5). Total num frames: 949665792. Throughput: 0: 41956.0. Samples: 831854320. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:28:53,840][00126] Avg episode reward: [(0, '0.605')]
+[2024-03-29 18:28:56,080][00497] Updated weights for policy 0, policy_version 57968 (0.0032)
+[2024-03-29 18:28:58,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41233.0, 300 sec: 41876.4). Total num frames: 949846016. Throughput: 0: 41592.9. Samples: 832107200. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:28:58,840][00126] Avg episode reward: [(0, '0.581')]
+[2024-03-29 18:28:59,784][00497] Updated weights for policy 0, policy_version 57978 (0.0019)
+[2024-03-29 18:29:00,280][00476] Signal inference workers to stop experience collection... (29550 times)
+[2024-03-29 18:29:00,312][00497] InferenceWorker_p0-w0: stopping experience collection (29550 times)
+[2024-03-29 18:29:00,462][00476] Signal inference workers to resume experience collection... (29550 times)
+[2024-03-29 18:29:00,462][00497] InferenceWorker_p0-w0: resuming experience collection (29550 times)
+[2024-03-29 18:29:03,433][00497] Updated weights for policy 0, policy_version 57988 (0.0024)
+[2024-03-29 18:29:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 950075392. Throughput: 0: 41605.0. Samples: 832223360. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:03,840][00126] Avg episode reward: [(0, '0.531')]
+[2024-03-29 18:29:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057988_950075392.pth...
+[2024-03-29 18:29:04,181][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057376_940048384.pth
+[2024-03-29 18:29:07,484][00497] Updated weights for policy 0, policy_version 57998 (0.0022)
+[2024-03-29 18:29:08,839][00126] Fps is (10 sec: 45875.5, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 950304768. Throughput: 0: 41757.9. Samples: 832480800. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:08,840][00126] Avg episode reward: [(0, '0.556')]
+[2024-03-29 18:29:11,771][00497] Updated weights for policy 0, policy_version 58008 (0.0022)
+[2024-03-29 18:29:13,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40960.1, 300 sec: 41876.4). Total num frames: 950468608. Throughput: 0: 41728.0. Samples: 832731680. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:13,840][00126] Avg episode reward: [(0, '0.617')]
+[2024-03-29 18:29:15,613][00497] Updated weights for policy 0, policy_version 58018 (0.0023)
+[2024-03-29 18:29:18,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 950714368. Throughput: 0: 41807.7. Samples: 832847840. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:18,840][00126] Avg episode reward: [(0, '0.571')]
+[2024-03-29 18:29:19,625][00497] Updated weights for policy 0, policy_version 58028 (0.0026)
+[2024-03-29 18:29:23,592][00497] Updated weights for policy 0, policy_version 58038 (0.0023)
+[2024-03-29 18:29:23,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41233.2, 300 sec: 41820.9). Total num frames: 950894592. Throughput: 0: 41249.0. Samples: 833090860. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:23,840][00126] Avg episode reward: [(0, '0.568')]
+[2024-03-29 18:29:27,689][00497] Updated weights for policy 0, policy_version 58048 (0.0020)
+[2024-03-29 18:29:28,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41233.1, 300 sec: 41820.8). Total num frames: 951091200. Throughput: 0: 41288.0. Samples: 833344260. Policy #0 lag: (min: 0.0, avg: 21.7, max: 43.0)
+[2024-03-29 18:29:28,840][00126] Avg episode reward: [(0, '0.561')]
+[2024-03-29 18:29:31,560][00497] Updated weights for policy 0, policy_version 58058 (0.0027)
+[2024-03-29 18:29:32,306][00476] Signal inference workers to stop experience collection... (29600 times)
+[2024-03-29 18:29:32,306][00476] Signal inference workers to resume experience collection... (29600 times)
+[2024-03-29 18:29:32,349][00497] InferenceWorker_p0-w0: stopping experience collection (29600 times)
+[2024-03-29 18:29:32,349][00497] InferenceWorker_p0-w0: resuming experience collection (29600 times)
+[2024-03-29 18:29:33,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 951320576. Throughput: 0: 41534.2. Samples: 833463480. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:33,840][00126] Avg episode reward: [(0, '0.579')]
+[2024-03-29 18:29:35,435][00497] Updated weights for policy 0, policy_version 58068 (0.0026)
+[2024-03-29 18:29:38,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40960.0, 300 sec: 41765.3). Total num frames: 951517184. Throughput: 0: 41282.6. Samples: 833712040. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:38,840][00126] Avg episode reward: [(0, '0.647')]
+[2024-03-29 18:29:39,588][00497] Updated weights for policy 0, policy_version 58078 (0.0025)
+[2024-03-29 18:29:43,478][00497] Updated weights for policy 0, policy_version 58088 (0.0021)
+[2024-03-29 18:29:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41233.0, 300 sec: 41765.3). Total num frames: 951713792. Throughput: 0: 41003.6. Samples: 833952360. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:43,840][00126] Avg episode reward: [(0, '0.594')]
+[2024-03-29 18:29:47,291][00497] Updated weights for policy 0, policy_version 58098 (0.0022)
+[2024-03-29 18:29:48,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 41820.8). Total num frames: 951943168. Throughput: 0: 41563.6. Samples: 834093720. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:48,840][00126] Avg episode reward: [(0, '0.603')]
+[2024-03-29 18:29:51,398][00497] Updated weights for policy 0, policy_version 58108 (0.0023)
+[2024-03-29 18:29:53,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41233.0, 300 sec: 41765.3). Total num frames: 952139776. Throughput: 0: 41465.8. Samples: 834346760. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:53,840][00126] Avg episode reward: [(0, '0.559')]
+[2024-03-29 18:29:55,457][00497] Updated weights for policy 0, policy_version 58118 (0.0032)
+[2024-03-29 18:29:58,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 952352768. Throughput: 0: 40771.0. Samples: 834566380. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:29:58,840][00126] Avg episode reward: [(0, '0.488')]
+[2024-03-29 18:29:59,468][00497] Updated weights for policy 0, policy_version 58128 (0.0021)
+[2024-03-29 18:30:03,334][00497] Updated weights for policy 0, policy_version 58138 (0.0027)
+[2024-03-29 18:30:03,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 952549376. Throughput: 0: 41403.0. Samples: 834710980. Policy #0 lag: (min: 0.0, avg: 19.8, max: 43.0)
+[2024-03-29 18:30:03,841][00126] Avg episode reward: [(0, '0.509')]
+[2024-03-29 18:30:07,082][00497] Updated weights for policy 0, policy_version 58148 (0.0017)
+[2024-03-29 18:30:08,105][00476] Signal inference workers to stop experience collection... (29650 times)
+[2024-03-29 18:30:08,147][00497] InferenceWorker_p0-w0: stopping experience collection (29650 times)
+[2024-03-29 18:30:08,264][00476] Signal inference workers to resume experience collection... (29650 times)
+[2024-03-29 18:30:08,264][00497] InferenceWorker_p0-w0: resuming experience collection (29650 times)
+[2024-03-29 18:30:08,839][00126] Fps is (10 sec: 40960.7, 60 sec: 40960.1, 300 sec: 41709.8). Total num frames: 952762368. Throughput: 0: 41488.9. Samples: 834957860. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:08,840][00126] Avg episode reward: [(0, '0.598')]
+[2024-03-29 18:30:11,150][00497] Updated weights for policy 0, policy_version 58158 (0.0022)
+[2024-03-29 18:30:13,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42052.1, 300 sec: 41820.8). Total num frames: 952991744. Throughput: 0: 41171.4. Samples: 835196980. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:13,840][00126] Avg episode reward: [(0, '0.524')]
+[2024-03-29 18:30:14,951][00497] Updated weights for policy 0, policy_version 58168 (0.0030)
+[2024-03-29 18:30:18,809][00497] Updated weights for policy 0, policy_version 58178 (0.0023)
+[2024-03-29 18:30:18,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 953188352. Throughput: 0: 41637.7. Samples: 835337180.
Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:18,840][00126] Avg episode reward: [(0, '0.590')]
+[2024-03-29 18:30:22,667][00497] Updated weights for policy 0, policy_version 58188 (0.0023)
+[2024-03-29 18:30:23,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 953368576. Throughput: 0: 41381.8. Samples: 835574220. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:23,840][00126] Avg episode reward: [(0, '0.584')]
+[2024-03-29 18:30:26,772][00497] Updated weights for policy 0, policy_version 58198 (0.0025)
+[2024-03-29 18:30:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 41820.8). Total num frames: 953614336. Throughput: 0: 41627.1. Samples: 835825580. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:28,840][00126] Avg episode reward: [(0, '0.648')]
+[2024-03-29 18:30:30,574][00497] Updated weights for policy 0, policy_version 58208 (0.0024)
+[2024-03-29 18:30:33,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 953794560. Throughput: 0: 41506.2. Samples: 835961500. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:33,840][00126] Avg episode reward: [(0, '0.546')]
+[2024-03-29 18:30:34,715][00497] Updated weights for policy 0, policy_version 58218 (0.0022)
+[2024-03-29 18:30:38,627][00497] Updated weights for policy 0, policy_version 58228 (0.0031)
+[2024-03-29 18:30:38,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 954007552. Throughput: 0: 40995.0. Samples: 836191540. Policy #0 lag: (min: 0.0, avg: 22.6, max: 45.0)
+[2024-03-29 18:30:38,841][00126] Avg episode reward: [(0, '0.635')]
+[2024-03-29 18:30:41,366][00476] Signal inference workers to stop experience collection... (29700 times)
+[2024-03-29 18:30:41,381][00497] InferenceWorker_p0-w0: stopping experience collection (29700 times)
+[2024-03-29 18:30:41,580][00476] Signal inference workers to resume experience collection... (29700 times)
+[2024-03-29 18:30:41,581][00497] InferenceWorker_p0-w0: resuming experience collection (29700 times)
+[2024-03-29 18:30:42,922][00497] Updated weights for policy 0, policy_version 58238 (0.0020)
+[2024-03-29 18:30:43,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 954204160. Throughput: 0: 41786.4. Samples: 836446760. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:30:43,840][00126] Avg episode reward: [(0, '0.594')]
+[2024-03-29 18:30:46,587][00497] Updated weights for policy 0, policy_version 58248 (0.0032)
+[2024-03-29 18:30:48,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 954417152. Throughput: 0: 41308.6. Samples: 836569860. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:30:48,841][00126] Avg episode reward: [(0, '0.613')]
+[2024-03-29 18:30:50,643][00497] Updated weights for policy 0, policy_version 58258 (0.0024)
+[2024-03-29 18:30:53,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 954630144. Throughput: 0: 41192.3. Samples: 836811520. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:30:53,840][00126] Avg episode reward: [(0, '0.588')]
+[2024-03-29 18:30:54,742][00497] Updated weights for policy 0, policy_version 58268 (0.0029)
+[2024-03-29 18:30:58,375][00497] Updated weights for policy 0, policy_version 58278 (0.0022)
+[2024-03-29 18:30:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.3, 300 sec: 41709.8). Total num frames: 954843136. Throughput: 0: 41704.2. Samples: 837073660. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:30:58,840][00126] Avg episode reward: [(0, '0.583')]
+[2024-03-29 18:31:02,357][00497] Updated weights for policy 0, policy_version 58288 (0.0025)
+[2024-03-29 18:31:03,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 955056128. Throughput: 0: 41256.0. Samples: 837193700. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:31:03,840][00126] Avg episode reward: [(0, '0.552')]
+[2024-03-29 18:31:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058292_955056128.pth...
+[2024-03-29 18:31:04,170][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057683_945078272.pth
+[2024-03-29 18:31:06,228][00497] Updated weights for policy 0, policy_version 58298 (0.0021)
+[2024-03-29 18:31:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 955269120. Throughput: 0: 41729.8. Samples: 837452060. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:31:08,840][00126] Avg episode reward: [(0, '0.590')]
+[2024-03-29 18:31:10,339][00497] Updated weights for policy 0, policy_version 58308 (0.0022)
+[2024-03-29 18:31:11,829][00476] Signal inference workers to stop experience collection... (29750 times)
+[2024-03-29 18:31:11,870][00497] InferenceWorker_p0-w0: stopping experience collection (29750 times)
+[2024-03-29 18:31:12,054][00476] Signal inference workers to resume experience collection... (29750 times)
+[2024-03-29 18:31:12,055][00497] InferenceWorker_p0-w0: resuming experience collection (29750 times)
+[2024-03-29 18:31:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 955465728. Throughput: 0: 41579.6. Samples: 837696660. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0)
+[2024-03-29 18:31:13,840][00126] Avg episode reward: [(0, '0.550')]
+[2024-03-29 18:31:14,044][00497] Updated weights for policy 0, policy_version 58318 (0.0026)
+[2024-03-29 18:31:18,082][00497] Updated weights for policy 0, policy_version 58328 (0.0021)
+[2024-03-29 18:31:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 955678720. Throughput: 0: 41235.1. Samples: 837817080. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:18,840][00126] Avg episode reward: [(0, '0.600')]
+[2024-03-29 18:31:21,976][00497] Updated weights for policy 0, policy_version 58338 (0.0030)
+[2024-03-29 18:31:23,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 955908096. Throughput: 0: 41986.3. Samples: 838080920. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:23,841][00126] Avg episode reward: [(0, '0.557')]
+[2024-03-29 18:31:26,009][00497] Updated weights for policy 0, policy_version 58348 (0.0021)
+[2024-03-29 18:31:28,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.2, 300 sec: 41598.7). Total num frames: 956088320. Throughput: 0: 41736.4. Samples: 838324900. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:28,840][00126] Avg episode reward: [(0, '0.554')]
+[2024-03-29 18:31:29,641][00497] Updated weights for policy 0, policy_version 58358 (0.0019)
+[2024-03-29 18:31:33,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 956284928. Throughput: 0: 41625.3. Samples: 838443000. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:33,840][00126] Avg episode reward: [(0, '0.603')]
+[2024-03-29 18:31:34,038][00497] Updated weights for policy 0, policy_version 58368 (0.0026)
+[2024-03-29 18:31:37,858][00497] Updated weights for policy 0, policy_version 58378 (0.0020)
+[2024-03-29 18:31:38,839][00126] Fps is (10 sec: 42597.5, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 956514304. Throughput: 0: 42078.6. Samples: 838705060. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:38,840][00126] Avg episode reward: [(0, '0.542')]
+[2024-03-29 18:31:41,927][00497] Updated weights for policy 0, policy_version 58388 (0.0025)
+[2024-03-29 18:31:43,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.0, 300 sec: 41543.1). Total num frames: 956694528. Throughput: 0: 41702.1. Samples: 838950260. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:43,840][00126] Avg episode reward: [(0, '0.562')]
+[2024-03-29 18:31:44,004][00476] Signal inference workers to stop experience collection... (29800 times)
+[2024-03-29 18:31:44,034][00497] InferenceWorker_p0-w0: stopping experience collection (29800 times)
+[2024-03-29 18:31:44,190][00476] Signal inference workers to resume experience collection... (29800 times)
+[2024-03-29 18:31:44,191][00497] InferenceWorker_p0-w0: resuming experience collection (29800 times)
+[2024-03-29 18:31:45,546][00497] Updated weights for policy 0, policy_version 58398 (0.0030)
+[2024-03-29 18:31:48,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 956923904. Throughput: 0: 41664.1. Samples: 839068580. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:48,840][00126] Avg episode reward: [(0, '0.625')]
+[2024-03-29 18:31:49,742][00497] Updated weights for policy 0, policy_version 58408 (0.0030)
+[2024-03-29 18:31:53,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 957104128. Throughput: 0: 41347.0. Samples: 839312680. Policy #0 lag: (min: 0.0, avg: 20.3, max: 40.0)
+[2024-03-29 18:31:53,840][00126] Avg episode reward: [(0, '0.567')]
+[2024-03-29 18:31:53,868][00497] Updated weights for policy 0, policy_version 58418 (0.0027)
+[2024-03-29 18:31:57,889][00497] Updated weights for policy 0, policy_version 58428 (0.0026)
+[2024-03-29 18:31:58,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.0, 300 sec: 41543.2). Total num frames: 957317120. Throughput: 0: 41609.9. Samples: 839569100. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:31:58,841][00126] Avg episode reward: [(0, '0.469')]
+[2024-03-29 18:32:01,548][00497] Updated weights for policy 0, policy_version 58438 (0.0023)
+[2024-03-29 18:32:03,839][00126] Fps is (10 sec: 45876.0, 60 sec: 41779.3, 300 sec: 41654.3). Total num frames: 957562880. Throughput: 0: 41480.5. Samples: 839683700. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:03,840][00126] Avg episode reward: [(0, '0.590')]
+[2024-03-29 18:32:05,438][00497] Updated weights for policy 0, policy_version 58448 (0.0025)
+[2024-03-29 18:32:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40960.0, 300 sec: 41543.2). Total num frames: 957726720. Throughput: 0: 41028.5. Samples: 839927200. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:08,840][00126] Avg episode reward: [(0, '0.595')]
+[2024-03-29 18:32:09,696][00497] Updated weights for policy 0, policy_version 58458 (0.0026)
+[2024-03-29 18:32:13,611][00497] Updated weights for policy 0, policy_version 58468 (0.0023)
+[2024-03-29 18:32:13,839][00126] Fps is (10 sec: 37683.0, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 957939712. Throughput: 0: 41290.6. Samples: 840182980. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:13,840][00126] Avg episode reward: [(0, '0.590')]
+[2024-03-29 18:32:17,607][00497] Updated weights for policy 0, policy_version 58478 (0.0021)
+[2024-03-29 18:32:17,723][00476] Signal inference workers to stop experience collection... (29850 times)
+[2024-03-29 18:32:17,766][00497] InferenceWorker_p0-w0: stopping experience collection (29850 times)
+[2024-03-29 18:32:17,889][00476] Signal inference workers to resume experience collection... (29850 times)
+[2024-03-29 18:32:17,890][00497] InferenceWorker_p0-w0: resuming experience collection (29850 times)
+[2024-03-29 18:32:18,839][00126] Fps is (10 sec: 44236.1, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 958169088. Throughput: 0: 41570.1. Samples: 840313660. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:18,840][00126] Avg episode reward: [(0, '0.582')]
+[2024-03-29 18:32:21,400][00497] Updated weights for policy 0, policy_version 58488 (0.0021)
+[2024-03-29 18:32:23,839][00126] Fps is (10 sec: 40960.1, 60 sec: 40686.9, 300 sec: 41543.2). Total num frames: 958349312. Throughput: 0: 40979.2. Samples: 840549120. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:23,840][00126] Avg episode reward: [(0, '0.510')]
+[2024-03-29 18:32:25,522][00497] Updated weights for policy 0, policy_version 58498 (0.0023)
+[2024-03-29 18:32:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 958578688. Throughput: 0: 41129.0. Samples: 840801060. Policy #0 lag: (min: 0.0, avg: 20.6, max: 43.0)
+[2024-03-29 18:32:28,840][00126] Avg episode reward: [(0, '0.567')]
+[2024-03-29 18:32:29,524][00497] Updated weights for policy 0, policy_version 58508 (0.0024)
+[2024-03-29 18:32:33,270][00497] Updated weights for policy 0, policy_version 58518 (0.0019)
+[2024-03-29 18:32:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 958775296. Throughput: 0: 41512.0. Samples: 840936620. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:33,840][00126] Avg episode reward: [(0, '0.631')]
+[2024-03-29 18:32:36,983][00497] Updated weights for policy 0, policy_version 58528 (0.0025)
+[2024-03-29 18:32:38,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41233.2, 300 sec: 41543.2). Total num frames: 958988288. Throughput: 0: 41483.2. Samples: 841179420. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:38,840][00126] Avg episode reward: [(0, '0.559')]
+[2024-03-29 18:32:41,122][00497] Updated weights for policy 0, policy_version 58538 (0.0025)
+[2024-03-29 18:32:43,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42052.4, 300 sec: 41654.2). Total num frames: 959217664. Throughput: 0: 41223.1. Samples: 841424140. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:43,840][00126] Avg episode reward: [(0, '0.529')]
+[2024-03-29 18:32:45,130][00497] Updated weights for policy 0, policy_version 58548 (0.0036)
+[2024-03-29 18:32:48,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 959397888. Throughput: 0: 41911.1. Samples: 841569700. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:48,840][00126] Avg episode reward: [(0, '0.554')]
+[2024-03-29 18:32:48,939][00497] Updated weights for policy 0, policy_version 58558 (0.0037)
+[2024-03-29 18:32:51,537][00476] Signal inference workers to stop experience collection... (29900 times)
+[2024-03-29 18:32:51,617][00497] InferenceWorker_p0-w0: stopping experience collection (29900 times)
+[2024-03-29 18:32:51,617][00476] Signal inference workers to resume experience collection... (29900 times)
+[2024-03-29 18:32:51,642][00497] InferenceWorker_p0-w0: resuming experience collection (29900 times)
+[2024-03-29 18:32:52,591][00497] Updated weights for policy 0, policy_version 58568 (0.0028)
+[2024-03-29 18:32:53,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42052.3, 300 sec: 41543.2). Total num frames: 959627264. Throughput: 0: 41622.5. Samples: 841800220. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:53,840][00126] Avg episode reward: [(0, '0.528')]
+[2024-03-29 18:32:56,801][00497] Updated weights for policy 0, policy_version 58578 (0.0023)
+[2024-03-29 18:32:58,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.2, 300 sec: 41598.7). Total num frames: 959840256. Throughput: 0: 41555.1. Samples: 842052960. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:32:58,840][00126] Avg episode reward: [(0, '0.583')]
+[2024-03-29 18:33:00,826][00497] Updated weights for policy 0, policy_version 58588 (0.0023)
+[2024-03-29 18:33:03,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 960036864. Throughput: 0: 41861.8. Samples: 842197440. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 18:33:03,841][00126] Avg episode reward: [(0, '0.660')]
+[2024-03-29 18:33:04,037][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058597_960053248.pth...
+[2024-03-29 18:33:04,357][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000057988_950075392.pth
+[2024-03-29 18:33:04,737][00497] Updated weights for policy 0, policy_version 58598 (0.0021)
+[2024-03-29 18:33:08,187][00497] Updated weights for policy 0, policy_version 58608 (0.0019)
+[2024-03-29 18:33:08,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42325.3, 300 sec: 41543.2). Total num frames: 960266240. Throughput: 0: 41712.1. Samples: 842426160. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:08,840][00126] Avg episode reward: [(0, '0.585')]
+[2024-03-29 18:33:12,552][00497] Updated weights for policy 0, policy_version 58618 (0.0029)
+[2024-03-29 18:33:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 960462848. Throughput: 0: 41682.7. Samples: 842676780.
Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:13,840][00126] Avg episode reward: [(0, '0.588')]
+[2024-03-29 18:33:16,648][00497] Updated weights for policy 0, policy_version 58628 (0.0020)
+[2024-03-29 18:33:18,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.3, 300 sec: 41487.7). Total num frames: 960659456. Throughput: 0: 41534.8. Samples: 842805680. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:18,840][00126] Avg episode reward: [(0, '0.519')]
+[2024-03-29 18:33:20,325][00497] Updated weights for policy 0, policy_version 58638 (0.0025)
+[2024-03-29 18:33:23,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.3, 300 sec: 41543.2). Total num frames: 960872448. Throughput: 0: 41627.1. Samples: 843052640. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:23,840][00126] Avg episode reward: [(0, '0.581')]
+[2024-03-29 18:33:23,864][00497] Updated weights for policy 0, policy_version 58648 (0.0029)
+[2024-03-29 18:33:27,538][00476] Signal inference workers to stop experience collection... (29950 times)
+[2024-03-29 18:33:27,617][00476] Signal inference workers to resume experience collection... (29950 times)
+[2024-03-29 18:33:27,619][00497] InferenceWorker_p0-w0: stopping experience collection (29950 times)
+[2024-03-29 18:33:27,643][00497] InferenceWorker_p0-w0: resuming experience collection (29950 times)
+[2024-03-29 18:33:28,223][00497] Updated weights for policy 0, policy_version 58658 (0.0023)
+[2024-03-29 18:33:28,839][00126] Fps is (10 sec: 42597.6, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 961085440. Throughput: 0: 42027.5. Samples: 843315380. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:28,840][00126] Avg episode reward: [(0, '0.605')]
+[2024-03-29 18:33:32,299][00497] Updated weights for policy 0, policy_version 58668 (0.0021)
+[2024-03-29 18:33:33,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 961282048. Throughput: 0: 41476.8. Samples: 843436160. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:33,840][00126] Avg episode reward: [(0, '0.585')]
+[2024-03-29 18:33:35,772][00497] Updated weights for policy 0, policy_version 58678 (0.0029)
+[2024-03-29 18:33:38,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 961511424. Throughput: 0: 41956.9. Samples: 843688280. Policy #0 lag: (min: 1.0, avg: 22.2, max: 41.0)
+[2024-03-29 18:33:38,841][00126] Avg episode reward: [(0, '0.518')]
+[2024-03-29 18:33:39,238][00497] Updated weights for policy 0, policy_version 58688 (0.0020)
+[2024-03-29 18:33:43,751][00497] Updated weights for policy 0, policy_version 58698 (0.0025)
+[2024-03-29 18:33:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 961708032. Throughput: 0: 42173.7. Samples: 843950780. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:33:43,840][00126] Avg episode reward: [(0, '0.636')]
+[2024-03-29 18:33:47,546][00497] Updated weights for policy 0, policy_version 58708 (0.0022)
+[2024-03-29 18:33:48,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 961904640. Throughput: 0: 41569.4. Samples: 844068060. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:33:48,840][00126] Avg episode reward: [(0, '0.586')]
+[2024-03-29 18:33:51,688][00497] Updated weights for policy 0, policy_version 58718 (0.0029)
+[2024-03-29 18:33:53,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.3, 300 sec: 41654.3). Total num frames: 962134016. Throughput: 0: 42148.4. Samples: 844322840. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:33:53,840][00126] Avg episode reward: [(0, '0.535')]
+[2024-03-29 18:33:54,906][00497] Updated weights for policy 0, policy_version 58728 (0.0027)
+[2024-03-29 18:33:57,438][00476] Signal inference workers to stop experience collection... (30000 times)
+[2024-03-29 18:33:57,438][00476] Signal inference workers to resume experience collection... (30000 times)
+[2024-03-29 18:33:57,463][00497] InferenceWorker_p0-w0: stopping experience collection (30000 times)
+[2024-03-29 18:33:57,463][00497] InferenceWorker_p0-w0: resuming experience collection (30000 times)
+[2024-03-29 18:33:58,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 962330624. Throughput: 0: 42181.3. Samples: 844574940. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:33:58,840][00126] Avg episode reward: [(0, '0.512')]
+[2024-03-29 18:33:59,263][00497] Updated weights for policy 0, policy_version 58738 (0.0027)
+[2024-03-29 18:34:03,150][00497] Updated weights for policy 0, policy_version 58748 (0.0019)
+[2024-03-29 18:34:03,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.2, 300 sec: 41432.1). Total num frames: 962527232. Throughput: 0: 41891.5. Samples: 844690800. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:34:03,840][00126] Avg episode reward: [(0, '0.548')]
+[2024-03-29 18:34:07,279][00497] Updated weights for policy 0, policy_version 58758 (0.0021)
+[2024-03-29 18:34:08,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 962772992. Throughput: 0: 42288.5. Samples: 844955620. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:34:08,842][00126] Avg episode reward: [(0, '0.591')]
+[2024-03-29 18:34:10,555][00497] Updated weights for policy 0, policy_version 58768 (0.0022)
+[2024-03-29 18:34:13,839][00126] Fps is (10 sec: 44236.1, 60 sec: 41779.1, 300 sec: 41543.1). Total num frames: 962969600. Throughput: 0: 42059.9. Samples: 845208080. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:34:13,840][00126] Avg episode reward: [(0, '0.603')]
+[2024-03-29 18:34:14,922][00497] Updated weights for policy 0, policy_version 58778 (0.0025)
+[2024-03-29 18:34:18,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 963166208. Throughput: 0: 41728.5. Samples: 845313940. Policy #0 lag: (min: 2.0, avg: 20.1, max: 42.0)
+[2024-03-29 18:34:18,840][00126] Avg episode reward: [(0, '0.612')]
+[2024-03-29 18:34:18,893][00497] Updated weights for policy 0, policy_version 58788 (0.0019)
+[2024-03-29 18:34:22,958][00497] Updated weights for policy 0, policy_version 58798 (0.0019)
+[2024-03-29 18:34:23,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 963395584. Throughput: 0: 41979.5. Samples: 845577360. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:23,840][00126] Avg episode reward: [(0, '0.593')]
+[2024-03-29 18:34:26,092][00497] Updated weights for policy 0, policy_version 58808 (0.0031)
+[2024-03-29 18:34:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 963592192. Throughput: 0: 41797.4. Samples: 845831660. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:28,840][00126] Avg episode reward: [(0, '0.566')]
+[2024-03-29 18:34:29,999][00476] Signal inference workers to stop experience collection... (30050 times)
+[2024-03-29 18:34:30,033][00497] InferenceWorker_p0-w0: stopping experience collection (30050 times)
+[2024-03-29 18:34:30,213][00476] Signal inference workers to resume experience collection... (30050 times)
+[2024-03-29 18:34:30,213][00497] InferenceWorker_p0-w0: resuming experience collection (30050 times)
+[2024-03-29 18:34:30,493][00497] Updated weights for policy 0, policy_version 58818 (0.0026)
+[2024-03-29 18:34:33,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 963821568. Throughput: 0: 41931.0. Samples: 845954960. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:33,840][00126] Avg episode reward: [(0, '0.609')]
+[2024-03-29 18:34:34,383][00497] Updated weights for policy 0, policy_version 58828 (0.0028)
+[2024-03-29 18:34:38,642][00497] Updated weights for policy 0, policy_version 58838 (0.0022)
+[2024-03-29 18:34:38,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 964001792. Throughput: 0: 41852.8. Samples: 846206220. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:38,840][00126] Avg episode reward: [(0, '0.647')]
+[2024-03-29 18:34:41,892][00497] Updated weights for policy 0, policy_version 58848 (0.0019)
+[2024-03-29 18:34:43,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 964214784. Throughput: 0: 41762.7. Samples: 846454260. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:43,840][00126] Avg episode reward: [(0, '0.522')]
+[2024-03-29 18:34:46,138][00497] Updated weights for policy 0, policy_version 58858 (0.0022)
+[2024-03-29 18:34:48,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.3, 300 sec: 41765.3). Total num frames: 964460544. Throughput: 0: 42074.6. Samples: 846584160. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:48,840][00126] Avg episode reward: [(0, '0.515')]
+[2024-03-29 18:34:50,049][00497] Updated weights for policy 0, policy_version 58868 (0.0026)
+[2024-03-29 18:34:53,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 964640768. Throughput: 0: 42075.1. Samples: 846849000. Policy #0 lag: (min: 1.0, avg: 18.4, max: 41.0)
+[2024-03-29 18:34:53,840][00126] Avg episode reward: [(0, '0.611')]
+[2024-03-29 18:34:54,314][00497] Updated weights for policy 0, policy_version 58878 (0.0028)
+[2024-03-29 18:34:57,375][00497] Updated weights for policy 0, policy_version 58888 (0.0027)
+[2024-03-29 18:34:58,839][00126] Fps is (10 sec: 39322.2, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 964853760. Throughput: 0: 41570.4. Samples: 847078740. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:34:58,840][00126] Avg episode reward: [(0, '0.595')]
+[2024-03-29 18:35:01,952][00497] Updated weights for policy 0, policy_version 58898 (0.0027)
+[2024-03-29 18:35:03,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.3, 300 sec: 41709.8). Total num frames: 965066752. Throughput: 0: 42143.1. Samples: 847210380. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:03,840][00126] Avg episode reward: [(0, '0.560')]
+[2024-03-29 18:35:04,174][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058905_965099520.pth...
+[2024-03-29 18:35:04,501][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058292_955056128.pth
+[2024-03-29 18:35:04,562][00476] Signal inference workers to stop experience collection... (30100 times)
+[2024-03-29 18:35:04,598][00497] InferenceWorker_p0-w0: stopping experience collection (30100 times)
+[2024-03-29 18:35:04,774][00476] Signal inference workers to resume experience collection... (30100 times)
+[2024-03-29 18:35:04,774][00497] InferenceWorker_p0-w0: resuming experience collection (30100 times)
+[2024-03-29 18:35:05,935][00497] Updated weights for policy 0, policy_version 58908 (0.0022)
+[2024-03-29 18:35:08,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 965263360. Throughput: 0: 42196.5. Samples: 847476200. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:08,840][00126] Avg episode reward: [(0, '0.609')]
+[2024-03-29 18:35:10,206][00497] Updated weights for policy 0, policy_version 58918 (0.0027)
+[2024-03-29 18:35:13,305][00497] Updated weights for policy 0, policy_version 58928 (0.0031)
+[2024-03-29 18:35:13,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 965476352. Throughput: 0: 41219.9. Samples: 847686560. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:13,840][00126] Avg episode reward: [(0, '0.563')]
+[2024-03-29 18:35:17,735][00497] Updated weights for policy 0, policy_version 58938 (0.0022)
+[2024-03-29 18:35:18,839][00126] Fps is (10 sec: 42599.2, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 965689344. Throughput: 0: 41681.9. Samples: 847830640. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:18,840][00126] Avg episode reward: [(0, '0.562')]
+[2024-03-29 18:35:21,565][00497] Updated weights for policy 0, policy_version 58948 (0.0025)
+[2024-03-29 18:35:23,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.2, 300 sec: 41543.2). Total num frames: 965869568. Throughput: 0: 41870.7. Samples: 848090400. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:23,840][00126] Avg episode reward: [(0, '0.608')]
+[2024-03-29 18:35:26,058][00497] Updated weights for policy 0, policy_version 58958 (0.0023)
+[2024-03-29 18:35:28,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 966098944. Throughput: 0: 41178.6. Samples: 848307300. Policy #0 lag: (min: 0.0, avg: 22.3, max: 43.0)
+[2024-03-29 18:35:28,840][00126] Avg episode reward: [(0, '0.625')]
+[2024-03-29 18:35:29,199][00497] Updated weights for policy 0, policy_version 58968 (0.0026)
+[2024-03-29 18:35:33,674][00497] Updated weights for policy 0, policy_version 58978 (0.0018)
+[2024-03-29 18:35:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 966295552. Throughput: 0: 41249.5. Samples: 848440380.
Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:33,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:35:37,686][00497] Updated weights for policy 0, policy_version 58988 (0.0024) +[2024-03-29 18:35:38,736][00476] Signal inference workers to stop experience collection... (30150 times) +[2024-03-29 18:35:38,765][00497] InferenceWorker_p0-w0: stopping experience collection (30150 times) +[2024-03-29 18:35:38,839][00126] Fps is (10 sec: 36045.2, 60 sec: 40960.1, 300 sec: 41543.2). Total num frames: 966459392. Throughput: 0: 40784.5. Samples: 848684300. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:38,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:35:38,947][00476] Signal inference workers to resume experience collection... (30150 times) +[2024-03-29 18:35:38,948][00497] InferenceWorker_p0-w0: resuming experience collection (30150 times) +[2024-03-29 18:35:42,261][00497] Updated weights for policy 0, policy_version 58998 (0.0025) +[2024-03-29 18:35:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 966705152. Throughput: 0: 41040.9. Samples: 848925580. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:43,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:35:45,536][00497] Updated weights for policy 0, policy_version 59008 (0.0023) +[2024-03-29 18:35:48,839][00126] Fps is (10 sec: 42598.1, 60 sec: 40413.9, 300 sec: 41543.2). Total num frames: 966885376. Throughput: 0: 40836.9. Samples: 849048040. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:48,841][00126] Avg episode reward: [(0, '0.640')] +[2024-03-29 18:35:49,941][00497] Updated weights for policy 0, policy_version 59018 (0.0032) +[2024-03-29 18:35:53,839][00126] Fps is (10 sec: 39321.0, 60 sec: 40959.9, 300 sec: 41543.1). Total num frames: 967098368. Throughput: 0: 40289.3. Samples: 849289220. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:53,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 18:35:54,010][00497] Updated weights for policy 0, policy_version 59028 (0.0018) +[2024-03-29 18:35:58,103][00497] Updated weights for policy 0, policy_version 59038 (0.0023) +[2024-03-29 18:35:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40959.9, 300 sec: 41543.2). Total num frames: 967311360. Throughput: 0: 41300.9. Samples: 849545100. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:35:58,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 18:36:01,485][00497] Updated weights for policy 0, policy_version 59048 (0.0019) +[2024-03-29 18:36:03,839][00126] Fps is (10 sec: 40960.7, 60 sec: 40687.0, 300 sec: 41487.6). Total num frames: 967507968. Throughput: 0: 40624.4. Samples: 849658740. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:36:03,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 18:36:05,906][00497] Updated weights for policy 0, policy_version 59058 (0.0025) +[2024-03-29 18:36:08,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 967737344. Throughput: 0: 40594.2. Samples: 849917140. Policy #0 lag: (min: 1.0, avg: 19.7, max: 40.0) +[2024-03-29 18:36:08,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:36:09,827][00497] Updated weights for policy 0, policy_version 59068 (0.0026) +[2024-03-29 18:36:10,577][00476] Signal inference workers to stop experience collection... 
(30200 times) +[2024-03-29 18:36:10,612][00497] InferenceWorker_p0-w0: stopping experience collection (30200 times) +[2024-03-29 18:36:10,800][00476] Signal inference workers to resume experience collection... (30200 times) +[2024-03-29 18:36:10,800][00497] InferenceWorker_p0-w0: resuming experience collection (30200 times) +[2024-03-29 18:36:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 40687.0, 300 sec: 41487.6). Total num frames: 967917568. Throughput: 0: 41190.3. Samples: 850160860. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:13,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 18:36:13,964][00497] Updated weights for policy 0, policy_version 59078 (0.0023) +[2024-03-29 18:36:17,202][00497] Updated weights for policy 0, policy_version 59088 (0.0024) +[2024-03-29 18:36:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40959.9, 300 sec: 41487.6). Total num frames: 968146944. Throughput: 0: 40883.9. Samples: 850280160. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:18,840][00126] Avg episode reward: [(0, '0.650')] +[2024-03-29 18:36:21,551][00497] Updated weights for policy 0, policy_version 59098 (0.0020) +[2024-03-29 18:36:23,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41506.0, 300 sec: 41598.7). Total num frames: 968359936. Throughput: 0: 41284.3. Samples: 850542100. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:23,841][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 18:36:25,493][00497] Updated weights for policy 0, policy_version 59108 (0.0017) +[2024-03-29 18:36:28,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 968572928. Throughput: 0: 41407.5. Samples: 850788920. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:28,840][00126] Avg episode reward: [(0, '0.541')] +[2024-03-29 18:36:29,752][00497] Updated weights for policy 0, policy_version 59118 (0.0034) +[2024-03-29 18:36:33,298][00497] Updated weights for policy 0, policy_version 59128 (0.0024) +[2024-03-29 18:36:33,839][00126] Fps is (10 sec: 39322.4, 60 sec: 40960.0, 300 sec: 41487.7). Total num frames: 968753152. Throughput: 0: 41242.8. Samples: 850903960. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:33,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:36:37,367][00497] Updated weights for policy 0, policy_version 59138 (0.0021) +[2024-03-29 18:36:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.3, 300 sec: 41654.3). Total num frames: 968982528. Throughput: 0: 41747.7. Samples: 851167860. Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:38,840][00126] Avg episode reward: [(0, '0.627')] +[2024-03-29 18:36:41,058][00497] Updated weights for policy 0, policy_version 59148 (0.0021) +[2024-03-29 18:36:42,757][00476] Signal inference workers to stop experience collection... (30250 times) +[2024-03-29 18:36:42,836][00497] InferenceWorker_p0-w0: stopping experience collection (30250 times) +[2024-03-29 18:36:42,924][00476] Signal inference workers to resume experience collection... (30250 times) +[2024-03-29 18:36:42,925][00497] InferenceWorker_p0-w0: resuming experience collection (30250 times) +[2024-03-29 18:36:43,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41233.0, 300 sec: 41543.2). Total num frames: 969179136. Throughput: 0: 41626.6. Samples: 851418300. 
Policy #0 lag: (min: 0.0, avg: 17.6, max: 41.0) +[2024-03-29 18:36:43,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:36:45,363][00497] Updated weights for policy 0, policy_version 59158 (0.0021) +[2024-03-29 18:36:48,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 41654.3). Total num frames: 969392128. Throughput: 0: 41831.5. Samples: 851541160. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:36:48,840][00126] Avg episode reward: [(0, '0.637')] +[2024-03-29 18:36:49,055][00497] Updated weights for policy 0, policy_version 59168 (0.0025) +[2024-03-29 18:36:53,051][00497] Updated weights for policy 0, policy_version 59178 (0.0017) +[2024-03-29 18:36:53,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 969605120. Throughput: 0: 41678.6. Samples: 851792680. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:36:53,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 18:36:56,797][00497] Updated weights for policy 0, policy_version 59188 (0.0019) +[2024-03-29 18:36:58,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 969801728. Throughput: 0: 42131.0. Samples: 852056760. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:36:58,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 18:37:01,086][00497] Updated weights for policy 0, policy_version 59198 (0.0038) +[2024-03-29 18:37:03,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 970031104. Throughput: 0: 42126.6. Samples: 852175860. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:37:03,840][00126] Avg episode reward: [(0, '0.555')] +[2024-03-29 18:37:04,134][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059207_970047488.pth... +[2024-03-29 18:37:04,470][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058597_960053248.pth +[2024-03-29 18:37:04,816][00497] Updated weights for policy 0, policy_version 59208 (0.0022) +[2024-03-29 18:37:08,829][00497] Updated weights for policy 0, policy_version 59218 (0.0023) +[2024-03-29 18:37:08,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 970227712. Throughput: 0: 41505.9. Samples: 852409860. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:37:08,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:37:12,806][00497] Updated weights for policy 0, policy_version 59228 (0.0023) +[2024-03-29 18:37:13,839][00126] Fps is (10 sec: 37683.7, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 970407936. Throughput: 0: 41806.7. Samples: 852670220. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:37:13,840][00126] Avg episode reward: [(0, '0.655')] +[2024-03-29 18:37:16,898][00497] Updated weights for policy 0, policy_version 59238 (0.0025) +[2024-03-29 18:37:18,814][00476] Signal inference workers to stop experience collection... (30300 times) +[2024-03-29 18:37:18,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 970653696. Throughput: 0: 41982.0. Samples: 852793160. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:37:18,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:37:18,849][00497] InferenceWorker_p0-w0: stopping experience collection (30300 times) +[2024-03-29 18:37:19,044][00476] Signal inference workers to resume experience collection... 
(30300 times) +[2024-03-29 18:37:19,045][00497] InferenceWorker_p0-w0: resuming experience collection (30300 times) +[2024-03-29 18:37:20,475][00497] Updated weights for policy 0, policy_version 59248 (0.0022) +[2024-03-29 18:37:23,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 970850304. Throughput: 0: 41476.8. Samples: 853034320. Policy #0 lag: (min: 1.0, avg: 22.1, max: 42.0) +[2024-03-29 18:37:23,840][00126] Avg episode reward: [(0, '0.642')] +[2024-03-29 18:37:24,652][00497] Updated weights for policy 0, policy_version 59258 (0.0026) +[2024-03-29 18:37:28,511][00497] Updated weights for policy 0, policy_version 59268 (0.0025) +[2024-03-29 18:37:28,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 971046912. Throughput: 0: 41625.8. Samples: 853291460. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:28,841][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 18:37:32,609][00497] Updated weights for policy 0, policy_version 59278 (0.0022) +[2024-03-29 18:37:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 971276288. Throughput: 0: 41671.1. Samples: 853416360. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:33,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 18:37:36,277][00497] Updated weights for policy 0, policy_version 59288 (0.0022) +[2024-03-29 18:37:38,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.0, 300 sec: 41543.1). Total num frames: 971472896. Throughput: 0: 41353.8. Samples: 853653600. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:38,840][00126] Avg episode reward: [(0, '0.673')] +[2024-03-29 18:37:40,236][00497] Updated weights for policy 0, policy_version 59298 (0.0017) +[2024-03-29 18:37:43,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 971685888. Throughput: 0: 41511.2. Samples: 853924760. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:43,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 18:37:43,925][00497] Updated weights for policy 0, policy_version 59308 (0.0018) +[2024-03-29 18:37:48,020][00497] Updated weights for policy 0, policy_version 59318 (0.0029) +[2024-03-29 18:37:48,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 971898880. Throughput: 0: 41552.1. Samples: 854045700. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:48,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:37:51,405][00497] Updated weights for policy 0, policy_version 59328 (0.0030) +[2024-03-29 18:37:53,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 972111872. Throughput: 0: 42065.7. Samples: 854302820. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:53,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:37:55,091][00476] Signal inference workers to stop experience collection... (30350 times) +[2024-03-29 18:37:55,166][00497] InferenceWorker_p0-w0: stopping experience collection (30350 times) +[2024-03-29 18:37:55,171][00476] Signal inference workers to resume experience collection... 
(30350 times) +[2024-03-29 18:37:55,192][00497] InferenceWorker_p0-w0: resuming experience collection (30350 times) +[2024-03-29 18:37:55,474][00497] Updated weights for policy 0, policy_version 59338 (0.0026) +[2024-03-29 18:37:58,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 972341248. Throughput: 0: 42096.0. Samples: 854564540. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:37:58,840][00126] Avg episode reward: [(0, '0.643')] +[2024-03-29 18:37:59,372][00497] Updated weights for policy 0, policy_version 59348 (0.0023) +[2024-03-29 18:38:03,363][00497] Updated weights for policy 0, policy_version 59358 (0.0024) +[2024-03-29 18:38:03,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 972537856. Throughput: 0: 42164.6. Samples: 854690560. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:03,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 18:38:06,690][00497] Updated weights for policy 0, policy_version 59368 (0.0022) +[2024-03-29 18:38:08,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 972750848. Throughput: 0: 42404.4. Samples: 854942520. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:08,840][00126] Avg episode reward: [(0, '0.663')] +[2024-03-29 18:38:10,968][00497] Updated weights for policy 0, policy_version 59378 (0.0022) +[2024-03-29 18:38:13,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42598.4, 300 sec: 41709.8). Total num frames: 972963840. Throughput: 0: 42471.2. Samples: 855202660. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:13,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 18:38:14,868][00497] Updated weights for policy 0, policy_version 59388 (0.0023) +[2024-03-29 18:38:18,838][00497] Updated weights for policy 0, policy_version 59398 (0.0025) +[2024-03-29 18:38:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 973176832. Throughput: 0: 42719.5. Samples: 855338740. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:18,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 18:38:22,458][00497] Updated weights for policy 0, policy_version 59408 (0.0037) +[2024-03-29 18:38:23,840][00126] Fps is (10 sec: 40958.2, 60 sec: 42052.0, 300 sec: 41654.2). Total num frames: 973373440. Throughput: 0: 42547.3. Samples: 855568240. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:23,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 18:38:26,506][00497] Updated weights for policy 0, policy_version 59418 (0.0022) +[2024-03-29 18:38:28,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 973586432. Throughput: 0: 42389.8. Samples: 855832300. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:28,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:38:30,460][00497] Updated weights for policy 0, policy_version 59428 (0.0024) +[2024-03-29 18:38:31,510][00476] Signal inference workers to stop experience collection... (30400 times) +[2024-03-29 18:38:31,578][00497] InferenceWorker_p0-w0: stopping experience collection (30400 times) +[2024-03-29 18:38:31,673][00476] Signal inference workers to resume experience collection... 
(30400 times) +[2024-03-29 18:38:31,674][00497] InferenceWorker_p0-w0: resuming experience collection (30400 times) +[2024-03-29 18:38:33,839][00126] Fps is (10 sec: 42600.2, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 973799424. Throughput: 0: 42676.0. Samples: 855966120. Policy #0 lag: (min: 2.0, avg: 20.1, max: 43.0) +[2024-03-29 18:38:33,840][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:38:34,456][00497] Updated weights for policy 0, policy_version 59438 (0.0028) +[2024-03-29 18:38:38,156][00497] Updated weights for policy 0, policy_version 59448 (0.0026) +[2024-03-29 18:38:38,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 974012416. Throughput: 0: 42054.2. Samples: 856195260. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:38:38,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 18:38:42,117][00497] Updated weights for policy 0, policy_version 59458 (0.0026) +[2024-03-29 18:38:43,839][00126] Fps is (10 sec: 42597.7, 60 sec: 42325.2, 300 sec: 41765.3). Total num frames: 974225408. Throughput: 0: 42322.1. Samples: 856469040. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:38:43,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 18:38:46,147][00497] Updated weights for policy 0, policy_version 59468 (0.0026) +[2024-03-29 18:38:48,839][00126] Fps is (10 sec: 40960.6, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 974422016. Throughput: 0: 42060.0. Samples: 856583260. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:38:48,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:38:50,100][00497] Updated weights for policy 0, policy_version 59478 (0.0021) +[2024-03-29 18:38:53,592][00497] Updated weights for policy 0, policy_version 59488 (0.0021) +[2024-03-29 18:38:53,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42325.2, 300 sec: 41765.3). Total num frames: 974651392. Throughput: 0: 41769.7. Samples: 856822160. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:38:53,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 18:38:57,739][00497] Updated weights for policy 0, policy_version 59498 (0.0024) +[2024-03-29 18:38:58,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 974848000. Throughput: 0: 42098.6. Samples: 857097100. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:38:58,840][00126] Avg episode reward: [(0, '0.658')] +[2024-03-29 18:39:01,999][00497] Updated weights for policy 0, policy_version 59508 (0.0022) +[2024-03-29 18:39:03,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 975060992. Throughput: 0: 41590.2. Samples: 857210300. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:39:03,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 18:39:04,126][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059514_975077376.pth... +[2024-03-29 18:39:04,454][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000058905_965099520.pth +[2024-03-29 18:39:04,508][00476] Signal inference workers to stop experience collection... (30450 times) +[2024-03-29 18:39:04,531][00497] InferenceWorker_p0-w0: stopping experience collection (30450 times) +[2024-03-29 18:39:04,721][00476] Signal inference workers to resume experience collection... 
(30450 times) +[2024-03-29 18:39:04,721][00497] InferenceWorker_p0-w0: resuming experience collection (30450 times) +[2024-03-29 18:39:05,681][00497] Updated weights for policy 0, policy_version 59518 (0.0019) +[2024-03-29 18:39:08,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 975290368. Throughput: 0: 41890.1. Samples: 857453280. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:39:08,841][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 18:39:08,971][00497] Updated weights for policy 0, policy_version 59528 (0.0031) +[2024-03-29 18:39:13,490][00497] Updated weights for policy 0, policy_version 59538 (0.0029) +[2024-03-29 18:39:13,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 975470592. Throughput: 0: 41899.8. Samples: 857717800. Policy #0 lag: (min: 0.0, avg: 21.3, max: 40.0) +[2024-03-29 18:39:13,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 18:39:17,843][00497] Updated weights for policy 0, policy_version 59548 (0.0026) +[2024-03-29 18:39:18,839][00126] Fps is (10 sec: 36045.2, 60 sec: 41233.2, 300 sec: 41543.2). Total num frames: 975650816. Throughput: 0: 41594.2. Samples: 857837860. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:18,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 18:39:21,477][00497] Updated weights for policy 0, policy_version 59558 (0.0023) +[2024-03-29 18:39:23,839][00126] Fps is (10 sec: 44237.8, 60 sec: 42325.6, 300 sec: 41765.3). Total num frames: 975912960. Throughput: 0: 41826.8. Samples: 858077460. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:23,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 18:39:24,935][00497] Updated weights for policy 0, policy_version 59568 (0.0019) +[2024-03-29 18:39:28,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 976109568. Throughput: 0: 41600.5. Samples: 858341060. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:28,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:39:29,248][00497] Updated weights for policy 0, policy_version 59578 (0.0018) +[2024-03-29 18:39:33,400][00497] Updated weights for policy 0, policy_version 59588 (0.0024) +[2024-03-29 18:39:33,839][00126] Fps is (10 sec: 37682.5, 60 sec: 41506.0, 300 sec: 41654.2). Total num frames: 976289792. Throughput: 0: 41696.2. Samples: 858459600. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:33,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:39:36,504][00476] Signal inference workers to stop experience collection... (30500 times) +[2024-03-29 18:39:36,574][00497] InferenceWorker_p0-w0: stopping experience collection (30500 times) +[2024-03-29 18:39:36,606][00476] Signal inference workers to resume experience collection... (30500 times) +[2024-03-29 18:39:36,608][00497] InferenceWorker_p0-w0: resuming experience collection (30500 times) +[2024-03-29 18:39:37,247][00497] Updated weights for policy 0, policy_version 59598 (0.0030) +[2024-03-29 18:39:38,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 976535552. Throughput: 0: 41843.7. Samples: 858705120. 
Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:38,841][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 18:39:40,712][00497] Updated weights for policy 0, policy_version 59608 (0.0022) +[2024-03-29 18:39:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 976715776. Throughput: 0: 41371.5. Samples: 858958820. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:43,840][00126] Avg episode reward: [(0, '0.638')] +[2024-03-29 18:39:45,040][00497] Updated weights for policy 0, policy_version 59618 (0.0018) +[2024-03-29 18:39:48,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 976928768. Throughput: 0: 41616.0. Samples: 859083020. Policy #0 lag: (min: 1.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:48,840][00126] Avg episode reward: [(0, '0.542')] +[2024-03-29 18:39:49,044][00497] Updated weights for policy 0, policy_version 59628 (0.0035) +[2024-03-29 18:39:52,879][00497] Updated weights for policy 0, policy_version 59638 (0.0024) +[2024-03-29 18:39:53,839][00126] Fps is (10 sec: 44237.5, 60 sec: 41779.4, 300 sec: 41709.8). Total num frames: 977158144. Throughput: 0: 41790.3. Samples: 859333840. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:53,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 18:39:56,291][00497] Updated weights for policy 0, policy_version 59648 (0.0022) +[2024-03-29 18:39:58,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 977354752. Throughput: 0: 41429.0. Samples: 859582100. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:39:58,841][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 18:40:00,644][00497] Updated weights for policy 0, policy_version 59658 (0.0020) +[2024-03-29 18:40:03,839][00126] Fps is (10 sec: 40959.2, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 977567744. Throughput: 0: 41685.2. Samples: 859713700. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:03,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 18:40:04,745][00497] Updated weights for policy 0, policy_version 59668 (0.0021) +[2024-03-29 18:40:07,500][00476] Signal inference workers to stop experience collection... (30550 times) +[2024-03-29 18:40:07,500][00476] Signal inference workers to resume experience collection... (30550 times) +[2024-03-29 18:40:07,547][00497] InferenceWorker_p0-w0: stopping experience collection (30550 times) +[2024-03-29 18:40:07,548][00497] InferenceWorker_p0-w0: resuming experience collection (30550 times) +[2024-03-29 18:40:08,494][00497] Updated weights for policy 0, policy_version 59678 (0.0028) +[2024-03-29 18:40:08,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 977780736. Throughput: 0: 41914.7. Samples: 859963620. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:08,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 18:40:12,001][00497] Updated weights for policy 0, policy_version 59688 (0.0023) +[2024-03-29 18:40:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 977977344. Throughput: 0: 41628.4. Samples: 860214340. 
Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:13,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:40:16,451][00497] Updated weights for policy 0, policy_version 59698 (0.0020) +[2024-03-29 18:40:18,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 978190336. Throughput: 0: 41541.0. Samples: 860328940. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:18,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 18:40:20,584][00497] Updated weights for policy 0, policy_version 59708 (0.0024) +[2024-03-29 18:40:23,839][00126] Fps is (10 sec: 39322.2, 60 sec: 40960.0, 300 sec: 41598.7). Total num frames: 978370560. Throughput: 0: 42105.3. Samples: 860599860. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:23,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:40:24,475][00497] Updated weights for policy 0, policy_version 59718 (0.0033) +[2024-03-29 18:40:27,934][00497] Updated weights for policy 0, policy_version 59728 (0.0021) +[2024-03-29 18:40:28,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 978583552. Throughput: 0: 41365.8. Samples: 860820280. Policy #0 lag: (min: 0.0, avg: 20.5, max: 41.0) +[2024-03-29 18:40:28,840][00126] Avg episode reward: [(0, '0.521')] +[2024-03-29 18:40:32,334][00497] Updated weights for policy 0, policy_version 59738 (0.0031) +[2024-03-29 18:40:33,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.2, 300 sec: 41820.8). Total num frames: 978796544. Throughput: 0: 41602.6. Samples: 860955140. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:33,840][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 18:40:36,216][00497] Updated weights for policy 0, policy_version 59748 (0.0019) +[2024-03-29 18:40:38,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40959.9, 300 sec: 41654.2). Total num frames: 978993152. Throughput: 0: 41890.1. Samples: 861218900. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:38,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 18:40:40,237][00497] Updated weights for policy 0, policy_version 59758 (0.0024) +[2024-03-29 18:40:40,459][00476] Signal inference workers to stop experience collection... (30600 times) +[2024-03-29 18:40:40,533][00476] Signal inference workers to resume experience collection... (30600 times) +[2024-03-29 18:40:40,535][00497] InferenceWorker_p0-w0: stopping experience collection (30600 times) +[2024-03-29 18:40:40,573][00497] InferenceWorker_p0-w0: resuming experience collection (30600 times) +[2024-03-29 18:40:43,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 979222528. Throughput: 0: 41153.0. Samples: 861433980. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:43,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 18:40:43,889][00497] Updated weights for policy 0, policy_version 59768 (0.0026) +[2024-03-29 18:40:48,255][00497] Updated weights for policy 0, policy_version 59778 (0.0027) +[2024-03-29 18:40:48,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 979419136. Throughput: 0: 41153.3. Samples: 861565600. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:48,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:40:52,141][00497] Updated weights for policy 0, policy_version 59788 (0.0031) +[2024-03-29 18:40:53,839][00126] Fps is (10 sec: 37682.8, 60 sec: 40686.8, 300 sec: 41654.2). Total num frames: 979599360. Throughput: 0: 41442.0. Samples: 861828520. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:53,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 18:40:56,057][00497] Updated weights for policy 0, policy_version 59798 (0.0025) +[2024-03-29 18:40:58,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41506.2, 300 sec: 41820.8). Total num frames: 979845120. Throughput: 0: 40840.5. Samples: 862052160. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:40:58,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 18:40:59,543][00497] Updated weights for policy 0, policy_version 59808 (0.0032) +[2024-03-29 18:41:03,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 980041728. Throughput: 0: 41295.1. Samples: 862187220. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:41:03,840][00126] Avg episode reward: [(0, '0.644')] +[2024-03-29 18:41:04,082][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059818_980058112.pth... +[2024-03-29 18:41:04,084][00497] Updated weights for policy 0, policy_version 59818 (0.0022) +[2024-03-29 18:41:04,397][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059207_970047488.pth +[2024-03-29 18:41:08,116][00497] Updated weights for policy 0, policy_version 59828 (0.0019) +[2024-03-29 18:41:08,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40686.9, 300 sec: 41709.8). Total num frames: 980221952. Throughput: 0: 40992.0. Samples: 862444500. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:08,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:41:12,095][00497] Updated weights for policy 0, policy_version 59838 (0.0021) +[2024-03-29 18:41:12,735][00476] Signal inference workers to stop experience collection... (30650 times) +[2024-03-29 18:41:12,774][00497] InferenceWorker_p0-w0: stopping experience collection (30650 times) +[2024-03-29 18:41:12,967][00476] Signal inference workers to resume experience collection... (30650 times) +[2024-03-29 18:41:12,968][00497] InferenceWorker_p0-w0: resuming experience collection (30650 times) +[2024-03-29 18:41:13,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 980484096. Throughput: 0: 41300.1. Samples: 862678780. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:13,840][00126] Avg episode reward: [(0, '0.676')] +[2024-03-29 18:41:15,676][00497] Updated weights for policy 0, policy_version 59848 (0.0022) +[2024-03-29 18:41:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40960.0, 300 sec: 41654.3). Total num frames: 980647936. Throughput: 0: 41083.2. Samples: 862803880. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:18,840][00126] Avg episode reward: [(0, '0.644')] +[2024-03-29 18:41:20,088][00497] Updated weights for policy 0, policy_version 59858 (0.0026) +[2024-03-29 18:41:23,839][00126] Fps is (10 sec: 37683.1, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 980860928. Throughput: 0: 40720.6. Samples: 863051320. 
Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:23,841][00126] Avg episode reward: [(0, '0.612')] +[2024-03-29 18:41:24,003][00497] Updated weights for policy 0, policy_version 59868 (0.0022) +[2024-03-29 18:41:28,044][00497] Updated weights for policy 0, policy_version 59878 (0.0025) +[2024-03-29 18:41:28,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 981073920. Throughput: 0: 41426.7. Samples: 863298180. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:28,840][00126] Avg episode reward: [(0, '0.566')] +[2024-03-29 18:41:31,783][00497] Updated weights for policy 0, policy_version 59888 (0.0025) +[2024-03-29 18:41:33,839][00126] Fps is (10 sec: 42597.5, 60 sec: 41506.1, 300 sec: 41709.7). Total num frames: 981286912. Throughput: 0: 41044.0. Samples: 863412580. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:33,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 18:41:36,072][00497] Updated weights for policy 0, policy_version 59898 (0.0021) +[2024-03-29 18:41:38,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 981467136. Throughput: 0: 40883.6. Samples: 863668280. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0) +[2024-03-29 18:41:38,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 18:41:39,932][00497] Updated weights for policy 0, policy_version 59908 (0.0020) +[2024-03-29 18:41:43,839][00126] Fps is (10 sec: 39321.7, 60 sec: 40959.9, 300 sec: 41654.2). Total num frames: 981680128. Throughput: 0: 41315.9. Samples: 863911380. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:41:43,841][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 18:41:44,097][00497] Updated weights for policy 0, policy_version 59918 (0.0031) +[2024-03-29 18:41:45,883][00476] Signal inference workers to stop experience collection... (30700 times) +[2024-03-29 18:41:45,922][00497] InferenceWorker_p0-w0: stopping experience collection (30700 times) +[2024-03-29 18:41:46,111][00476] Signal inference workers to resume experience collection... (30700 times) +[2024-03-29 18:41:46,111][00497] InferenceWorker_p0-w0: resuming experience collection (30700 times) +[2024-03-29 18:41:48,330][00497] Updated weights for policy 0, policy_version 59928 (0.0027) +[2024-03-29 18:41:48,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40960.1, 300 sec: 41598.7). Total num frames: 981876736. Throughput: 0: 40659.6. Samples: 864016900. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:41:48,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 18:41:52,143][00497] Updated weights for policy 0, policy_version 59938 (0.0028) +[2024-03-29 18:41:53,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 982089728. Throughput: 0: 40840.9. Samples: 864282340. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:41:53,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 18:41:55,959][00497] Updated weights for policy 0, policy_version 59948 (0.0025) +[2024-03-29 18:41:58,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40686.9, 300 sec: 41543.2). Total num frames: 982286336. Throughput: 0: 41471.9. Samples: 864545020. 
Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:41:58,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 18:41:59,872][00497] Updated weights for policy 0, policy_version 59958 (0.0029) +[2024-03-29 18:42:03,801][00497] Updated weights for policy 0, policy_version 59968 (0.0032) +[2024-03-29 18:42:03,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 982515712. Throughput: 0: 40980.9. Samples: 864648020. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:42:03,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 18:42:07,820][00497] Updated weights for policy 0, policy_version 59978 (0.0030) +[2024-03-29 18:42:08,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 982728704. Throughput: 0: 41394.1. Samples: 864914060. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:42:08,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 18:42:11,904][00497] Updated weights for policy 0, policy_version 59988 (0.0023) +[2024-03-29 18:42:13,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40140.8, 300 sec: 41487.6). Total num frames: 982892544. Throughput: 0: 41815.5. Samples: 865179880. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:42:13,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:42:15,839][00497] Updated weights for policy 0, policy_version 59998 (0.0022) +[2024-03-29 18:42:18,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 983121920. Throughput: 0: 41378.4. Samples: 865274600. Policy #0 lag: (min: 1.0, avg: 20.9, max: 44.0) +[2024-03-29 18:42:18,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 18:42:18,858][00476] Signal inference workers to stop experience collection... (30750 times) +[2024-03-29 18:42:18,911][00497] InferenceWorker_p0-w0: stopping experience collection (30750 times) +[2024-03-29 18:42:18,947][00476] Signal inference workers to resume experience collection... (30750 times) +[2024-03-29 18:42:18,949][00497] InferenceWorker_p0-w0: resuming experience collection (30750 times) +[2024-03-29 18:42:19,795][00497] Updated weights for policy 0, policy_version 60008 (0.0035) +[2024-03-29 18:42:23,826][00497] Updated weights for policy 0, policy_version 60018 (0.0028) +[2024-03-29 18:42:23,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 983334912. Throughput: 0: 41300.1. Samples: 865526780. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:23,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 18:42:27,711][00497] Updated weights for policy 0, policy_version 60028 (0.0022) +[2024-03-29 18:42:28,839][00126] Fps is (10 sec: 39320.8, 60 sec: 40686.8, 300 sec: 41487.6). Total num frames: 983515136. Throughput: 0: 41585.8. Samples: 865782740. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:28,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:42:31,712][00497] Updated weights for policy 0, policy_version 60038 (0.0025) +[2024-03-29 18:42:33,839][00126] Fps is (10 sec: 44235.6, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 983777280. Throughput: 0: 41633.2. Samples: 865890400. 
Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:33,841][00126] Avg episode reward: [(0, '0.643')] +[2024-03-29 18:42:35,599][00497] Updated weights for policy 0, policy_version 60048 (0.0017) +[2024-03-29 18:42:38,839][00126] Fps is (10 sec: 44237.8, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 983957504. Throughput: 0: 41290.7. Samples: 866140420. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:38,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 18:42:39,740][00497] Updated weights for policy 0, policy_version 60058 (0.0022) +[2024-03-29 18:42:43,560][00497] Updated weights for policy 0, policy_version 60068 (0.0026) +[2024-03-29 18:42:43,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41233.1, 300 sec: 41543.1). Total num frames: 984154112. Throughput: 0: 41156.9. Samples: 866397080. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:43,840][00126] Avg episode reward: [(0, '0.607')] +[2024-03-29 18:42:47,493][00497] Updated weights for policy 0, policy_version 60078 (0.0020) +[2024-03-29 18:42:48,135][00476] Signal inference workers to stop experience collection... (30800 times) +[2024-03-29 18:42:48,175][00497] InferenceWorker_p0-w0: stopping experience collection (30800 times) +[2024-03-29 18:42:48,367][00476] Signal inference workers to resume experience collection... (30800 times) +[2024-03-29 18:42:48,367][00497] InferenceWorker_p0-w0: resuming experience collection (30800 times) +[2024-03-29 18:42:48,839][00126] Fps is (10 sec: 42597.5, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 984383488. Throughput: 0: 41771.4. Samples: 866527740. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:48,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 18:42:51,332][00497] Updated weights for policy 0, policy_version 60088 (0.0023) +[2024-03-29 18:42:53,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 984563712. Throughput: 0: 41082.8. Samples: 866762780. Policy #0 lag: (min: 0.0, avg: 21.7, max: 42.0) +[2024-03-29 18:42:53,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 18:42:55,517][00497] Updated weights for policy 0, policy_version 60098 (0.0019) +[2024-03-29 18:42:58,839][00126] Fps is (10 sec: 39322.5, 60 sec: 41506.2, 300 sec: 41487.6). Total num frames: 984776704. Throughput: 0: 40903.6. Samples: 867020540. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:42:58,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 18:42:59,389][00497] Updated weights for policy 0, policy_version 60108 (0.0018) +[2024-03-29 18:43:03,233][00497] Updated weights for policy 0, policy_version 60118 (0.0028) +[2024-03-29 18:43:03,839][00126] Fps is (10 sec: 40960.1, 60 sec: 40960.1, 300 sec: 41432.1). Total num frames: 984973312. Throughput: 0: 41832.0. Samples: 867157040. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:03,840][00126] Avg episode reward: [(0, '0.524')] +[2024-03-29 18:43:04,157][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060120_985006080.pth... +[2024-03-29 18:43:04,496][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059514_975077376.pth +[2024-03-29 18:43:07,359][00497] Updated weights for policy 0, policy_version 60128 (0.0027) +[2024-03-29 18:43:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 985202688. Throughput: 0: 41000.8. Samples: 867371820. 
Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:08,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 18:43:11,611][00497] Updated weights for policy 0, policy_version 60138 (0.0020) +[2024-03-29 18:43:13,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41376.6). Total num frames: 985382912. Throughput: 0: 40778.4. Samples: 867617760. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:13,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 18:43:15,872][00497] Updated weights for policy 0, policy_version 60148 (0.0019) +[2024-03-29 18:43:18,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41487.7). Total num frames: 985612288. Throughput: 0: 41490.0. Samples: 867757440. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:18,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 18:43:19,421][00497] Updated weights for policy 0, policy_version 60158 (0.0033) +[2024-03-29 18:43:23,396][00497] Updated weights for policy 0, policy_version 60168 (0.0023) +[2024-03-29 18:43:23,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41232.9, 300 sec: 41432.1). Total num frames: 985808896. Throughput: 0: 40878.5. Samples: 867979960. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:23,841][00126] Avg episode reward: [(0, '0.648')] +[2024-03-29 18:43:24,201][00476] Signal inference workers to stop experience collection... (30850 times) +[2024-03-29 18:43:24,235][00497] InferenceWorker_p0-w0: stopping experience collection (30850 times) +[2024-03-29 18:43:24,382][00476] Signal inference workers to resume experience collection... (30850 times) +[2024-03-29 18:43:24,383][00497] InferenceWorker_p0-w0: resuming experience collection (30850 times) +[2024-03-29 18:43:27,328][00497] Updated weights for policy 0, policy_version 60178 (0.0025) +[2024-03-29 18:43:28,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.3, 300 sec: 41432.1). Total num frames: 986021888. Throughput: 0: 41156.4. Samples: 868249120. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:28,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 18:43:31,596][00497] Updated weights for policy 0, policy_version 60188 (0.0031) +[2024-03-29 18:43:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40687.0, 300 sec: 41376.5). Total num frames: 986218496. Throughput: 0: 41161.4. Samples: 868380000. Policy #0 lag: (min: 0.0, avg: 20.4, max: 41.0) +[2024-03-29 18:43:33,842][00126] Avg episode reward: [(0, '0.639')] +[2024-03-29 18:43:35,128][00497] Updated weights for policy 0, policy_version 60198 (0.0035) +[2024-03-29 18:43:38,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41233.0, 300 sec: 41376.6). Total num frames: 986431488. Throughput: 0: 40896.4. Samples: 868603120. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:43:38,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:43:39,193][00497] Updated weights for policy 0, policy_version 60208 (0.0019) +[2024-03-29 18:43:43,236][00497] Updated weights for policy 0, policy_version 60218 (0.0025) +[2024-03-29 18:43:43,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41233.0, 300 sec: 41376.5). Total num frames: 986628096. Throughput: 0: 40899.8. Samples: 868861040. 
Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:43:43,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 18:43:47,522][00497] Updated weights for policy 0, policy_version 60228 (0.0024) +[2024-03-29 18:43:48,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40414.0, 300 sec: 41210.0). Total num frames: 986808320. Throughput: 0: 40915.5. Samples: 868998240. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:43:48,840][00126] Avg episode reward: [(0, '0.642')] +[2024-03-29 18:43:50,936][00497] Updated weights for policy 0, policy_version 60238 (0.0027) +[2024-03-29 18:43:53,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41506.1, 300 sec: 41376.6). Total num frames: 987054080. Throughput: 0: 41251.6. Samples: 869228140. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:43:53,840][00126] Avg episode reward: [(0, '0.527')] +[2024-03-29 18:43:54,834][00497] Updated weights for policy 0, policy_version 60248 (0.0023) +[2024-03-29 18:43:54,854][00476] Signal inference workers to stop experience collection... (30900 times) +[2024-03-29 18:43:54,854][00476] Signal inference workers to resume experience collection... (30900 times) +[2024-03-29 18:43:54,878][00497] InferenceWorker_p0-w0: stopping experience collection (30900 times) +[2024-03-29 18:43:54,878][00497] InferenceWorker_p0-w0: resuming experience collection (30900 times) +[2024-03-29 18:43:58,839][00126] Fps is (10 sec: 44236.0, 60 sec: 41232.9, 300 sec: 41321.0). Total num frames: 987250688. Throughput: 0: 41580.3. Samples: 869488880. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:43:58,841][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 18:43:58,863][00497] Updated weights for policy 0, policy_version 60258 (0.0030) +[2024-03-29 18:44:03,338][00497] Updated weights for policy 0, policy_version 60268 (0.0028) +[2024-03-29 18:44:03,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40960.0, 300 sec: 41154.4). Total num frames: 987430912. Throughput: 0: 41328.0. Samples: 869617200. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:44:03,840][00126] Avg episode reward: [(0, '0.629')] +[2024-03-29 18:44:06,621][00497] Updated weights for policy 0, policy_version 60278 (0.0030) +[2024-03-29 18:44:08,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 987693056. Throughput: 0: 41892.1. Samples: 869865100. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 18:44:08,841][00126] Avg episode reward: [(0, '0.618')] +[2024-03-29 18:44:10,789][00497] Updated weights for policy 0, policy_version 60288 (0.0021) +[2024-03-29 18:44:13,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 987873280. Throughput: 0: 41255.2. Samples: 870105600. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:13,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:44:14,929][00497] Updated weights for policy 0, policy_version 60298 (0.0021) +[2024-03-29 18:44:18,839][00126] Fps is (10 sec: 37682.6, 60 sec: 40959.9, 300 sec: 41209.9). Total num frames: 988069888. Throughput: 0: 41156.4. Samples: 870232040. 
Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:18,840][00126] Avg episode reward: [(0, '0.684')] +[2024-03-29 18:44:19,611][00497] Updated weights for policy 0, policy_version 60308 (0.0024) +[2024-03-29 18:44:22,543][00497] Updated weights for policy 0, policy_version 60318 (0.0027) +[2024-03-29 18:44:23,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.1, 300 sec: 41265.5). Total num frames: 988282880. Throughput: 0: 41445.3. Samples: 870468160. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:23,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:44:26,714][00497] Updated weights for policy 0, policy_version 60328 (0.0024) +[2024-03-29 18:44:27,583][00476] Signal inference workers to stop experience collection... (30950 times) +[2024-03-29 18:44:27,663][00497] InferenceWorker_p0-w0: stopping experience collection (30950 times) +[2024-03-29 18:44:27,664][00476] Signal inference workers to resume experience collection... (30950 times) +[2024-03-29 18:44:27,689][00497] InferenceWorker_p0-w0: resuming experience collection (30950 times) +[2024-03-29 18:44:28,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41233.1, 300 sec: 41376.6). Total num frames: 988495872. Throughput: 0: 41240.5. Samples: 870716860. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:28,840][00126] Avg episode reward: [(0, '0.653')] +[2024-03-29 18:44:30,839][00497] Updated weights for policy 0, policy_version 60338 (0.0023) +[2024-03-29 18:44:33,839][00126] Fps is (10 sec: 37682.9, 60 sec: 40687.0, 300 sec: 41098.8). Total num frames: 988659712. Throughput: 0: 41058.1. Samples: 870845860. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:33,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 18:44:35,308][00497] Updated weights for policy 0, policy_version 60348 (0.0029) +[2024-03-29 18:44:38,409][00497] Updated weights for policy 0, policy_version 60358 (0.0034) +[2024-03-29 18:44:38,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41233.1, 300 sec: 41321.0). Total num frames: 988905472. Throughput: 0: 41447.1. Samples: 871093260. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:38,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 18:44:42,743][00497] Updated weights for policy 0, policy_version 60368 (0.0024) +[2024-03-29 18:44:43,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41506.2, 300 sec: 41321.0). Total num frames: 989118464. Throughput: 0: 41132.5. Samples: 871339840. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:43,840][00126] Avg episode reward: [(0, '0.576')] +[2024-03-29 18:44:46,700][00497] Updated weights for policy 0, policy_version 60378 (0.0025) +[2024-03-29 18:44:48,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.1, 300 sec: 41209.9). Total num frames: 989315072. Throughput: 0: 40989.7. Samples: 871461740. Policy #0 lag: (min: 1.0, avg: 20.7, max: 41.0) +[2024-03-29 18:44:48,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:44:51,218][00497] Updated weights for policy 0, policy_version 60388 (0.0022) +[2024-03-29 18:44:53,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41233.1, 300 sec: 41265.5). Total num frames: 989528064. Throughput: 0: 41351.6. Samples: 871725920. 
Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:44:53,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:44:54,230][00497] Updated weights for policy 0, policy_version 60398 (0.0038) +[2024-03-29 18:44:56,185][00476] Signal inference workers to stop experience collection... (31000 times) +[2024-03-29 18:44:56,185][00476] Signal inference workers to resume experience collection... (31000 times) +[2024-03-29 18:44:56,283][00497] InferenceWorker_p0-w0: stopping experience collection (31000 times) +[2024-03-29 18:44:56,284][00497] InferenceWorker_p0-w0: resuming experience collection (31000 times) +[2024-03-29 18:44:58,506][00497] Updated weights for policy 0, policy_version 60408 (0.0023) +[2024-03-29 18:44:58,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41265.5). Total num frames: 989741056. Throughput: 0: 41315.6. Samples: 871964800. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:44:58,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:45:02,380][00497] Updated weights for policy 0, policy_version 60418 (0.0024) +[2024-03-29 18:45:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.2, 300 sec: 41265.4). Total num frames: 989954048. Throughput: 0: 41137.0. Samples: 872083200. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:45:03,840][00126] Avg episode reward: [(0, '0.639')] +[2024-03-29 18:45:03,990][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060423_989970432.pth... +[2024-03-29 18:45:04,302][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000059818_980058112.pth +[2024-03-29 18:45:06,782][00497] Updated weights for policy 0, policy_version 60428 (0.0022) +[2024-03-29 18:45:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 40960.0, 300 sec: 41265.5). Total num frames: 990150656. Throughput: 0: 41847.5. Samples: 872351300. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:45:08,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 18:45:09,987][00497] Updated weights for policy 0, policy_version 60438 (0.0027) +[2024-03-29 18:45:13,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41265.5). Total num frames: 990363648. Throughput: 0: 41503.5. Samples: 872584520. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:45:13,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:45:14,214][00497] Updated weights for policy 0, policy_version 60448 (0.0028) +[2024-03-29 18:45:18,127][00497] Updated weights for policy 0, policy_version 60458 (0.0023) +[2024-03-29 18:45:18,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.3, 300 sec: 41376.5). Total num frames: 990576640. Throughput: 0: 41419.7. Samples: 872709740. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:45:18,840][00126] Avg episode reward: [(0, '0.634')] +[2024-03-29 18:45:22,457][00497] Updated weights for policy 0, policy_version 60468 (0.0023) +[2024-03-29 18:45:23,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41321.0). Total num frames: 990773248. Throughput: 0: 41946.1. Samples: 872980840. Policy #0 lag: (min: 2.0, avg: 19.1, max: 42.0) +[2024-03-29 18:45:23,840][00126] Avg episode reward: [(0, '0.618')] +[2024-03-29 18:45:25,500][00497] Updated weights for policy 0, policy_version 60478 (0.0025) +[2024-03-29 18:45:26,045][00476] Signal inference workers to stop experience collection... 
(31050 times) +[2024-03-29 18:45:26,124][00476] Signal inference workers to resume experience collection... (31050 times) +[2024-03-29 18:45:26,127][00497] InferenceWorker_p0-w0: stopping experience collection (31050 times) +[2024-03-29 18:45:26,155][00497] InferenceWorker_p0-w0: resuming experience collection (31050 times) +[2024-03-29 18:45:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.3, 300 sec: 41376.6). Total num frames: 991002624. Throughput: 0: 41710.4. Samples: 873216800. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:28,840][00126] Avg episode reward: [(0, '0.637')] +[2024-03-29 18:45:29,707][00497] Updated weights for policy 0, policy_version 60488 (0.0029) +[2024-03-29 18:45:33,631][00497] Updated weights for policy 0, policy_version 60498 (0.0027) +[2024-03-29 18:45:33,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42325.4, 300 sec: 41376.6). Total num frames: 991199232. Throughput: 0: 41880.6. Samples: 873346360. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:33,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 18:45:38,077][00497] Updated weights for policy 0, policy_version 60508 (0.0024) +[2024-03-29 18:45:38,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.2, 300 sec: 41265.5). Total num frames: 991395840. Throughput: 0: 42029.8. Samples: 873617260. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:38,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:45:41,086][00497] Updated weights for policy 0, policy_version 60518 (0.0024) +[2024-03-29 18:45:43,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42052.3, 300 sec: 41432.1). Total num frames: 991641600. Throughput: 0: 41929.4. Samples: 873851620. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:43,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 18:45:45,339][00497] Updated weights for policy 0, policy_version 60528 (0.0032) +[2024-03-29 18:45:48,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 991821824. Throughput: 0: 42136.0. Samples: 873979320. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:48,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 18:45:49,175][00497] Updated weights for policy 0, policy_version 60538 (0.0021) +[2024-03-29 18:45:53,767][00497] Updated weights for policy 0, policy_version 60548 (0.0021) +[2024-03-29 18:45:53,839][00126] Fps is (10 sec: 37682.6, 60 sec: 41506.0, 300 sec: 41265.4). Total num frames: 992018432. Throughput: 0: 41955.9. Samples: 874239320. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:53,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 18:45:55,482][00476] Signal inference workers to stop experience collection... (31100 times) +[2024-03-29 18:45:55,482][00476] Signal inference workers to resume experience collection... (31100 times) +[2024-03-29 18:45:55,522][00497] InferenceWorker_p0-w0: stopping experience collection (31100 times) +[2024-03-29 18:45:55,522][00497] InferenceWorker_p0-w0: resuming experience collection (31100 times) +[2024-03-29 18:45:56,713][00497] Updated weights for policy 0, policy_version 60558 (0.0040) +[2024-03-29 18:45:58,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.2, 300 sec: 41432.1). Total num frames: 992264192. Throughput: 0: 42069.3. Samples: 874477640. 
Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:45:58,840][00126] Avg episode reward: [(0, '0.539')] +[2024-03-29 18:46:00,746][00497] Updated weights for policy 0, policy_version 60568 (0.0019) +[2024-03-29 18:46:03,839][00126] Fps is (10 sec: 45875.3, 60 sec: 42052.2, 300 sec: 41543.1). Total num frames: 992477184. Throughput: 0: 42329.6. Samples: 874614580. Policy #0 lag: (min: 2.0, avg: 22.7, max: 44.0) +[2024-03-29 18:46:03,841][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 18:46:04,614][00497] Updated weights for policy 0, policy_version 60578 (0.0026) +[2024-03-29 18:46:08,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41779.2, 300 sec: 41265.5). Total num frames: 992657408. Throughput: 0: 42127.7. Samples: 874876580. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:08,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 18:46:09,022][00497] Updated weights for policy 0, policy_version 60588 (0.0024) +[2024-03-29 18:46:12,272][00497] Updated weights for policy 0, policy_version 60598 (0.0040) +[2024-03-29 18:46:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.4, 300 sec: 41543.2). Total num frames: 992903168. Throughput: 0: 42220.4. Samples: 875116720. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:13,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 18:46:16,271][00497] Updated weights for policy 0, policy_version 60608 (0.0021) +[2024-03-29 18:46:18,839][00126] Fps is (10 sec: 45875.2, 60 sec: 42325.3, 300 sec: 41543.2). Total num frames: 993116160. Throughput: 0: 42328.9. Samples: 875251160. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:18,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 18:46:20,145][00497] Updated weights for policy 0, policy_version 60618 (0.0038) +[2024-03-29 18:46:23,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42052.3, 300 sec: 41432.1). Total num frames: 993296384. Throughput: 0: 41921.7. Samples: 875503740. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:23,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:46:24,669][00497] Updated weights for policy 0, policy_version 60628 (0.0027) +[2024-03-29 18:46:26,994][00476] Signal inference workers to stop experience collection... (31150 times) +[2024-03-29 18:46:26,994][00476] Signal inference workers to resume experience collection... (31150 times) +[2024-03-29 18:46:27,040][00497] InferenceWorker_p0-w0: stopping experience collection (31150 times) +[2024-03-29 18:46:27,040][00497] InferenceWorker_p0-w0: resuming experience collection (31150 times) +[2024-03-29 18:46:27,660][00497] Updated weights for policy 0, policy_version 60638 (0.0028) +[2024-03-29 18:46:28,839][00126] Fps is (10 sec: 42597.6, 60 sec: 42325.2, 300 sec: 41543.2). Total num frames: 993542144. Throughput: 0: 42238.5. Samples: 875752360. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:28,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 18:46:31,585][00497] Updated weights for policy 0, policy_version 60648 (0.0025) +[2024-03-29 18:46:33,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42598.4, 300 sec: 41654.3). Total num frames: 993755136. Throughput: 0: 42394.3. Samples: 875887060. 
Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:33,841][00126] Avg episode reward: [(0, '0.570')] +[2024-03-29 18:46:35,524][00497] Updated weights for policy 0, policy_version 60658 (0.0022) +[2024-03-29 18:46:38,839][00126] Fps is (10 sec: 37683.8, 60 sec: 42052.3, 300 sec: 41487.6). Total num frames: 993918976. Throughput: 0: 42133.5. Samples: 876135320. Policy #0 lag: (min: 0.0, avg: 21.2, max: 43.0) +[2024-03-29 18:46:38,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 18:46:39,942][00497] Updated weights for policy 0, policy_version 60668 (0.0025) +[2024-03-29 18:46:43,308][00497] Updated weights for policy 0, policy_version 60678 (0.0028) +[2024-03-29 18:46:43,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 994164736. Throughput: 0: 42327.7. Samples: 876382380. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:46:43,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:46:47,269][00497] Updated weights for policy 0, policy_version 60688 (0.0025) +[2024-03-29 18:46:48,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42598.4, 300 sec: 41654.2). Total num frames: 994377728. Throughput: 0: 42122.4. Samples: 876510080. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:46:48,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:46:51,306][00497] Updated weights for policy 0, policy_version 60698 (0.0024) +[2024-03-29 18:46:53,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42598.5, 300 sec: 41654.2). Total num frames: 994574336. Throughput: 0: 41953.7. Samples: 876764500. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:46:53,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:46:55,828][00497] Updated weights for policy 0, policy_version 60708 (0.0024) +[2024-03-29 18:46:58,313][00476] Signal inference workers to stop experience collection... (31200 times) +[2024-03-29 18:46:58,390][00497] InferenceWorker_p0-w0: stopping experience collection (31200 times) +[2024-03-29 18:46:58,403][00476] Signal inference workers to resume experience collection... (31200 times) +[2024-03-29 18:46:58,426][00497] InferenceWorker_p0-w0: resuming experience collection (31200 times) +[2024-03-29 18:46:58,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.4, 300 sec: 41598.7). Total num frames: 994787328. Throughput: 0: 41888.1. Samples: 877001680. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:46:58,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:46:59,037][00497] Updated weights for policy 0, policy_version 60718 (0.0027) +[2024-03-29 18:47:02,877][00497] Updated weights for policy 0, policy_version 60728 (0.0028) +[2024-03-29 18:47:03,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 995000320. Throughput: 0: 41931.9. Samples: 877138100. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:47:03,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:47:04,160][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060732_995033088.pth... +[2024-03-29 18:47:04,520][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060120_985006080.pth +[2024-03-29 18:47:06,826][00497] Updated weights for policy 0, policy_version 60738 (0.0029) +[2024-03-29 18:47:08,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42598.3, 300 sec: 41765.3). Total num frames: 995213312. Throughput: 0: 41976.0. Samples: 877392660. 
Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:47:08,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 18:47:11,469][00497] Updated weights for policy 0, policy_version 60748 (0.0030) +[2024-03-29 18:47:13,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 995409920. Throughput: 0: 41946.0. Samples: 877639920. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:47:13,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 18:47:14,519][00497] Updated weights for policy 0, policy_version 60758 (0.0029) +[2024-03-29 18:47:18,321][00497] Updated weights for policy 0, policy_version 60768 (0.0020) +[2024-03-29 18:47:18,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 995639296. Throughput: 0: 41878.2. Samples: 877771580. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 18:47:18,840][00126] Avg episode reward: [(0, '0.523')] +[2024-03-29 18:47:22,350][00497] Updated weights for policy 0, policy_version 60778 (0.0031) +[2024-03-29 18:47:23,839][00126] Fps is (10 sec: 44236.3, 60 sec: 42598.4, 300 sec: 41820.9). Total num frames: 995852288. Throughput: 0: 42249.7. Samples: 878036560. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:23,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 18:47:26,921][00497] Updated weights for policy 0, policy_version 60788 (0.0025) +[2024-03-29 18:47:28,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.4, 300 sec: 41654.3). Total num frames: 996065280. Throughput: 0: 42154.7. Samples: 878279340. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:28,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 18:47:29,935][00497] Updated weights for policy 0, policy_version 60798 (0.0030) +[2024-03-29 18:47:30,310][00476] Signal inference workers to stop experience collection... (31250 times) +[2024-03-29 18:47:30,386][00476] Signal inference workers to resume experience collection... (31250 times) +[2024-03-29 18:47:30,389][00497] InferenceWorker_p0-w0: stopping experience collection (31250 times) +[2024-03-29 18:47:30,412][00497] InferenceWorker_p0-w0: resuming experience collection (31250 times) +[2024-03-29 18:47:33,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 996261888. Throughput: 0: 41938.7. Samples: 878397320. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:33,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 18:47:33,958][00497] Updated weights for policy 0, policy_version 60808 (0.0022) +[2024-03-29 18:47:38,145][00497] Updated weights for policy 0, policy_version 60818 (0.0021) +[2024-03-29 18:47:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42598.4, 300 sec: 41765.3). Total num frames: 996474880. Throughput: 0: 41880.9. Samples: 878649140. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:38,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 18:47:42,851][00497] Updated weights for policy 0, policy_version 60828 (0.0023) +[2024-03-29 18:47:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 996655104. Throughput: 0: 42381.3. Samples: 878908840. 
Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:43,840][00126] Avg episode reward: [(0, '0.587')] +[2024-03-29 18:47:45,764][00497] Updated weights for policy 0, policy_version 60838 (0.0022) +[2024-03-29 18:47:48,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 996868096. Throughput: 0: 41707.2. Samples: 879014920. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:48,840][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 18:47:49,893][00497] Updated weights for policy 0, policy_version 60848 (0.0019) +[2024-03-29 18:47:53,758][00497] Updated weights for policy 0, policy_version 60858 (0.0019) +[2024-03-29 18:47:53,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 997097472. Throughput: 0: 41818.7. Samples: 879274500. Policy #0 lag: (min: 1.0, avg: 21.8, max: 42.0) +[2024-03-29 18:47:53,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 18:47:58,615][00497] Updated weights for policy 0, policy_version 60868 (0.0022) +[2024-03-29 18:47:58,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 997261312. Throughput: 0: 42274.2. Samples: 879542260. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:47:58,840][00126] Avg episode reward: [(0, '0.612')] +[2024-03-29 18:48:00,669][00476] Signal inference workers to stop experience collection... (31300 times) +[2024-03-29 18:48:00,686][00497] InferenceWorker_p0-w0: stopping experience collection (31300 times) +[2024-03-29 18:48:00,884][00476] Signal inference workers to resume experience collection... (31300 times) +[2024-03-29 18:48:00,885][00497] InferenceWorker_p0-w0: resuming experience collection (31300 times) +[2024-03-29 18:48:01,666][00497] Updated weights for policy 0, policy_version 60878 (0.0029) +[2024-03-29 18:48:03,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 997490688. Throughput: 0: 41334.5. Samples: 879631640. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:03,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 18:48:05,657][00497] Updated weights for policy 0, policy_version 60888 (0.0020) +[2024-03-29 18:48:08,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 997703680. Throughput: 0: 41168.9. Samples: 879889160. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:08,842][00126] Avg episode reward: [(0, '0.624')] +[2024-03-29 18:48:09,718][00497] Updated weights for policy 0, policy_version 60898 (0.0027) +[2024-03-29 18:48:13,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 997883904. Throughput: 0: 41555.9. Samples: 880149360. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:13,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 18:48:14,588][00497] Updated weights for policy 0, policy_version 60908 (0.0026) +[2024-03-29 18:48:17,596][00497] Updated weights for policy 0, policy_version 60918 (0.0029) +[2024-03-29 18:48:18,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 998129664. Throughput: 0: 41646.7. Samples: 880271420. 
Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:18,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:48:21,392][00497] Updated weights for policy 0, policy_version 60928 (0.0018) +[2024-03-29 18:48:23,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 998326272. Throughput: 0: 41614.7. Samples: 880521800. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:23,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 18:48:25,379][00497] Updated weights for policy 0, policy_version 60938 (0.0020) +[2024-03-29 18:48:28,839][00126] Fps is (10 sec: 39321.1, 60 sec: 40959.9, 300 sec: 41709.8). Total num frames: 998522880. Throughput: 0: 41483.9. Samples: 880775620. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:28,840][00126] Avg episode reward: [(0, '0.577')] +[2024-03-29 18:48:30,144][00497] Updated weights for policy 0, policy_version 60948 (0.0022) +[2024-03-29 18:48:31,879][00476] Signal inference workers to stop experience collection... (31350 times) +[2024-03-29 18:48:31,914][00497] InferenceWorker_p0-w0: stopping experience collection (31350 times) +[2024-03-29 18:48:32,096][00476] Signal inference workers to resume experience collection... (31350 times) +[2024-03-29 18:48:32,097][00497] InferenceWorker_p0-w0: resuming experience collection (31350 times) +[2024-03-29 18:48:32,964][00497] Updated weights for policy 0, policy_version 60958 (0.0021) +[2024-03-29 18:48:33,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 998768640. Throughput: 0: 41989.3. Samples: 880904440. Policy #0 lag: (min: 1.0, avg: 17.4, max: 41.0) +[2024-03-29 18:48:33,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 18:48:37,181][00497] Updated weights for policy 0, policy_version 60968 (0.0019) +[2024-03-29 18:48:38,839][00126] Fps is (10 sec: 45876.1, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 998981632. Throughput: 0: 41527.7. Samples: 881143240. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:48:38,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 18:48:41,188][00497] Updated weights for policy 0, policy_version 60978 (0.0019) +[2024-03-29 18:48:43,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 999161856. Throughput: 0: 41126.1. Samples: 881392940. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:48:43,841][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 18:48:45,866][00497] Updated weights for policy 0, policy_version 60988 (0.0022) +[2024-03-29 18:48:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 999374848. Throughput: 0: 42302.0. Samples: 881535220. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:48:48,840][00126] Avg episode reward: [(0, '0.659')] +[2024-03-29 18:48:48,983][00497] Updated weights for policy 0, policy_version 60998 (0.0027) +[2024-03-29 18:48:52,938][00497] Updated weights for policy 0, policy_version 61008 (0.0022) +[2024-03-29 18:48:53,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 999587840. Throughput: 0: 41713.0. Samples: 881766240. 
Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:48:53,840][00126] Avg episode reward: [(0, '0.621')] +[2024-03-29 18:48:56,746][00497] Updated weights for policy 0, policy_version 61018 (0.0023) +[2024-03-29 18:48:58,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42325.2, 300 sec: 41931.9). Total num frames: 999800832. Throughput: 0: 41566.6. Samples: 882019860. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:48:58,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:49:01,384][00497] Updated weights for policy 0, policy_version 61028 (0.0019) +[2024-03-29 18:49:03,219][00476] Signal inference workers to stop experience collection... (31400 times) +[2024-03-29 18:49:03,259][00497] InferenceWorker_p0-w0: stopping experience collection (31400 times) +[2024-03-29 18:49:03,448][00476] Signal inference workers to resume experience collection... (31400 times) +[2024-03-29 18:49:03,449][00497] InferenceWorker_p0-w0: resuming experience collection (31400 times) +[2024-03-29 18:49:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 1000013824. Throughput: 0: 42130.6. Samples: 882167300. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:49:03,840][00126] Avg episode reward: [(0, '0.477')] +[2024-03-29 18:49:03,996][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061037_1000030208.pth... +[2024-03-29 18:49:04,295][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060423_989970432.pth +[2024-03-29 18:49:04,635][00497] Updated weights for policy 0, policy_version 61038 (0.0028) +[2024-03-29 18:49:08,530][00497] Updated weights for policy 0, policy_version 61048 (0.0019) +[2024-03-29 18:49:08,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 1000210432. Throughput: 0: 41601.3. Samples: 882393860. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:49:08,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:49:12,344][00497] Updated weights for policy 0, policy_version 61058 (0.0031) +[2024-03-29 18:49:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 1000423424. Throughput: 0: 41570.7. Samples: 882646300. Policy #0 lag: (min: 0.0, avg: 20.8, max: 42.0) +[2024-03-29 18:49:13,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 18:49:17,015][00497] Updated weights for policy 0, policy_version 61068 (0.0026) +[2024-03-29 18:49:18,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 1000620032. Throughput: 0: 41968.5. Samples: 882793020. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:18,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 18:49:20,087][00497] Updated weights for policy 0, policy_version 61078 (0.0022) +[2024-03-29 18:49:23,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.1, 300 sec: 41876.4). Total num frames: 1000849408. Throughput: 0: 41974.4. Samples: 883032100. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:23,840][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 18:49:24,144][00497] Updated weights for policy 0, policy_version 61088 (0.0021) +[2024-03-29 18:49:27,712][00497] Updated weights for policy 0, policy_version 61098 (0.0023) +[2024-03-29 18:49:28,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.5, 300 sec: 42098.6). Total num frames: 1001078784. Throughput: 0: 42082.7. Samples: 883286660. 
Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:28,840][00126] Avg episode reward: [(0, '0.632')] +[2024-03-29 18:49:32,426][00497] Updated weights for policy 0, policy_version 61108 (0.0018) +[2024-03-29 18:49:33,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41506.2, 300 sec: 41876.4). Total num frames: 1001259008. Throughput: 0: 42012.0. Samples: 883425760. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:33,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 18:49:35,501][00497] Updated weights for policy 0, policy_version 61118 (0.0030) +[2024-03-29 18:49:35,822][00476] Signal inference workers to stop experience collection... (31450 times) +[2024-03-29 18:49:35,861][00497] InferenceWorker_p0-w0: stopping experience collection (31450 times) +[2024-03-29 18:49:36,021][00476] Signal inference workers to resume experience collection... (31450 times) +[2024-03-29 18:49:36,021][00497] InferenceWorker_p0-w0: resuming experience collection (31450 times) +[2024-03-29 18:49:38,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 1001472000. Throughput: 0: 42024.9. Samples: 883657360. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:38,840][00126] Avg episode reward: [(0, '0.506')] +[2024-03-29 18:49:39,744][00497] Updated weights for policy 0, policy_version 61128 (0.0023) +[2024-03-29 18:49:43,491][00497] Updated weights for policy 0, policy_version 61138 (0.0029) +[2024-03-29 18:49:43,839][00126] Fps is (10 sec: 42597.3, 60 sec: 42052.1, 300 sec: 41931.9). Total num frames: 1001684992. Throughput: 0: 41922.6. Samples: 883906380. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:43,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:49:48,292][00497] Updated weights for policy 0, policy_version 61148 (0.0018) +[2024-03-29 18:49:48,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.1, 300 sec: 41820.8). Total num frames: 1001865216. Throughput: 0: 41468.9. Samples: 884033400. Policy #0 lag: (min: 0.0, avg: 20.2, max: 40.0) +[2024-03-29 18:49:48,840][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 18:49:51,744][00497] Updated weights for policy 0, policy_version 61158 (0.0027) +[2024-03-29 18:49:53,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41779.1, 300 sec: 41876.4). Total num frames: 1002094592. Throughput: 0: 41712.8. Samples: 884270940. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:49:53,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:49:55,679][00497] Updated weights for policy 0, policy_version 61168 (0.0023) +[2024-03-29 18:49:58,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41506.3, 300 sec: 41820.9). Total num frames: 1002291200. Throughput: 0: 41890.4. Samples: 884531360. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:49:58,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 18:49:59,460][00497] Updated weights for policy 0, policy_version 61178 (0.0026) +[2024-03-29 18:50:03,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41233.0, 300 sec: 41820.8). Total num frames: 1002487808. Throughput: 0: 41348.2. Samples: 884653700. 
Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:03,841][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:50:04,074][00497] Updated weights for policy 0, policy_version 61188 (0.0018) +[2024-03-29 18:50:07,312][00497] Updated weights for policy 0, policy_version 61198 (0.0021) +[2024-03-29 18:50:08,839][00126] Fps is (10 sec: 44236.0, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 1002733568. Throughput: 0: 41464.5. Samples: 884898000. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:08,840][00126] Avg episode reward: [(0, '0.632')] +[2024-03-29 18:50:10,325][00476] Signal inference workers to stop experience collection... (31500 times) +[2024-03-29 18:50:10,365][00497] InferenceWorker_p0-w0: stopping experience collection (31500 times) +[2024-03-29 18:50:10,551][00476] Signal inference workers to resume experience collection... (31500 times) +[2024-03-29 18:50:10,585][00497] InferenceWorker_p0-w0: resuming experience collection (31500 times) +[2024-03-29 18:50:11,297][00497] Updated weights for policy 0, policy_version 61208 (0.0025) +[2024-03-29 18:50:13,839][00126] Fps is (10 sec: 42599.3, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 1002913792. Throughput: 0: 41387.1. Samples: 885149080. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:13,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 18:50:15,134][00497] Updated weights for policy 0, policy_version 61218 (0.0024) +[2024-03-29 18:50:18,839][00126] Fps is (10 sec: 37683.0, 60 sec: 41506.0, 300 sec: 41820.8). Total num frames: 1003110400. Throughput: 0: 41128.7. Samples: 885276560. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:18,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:50:19,709][00497] Updated weights for policy 0, policy_version 61228 (0.0029) +[2024-03-29 18:50:22,802][00497] Updated weights for policy 0, policy_version 61238 (0.0019) +[2024-03-29 18:50:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.3, 300 sec: 41876.4). Total num frames: 1003356160. Throughput: 0: 41727.5. Samples: 885535100. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:23,840][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 18:50:27,103][00497] Updated weights for policy 0, policy_version 61248 (0.0027) +[2024-03-29 18:50:28,839][00126] Fps is (10 sec: 45875.1, 60 sec: 41506.0, 300 sec: 41931.9). Total num frames: 1003569152. Throughput: 0: 41789.4. Samples: 885786900. Policy #0 lag: (min: 1.0, avg: 21.8, max: 41.0) +[2024-03-29 18:50:28,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:50:30,724][00497] Updated weights for policy 0, policy_version 61258 (0.0027) +[2024-03-29 18:50:33,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 1003749376. Throughput: 0: 41616.1. Samples: 885906120. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:33,841][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 18:50:35,466][00497] Updated weights for policy 0, policy_version 61268 (0.0018) +[2024-03-29 18:50:38,527][00497] Updated weights for policy 0, policy_version 61278 (0.0026) +[2024-03-29 18:50:38,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 1003995136. Throughput: 0: 42272.6. Samples: 886173200. 
Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:38,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:50:42,883][00497] Updated weights for policy 0, policy_version 61288 (0.0021) +[2024-03-29 18:50:43,279][00476] Signal inference workers to stop experience collection... (31550 times) +[2024-03-29 18:50:43,337][00497] InferenceWorker_p0-w0: stopping experience collection (31550 times) +[2024-03-29 18:50:43,445][00476] Signal inference workers to resume experience collection... (31550 times) +[2024-03-29 18:50:43,445][00497] InferenceWorker_p0-w0: resuming experience collection (31550 times) +[2024-03-29 18:50:43,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.4, 300 sec: 41931.9). Total num frames: 1004191744. Throughput: 0: 41848.8. Samples: 886414560. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:43,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 18:50:46,622][00497] Updated weights for policy 0, policy_version 61298 (0.0023) +[2024-03-29 18:50:48,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 1004371968. Throughput: 0: 41779.6. Samples: 886533780. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:48,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 18:50:51,295][00497] Updated weights for policy 0, policy_version 61308 (0.0020) +[2024-03-29 18:50:53,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41779.3, 300 sec: 41820.9). Total num frames: 1004601344. Throughput: 0: 41997.5. Samples: 886787880. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:53,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 18:50:54,362][00497] Updated weights for policy 0, policy_version 61318 (0.0026) +[2024-03-29 18:50:58,737][00497] Updated weights for policy 0, policy_version 61328 (0.0022) +[2024-03-29 18:50:58,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 1004797952. Throughput: 0: 41742.2. Samples: 887027480. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:50:58,840][00126] Avg episode reward: [(0, '0.564')] +[2024-03-29 18:51:02,371][00497] Updated weights for policy 0, policy_version 61338 (0.0028) +[2024-03-29 18:51:03,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42052.4, 300 sec: 41876.4). Total num frames: 1005010944. Throughput: 0: 41788.5. Samples: 887157040. Policy #0 lag: (min: 2.0, avg: 21.0, max: 41.0) +[2024-03-29 18:51:03,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 18:51:03,858][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061341_1005010944.pth... +[2024-03-29 18:51:04,172][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000060732_995033088.pth +[2024-03-29 18:51:06,849][00497] Updated weights for policy 0, policy_version 61348 (0.0023) +[2024-03-29 18:51:08,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 1005223936. Throughput: 0: 41899.9. Samples: 887420600. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:08,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 18:51:09,905][00497] Updated weights for policy 0, policy_version 61358 (0.0023) +[2024-03-29 18:51:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 1005436928. Throughput: 0: 41614.4. Samples: 887659540. 
Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:13,840][00126] Avg episode reward: [(0, '0.662')] +[2024-03-29 18:51:14,187][00497] Updated weights for policy 0, policy_version 61368 (0.0023) +[2024-03-29 18:51:15,307][00476] Signal inference workers to stop experience collection... (31600 times) +[2024-03-29 18:51:15,308][00476] Signal inference workers to resume experience collection... (31600 times) +[2024-03-29 18:51:15,353][00497] InferenceWorker_p0-w0: stopping experience collection (31600 times) +[2024-03-29 18:51:15,354][00497] InferenceWorker_p0-w0: resuming experience collection (31600 times) +[2024-03-29 18:51:17,891][00497] Updated weights for policy 0, policy_version 61378 (0.0026) +[2024-03-29 18:51:18,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.4, 300 sec: 41876.4). Total num frames: 1005649920. Throughput: 0: 41686.1. Samples: 887782000. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:18,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:51:22,556][00497] Updated weights for policy 0, policy_version 61388 (0.0024) +[2024-03-29 18:51:23,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.0, 300 sec: 41709.8). Total num frames: 1005846528. Throughput: 0: 41794.1. Samples: 888053940. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:23,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 18:51:25,518][00497] Updated weights for policy 0, policy_version 61398 (0.0026) +[2024-03-29 18:51:28,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1006075904. Throughput: 0: 41758.5. Samples: 888293700. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:28,841][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 18:51:29,767][00497] Updated weights for policy 0, policy_version 61408 (0.0024) +[2024-03-29 18:51:33,615][00497] Updated weights for policy 0, policy_version 61418 (0.0026) +[2024-03-29 18:51:33,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 1006272512. Throughput: 0: 41978.8. Samples: 888422820. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:33,840][00126] Avg episode reward: [(0, '0.599')] +[2024-03-29 18:51:38,096][00497] Updated weights for policy 0, policy_version 61428 (0.0024) +[2024-03-29 18:51:38,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 1006469120. Throughput: 0: 42112.7. Samples: 888682960. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:38,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:51:41,126][00497] Updated weights for policy 0, policy_version 61438 (0.0020) +[2024-03-29 18:51:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1006698496. Throughput: 0: 42028.1. Samples: 888918740. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:51:43,840][00126] Avg episode reward: [(0, '0.598')] +[2024-03-29 18:51:45,461][00497] Updated weights for policy 0, policy_version 61448 (0.0030) +[2024-03-29 18:51:48,839][00126] Fps is (10 sec: 44237.8, 60 sec: 42325.4, 300 sec: 41820.9). Total num frames: 1006911488. Throughput: 0: 42048.1. Samples: 889049200. 
Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:51:48,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 18:51:49,266][00497] Updated weights for policy 0, policy_version 61458 (0.0017) +[2024-03-29 18:51:49,297][00476] Signal inference workers to stop experience collection... (31650 times) +[2024-03-29 18:51:49,340][00497] InferenceWorker_p0-w0: stopping experience collection (31650 times) +[2024-03-29 18:51:49,518][00476] Signal inference workers to resume experience collection... (31650 times) +[2024-03-29 18:51:49,518][00497] InferenceWorker_p0-w0: resuming experience collection (31650 times) +[2024-03-29 18:51:53,573][00497] Updated weights for policy 0, policy_version 61468 (0.0031) +[2024-03-29 18:51:53,839][00126] Fps is (10 sec: 39321.0, 60 sec: 41506.0, 300 sec: 41709.8). Total num frames: 1007091712. Throughput: 0: 41996.0. Samples: 889310420. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:51:53,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 18:51:56,918][00497] Updated weights for policy 0, policy_version 61478 (0.0022) +[2024-03-29 18:51:58,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 1007321088. Throughput: 0: 41931.5. Samples: 889546460. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:51:58,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:52:01,299][00497] Updated weights for policy 0, policy_version 61488 (0.0032) +[2024-03-29 18:52:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 1007534080. Throughput: 0: 42060.9. Samples: 889674740. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:52:03,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 18:52:04,955][00497] Updated weights for policy 0, policy_version 61498 (0.0019) +[2024-03-29 18:52:08,841][00126] Fps is (10 sec: 37676.8, 60 sec: 41231.9, 300 sec: 41654.0). Total num frames: 1007697920. Throughput: 0: 41669.6. Samples: 889929140. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:52:08,841][00126] Avg episode reward: [(0, '0.580')] +[2024-03-29 18:52:09,470][00497] Updated weights for policy 0, policy_version 61508 (0.0019) +[2024-03-29 18:52:12,724][00497] Updated weights for policy 0, policy_version 61518 (0.0030) +[2024-03-29 18:52:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 1007943680. Throughput: 0: 41581.4. Samples: 890164860. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:52:13,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 18:52:16,976][00497] Updated weights for policy 0, policy_version 61528 (0.0020) +[2024-03-29 18:52:18,839][00126] Fps is (10 sec: 44244.6, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 1008140288. Throughput: 0: 41406.7. Samples: 890286120. Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:52:18,840][00126] Avg episode reward: [(0, '0.627')] +[2024-03-29 18:52:20,613][00497] Updated weights for policy 0, policy_version 61538 (0.0021) +[2024-03-29 18:52:23,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1008336896. Throughput: 0: 41179.1. Samples: 890536020. 
Policy #0 lag: (min: 0.0, avg: 21.5, max: 43.0) +[2024-03-29 18:52:23,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 18:52:25,407][00497] Updated weights for policy 0, policy_version 61548 (0.0018) +[2024-03-29 18:52:26,119][00476] Signal inference workers to stop experience collection... (31700 times) +[2024-03-29 18:52:26,160][00497] InferenceWorker_p0-w0: stopping experience collection (31700 times) +[2024-03-29 18:52:26,311][00476] Signal inference workers to resume experience collection... (31700 times) +[2024-03-29 18:52:26,312][00497] InferenceWorker_p0-w0: resuming experience collection (31700 times) +[2024-03-29 18:52:28,497][00497] Updated weights for policy 0, policy_version 61558 (0.0025) +[2024-03-29 18:52:28,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 1008566272. Throughput: 0: 41624.0. Samples: 890791820. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:28,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 18:52:32,739][00497] Updated weights for policy 0, policy_version 61568 (0.0028) +[2024-03-29 18:52:33,839][00126] Fps is (10 sec: 44237.5, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1008779264. Throughput: 0: 41425.3. Samples: 890913340. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:33,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 18:52:36,321][00497] Updated weights for policy 0, policy_version 61578 (0.0022) +[2024-03-29 18:52:38,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.4, 300 sec: 41765.3). Total num frames: 1008975872. Throughput: 0: 41332.2. Samples: 891170360. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:38,840][00126] Avg episode reward: [(0, '0.520')] +[2024-03-29 18:52:41,167][00497] Updated weights for policy 0, policy_version 61588 (0.0027) +[2024-03-29 18:52:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 1009188864. Throughput: 0: 41684.9. Samples: 891422280. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:43,841][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 18:52:44,174][00497] Updated weights for policy 0, policy_version 61598 (0.0021) +[2024-03-29 18:52:48,526][00497] Updated weights for policy 0, policy_version 61608 (0.0028) +[2024-03-29 18:52:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 1009401856. Throughput: 0: 41455.7. Samples: 891540240. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:48,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 18:52:52,262][00497] Updated weights for policy 0, policy_version 61618 (0.0024) +[2024-03-29 18:52:53,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 1009614848. Throughput: 0: 41450.4. Samples: 891794340. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:53,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 18:52:56,678][00497] Updated weights for policy 0, policy_version 61628 (0.0026) +[2024-03-29 18:52:58,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 1009811456. Throughput: 0: 42180.5. Samples: 892062980. Policy #0 lag: (min: 0.0, avg: 18.2, max: 42.0) +[2024-03-29 18:52:58,840][00126] Avg episode reward: [(0, '0.551')] +[2024-03-29 18:52:58,962][00476] Signal inference workers to stop experience collection... 
(31750 times) +[2024-03-29 18:52:58,963][00476] Signal inference workers to resume experience collection... (31750 times) +[2024-03-29 18:52:59,004][00497] InferenceWorker_p0-w0: stopping experience collection (31750 times) +[2024-03-29 18:52:59,004][00497] InferenceWorker_p0-w0: resuming experience collection (31750 times) +[2024-03-29 18:52:59,827][00497] Updated weights for policy 0, policy_version 61638 (0.0020) +[2024-03-29 18:53:03,753][00497] Updated weights for policy 0, policy_version 61648 (0.0018) +[2024-03-29 18:53:03,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.2, 300 sec: 41820.8). Total num frames: 1010040832. Throughput: 0: 42001.2. Samples: 892176180. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:03,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 18:53:04,149][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061649_1010057216.pth... +[2024-03-29 18:53:04,475][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061037_1000030208.pth +[2024-03-29 18:53:07,577][00497] Updated weights for policy 0, policy_version 61658 (0.0026) +[2024-03-29 18:53:08,839][00126] Fps is (10 sec: 44237.2, 60 sec: 42599.7, 300 sec: 41931.9). Total num frames: 1010253824. Throughput: 0: 42100.6. Samples: 892430540. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:08,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:53:12,230][00497] Updated weights for policy 0, policy_version 61668 (0.0019) +[2024-03-29 18:53:13,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 1010434048. Throughput: 0: 42218.2. Samples: 892691640. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:13,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 18:53:15,552][00497] Updated weights for policy 0, policy_version 61678 (0.0029) +[2024-03-29 18:53:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 1010679808. Throughput: 0: 41947.6. Samples: 892800980. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:18,841][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 18:53:19,284][00497] Updated weights for policy 0, policy_version 61688 (0.0037) +[2024-03-29 18:53:23,240][00497] Updated weights for policy 0, policy_version 61698 (0.0023) +[2024-03-29 18:53:23,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42325.4, 300 sec: 41876.4). Total num frames: 1010876416. Throughput: 0: 42118.5. Samples: 893065700. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:23,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 18:53:27,745][00497] Updated weights for policy 0, policy_version 61708 (0.0024) +[2024-03-29 18:53:28,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1011073024. Throughput: 0: 42291.6. Samples: 893325400. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:28,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 18:53:30,936][00497] Updated weights for policy 0, policy_version 61718 (0.0020) +[2024-03-29 18:53:32,648][00476] Signal inference workers to stop experience collection... (31800 times) +[2024-03-29 18:53:32,649][00476] Signal inference workers to resume experience collection... 
(31800 times) +[2024-03-29 18:53:32,694][00497] InferenceWorker_p0-w0: stopping experience collection (31800 times) +[2024-03-29 18:53:32,694][00497] InferenceWorker_p0-w0: resuming experience collection (31800 times) +[2024-03-29 18:53:33,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42325.4, 300 sec: 41820.8). Total num frames: 1011318784. Throughput: 0: 42256.8. Samples: 893441800. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:33,840][00126] Avg episode reward: [(0, '0.668')] +[2024-03-29 18:53:34,797][00497] Updated weights for policy 0, policy_version 61728 (0.0024) +[2024-03-29 18:53:38,634][00497] Updated weights for policy 0, policy_version 61738 (0.0024) +[2024-03-29 18:53:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 1011515392. Throughput: 0: 42412.9. Samples: 893702920. Policy #0 lag: (min: 0.0, avg: 22.2, max: 42.0) +[2024-03-29 18:53:38,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 18:53:43,249][00497] Updated weights for policy 0, policy_version 61748 (0.0025) +[2024-03-29 18:53:43,839][00126] Fps is (10 sec: 39321.7, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 1011712000. Throughput: 0: 42194.8. Samples: 893961740. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:53:43,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 18:53:46,630][00497] Updated weights for policy 0, policy_version 61758 (0.0022) +[2024-03-29 18:53:48,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 1011941376. Throughput: 0: 41943.2. Samples: 894063620. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:53:48,840][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 18:53:50,713][00497] Updated weights for policy 0, policy_version 61768 (0.0019) +[2024-03-29 18:53:53,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1012121600. Throughput: 0: 41802.1. Samples: 894311640. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:53:53,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 18:53:54,832][00497] Updated weights for policy 0, policy_version 61778 (0.0023) +[2024-03-29 18:53:58,839][00126] Fps is (10 sec: 36045.0, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 1012301824. Throughput: 0: 41757.8. Samples: 894570740. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:53:58,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:53:59,117][00497] Updated weights for policy 0, policy_version 61788 (0.0027) +[2024-03-29 18:54:02,560][00497] Updated weights for policy 0, policy_version 61798 (0.0029) +[2024-03-29 18:54:03,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 1012531200. Throughput: 0: 42107.5. Samples: 894695820. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:54:03,840][00126] Avg episode reward: [(0, '0.669')] +[2024-03-29 18:54:06,556][00497] Updated weights for policy 0, policy_version 61808 (0.0019) +[2024-03-29 18:54:07,435][00476] Signal inference workers to stop experience collection... (31850 times) +[2024-03-29 18:54:07,473][00497] InferenceWorker_p0-w0: stopping experience collection (31850 times) +[2024-03-29 18:54:07,523][00476] Signal inference workers to resume experience collection... 
(31850 times) +[2024-03-29 18:54:07,523][00497] InferenceWorker_p0-w0: resuming experience collection (31850 times) +[2024-03-29 18:54:08,839][00126] Fps is (10 sec: 44235.9, 60 sec: 41506.0, 300 sec: 41765.3). Total num frames: 1012744192. Throughput: 0: 41579.1. Samples: 894936760. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:54:08,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 18:54:10,733][00497] Updated weights for policy 0, policy_version 61818 (0.0032) +[2024-03-29 18:54:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1012940800. Throughput: 0: 41528.9. Samples: 895194200. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:54:13,841][00126] Avg episode reward: [(0, '0.617')] +[2024-03-29 18:54:14,849][00497] Updated weights for policy 0, policy_version 61828 (0.0019) +[2024-03-29 18:54:18,547][00497] Updated weights for policy 0, policy_version 61838 (0.0023) +[2024-03-29 18:54:18,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 1013153792. Throughput: 0: 41459.1. Samples: 895307460. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 18:54:18,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 18:54:22,508][00497] Updated weights for policy 0, policy_version 61848 (0.0023) +[2024-03-29 18:54:23,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 1013366784. Throughput: 0: 40911.5. Samples: 895543940. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:23,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:54:26,502][00497] Updated weights for policy 0, policy_version 61858 (0.0027) +[2024-03-29 18:54:28,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 1013547008. Throughput: 0: 40852.4. Samples: 895800100. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:28,840][00126] Avg episode reward: [(0, '0.674')] +[2024-03-29 18:54:30,758][00497] Updated weights for policy 0, policy_version 61868 (0.0030) +[2024-03-29 18:54:33,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40960.0, 300 sec: 41709.8). Total num frames: 1013776384. Throughput: 0: 41632.4. Samples: 895937080. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:33,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 18:54:34,244][00497] Updated weights for policy 0, policy_version 61878 (0.0023) +[2024-03-29 18:54:38,187][00497] Updated weights for policy 0, policy_version 61888 (0.0022) +[2024-03-29 18:54:38,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 1013989376. Throughput: 0: 41357.4. Samples: 896172720. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:38,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 18:54:40,710][00476] Signal inference workers to stop experience collection... (31900 times) +[2024-03-29 18:54:40,711][00476] Signal inference workers to resume experience collection... (31900 times) +[2024-03-29 18:54:40,750][00497] InferenceWorker_p0-w0: stopping experience collection (31900 times) +[2024-03-29 18:54:40,750][00497] InferenceWorker_p0-w0: resuming experience collection (31900 times) +[2024-03-29 18:54:42,154][00497] Updated weights for policy 0, policy_version 61898 (0.0024) +[2024-03-29 18:54:43,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 1014202368. Throughput: 0: 41191.9. Samples: 896424380. 
Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:43,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 18:54:46,534][00497] Updated weights for policy 0, policy_version 61908 (0.0021) +[2024-03-29 18:54:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40686.9, 300 sec: 41654.2). Total num frames: 1014382592. Throughput: 0: 41443.9. Samples: 896560800. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:48,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 18:54:50,028][00497] Updated weights for policy 0, policy_version 61918 (0.0022) +[2024-03-29 18:54:53,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.2, 300 sec: 41820.8). Total num frames: 1014628352. Throughput: 0: 41363.6. Samples: 896798120. Policy #0 lag: (min: 1.0, avg: 21.4, max: 42.0) +[2024-03-29 18:54:53,835][00497] Updated weights for policy 0, policy_version 61928 (0.0031) +[2024-03-29 18:54:53,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 18:54:57,902][00497] Updated weights for policy 0, policy_version 61938 (0.0027) +[2024-03-29 18:54:58,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.2, 300 sec: 41820.9). Total num frames: 1014824960. Throughput: 0: 41328.0. Samples: 897053960. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:54:58,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 18:55:01,977][00497] Updated weights for policy 0, policy_version 61948 (0.0025) +[2024-03-29 18:55:03,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.0, 300 sec: 41654.2). Total num frames: 1015021568. Throughput: 0: 41807.9. Samples: 897188820. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:03,840][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 18:55:03,860][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061952_1015021568.pth... +[2024-03-29 18:55:04,164][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061341_1005010944.pth +[2024-03-29 18:55:05,807][00497] Updated weights for policy 0, policy_version 61958 (0.0025) +[2024-03-29 18:55:08,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 1015234560. Throughput: 0: 41668.4. Samples: 897419020. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:08,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 18:55:09,753][00497] Updated weights for policy 0, policy_version 61968 (0.0018) +[2024-03-29 18:55:11,341][00476] Signal inference workers to stop experience collection... (31950 times) +[2024-03-29 18:55:11,402][00497] InferenceWorker_p0-w0: stopping experience collection (31950 times) +[2024-03-29 18:55:11,437][00476] Signal inference workers to resume experience collection... (31950 times) +[2024-03-29 18:55:11,439][00497] InferenceWorker_p0-w0: resuming experience collection (31950 times) +[2024-03-29 18:55:13,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.0, 300 sec: 41765.3). Total num frames: 1015431168. Throughput: 0: 41509.2. Samples: 897668020. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:13,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 18:55:14,025][00497] Updated weights for policy 0, policy_version 61978 (0.0022) +[2024-03-29 18:55:17,926][00497] Updated weights for policy 0, policy_version 61988 (0.0027) +[2024-03-29 18:55:18,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 1015644160. Throughput: 0: 41483.5. Samples: 897803840. 
Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:18,840][00126] Avg episode reward: [(0, '0.603')] +[2024-03-29 18:55:21,582][00497] Updated weights for policy 0, policy_version 61998 (0.0028) +[2024-03-29 18:55:23,839][00126] Fps is (10 sec: 44237.7, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1015873536. Throughput: 0: 41714.8. Samples: 898049880. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:23,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:55:25,478][00497] Updated weights for policy 0, policy_version 62008 (0.0019) +[2024-03-29 18:55:28,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1016053760. Throughput: 0: 41671.2. Samples: 898299580. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:28,841][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 18:55:29,431][00497] Updated weights for policy 0, policy_version 62018 (0.0026) +[2024-03-29 18:55:33,599][00497] Updated weights for policy 0, policy_version 62028 (0.0019) +[2024-03-29 18:55:33,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1016266752. Throughput: 0: 41543.2. Samples: 898430240. Policy #0 lag: (min: 1.0, avg: 20.6, max: 42.0) +[2024-03-29 18:55:33,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 18:55:37,054][00497] Updated weights for policy 0, policy_version 62038 (0.0025) +[2024-03-29 18:55:38,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 1016512512. Throughput: 0: 41760.0. Samples: 898677320. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:55:38,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 18:55:41,291][00497] Updated weights for policy 0, policy_version 62048 (0.0025) +[2024-03-29 18:55:42,021][00476] Signal inference workers to stop experience collection... (32000 times) +[2024-03-29 18:55:42,059][00497] InferenceWorker_p0-w0: stopping experience collection (32000 times) +[2024-03-29 18:55:42,240][00476] Signal inference workers to resume experience collection... (32000 times) +[2024-03-29 18:55:42,240][00497] InferenceWorker_p0-w0: resuming experience collection (32000 times) +[2024-03-29 18:55:43,839][00126] Fps is (10 sec: 44236.1, 60 sec: 41779.1, 300 sec: 41820.9). Total num frames: 1016709120. Throughput: 0: 41407.4. Samples: 898917300. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:55:43,840][00126] Avg episode reward: [(0, '0.680')] +[2024-03-29 18:55:45,293][00497] Updated weights for policy 0, policy_version 62058 (0.0025) +[2024-03-29 18:55:48,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1016905728. Throughput: 0: 41473.4. Samples: 899055120. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:55:48,840][00126] Avg episode reward: [(0, '0.603')] +[2024-03-29 18:55:49,565][00497] Updated weights for policy 0, policy_version 62068 (0.0022) +[2024-03-29 18:55:53,113][00497] Updated weights for policy 0, policy_version 62078 (0.0030) +[2024-03-29 18:55:53,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1017102336. Throughput: 0: 41960.5. Samples: 899307240. 
Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:55:53,840][00126] Avg episode reward: [(0, '0.625')] +[2024-03-29 18:55:57,230][00497] Updated weights for policy 0, policy_version 62088 (0.0024) +[2024-03-29 18:55:58,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 1017331712. Throughput: 0: 41452.0. Samples: 899533360. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:55:58,842][00126] Avg episode reward: [(0, '0.647')] +[2024-03-29 18:56:01,421][00497] Updated weights for policy 0, policy_version 62098 (0.0020) +[2024-03-29 18:56:03,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41506.3, 300 sec: 41654.3). Total num frames: 1017511936. Throughput: 0: 41584.5. Samples: 899675140. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:56:03,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 18:56:05,233][00497] Updated weights for policy 0, policy_version 62108 (0.0023) +[2024-03-29 18:56:08,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 1017724928. Throughput: 0: 41624.0. Samples: 899922960. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:56:08,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 18:56:09,068][00497] Updated weights for policy 0, policy_version 62118 (0.0036) +[2024-03-29 18:56:12,736][00497] Updated weights for policy 0, policy_version 62128 (0.0028) +[2024-03-29 18:56:13,839][00126] Fps is (10 sec: 44235.9, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1017954304. Throughput: 0: 41501.6. Samples: 900167160. Policy #0 lag: (min: 1.0, avg: 19.2, max: 42.0) +[2024-03-29 18:56:13,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 18:56:16,888][00497] Updated weights for policy 0, policy_version 62138 (0.0022) +[2024-03-29 18:56:17,349][00476] Signal inference workers to stop experience collection... (32050 times) +[2024-03-29 18:56:17,418][00497] InferenceWorker_p0-w0: stopping experience collection (32050 times) +[2024-03-29 18:56:17,423][00476] Signal inference workers to resume experience collection... (32050 times) +[2024-03-29 18:56:17,445][00497] InferenceWorker_p0-w0: resuming experience collection (32050 times) +[2024-03-29 18:56:18,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 1018134528. Throughput: 0: 41537.6. Samples: 900299440. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:18,840][00126] Avg episode reward: [(0, '0.558')] +[2024-03-29 18:56:20,855][00497] Updated weights for policy 0, policy_version 62148 (0.0033) +[2024-03-29 18:56:23,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 1018347520. Throughput: 0: 41685.3. Samples: 900553160. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:23,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 18:56:24,723][00497] Updated weights for policy 0, policy_version 62158 (0.0027) +[2024-03-29 18:56:28,510][00497] Updated weights for policy 0, policy_version 62168 (0.0032) +[2024-03-29 18:56:28,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.2, 300 sec: 41709.8). Total num frames: 1018576896. Throughput: 0: 41619.2. Samples: 900790160. 
Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:28,840][00126] Avg episode reward: [(0, '0.638')] +[2024-03-29 18:56:32,791][00497] Updated weights for policy 0, policy_version 62178 (0.0025) +[2024-03-29 18:56:33,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41654.3). Total num frames: 1018757120. Throughput: 0: 41172.0. Samples: 900907860. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:33,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:56:36,960][00497] Updated weights for policy 0, policy_version 62188 (0.0024) +[2024-03-29 18:56:38,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40687.0, 300 sec: 41543.2). Total num frames: 1018953728. Throughput: 0: 41465.9. Samples: 901173200. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:38,840][00126] Avg episode reward: [(0, '0.547')] +[2024-03-29 18:56:40,601][00497] Updated weights for policy 0, policy_version 62198 (0.0026) +[2024-03-29 18:56:43,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 1019183104. Throughput: 0: 41568.5. Samples: 901403940. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:43,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 18:56:44,552][00497] Updated weights for policy 0, policy_version 62208 (0.0025) +[2024-03-29 18:56:48,838][00497] Updated weights for policy 0, policy_version 62218 (0.0022) +[2024-03-29 18:56:48,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 1019379712. Throughput: 0: 41011.9. Samples: 901520680. Policy #0 lag: (min: 1.0, avg: 22.7, max: 43.0) +[2024-03-29 18:56:48,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:56:50,772][00476] Signal inference workers to stop experience collection... (32100 times) +[2024-03-29 18:56:50,853][00476] Signal inference workers to resume experience collection... (32100 times) +[2024-03-29 18:56:50,855][00497] InferenceWorker_p0-w0: stopping experience collection (32100 times) +[2024-03-29 18:56:50,884][00497] InferenceWorker_p0-w0: resuming experience collection (32100 times) +[2024-03-29 18:56:52,862][00497] Updated weights for policy 0, policy_version 62228 (0.0033) +[2024-03-29 18:56:53,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1019592704. Throughput: 0: 41289.2. Samples: 901780980. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:56:53,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 18:56:56,633][00497] Updated weights for policy 0, policy_version 62238 (0.0028) +[2024-03-29 18:56:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 1019805696. Throughput: 0: 40980.0. Samples: 902011260. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:56:58,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 18:57:00,550][00497] Updated weights for policy 0, policy_version 62248 (0.0024) +[2024-03-29 18:57:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41506.0, 300 sec: 41710.0). Total num frames: 1020002304. Throughput: 0: 41107.5. Samples: 902149280. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:03,840][00126] Avg episode reward: [(0, '0.609')] +[2024-03-29 18:57:03,865][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062256_1020002304.pth... 
+[2024-03-29 18:57:04,185][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061649_1010057216.pth +[2024-03-29 18:57:04,768][00497] Updated weights for policy 0, policy_version 62258 (0.0025) +[2024-03-29 18:57:08,474][00497] Updated weights for policy 0, policy_version 62268 (0.0026) +[2024-03-29 18:57:08,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 1020198912. Throughput: 0: 41260.1. Samples: 902409860. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:08,840][00126] Avg episode reward: [(0, '0.640')] +[2024-03-29 18:57:12,764][00497] Updated weights for policy 0, policy_version 62278 (0.0023) +[2024-03-29 18:57:13,839][00126] Fps is (10 sec: 42599.6, 60 sec: 41233.2, 300 sec: 41654.2). Total num frames: 1020428288. Throughput: 0: 41349.0. Samples: 902650860. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:13,840][00126] Avg episode reward: [(0, '0.517')] +[2024-03-29 18:57:16,494][00497] Updated weights for policy 0, policy_version 62288 (0.0022) +[2024-03-29 18:57:18,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1020641280. Throughput: 0: 41233.2. Samples: 902763360. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:18,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 18:57:20,660][00497] Updated weights for policy 0, policy_version 62298 (0.0026) +[2024-03-29 18:57:21,108][00476] Signal inference workers to stop experience collection... (32150 times) +[2024-03-29 18:57:21,134][00497] InferenceWorker_p0-w0: stopping experience collection (32150 times) +[2024-03-29 18:57:21,300][00476] Signal inference workers to resume experience collection... (32150 times) +[2024-03-29 18:57:21,301][00497] InferenceWorker_p0-w0: resuming experience collection (32150 times) +[2024-03-29 18:57:23,839][00126] Fps is (10 sec: 39320.8, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 1020821504. Throughput: 0: 41032.8. Samples: 903019680. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:23,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 18:57:24,273][00497] Updated weights for policy 0, policy_version 62308 (0.0021) +[2024-03-29 18:57:28,446][00497] Updated weights for policy 0, policy_version 62318 (0.0025) +[2024-03-29 18:57:28,839][00126] Fps is (10 sec: 39322.1, 60 sec: 40960.1, 300 sec: 41543.2). Total num frames: 1021034496. Throughput: 0: 41640.1. Samples: 903277740. Policy #0 lag: (min: 1.0, avg: 19.2, max: 41.0) +[2024-03-29 18:57:28,840][00126] Avg episode reward: [(0, '0.557')] +[2024-03-29 18:57:32,254][00497] Updated weights for policy 0, policy_version 62328 (0.0029) +[2024-03-29 18:57:33,839][00126] Fps is (10 sec: 44237.4, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1021263872. Throughput: 0: 41406.3. Samples: 903383960. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:33,840][00126] Avg episode reward: [(0, '0.621')] +[2024-03-29 18:57:36,150][00497] Updated weights for policy 0, policy_version 62338 (0.0024) +[2024-03-29 18:57:38,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 1021444096. Throughput: 0: 41655.7. Samples: 903655480. 
Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:38,840][00126] Avg episode reward: [(0, '0.629')] +[2024-03-29 18:57:39,957][00497] Updated weights for policy 0, policy_version 62348 (0.0027) +[2024-03-29 18:57:43,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40960.1, 300 sec: 41487.6). Total num frames: 1021640704. Throughput: 0: 41934.4. Samples: 903898300. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:43,840][00126] Avg episode reward: [(0, '0.635')] +[2024-03-29 18:57:44,178][00497] Updated weights for policy 0, policy_version 62358 (0.0027) +[2024-03-29 18:57:47,879][00497] Updated weights for policy 0, policy_version 62368 (0.0023) +[2024-03-29 18:57:48,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1021886464. Throughput: 0: 41233.4. Samples: 904004780. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:48,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:57:51,972][00497] Updated weights for policy 0, policy_version 62378 (0.0020) +[2024-03-29 18:57:53,021][00476] Signal inference workers to stop experience collection... (32200 times) +[2024-03-29 18:57:53,058][00497] InferenceWorker_p0-w0: stopping experience collection (32200 times) +[2024-03-29 18:57:53,249][00476] Signal inference workers to resume experience collection... (32200 times) +[2024-03-29 18:57:53,250][00497] InferenceWorker_p0-w0: resuming experience collection (32200 times) +[2024-03-29 18:57:53,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41233.2, 300 sec: 41543.2). Total num frames: 1022066688. Throughput: 0: 41366.7. Samples: 904271360. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:53,840][00126] Avg episode reward: [(0, '0.532')] +[2024-03-29 18:57:55,718][00497] Updated weights for policy 0, policy_version 62388 (0.0022) +[2024-03-29 18:57:58,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1022279680. Throughput: 0: 41720.8. Samples: 904528300. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:57:58,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 18:57:59,878][00497] Updated weights for policy 0, policy_version 62398 (0.0045) +[2024-03-29 18:58:03,428][00497] Updated weights for policy 0, policy_version 62408 (0.0022) +[2024-03-29 18:58:03,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.3, 300 sec: 41487.6). Total num frames: 1022492672. Throughput: 0: 41750.4. Samples: 904642120. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:58:03,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 18:58:07,706][00497] Updated weights for policy 0, policy_version 62418 (0.0022) +[2024-03-29 18:58:08,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 1022689280. Throughput: 0: 41722.4. Samples: 904897180. Policy #0 lag: (min: 0.0, avg: 21.2, max: 41.0) +[2024-03-29 18:58:08,840][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 18:58:11,423][00497] Updated weights for policy 0, policy_version 62428 (0.0023) +[2024-03-29 18:58:13,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41233.0, 300 sec: 41432.1). Total num frames: 1022902272. Throughput: 0: 41228.4. Samples: 905133020. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:13,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 18:58:15,743][00497] Updated weights for policy 0, policy_version 62438 (0.0020) +[2024-03-29 18:58:18,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1023115264. Throughput: 0: 41994.6. Samples: 905273720. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:18,841][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:58:19,234][00497] Updated weights for policy 0, policy_version 62448 (0.0027) +[2024-03-29 18:58:23,735][00497] Updated weights for policy 0, policy_version 62458 (0.0021) +[2024-03-29 18:58:23,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 1023311872. Throughput: 0: 40967.8. Samples: 905499040. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:23,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:58:25,230][00476] Signal inference workers to stop experience collection... (32250 times) +[2024-03-29 18:58:25,311][00476] Signal inference workers to resume experience collection... (32250 times) +[2024-03-29 18:58:25,318][00497] InferenceWorker_p0-w0: stopping experience collection (32250 times) +[2024-03-29 18:58:25,340][00497] InferenceWorker_p0-w0: resuming experience collection (32250 times) +[2024-03-29 18:58:27,169][00497] Updated weights for policy 0, policy_version 62468 (0.0024) +[2024-03-29 18:58:28,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.1, 300 sec: 41432.1). Total num frames: 1023541248. Throughput: 0: 41408.3. Samples: 905761680. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:28,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 18:58:31,422][00497] Updated weights for policy 0, policy_version 62478 (0.0021) +[2024-03-29 18:58:33,839][00126] Fps is (10 sec: 44237.2, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 1023754240. Throughput: 0: 42096.5. Samples: 905899120. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:33,840][00126] Avg episode reward: [(0, '0.650')] +[2024-03-29 18:58:34,880][00497] Updated weights for policy 0, policy_version 62488 (0.0019) +[2024-03-29 18:58:38,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 1023934464. Throughput: 0: 41152.5. Samples: 906123220. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:38,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 18:58:39,254][00497] Updated weights for policy 0, policy_version 62498 (0.0023) +[2024-03-29 18:58:43,052][00497] Updated weights for policy 0, policy_version 62508 (0.0021) +[2024-03-29 18:58:43,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.1, 300 sec: 41376.5). Total num frames: 1024147456. Throughput: 0: 41047.6. Samples: 906375440. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:43,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 18:58:47,356][00497] Updated weights for policy 0, policy_version 62518 (0.0021) +[2024-03-29 18:58:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41233.2, 300 sec: 41487.6). Total num frames: 1024360448. Throughput: 0: 41480.8. Samples: 906508760. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0) +[2024-03-29 18:58:48,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 18:58:51,019][00497] Updated weights for policy 0, policy_version 62528 (0.0022) +[2024-03-29 18:58:53,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1024573440. Throughput: 0: 41181.3. Samples: 906750340. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:58:53,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 18:58:55,180][00497] Updated weights for policy 0, policy_version 62538 (0.0020) +[2024-03-29 18:58:58,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 41487.6). Total num frames: 1024770048. Throughput: 0: 41609.4. Samples: 907005440. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:58:58,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 18:58:58,910][00497] Updated weights for policy 0, policy_version 62548 (0.0030) +[2024-03-29 18:59:02,425][00476] Signal inference workers to stop experience collection... (32300 times) +[2024-03-29 18:59:02,426][00476] Signal inference workers to resume experience collection... (32300 times) +[2024-03-29 18:59:02,459][00497] InferenceWorker_p0-w0: stopping experience collection (32300 times) +[2024-03-29 18:59:02,459][00497] InferenceWorker_p0-w0: resuming experience collection (32300 times) +[2024-03-29 18:59:02,719][00497] Updated weights for policy 0, policy_version 62558 (0.0024) +[2024-03-29 18:59:03,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.1, 300 sec: 41543.2). Total num frames: 1024999424. Throughput: 0: 41534.2. Samples: 907142760. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:59:03,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:59:04,070][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062562_1025015808.pth... +[2024-03-29 18:59:04,383][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000061952_1015021568.pth +[2024-03-29 18:59:06,609][00497] Updated weights for policy 0, policy_version 62568 (0.0025) +[2024-03-29 18:59:08,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42325.3, 300 sec: 41654.2). Total num frames: 1025228800. Throughput: 0: 41881.8. Samples: 907383720. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:59:08,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 18:59:11,038][00497] Updated weights for policy 0, policy_version 62578 (0.0023) +[2024-03-29 18:59:13,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 1025392640. Throughput: 0: 41546.3. Samples: 907631260. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:59:13,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 18:59:14,593][00497] Updated weights for policy 0, policy_version 62588 (0.0027) +[2024-03-29 18:59:18,668][00497] Updated weights for policy 0, policy_version 62598 (0.0027) +[2024-03-29 18:59:18,839][00126] Fps is (10 sec: 37683.8, 60 sec: 41506.2, 300 sec: 41487.6). Total num frames: 1025605632. Throughput: 0: 41341.9. Samples: 907759500. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:59:18,840][00126] Avg episode reward: [(0, '0.621')] +[2024-03-29 18:59:22,250][00497] Updated weights for policy 0, policy_version 62608 (0.0021) +[2024-03-29 18:59:23,839][00126] Fps is (10 sec: 44235.9, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 1025835008. Throughput: 0: 41895.3. Samples: 908008520. 
Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0) +[2024-03-29 18:59:23,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 18:59:26,571][00497] Updated weights for policy 0, policy_version 62618 (0.0022) +[2024-03-29 18:59:28,839][00126] Fps is (10 sec: 42597.5, 60 sec: 41506.1, 300 sec: 41543.1). Total num frames: 1026031616. Throughput: 0: 41950.1. Samples: 908263200. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:28,840][00126] Avg episode reward: [(0, '0.628')] +[2024-03-29 18:59:30,300][00497] Updated weights for policy 0, policy_version 62628 (0.0020) +[2024-03-29 18:59:33,839][00126] Fps is (10 sec: 39322.6, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1026228224. Throughput: 0: 41614.2. Samples: 908381400. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:33,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 18:59:34,283][00497] Updated weights for policy 0, policy_version 62638 (0.0017) +[2024-03-29 18:59:37,904][00497] Updated weights for policy 0, policy_version 62648 (0.0021) +[2024-03-29 18:59:38,839][00126] Fps is (10 sec: 42598.9, 60 sec: 42052.2, 300 sec: 41543.2). Total num frames: 1026457600. Throughput: 0: 41885.3. Samples: 908635180. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:38,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 18:59:38,883][00476] Signal inference workers to stop experience collection... (32350 times) +[2024-03-29 18:59:38,914][00497] InferenceWorker_p0-w0: stopping experience collection (32350 times) +[2024-03-29 18:59:39,098][00476] Signal inference workers to resume experience collection... (32350 times) +[2024-03-29 18:59:39,098][00497] InferenceWorker_p0-w0: resuming experience collection (32350 times) +[2024-03-29 18:59:42,047][00497] Updated weights for policy 0, policy_version 62658 (0.0025) +[2024-03-29 18:59:43,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 1026637824. Throughput: 0: 42247.1. Samples: 908906560. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:43,840][00126] Avg episode reward: [(0, '0.642')] +[2024-03-29 18:59:46,014][00497] Updated weights for policy 0, policy_version 62668 (0.0021) +[2024-03-29 18:59:48,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.0, 300 sec: 41432.1). Total num frames: 1026850816. Throughput: 0: 41403.5. Samples: 909005920. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:48,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 18:59:50,033][00497] Updated weights for policy 0, policy_version 62678 (0.0019) +[2024-03-29 18:59:53,560][00497] Updated weights for policy 0, policy_version 62688 (0.0024) +[2024-03-29 18:59:53,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 1027080192. Throughput: 0: 41590.8. Samples: 909255300. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:53,840][00126] Avg episode reward: [(0, '0.653')] +[2024-03-29 18:59:57,715][00497] Updated weights for policy 0, policy_version 62698 (0.0025) +[2024-03-29 18:59:58,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 1027260416. Throughput: 0: 42084.5. Samples: 909525060. 
Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 18:59:58,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 19:00:01,653][00497] Updated weights for policy 0, policy_version 62708 (0.0024) +[2024-03-29 19:00:03,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 1027489792. Throughput: 0: 41910.5. Samples: 909645480. Policy #0 lag: (min: 0.0, avg: 18.3, max: 41.0) +[2024-03-29 19:00:03,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 19:00:05,772][00497] Updated weights for policy 0, policy_version 62718 (0.0021) +[2024-03-29 19:00:08,839][00126] Fps is (10 sec: 45874.9, 60 sec: 41506.2, 300 sec: 41654.3). Total num frames: 1027719168. Throughput: 0: 41785.1. Samples: 909888840. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:08,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 19:00:09,152][00497] Updated weights for policy 0, policy_version 62728 (0.0024) +[2024-03-29 19:00:10,041][00476] Signal inference workers to stop experience collection... (32400 times) +[2024-03-29 19:00:10,041][00476] Signal inference workers to resume experience collection... (32400 times) +[2024-03-29 19:00:10,076][00497] InferenceWorker_p0-w0: stopping experience collection (32400 times) +[2024-03-29 19:00:10,076][00497] InferenceWorker_p0-w0: resuming experience collection (32400 times) +[2024-03-29 19:00:13,446][00497] Updated weights for policy 0, policy_version 62738 (0.0023) +[2024-03-29 19:00:13,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 1027899392. Throughput: 0: 41974.7. Samples: 910152060. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:13,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 19:00:17,311][00497] Updated weights for policy 0, policy_version 62748 (0.0029) +[2024-03-29 19:00:18,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42052.1, 300 sec: 41543.1). Total num frames: 1028128768. Throughput: 0: 41951.0. Samples: 910269200. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:18,840][00126] Avg episode reward: [(0, '0.556')] +[2024-03-29 19:00:21,554][00497] Updated weights for policy 0, policy_version 62758 (0.0017) +[2024-03-29 19:00:23,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41779.4, 300 sec: 41654.2). Total num frames: 1028341760. Throughput: 0: 41938.7. Samples: 910522420. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:23,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 19:00:24,916][00497] Updated weights for policy 0, policy_version 62768 (0.0025) +[2024-03-29 19:00:28,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 1028538368. Throughput: 0: 41128.9. Samples: 910757360. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:28,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 19:00:29,245][00497] Updated weights for policy 0, policy_version 62778 (0.0022) +[2024-03-29 19:00:33,085][00497] Updated weights for policy 0, policy_version 62788 (0.0024) +[2024-03-29 19:00:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 1028734976. Throughput: 0: 42039.2. Samples: 910897680. 
Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:33,840][00126] Avg episode reward: [(0, '0.656')] +[2024-03-29 19:00:37,350][00497] Updated weights for policy 0, policy_version 62798 (0.0026) +[2024-03-29 19:00:38,797][00476] Signal inference workers to stop experience collection... (32450 times) +[2024-03-29 19:00:38,836][00497] InferenceWorker_p0-w0: stopping experience collection (32450 times) +[2024-03-29 19:00:38,839][00126] Fps is (10 sec: 42597.6, 60 sec: 41779.1, 300 sec: 41543.2). Total num frames: 1028964352. Throughput: 0: 41979.4. Samples: 911144380. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:38,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 19:00:39,018][00476] Signal inference workers to resume experience collection... (32450 times) +[2024-03-29 19:00:39,018][00497] InferenceWorker_p0-w0: resuming experience collection (32450 times) +[2024-03-29 19:00:40,685][00497] Updated weights for policy 0, policy_version 62808 (0.0032) +[2024-03-29 19:00:43,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42052.2, 300 sec: 41543.2). Total num frames: 1029160960. Throughput: 0: 41217.7. Samples: 911379860. Policy #0 lag: (min: 0.0, avg: 20.5, max: 43.0) +[2024-03-29 19:00:43,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 19:00:45,028][00497] Updated weights for policy 0, policy_version 62818 (0.0025) +[2024-03-29 19:00:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 1029357568. Throughput: 0: 41839.1. Samples: 911528240. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:00:48,840][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 19:00:49,064][00497] Updated weights for policy 0, policy_version 62828 (0.0022) +[2024-03-29 19:00:53,248][00497] Updated weights for policy 0, policy_version 62838 (0.0026) +[2024-03-29 19:00:53,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.0, 300 sec: 41487.6). Total num frames: 1029570560. Throughput: 0: 41639.0. Samples: 911762600. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:00:53,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 19:00:56,660][00497] Updated weights for policy 0, policy_version 62848 (0.0029) +[2024-03-29 19:00:58,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 1029783552. Throughput: 0: 41228.5. Samples: 912007340. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:00:58,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 19:01:01,006][00497] Updated weights for policy 0, policy_version 62858 (0.0023) +[2024-03-29 19:01:03,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1029963776. Throughput: 0: 41698.3. Samples: 912145620. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:01:03,841][00126] Avg episode reward: [(0, '0.676')] +[2024-03-29 19:01:03,921][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062865_1029980160.pth... +[2024-03-29 19:01:04,263][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062256_1020002304.pth +[2024-03-29 19:01:05,097][00497] Updated weights for policy 0, policy_version 62868 (0.0032) +[2024-03-29 19:01:08,839][00126] Fps is (10 sec: 39320.8, 60 sec: 40959.9, 300 sec: 41432.1). Total num frames: 1030176768. Throughput: 0: 41170.1. Samples: 912375080. 
Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:01:08,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 19:01:09,319][00497] Updated weights for policy 0, policy_version 62878 (0.0030) +[2024-03-29 19:01:09,919][00476] Signal inference workers to stop experience collection... (32500 times) +[2024-03-29 19:01:09,959][00497] InferenceWorker_p0-w0: stopping experience collection (32500 times) +[2024-03-29 19:01:10,144][00476] Signal inference workers to resume experience collection... (32500 times) +[2024-03-29 19:01:10,144][00497] InferenceWorker_p0-w0: resuming experience collection (32500 times) +[2024-03-29 19:01:12,768][00497] Updated weights for policy 0, policy_version 62888 (0.0026) +[2024-03-29 19:01:13,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.1, 300 sec: 41543.2). Total num frames: 1030389760. Throughput: 0: 41087.0. Samples: 912606280. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:01:13,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 19:01:17,378][00497] Updated weights for policy 0, policy_version 62898 (0.0019) +[2024-03-29 19:01:18,839][00126] Fps is (10 sec: 40961.0, 60 sec: 40960.1, 300 sec: 41487.6). Total num frames: 1030586368. Throughput: 0: 40939.6. Samples: 912739960. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:01:18,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 19:01:21,120][00497] Updated weights for policy 0, policy_version 62908 (0.0021) +[2024-03-29 19:01:23,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40959.9, 300 sec: 41432.1). Total num frames: 1030799360. Throughput: 0: 40939.1. Samples: 912986640. Policy #0 lag: (min: 0.0, avg: 20.3, max: 41.0) +[2024-03-29 19:01:23,840][00126] Avg episode reward: [(0, '0.650')] +[2024-03-29 19:01:25,263][00497] Updated weights for policy 0, policy_version 62918 (0.0022) +[2024-03-29 19:01:28,794][00497] Updated weights for policy 0, policy_version 62928 (0.0021) +[2024-03-29 19:01:28,839][00126] Fps is (10 sec: 42597.6, 60 sec: 41233.0, 300 sec: 41543.1). Total num frames: 1031012352. Throughput: 0: 41009.2. Samples: 913225280. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:28,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 19:01:33,216][00497] Updated weights for policy 0, policy_version 62938 (0.0025) +[2024-03-29 19:01:33,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40959.9, 300 sec: 41487.6). Total num frames: 1031192576. Throughput: 0: 40539.6. Samples: 913352520. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:33,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 19:01:37,013][00497] Updated weights for policy 0, policy_version 62948 (0.0021) +[2024-03-29 19:01:38,839][00126] Fps is (10 sec: 40959.7, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 1031421952. Throughput: 0: 41052.0. Samples: 913609940. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:38,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 19:01:41,093][00497] Updated weights for policy 0, policy_version 62958 (0.0022) +[2024-03-29 19:01:42,273][00476] Signal inference workers to stop experience collection... (32550 times) +[2024-03-29 19:01:42,342][00497] InferenceWorker_p0-w0: stopping experience collection (32550 times) +[2024-03-29 19:01:42,374][00476] Signal inference workers to resume experience collection... 
(32550 times) +[2024-03-29 19:01:42,376][00497] InferenceWorker_p0-w0: resuming experience collection (32550 times) +[2024-03-29 19:01:43,839][00126] Fps is (10 sec: 44237.2, 60 sec: 41233.1, 300 sec: 41543.2). Total num frames: 1031634944. Throughput: 0: 40818.6. Samples: 913844180. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:43,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 19:01:44,477][00497] Updated weights for policy 0, policy_version 62968 (0.0027) +[2024-03-29 19:01:48,839][00126] Fps is (10 sec: 39322.3, 60 sec: 40960.1, 300 sec: 41432.1). Total num frames: 1031815168. Throughput: 0: 40696.0. Samples: 913976940. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:48,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 19:01:49,220][00497] Updated weights for policy 0, policy_version 62978 (0.0019) +[2024-03-29 19:01:52,820][00497] Updated weights for policy 0, policy_version 62988 (0.0029) +[2024-03-29 19:01:53,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1032044544. Throughput: 0: 41383.2. Samples: 914237320. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:53,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 19:01:56,655][00497] Updated weights for policy 0, policy_version 62998 (0.0018) +[2024-03-29 19:01:58,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41233.0, 300 sec: 41543.2). Total num frames: 1032257536. Throughput: 0: 41620.1. Samples: 914479180. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:01:58,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 19:02:00,190][00497] Updated weights for policy 0, policy_version 63008 (0.0030) +[2024-03-29 19:02:03,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41233.0, 300 sec: 41487.6). Total num frames: 1032437760. Throughput: 0: 41396.2. Samples: 914602800. Policy #0 lag: (min: 0.0, avg: 20.2, max: 41.0) +[2024-03-29 19:02:03,840][00126] Avg episode reward: [(0, '0.626')] +[2024-03-29 19:02:05,056][00497] Updated weights for policy 0, policy_version 63018 (0.0018) +[2024-03-29 19:02:08,612][00497] Updated weights for policy 0, policy_version 63028 (0.0024) +[2024-03-29 19:02:08,840][00126] Fps is (10 sec: 39316.8, 60 sec: 41232.3, 300 sec: 41431.9). Total num frames: 1032650752. Throughput: 0: 41486.5. Samples: 914853580. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:08,841][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 19:02:12,467][00497] Updated weights for policy 0, policy_version 63038 (0.0022) +[2024-03-29 19:02:13,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 1032863744. Throughput: 0: 41759.6. Samples: 915104460. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:13,840][00126] Avg episode reward: [(0, '0.494')] +[2024-03-29 19:02:14,366][00476] Signal inference workers to stop experience collection... (32600 times) +[2024-03-29 19:02:14,430][00497] InferenceWorker_p0-w0: stopping experience collection (32600 times) +[2024-03-29 19:02:14,439][00476] Signal inference workers to resume experience collection... (32600 times) +[2024-03-29 19:02:14,459][00497] InferenceWorker_p0-w0: resuming experience collection (32600 times) +[2024-03-29 19:02:16,262][00497] Updated weights for policy 0, policy_version 63048 (0.0024) +[2024-03-29 19:02:18,839][00126] Fps is (10 sec: 44242.0, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 1033093120. Throughput: 0: 41480.5. Samples: 915219140. 
Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:18,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 19:02:20,878][00497] Updated weights for policy 0, policy_version 63058 (0.0031) +[2024-03-29 19:02:23,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41233.0, 300 sec: 41487.6). Total num frames: 1033273344. Throughput: 0: 41673.8. Samples: 915485260. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:23,840][00126] Avg episode reward: [(0, '0.659')] +[2024-03-29 19:02:24,254][00497] Updated weights for policy 0, policy_version 63068 (0.0034) +[2024-03-29 19:02:27,949][00497] Updated weights for policy 0, policy_version 63078 (0.0019) +[2024-03-29 19:02:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.1, 300 sec: 41487.6). Total num frames: 1033502720. Throughput: 0: 42035.9. Samples: 915735800. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:28,840][00126] Avg episode reward: [(0, '0.600')] +[2024-03-29 19:02:31,438][00497] Updated weights for policy 0, policy_version 63088 (0.0026) +[2024-03-29 19:02:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.2, 300 sec: 41598.7). Total num frames: 1033715712. Throughput: 0: 41587.0. Samples: 915848360. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:33,840][00126] Avg episode reward: [(0, '0.630')] +[2024-03-29 19:02:36,176][00497] Updated weights for policy 0, policy_version 63098 (0.0019) +[2024-03-29 19:02:38,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 1033912320. Throughput: 0: 41773.8. Samples: 916117140. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0) +[2024-03-29 19:02:38,840][00126] Avg episode reward: [(0, '0.612')] +[2024-03-29 19:02:39,789][00497] Updated weights for policy 0, policy_version 63108 (0.0024) +[2024-03-29 19:02:43,559][00497] Updated weights for policy 0, policy_version 63118 (0.0033) +[2024-03-29 19:02:43,839][00126] Fps is (10 sec: 40960.8, 60 sec: 41506.2, 300 sec: 41487.6). Total num frames: 1034125312. Throughput: 0: 41799.6. Samples: 916360160. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:02:43,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 19:02:46,920][00497] Updated weights for policy 0, policy_version 63128 (0.0024) +[2024-03-29 19:02:47,785][00476] Signal inference workers to stop experience collection... (32650 times) +[2024-03-29 19:02:47,786][00476] Signal inference workers to resume experience collection... (32650 times) +[2024-03-29 19:02:47,825][00497] InferenceWorker_p0-w0: stopping experience collection (32650 times) +[2024-03-29 19:02:47,829][00497] InferenceWorker_p0-w0: resuming experience collection (32650 times) +[2024-03-29 19:02:48,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42325.4, 300 sec: 41654.2). Total num frames: 1034354688. Throughput: 0: 41849.0. Samples: 916486000. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:02:48,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 19:02:51,790][00497] Updated weights for policy 0, policy_version 63138 (0.0021) +[2024-03-29 19:02:53,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1034551296. Throughput: 0: 42177.9. Samples: 916751540. 
Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:02:53,841][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 19:02:55,336][00497] Updated weights for policy 0, policy_version 63148 (0.0022) +[2024-03-29 19:02:58,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.1, 300 sec: 41598.7). Total num frames: 1034764288. Throughput: 0: 41932.4. Samples: 916991420. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:02:58,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 19:02:58,984][00497] Updated weights for policy 0, policy_version 63158 (0.0025) +[2024-03-29 19:03:02,772][00497] Updated weights for policy 0, policy_version 63168 (0.0024) +[2024-03-29 19:03:03,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42325.4, 300 sec: 41654.2). Total num frames: 1034977280. Throughput: 0: 41976.9. Samples: 917108100. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:03:03,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 19:03:04,026][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063171_1034993664.pth... +[2024-03-29 19:03:04,367][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062562_1025015808.pth +[2024-03-29 19:03:07,583][00497] Updated weights for policy 0, policy_version 63178 (0.0033) +[2024-03-29 19:03:08,839][00126] Fps is (10 sec: 40959.7, 60 sec: 42053.0, 300 sec: 41598.7). Total num frames: 1035173888. Throughput: 0: 42253.8. Samples: 917386680. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:03:08,840][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 19:03:11,175][00497] Updated weights for policy 0, policy_version 63188 (0.0026) +[2024-03-29 19:03:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 1035386880. Throughput: 0: 41888.1. Samples: 917620760. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:03:13,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 19:03:14,581][00497] Updated weights for policy 0, policy_version 63198 (0.0021) +[2024-03-29 19:03:18,420][00497] Updated weights for policy 0, policy_version 63208 (0.0018) +[2024-03-29 19:03:18,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1035599872. Throughput: 0: 42184.5. Samples: 917746660. Policy #0 lag: (min: 0.0, avg: 21.8, max: 43.0) +[2024-03-29 19:03:18,840][00126] Avg episode reward: [(0, '0.595')] +[2024-03-29 19:03:23,224][00497] Updated weights for policy 0, policy_version 63218 (0.0019) +[2024-03-29 19:03:23,253][00476] Signal inference workers to stop experience collection... (32700 times) +[2024-03-29 19:03:23,273][00497] InferenceWorker_p0-w0: stopping experience collection (32700 times) +[2024-03-29 19:03:23,474][00476] Signal inference workers to resume experience collection... (32700 times) +[2024-03-29 19:03:23,475][00497] InferenceWorker_p0-w0: resuming experience collection (32700 times) +[2024-03-29 19:03:23,839][00126] Fps is (10 sec: 40959.5, 60 sec: 42052.3, 300 sec: 41543.2). Total num frames: 1035796480. Throughput: 0: 41963.9. Samples: 918005520. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:23,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 19:03:26,702][00497] Updated weights for policy 0, policy_version 63228 (0.0030) +[2024-03-29 19:03:28,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41779.3, 300 sec: 41543.2). Total num frames: 1036009472. Throughput: 0: 41864.5. Samples: 918244060. 
Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:28,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 19:03:30,170][00497] Updated weights for policy 0, policy_version 63238 (0.0024) +[2024-03-29 19:03:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1036238848. Throughput: 0: 41994.5. Samples: 918375760. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:33,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 19:03:34,208][00497] Updated weights for policy 0, policy_version 63248 (0.0026) +[2024-03-29 19:03:38,839][00126] Fps is (10 sec: 39321.0, 60 sec: 41506.1, 300 sec: 41543.1). Total num frames: 1036402688. Throughput: 0: 41679.6. Samples: 918627120. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:38,841][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 19:03:38,923][00497] Updated weights for policy 0, policy_version 63258 (0.0026) +[2024-03-29 19:03:42,408][00497] Updated weights for policy 0, policy_version 63268 (0.0028) +[2024-03-29 19:03:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.1, 300 sec: 41654.2). Total num frames: 1036648448. Throughput: 0: 41798.6. Samples: 918872360. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:43,840][00126] Avg episode reward: [(0, '0.604')] +[2024-03-29 19:03:45,979][00497] Updated weights for policy 0, policy_version 63278 (0.0029) +[2024-03-29 19:03:48,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 1036861440. Throughput: 0: 41920.3. Samples: 918994520. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:48,842][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 19:03:50,146][00497] Updated weights for policy 0, policy_version 63288 (0.0028) +[2024-03-29 19:03:53,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1037041664. Throughput: 0: 41316.4. Samples: 919245920. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:53,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 19:03:54,550][00497] Updated weights for policy 0, policy_version 63298 (0.0017) +[2024-03-29 19:03:58,025][00497] Updated weights for policy 0, policy_version 63308 (0.0023) +[2024-03-29 19:03:58,214][00476] Signal inference workers to stop experience collection... (32750 times) +[2024-03-29 19:03:58,249][00497] InferenceWorker_p0-w0: stopping experience collection (32750 times) +[2024-03-29 19:03:58,395][00476] Signal inference workers to resume experience collection... (32750 times) +[2024-03-29 19:03:58,396][00497] InferenceWorker_p0-w0: resuming experience collection (32750 times) +[2024-03-29 19:03:58,839][00126] Fps is (10 sec: 40961.0, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 1037271040. Throughput: 0: 41882.8. Samples: 919505480. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:03:58,840][00126] Avg episode reward: [(0, '0.563')] +[2024-03-29 19:04:01,583][00497] Updated weights for policy 0, policy_version 63318 (0.0026) +[2024-03-29 19:04:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.1, 300 sec: 41543.1). Total num frames: 1037484032. Throughput: 0: 41561.3. Samples: 919616920. 
Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:03,840][00126] Avg episode reward: [(0, '0.637')] +[2024-03-29 19:04:05,852][00497] Updated weights for policy 0, policy_version 63328 (0.0036) +[2024-03-29 19:04:08,839][00126] Fps is (10 sec: 40959.3, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 1037680640. Throughput: 0: 41416.9. Samples: 919869280. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:08,840][00126] Avg episode reward: [(0, '0.616')] +[2024-03-29 19:04:10,310][00497] Updated weights for policy 0, policy_version 63338 (0.0021) +[2024-03-29 19:04:13,670][00497] Updated weights for policy 0, policy_version 63348 (0.0022) +[2024-03-29 19:04:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 1037893632. Throughput: 0: 42002.5. Samples: 920134180. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:13,840][00126] Avg episode reward: [(0, '0.588')] +[2024-03-29 19:04:17,141][00497] Updated weights for policy 0, policy_version 63358 (0.0027) +[2024-03-29 19:04:18,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 1038090240. Throughput: 0: 41770.3. Samples: 920255420. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:18,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 19:04:21,441][00497] Updated weights for policy 0, policy_version 63368 (0.0030) +[2024-03-29 19:04:23,839][00126] Fps is (10 sec: 42599.2, 60 sec: 42052.4, 300 sec: 41654.3). Total num frames: 1038319616. Throughput: 0: 41558.8. Samples: 920497260. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:23,841][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 19:04:26,196][00497] Updated weights for policy 0, policy_version 63378 (0.0021) +[2024-03-29 19:04:28,609][00476] Signal inference workers to stop experience collection... (32800 times) +[2024-03-29 19:04:28,622][00497] InferenceWorker_p0-w0: stopping experience collection (32800 times) +[2024-03-29 19:04:28,817][00476] Signal inference workers to resume experience collection... (32800 times) +[2024-03-29 19:04:28,818][00497] InferenceWorker_p0-w0: resuming experience collection (32800 times) +[2024-03-29 19:04:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1038516224. Throughput: 0: 42023.2. Samples: 920763400. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:28,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 19:04:29,591][00497] Updated weights for policy 0, policy_version 63388 (0.0023) +[2024-03-29 19:04:32,904][00497] Updated weights for policy 0, policy_version 63398 (0.0029) +[2024-03-29 19:04:33,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 1038729216. Throughput: 0: 41710.3. Samples: 920871480. Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:33,840][00126] Avg episode reward: [(0, '0.603')] +[2024-03-29 19:04:37,309][00497] Updated weights for policy 0, policy_version 63408 (0.0020) +[2024-03-29 19:04:38,839][00126] Fps is (10 sec: 42598.2, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 1038942208. Throughput: 0: 41659.7. Samples: 921120600. 
Policy #0 lag: (min: 0.0, avg: 21.5, max: 41.0) +[2024-03-29 19:04:38,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 19:04:41,813][00497] Updated weights for policy 0, policy_version 63418 (0.0029) +[2024-03-29 19:04:43,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 1039122432. Throughput: 0: 41716.2. Samples: 921382720. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:04:43,840][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 19:04:45,285][00497] Updated weights for policy 0, policy_version 63428 (0.0018) +[2024-03-29 19:04:48,380][00497] Updated weights for policy 0, policy_version 63438 (0.0020) +[2024-03-29 19:04:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 1039368192. Throughput: 0: 41981.9. Samples: 921506100. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:04:48,840][00126] Avg episode reward: [(0, '0.675')] +[2024-03-29 19:04:52,968][00497] Updated weights for policy 0, policy_version 63448 (0.0024) +[2024-03-29 19:04:53,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1039564800. Throughput: 0: 41834.6. Samples: 921751840. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:04:53,840][00126] Avg episode reward: [(0, '0.670')] +[2024-03-29 19:04:56,963][00497] Updated weights for policy 0, policy_version 63458 (0.0027) +[2024-03-29 19:04:58,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1039761408. Throughput: 0: 41986.4. Samples: 922023560. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:04:58,840][00126] Avg episode reward: [(0, '0.619')] +[2024-03-29 19:05:00,720][00497] Updated weights for policy 0, policy_version 63468 (0.0030) +[2024-03-29 19:05:03,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42052.4, 300 sec: 41654.2). Total num frames: 1040007168. Throughput: 0: 42058.7. Samples: 922148060. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:05:03,840][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 19:05:03,869][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063478_1040023552.pth... +[2024-03-29 19:05:03,881][00497] Updated weights for policy 0, policy_version 63478 (0.0025) +[2024-03-29 19:05:04,178][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000062865_1029980160.pth +[2024-03-29 19:05:04,684][00476] Signal inference workers to stop experience collection... (32850 times) +[2024-03-29 19:05:04,745][00497] InferenceWorker_p0-w0: stopping experience collection (32850 times) +[2024-03-29 19:05:04,763][00476] Signal inference workers to resume experience collection... (32850 times) +[2024-03-29 19:05:04,777][00497] InferenceWorker_p0-w0: resuming experience collection (32850 times) +[2024-03-29 19:05:08,207][00497] Updated weights for policy 0, policy_version 63488 (0.0019) +[2024-03-29 19:05:08,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1040203776. Throughput: 0: 42181.3. Samples: 922395420. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:05:08,840][00126] Avg episode reward: [(0, '0.680')] +[2024-03-29 19:05:12,444][00497] Updated weights for policy 0, policy_version 63498 (0.0023) +[2024-03-29 19:05:13,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1040400384. Throughput: 0: 42222.2. Samples: 922663400. 
Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:05:13,840][00126] Avg episode reward: [(0, '0.641')] +[2024-03-29 19:05:16,109][00497] Updated weights for policy 0, policy_version 63508 (0.0023) +[2024-03-29 19:05:18,839][00126] Fps is (10 sec: 42598.8, 60 sec: 42325.4, 300 sec: 41654.2). Total num frames: 1040629760. Throughput: 0: 42548.1. Samples: 922786140. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0) +[2024-03-29 19:05:18,840][00126] Avg episode reward: [(0, '0.621')] +[2024-03-29 19:05:19,518][00497] Updated weights for policy 0, policy_version 63518 (0.0023) +[2024-03-29 19:05:23,775][00497] Updated weights for policy 0, policy_version 63528 (0.0037) +[2024-03-29 19:05:23,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.1, 300 sec: 41709.7). Total num frames: 1040842752. Throughput: 0: 42205.2. Samples: 923019840. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:23,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 19:05:28,227][00497] Updated weights for policy 0, policy_version 63538 (0.0020) +[2024-03-29 19:05:28,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1041022976. Throughput: 0: 42374.8. Samples: 923289580. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:28,840][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 19:05:31,736][00497] Updated weights for policy 0, policy_version 63548 (0.0023) +[2024-03-29 19:05:33,839][00126] Fps is (10 sec: 42599.3, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 1041268736. Throughput: 0: 42385.8. Samples: 923413460. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:33,840][00126] Avg episode reward: [(0, '0.506')] +[2024-03-29 19:05:35,010][00497] Updated weights for policy 0, policy_version 63558 (0.0019) +[2024-03-29 19:05:38,098][00476] Signal inference workers to stop experience collection... (32900 times) +[2024-03-29 19:05:38,099][00476] Signal inference workers to resume experience collection... (32900 times) +[2024-03-29 19:05:38,140][00497] InferenceWorker_p0-w0: stopping experience collection (32900 times) +[2024-03-29 19:05:38,140][00497] InferenceWorker_p0-w0: resuming experience collection (32900 times) +[2024-03-29 19:05:38,839][00126] Fps is (10 sec: 45875.3, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 1041481728. Throughput: 0: 42338.7. Samples: 923657080. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:38,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 19:05:39,190][00497] Updated weights for policy 0, policy_version 63568 (0.0024) +[2024-03-29 19:05:43,592][00497] Updated weights for policy 0, policy_version 63578 (0.0027) +[2024-03-29 19:05:43,839][00126] Fps is (10 sec: 39321.5, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 1041661952. Throughput: 0: 42154.2. Samples: 923920500. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:43,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 19:05:47,443][00497] Updated weights for policy 0, policy_version 63588 (0.0029) +[2024-03-29 19:05:48,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1041874944. Throughput: 0: 42132.4. Samples: 924044020. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:48,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 19:05:50,911][00497] Updated weights for policy 0, policy_version 63598 (0.0027) +[2024-03-29 19:05:53,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 1042104320. Throughput: 0: 41974.6. Samples: 924284280. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:53,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 19:05:55,093][00497] Updated weights for policy 0, policy_version 63608 (0.0021) +[2024-03-29 19:05:58,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42325.2, 300 sec: 41820.8). Total num frames: 1042300928. Throughput: 0: 41733.8. Samples: 924541420. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:05:58,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 19:05:59,344][00497] Updated weights for policy 0, policy_version 63618 (0.0022) +[2024-03-29 19:06:03,093][00497] Updated weights for policy 0, policy_version 63628 (0.0021) +[2024-03-29 19:06:03,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.1, 300 sec: 41820.9). Total num frames: 1042513920. Throughput: 0: 42083.4. Samples: 924679900. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:03,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 19:06:06,314][00497] Updated weights for policy 0, policy_version 63638 (0.0019) +[2024-03-29 19:06:08,839][00126] Fps is (10 sec: 44237.4, 60 sec: 42325.4, 300 sec: 41876.4). Total num frames: 1042743296. Throughput: 0: 42231.4. Samples: 924920240. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:08,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 19:06:10,224][00497] Updated weights for policy 0, policy_version 63648 (0.0027) +[2024-03-29 19:06:12,276][00476] Signal inference workers to stop experience collection... (32950 times) +[2024-03-29 19:06:12,307][00497] InferenceWorker_p0-w0: stopping experience collection (32950 times) +[2024-03-29 19:06:12,461][00476] Signal inference workers to resume experience collection... (32950 times) +[2024-03-29 19:06:12,461][00497] InferenceWorker_p0-w0: resuming experience collection (32950 times) +[2024-03-29 19:06:13,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42325.3, 300 sec: 41876.4). Total num frames: 1042939904. Throughput: 0: 41910.2. Samples: 925175540. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:13,840][00126] Avg episode reward: [(0, '0.656')] +[2024-03-29 19:06:14,859][00497] Updated weights for policy 0, policy_version 63658 (0.0019) +[2024-03-29 19:06:18,570][00497] Updated weights for policy 0, policy_version 63668 (0.0025) +[2024-03-29 19:06:18,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 1043152896. Throughput: 0: 42261.4. Samples: 925315220. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:18,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 19:06:21,880][00497] Updated weights for policy 0, policy_version 63678 (0.0024) +[2024-03-29 19:06:23,839][00126] Fps is (10 sec: 44237.4, 60 sec: 42325.5, 300 sec: 41932.0). Total num frames: 1043382272. Throughput: 0: 42148.1. Samples: 925553740. 
Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:23,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 19:06:25,514][00497] Updated weights for policy 0, policy_version 63688 (0.0022) +[2024-03-29 19:06:28,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42871.5, 300 sec: 42043.0). Total num frames: 1043595264. Throughput: 0: 41957.7. Samples: 925808600. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:28,840][00126] Avg episode reward: [(0, '0.550')] +[2024-03-29 19:06:30,536][00497] Updated weights for policy 0, policy_version 63698 (0.0030) +[2024-03-29 19:06:33,839][00126] Fps is (10 sec: 39321.0, 60 sec: 41779.1, 300 sec: 41876.4). Total num frames: 1043775488. Throughput: 0: 42173.7. Samples: 925941840. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:33,842][00126] Avg episode reward: [(0, '0.621')] +[2024-03-29 19:06:34,039][00497] Updated weights for policy 0, policy_version 63708 (0.0023) +[2024-03-29 19:06:37,324][00497] Updated weights for policy 0, policy_version 63718 (0.0023) +[2024-03-29 19:06:38,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 1044004864. Throughput: 0: 42397.3. Samples: 926192160. Policy #0 lag: (min: 0.0, avg: 19.0, max: 42.0) +[2024-03-29 19:06:38,840][00126] Avg episode reward: [(0, '0.575')] +[2024-03-29 19:06:40,984][00497] Updated weights for policy 0, policy_version 63728 (0.0018) +[2024-03-29 19:06:43,839][00126] Fps is (10 sec: 45875.8, 60 sec: 42871.5, 300 sec: 42098.6). Total num frames: 1044234240. Throughput: 0: 42204.6. Samples: 926440620. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:06:43,840][00126] Avg episode reward: [(0, '0.571')] +[2024-03-29 19:06:45,889][00476] Signal inference workers to stop experience collection... (33000 times) +[2024-03-29 19:06:45,952][00497] InferenceWorker_p0-w0: stopping experience collection (33000 times) +[2024-03-29 19:06:45,960][00476] Signal inference workers to resume experience collection... (33000 times) +[2024-03-29 19:06:45,978][00497] InferenceWorker_p0-w0: resuming experience collection (33000 times) +[2024-03-29 19:06:45,985][00497] Updated weights for policy 0, policy_version 63738 (0.0023) +[2024-03-29 19:06:48,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42325.3, 300 sec: 41931.9). Total num frames: 1044414464. Throughput: 0: 42307.7. Samples: 926583740. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:06:48,840][00126] Avg episode reward: [(0, '0.601')] +[2024-03-29 19:06:49,646][00497] Updated weights for policy 0, policy_version 63748 (0.0020) +[2024-03-29 19:06:52,904][00497] Updated weights for policy 0, policy_version 63758 (0.0026) +[2024-03-29 19:06:53,839][00126] Fps is (10 sec: 39321.3, 60 sec: 42052.3, 300 sec: 41931.9). Total num frames: 1044627456. Throughput: 0: 42099.1. Samples: 926814700. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:06:53,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 19:06:56,769][00497] Updated weights for policy 0, policy_version 63768 (0.0021) +[2024-03-29 19:06:58,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42598.5, 300 sec: 42098.6). Total num frames: 1044856832. Throughput: 0: 42273.9. Samples: 927077860. 
Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:06:58,840][00126] Avg episode reward: [(0, '0.615')] +[2024-03-29 19:07:01,449][00497] Updated weights for policy 0, policy_version 63778 (0.0026) +[2024-03-29 19:07:03,839][00126] Fps is (10 sec: 40959.9, 60 sec: 42052.3, 300 sec: 41987.6). Total num frames: 1045037056. Throughput: 0: 42001.7. Samples: 927205300. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:07:03,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 19:07:04,378][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063786_1045069824.pth... +[2024-03-29 19:07:04,684][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063171_1034993664.pth +[2024-03-29 19:07:05,352][00497] Updated weights for policy 0, policy_version 63788 (0.0023) +[2024-03-29 19:07:08,397][00497] Updated weights for policy 0, policy_version 63798 (0.0025) +[2024-03-29 19:07:08,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.3, 300 sec: 42043.0). Total num frames: 1045266432. Throughput: 0: 42260.0. Samples: 927455440. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:07:08,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 19:07:12,447][00497] Updated weights for policy 0, policy_version 63808 (0.0018) +[2024-03-29 19:07:13,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.4, 300 sec: 41987.5). Total num frames: 1045479424. Throughput: 0: 42036.5. Samples: 927700240. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:07:13,840][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 19:07:16,936][00497] Updated weights for policy 0, policy_version 63818 (0.0022) +[2024-03-29 19:07:17,571][00476] Signal inference workers to stop experience collection... (33050 times) +[2024-03-29 19:07:17,644][00497] InferenceWorker_p0-w0: stopping experience collection (33050 times) +[2024-03-29 19:07:17,644][00476] Signal inference workers to resume experience collection... (33050 times) +[2024-03-29 19:07:17,670][00497] InferenceWorker_p0-w0: resuming experience collection (33050 times) +[2024-03-29 19:07:18,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 1045659648. Throughput: 0: 41857.0. Samples: 927825400. Policy #0 lag: (min: 1.0, avg: 21.9, max: 42.0) +[2024-03-29 19:07:18,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 19:07:20,904][00497] Updated weights for policy 0, policy_version 63828 (0.0021) +[2024-03-29 19:07:23,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 1045889024. Throughput: 0: 41867.1. Samples: 928076180. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:23,840][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 19:07:24,324][00497] Updated weights for policy 0, policy_version 63838 (0.0023) +[2024-03-29 19:07:28,007][00497] Updated weights for policy 0, policy_version 63848 (0.0024) +[2024-03-29 19:07:28,840][00126] Fps is (10 sec: 44233.5, 60 sec: 41778.7, 300 sec: 41987.4). Total num frames: 1046102016. Throughput: 0: 41869.5. Samples: 928324780. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:28,840][00126] Avg episode reward: [(0, '0.597')] +[2024-03-29 19:07:32,818][00497] Updated weights for policy 0, policy_version 63858 (0.0024) +[2024-03-29 19:07:33,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 1046298624. Throughput: 0: 41310.6. Samples: 928442720. 
Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:33,840][00126] Avg episode reward: [(0, '0.583')] +[2024-03-29 19:07:36,750][00497] Updated weights for policy 0, policy_version 63868 (0.0027) +[2024-03-29 19:07:38,839][00126] Fps is (10 sec: 40962.9, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 1046511616. Throughput: 0: 42189.3. Samples: 928713220. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:38,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 19:07:39,961][00497] Updated weights for policy 0, policy_version 63878 (0.0029) +[2024-03-29 19:07:43,794][00497] Updated weights for policy 0, policy_version 63888 (0.0029) +[2024-03-29 19:07:43,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.0, 300 sec: 41987.4). Total num frames: 1046740992. Throughput: 0: 41612.3. Samples: 928950420. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:43,840][00126] Avg episode reward: [(0, '0.651')] +[2024-03-29 19:07:48,531][00497] Updated weights for policy 0, policy_version 63898 (0.0029) +[2024-03-29 19:07:48,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 1046921216. Throughput: 0: 41411.9. Samples: 929068840. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:48,840][00126] Avg episode reward: [(0, '0.692')] +[2024-03-29 19:07:49,111][00476] Signal inference workers to stop experience collection... (33100 times) +[2024-03-29 19:07:49,152][00497] InferenceWorker_p0-w0: stopping experience collection (33100 times) +[2024-03-29 19:07:49,332][00476] Signal inference workers to resume experience collection... (33100 times) +[2024-03-29 19:07:49,333][00497] InferenceWorker_p0-w0: resuming experience collection (33100 times) +[2024-03-29 19:07:52,625][00497] Updated weights for policy 0, policy_version 63908 (0.0020) +[2024-03-29 19:07:53,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 1047134208. Throughput: 0: 41783.1. Samples: 929335680. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:53,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 19:07:55,630][00497] Updated weights for policy 0, policy_version 63918 (0.0028) +[2024-03-29 19:07:58,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 1047363584. Throughput: 0: 41468.3. Samples: 929566320. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0) +[2024-03-29 19:07:58,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 19:07:59,845][00497] Updated weights for policy 0, policy_version 63928 (0.0021) +[2024-03-29 19:08:03,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41779.1, 300 sec: 41931.9). Total num frames: 1047543808. Throughput: 0: 41747.0. Samples: 929704020. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:03,841][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 19:08:04,182][00497] Updated weights for policy 0, policy_version 63938 (0.0029) +[2024-03-29 19:08:08,471][00497] Updated weights for policy 0, policy_version 63948 (0.0028) +[2024-03-29 19:08:08,839][00126] Fps is (10 sec: 37683.7, 60 sec: 41233.0, 300 sec: 41876.4). Total num frames: 1047740416. Throughput: 0: 41778.3. Samples: 929956200. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:08,840][00126] Avg episode reward: [(0, '0.613')] +[2024-03-29 19:08:11,789][00497] Updated weights for policy 0, policy_version 63958 (0.0022) +[2024-03-29 19:08:13,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.1, 300 sec: 41931.9). Total num frames: 1047969792. Throughput: 0: 41657.5. Samples: 930199340. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:13,840][00126] Avg episode reward: [(0, '0.594')] +[2024-03-29 19:08:15,744][00497] Updated weights for policy 0, policy_version 63968 (0.0033) +[2024-03-29 19:08:18,839][00126] Fps is (10 sec: 44236.2, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 1048182784. Throughput: 0: 41675.0. Samples: 930318100. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:18,841][00126] Avg episode reward: [(0, '0.525')] +[2024-03-29 19:08:20,097][00497] Updated weights for policy 0, policy_version 63978 (0.0025) +[2024-03-29 19:08:21,248][00476] Signal inference workers to stop experience collection... (33150 times) +[2024-03-29 19:08:21,248][00476] Signal inference workers to resume experience collection... (33150 times) +[2024-03-29 19:08:21,287][00497] InferenceWorker_p0-w0: stopping experience collection (33150 times) +[2024-03-29 19:08:21,287][00497] InferenceWorker_p0-w0: resuming experience collection (33150 times) +[2024-03-29 19:08:23,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41233.1, 300 sec: 41876.4). Total num frames: 1048363008. Throughput: 0: 41291.6. Samples: 930571340. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:23,840][00126] Avg episode reward: [(0, '0.578')] +[2024-03-29 19:08:24,302][00497] Updated weights for policy 0, policy_version 63988 (0.0023) +[2024-03-29 19:08:27,521][00497] Updated weights for policy 0, policy_version 63998 (0.0026) +[2024-03-29 19:08:28,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41506.7, 300 sec: 41876.4). Total num frames: 1048592384. Throughput: 0: 41487.3. Samples: 930817340. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:28,840][00126] Avg episode reward: [(0, '0.605')] +[2024-03-29 19:08:31,352][00497] Updated weights for policy 0, policy_version 64008 (0.0025) +[2024-03-29 19:08:33,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41779.2, 300 sec: 42043.0). Total num frames: 1048805376. Throughput: 0: 41482.4. Samples: 930935540. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:33,840][00126] Avg episode reward: [(0, '0.629')] +[2024-03-29 19:08:36,016][00497] Updated weights for policy 0, policy_version 64018 (0.0034) +[2024-03-29 19:08:38,839][00126] Fps is (10 sec: 39321.1, 60 sec: 41233.0, 300 sec: 41820.9). Total num frames: 1048985600. Throughput: 0: 41227.9. Samples: 931190940. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 19:08:38,840][00126] Avg episode reward: [(0, '0.504')] +[2024-03-29 19:08:40,236][00497] Updated weights for policy 0, policy_version 64028 (0.0019) +[2024-03-29 19:08:43,620][00497] Updated weights for policy 0, policy_version 64038 (0.0028) +[2024-03-29 19:08:43,839][00126] Fps is (10 sec: 39321.3, 60 sec: 40960.1, 300 sec: 41820.9). Total num frames: 1049198592. Throughput: 0: 41283.2. Samples: 931424060. 
Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:08:43,840][00126] Avg episode reward: [(0, '0.574')] +[2024-03-29 19:08:47,335][00497] Updated weights for policy 0, policy_version 64048 (0.0025) +[2024-03-29 19:08:48,839][00126] Fps is (10 sec: 42599.2, 60 sec: 41506.3, 300 sec: 41932.0). Total num frames: 1049411584. Throughput: 0: 40966.0. Samples: 931547480. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:08:48,840][00126] Avg episode reward: [(0, '0.610')] +[2024-03-29 19:08:52,039][00497] Updated weights for policy 0, policy_version 64058 (0.0030) +[2024-03-29 19:08:52,063][00476] Signal inference workers to stop experience collection... (33200 times) +[2024-03-29 19:08:52,104][00497] InferenceWorker_p0-w0: stopping experience collection (33200 times) +[2024-03-29 19:08:52,283][00476] Signal inference workers to resume experience collection... (33200 times) +[2024-03-29 19:08:52,284][00497] InferenceWorker_p0-w0: resuming experience collection (33200 times) +[2024-03-29 19:08:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40959.9, 300 sec: 41765.3). Total num frames: 1049591808. Throughput: 0: 41178.6. Samples: 931809240. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:08:53,840][00126] Avg episode reward: [(0, '0.638')] +[2024-03-29 19:08:56,080][00497] Updated weights for policy 0, policy_version 64068 (0.0018) +[2024-03-29 19:08:58,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41233.2, 300 sec: 41876.4). Total num frames: 1049837568. Throughput: 0: 41061.4. Samples: 932047100. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:08:58,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 19:08:59,332][00497] Updated weights for policy 0, policy_version 64078 (0.0021) +[2024-03-29 19:09:03,127][00497] Updated weights for policy 0, policy_version 64088 (0.0025) +[2024-03-29 19:09:03,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41233.1, 300 sec: 41820.8). Total num frames: 1050017792. Throughput: 0: 41179.1. Samples: 932171160. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:09:03,840][00126] Avg episode reward: [(0, '0.640')] +[2024-03-29 19:09:04,142][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064090_1050050560.pth... +[2024-03-29 19:09:04,468][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063478_1040023552.pth +[2024-03-29 19:09:07,713][00497] Updated weights for policy 0, policy_version 64098 (0.0026) +[2024-03-29 19:09:08,841][00126] Fps is (10 sec: 39314.4, 60 sec: 41504.9, 300 sec: 41820.6). Total num frames: 1050230784. Throughput: 0: 41005.0. Samples: 932416640. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:09:08,841][00126] Avg episode reward: [(0, '0.567')] +[2024-03-29 19:09:11,717][00497] Updated weights for policy 0, policy_version 64108 (0.0019) +[2024-03-29 19:09:13,839][00126] Fps is (10 sec: 40960.7, 60 sec: 40960.1, 300 sec: 41820.9). Total num frames: 1050427392. Throughput: 0: 41172.9. Samples: 932670120. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 19:09:13,840][00126] Avg episode reward: [(0, '0.592')] +[2024-03-29 19:09:15,371][00497] Updated weights for policy 0, policy_version 64118 (0.0032) +[2024-03-29 19:09:18,839][00126] Fps is (10 sec: 42606.4, 60 sec: 41233.2, 300 sec: 41820.9). Total num frames: 1050656768. Throughput: 0: 41159.6. Samples: 932787720. 
Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:18,841][00126] Avg episode reward: [(0, '0.623')] +[2024-03-29 19:09:19,041][00497] Updated weights for policy 0, policy_version 64128 (0.0023) +[2024-03-29 19:09:23,524][00497] Updated weights for policy 0, policy_version 64138 (0.0021) +[2024-03-29 19:09:23,839][00126] Fps is (10 sec: 40959.2, 60 sec: 41233.0, 300 sec: 41765.3). Total num frames: 1050836992. Throughput: 0: 40763.1. Samples: 933025280. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:23,840][00126] Avg episode reward: [(0, '0.549')] +[2024-03-29 19:09:26,031][00476] Signal inference workers to stop experience collection... (33250 times) +[2024-03-29 19:09:26,072][00497] InferenceWorker_p0-w0: stopping experience collection (33250 times) +[2024-03-29 19:09:26,254][00476] Signal inference workers to resume experience collection... (33250 times) +[2024-03-29 19:09:26,255][00497] InferenceWorker_p0-w0: resuming experience collection (33250 times) +[2024-03-29 19:09:27,648][00497] Updated weights for policy 0, policy_version 64148 (0.0021) +[2024-03-29 19:09:28,839][00126] Fps is (10 sec: 39321.1, 60 sec: 40959.9, 300 sec: 41765.3). Total num frames: 1051049984. Throughput: 0: 41691.5. Samples: 933300180. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:28,840][00126] Avg episode reward: [(0, '0.622')] +[2024-03-29 19:09:30,990][00497] Updated weights for policy 0, policy_version 64158 (0.0023) +[2024-03-29 19:09:33,841][00126] Fps is (10 sec: 42591.0, 60 sec: 40958.7, 300 sec: 41765.1). Total num frames: 1051262976. Throughput: 0: 41232.0. Samples: 933403000. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:33,842][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 19:09:34,961][00497] Updated weights for policy 0, policy_version 64168 (0.0020) +[2024-03-29 19:09:38,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41876.4). Total num frames: 1051475968. Throughput: 0: 40965.0. Samples: 933652660. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:38,840][00126] Avg episode reward: [(0, '0.608')] +[2024-03-29 19:09:39,338][00497] Updated weights for policy 0, policy_version 64178 (0.0028) +[2024-03-29 19:09:43,416][00497] Updated weights for policy 0, policy_version 64188 (0.0021) +[2024-03-29 19:09:43,839][00126] Fps is (10 sec: 39328.8, 60 sec: 40960.0, 300 sec: 41654.2). Total num frames: 1051656192. Throughput: 0: 41590.2. Samples: 933918660. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:43,840][00126] Avg episode reward: [(0, '0.593')] +[2024-03-29 19:09:46,813][00497] Updated weights for policy 0, policy_version 64198 (0.0024) +[2024-03-29 19:09:48,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.0, 300 sec: 41820.9). Total num frames: 1051901952. Throughput: 0: 41169.8. Samples: 934023800. Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:48,840][00126] Avg episode reward: [(0, '0.644')] +[2024-03-29 19:09:50,802][00497] Updated weights for policy 0, policy_version 64208 (0.0023) +[2024-03-29 19:09:53,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 1052114944. Throughput: 0: 41552.6. Samples: 934286440. 
Policy #0 lag: (min: 1.0, avg: 21.5, max: 41.0) +[2024-03-29 19:09:53,842][00126] Avg episode reward: [(0, '0.631')] +[2024-03-29 19:09:55,178][00497] Updated weights for policy 0, policy_version 64218 (0.0028) +[2024-03-29 19:09:58,839][00126] Fps is (10 sec: 39322.3, 60 sec: 40960.0, 300 sec: 41654.2). Total num frames: 1052295168. Throughput: 0: 41698.2. Samples: 934546540. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:09:58,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 19:09:58,989][00497] Updated weights for policy 0, policy_version 64228 (0.0017) +[2024-03-29 19:09:59,811][00476] Signal inference workers to stop experience collection... (33300 times) +[2024-03-29 19:09:59,828][00497] InferenceWorker_p0-w0: stopping experience collection (33300 times) +[2024-03-29 19:10:00,024][00476] Signal inference workers to resume experience collection... (33300 times) +[2024-03-29 19:10:00,025][00497] InferenceWorker_p0-w0: resuming experience collection (33300 times) +[2024-03-29 19:10:02,211][00497] Updated weights for policy 0, policy_version 64238 (0.0024) +[2024-03-29 19:10:03,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.3, 300 sec: 41765.3). Total num frames: 1052524544. Throughput: 0: 41759.9. Samples: 934666920. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:03,840][00126] Avg episode reward: [(0, '0.562')] +[2024-03-29 19:10:06,435][00497] Updated weights for policy 0, policy_version 64248 (0.0019) +[2024-03-29 19:10:08,839][00126] Fps is (10 sec: 44236.8, 60 sec: 41780.5, 300 sec: 41820.9). Total num frames: 1052737536. Throughput: 0: 42069.5. Samples: 934918400. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:08,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 19:10:10,679][00497] Updated weights for policy 0, policy_version 64258 (0.0032) +[2024-03-29 19:10:13,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41506.2, 300 sec: 41654.2). Total num frames: 1052917760. Throughput: 0: 41693.5. Samples: 935176380. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:13,840][00126] Avg episode reward: [(0, '0.591')] +[2024-03-29 19:10:14,644][00497] Updated weights for policy 0, policy_version 64268 (0.0021) +[2024-03-29 19:10:17,841][00497] Updated weights for policy 0, policy_version 64278 (0.0020) +[2024-03-29 19:10:18,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 1053147136. Throughput: 0: 42096.0. Samples: 935297240. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:18,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 19:10:22,160][00497] Updated weights for policy 0, policy_version 64288 (0.0023) +[2024-03-29 19:10:23,839][00126] Fps is (10 sec: 42598.0, 60 sec: 41779.3, 300 sec: 41765.3). Total num frames: 1053343744. Throughput: 0: 41779.1. Samples: 935532720. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:23,840][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 19:10:26,449][00497] Updated weights for policy 0, policy_version 64298 (0.0018) +[2024-03-29 19:10:28,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 1053540352. Throughput: 0: 41573.8. Samples: 935789480. 
Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:28,841][00126] Avg episode reward: [(0, '0.585')] +[2024-03-29 19:10:30,454][00497] Updated weights for policy 0, policy_version 64308 (0.0019) +[2024-03-29 19:10:32,265][00476] Signal inference workers to stop experience collection... (33350 times) +[2024-03-29 19:10:32,305][00497] InferenceWorker_p0-w0: stopping experience collection (33350 times) +[2024-03-29 19:10:32,489][00476] Signal inference workers to resume experience collection... (33350 times) +[2024-03-29 19:10:32,490][00497] InferenceWorker_p0-w0: resuming experience collection (33350 times) +[2024-03-29 19:10:33,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41780.5, 300 sec: 41654.2). Total num frames: 1053769728. Throughput: 0: 42232.1. Samples: 935924240. Policy #0 lag: (min: 0.0, avg: 19.3, max: 41.0) +[2024-03-29 19:10:33,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 19:10:33,857][00497] Updated weights for policy 0, policy_version 64318 (0.0023) +[2024-03-29 19:10:38,048][00497] Updated weights for policy 0, policy_version 64328 (0.0019) +[2024-03-29 19:10:38,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 1053966336. Throughput: 0: 41614.0. Samples: 936159060. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:10:38,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 19:10:42,472][00497] Updated weights for policy 0, policy_version 64338 (0.0022) +[2024-03-29 19:10:43,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41779.3, 300 sec: 41654.2). Total num frames: 1054162944. Throughput: 0: 41340.5. Samples: 936406860. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:10:43,840][00126] Avg episode reward: [(0, '0.544')] +[2024-03-29 19:10:46,367][00497] Updated weights for policy 0, policy_version 64348 (0.0020) +[2024-03-29 19:10:48,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41233.1, 300 sec: 41598.7). Total num frames: 1054375936. Throughput: 0: 41565.8. Samples: 936537380. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:10:48,840][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 19:10:49,831][00497] Updated weights for policy 0, policy_version 64358 (0.0021) +[2024-03-29 19:10:53,839][00126] Fps is (10 sec: 42597.5, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 1054588928. Throughput: 0: 40940.7. Samples: 936760740. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:10:53,841][00126] Avg episode reward: [(0, '0.582')] +[2024-03-29 19:10:54,146][00497] Updated weights for policy 0, policy_version 64368 (0.0025) +[2024-03-29 19:10:58,181][00497] Updated weights for policy 0, policy_version 64378 (0.0027) +[2024-03-29 19:10:58,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 1054801920. Throughput: 0: 41211.4. Samples: 937030900. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:10:58,840][00126] Avg episode reward: [(0, '0.554')] +[2024-03-29 19:11:01,950][00497] Updated weights for policy 0, policy_version 64388 (0.0026) +[2024-03-29 19:11:03,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1055014912. Throughput: 0: 41457.7. Samples: 937162840. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:11:03,840][00126] Avg episode reward: [(0, '0.569')] +[2024-03-29 19:11:03,994][00476] Signal inference workers to stop experience collection... 
(33400 times) +[2024-03-29 19:11:04,063][00497] InferenceWorker_p0-w0: stopping experience collection (33400 times) +[2024-03-29 19:11:04,069][00476] Signal inference workers to resume experience collection... (33400 times) +[2024-03-29 19:11:04,071][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064394_1055031296.pth... +[2024-03-29 19:11:04,087][00497] InferenceWorker_p0-w0: resuming experience collection (33400 times) +[2024-03-29 19:11:04,378][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000063786_1045069824.pth +[2024-03-29 19:11:05,719][00497] Updated weights for policy 0, policy_version 64398 (0.0022) +[2024-03-29 19:11:08,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41233.0, 300 sec: 41598.7). Total num frames: 1055211520. Throughput: 0: 41501.3. Samples: 937400280. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:11:08,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 19:11:09,850][00497] Updated weights for policy 0, policy_version 64408 (0.0028) +[2024-03-29 19:11:13,829][00497] Updated weights for policy 0, policy_version 64418 (0.0019) +[2024-03-29 19:11:13,839][00126] Fps is (10 sec: 40961.0, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1055424512. Throughput: 0: 41162.8. Samples: 937641800. Policy #0 lag: (min: 2.0, avg: 21.5, max: 43.0) +[2024-03-29 19:11:13,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 19:11:17,925][00497] Updated weights for policy 0, policy_version 64428 (0.0018) +[2024-03-29 19:11:18,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1055621120. Throughput: 0: 41292.4. Samples: 937782400. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:18,840][00126] Avg episode reward: [(0, '0.606')] +[2024-03-29 19:11:21,196][00497] Updated weights for policy 0, policy_version 64438 (0.0019) +[2024-03-29 19:11:23,839][00126] Fps is (10 sec: 42597.3, 60 sec: 41779.1, 300 sec: 41543.1). Total num frames: 1055850496. Throughput: 0: 41786.5. Samples: 938039460. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:23,840][00126] Avg episode reward: [(0, '0.589')] +[2024-03-29 19:11:25,377][00497] Updated weights for policy 0, policy_version 64448 (0.0025) +[2024-03-29 19:11:28,839][00126] Fps is (10 sec: 44236.6, 60 sec: 42052.2, 300 sec: 41654.2). Total num frames: 1056063488. Throughput: 0: 41432.7. Samples: 938271340. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:28,841][00126] Avg episode reward: [(0, '0.602')] +[2024-03-29 19:11:29,610][00497] Updated weights for policy 0, policy_version 64458 (0.0022) +[2024-03-29 19:11:33,542][00497] Updated weights for policy 0, policy_version 64468 (0.0023) +[2024-03-29 19:11:33,839][00126] Fps is (10 sec: 39322.4, 60 sec: 41233.1, 300 sec: 41487.6). Total num frames: 1056243712. Throughput: 0: 41515.6. Samples: 938405580. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:33,840][00126] Avg episode reward: [(0, '0.596')] +[2024-03-29 19:11:34,944][00476] Signal inference workers to stop experience collection... (33450 times) +[2024-03-29 19:11:34,999][00497] InferenceWorker_p0-w0: stopping experience collection (33450 times) +[2024-03-29 19:11:35,035][00476] Signal inference workers to resume experience collection... 
(33450 times) +[2024-03-29 19:11:35,042][00497] InferenceWorker_p0-w0: resuming experience collection (33450 times) +[2024-03-29 19:11:37,072][00497] Updated weights for policy 0, policy_version 64478 (0.0026) +[2024-03-29 19:11:38,839][00126] Fps is (10 sec: 40960.7, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 1056473088. Throughput: 0: 42015.3. Samples: 938651420. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:38,840][00126] Avg episode reward: [(0, '0.552')] +[2024-03-29 19:11:41,272][00497] Updated weights for policy 0, policy_version 64488 (0.0029) +[2024-03-29 19:11:43,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42052.1, 300 sec: 41598.7). Total num frames: 1056686080. Throughput: 0: 41645.3. Samples: 938904940. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:43,841][00126] Avg episode reward: [(0, '0.573')] +[2024-03-29 19:11:45,158][00497] Updated weights for policy 0, policy_version 64498 (0.0022) +[2024-03-29 19:11:48,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 1056882688. Throughput: 0: 41563.7. Samples: 939033200. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:48,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 19:11:49,484][00497] Updated weights for policy 0, policy_version 64508 (0.0023) +[2024-03-29 19:11:52,960][00497] Updated weights for policy 0, policy_version 64518 (0.0024) +[2024-03-29 19:11:53,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41506.2, 300 sec: 41432.1). Total num frames: 1057079296. Throughput: 0: 41508.4. Samples: 939268160. Policy #0 lag: (min: 0.0, avg: 19.8, max: 41.0) +[2024-03-29 19:11:53,840][00126] Avg episode reward: [(0, '0.667')] +[2024-03-29 19:11:56,941][00497] Updated weights for policy 0, policy_version 64528 (0.0024) +[2024-03-29 19:11:58,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 1057292288. Throughput: 0: 41907.5. Samples: 939527640. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:11:58,840][00126] Avg episode reward: [(0, '0.546')] +[2024-03-29 19:12:00,675][00497] Updated weights for policy 0, policy_version 64538 (0.0018) +[2024-03-29 19:12:03,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41506.3, 300 sec: 41487.6). Total num frames: 1057505280. Throughput: 0: 41789.4. Samples: 939662920. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:03,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 19:12:05,058][00497] Updated weights for policy 0, policy_version 64548 (0.0027) +[2024-03-29 19:12:08,202][00497] Updated weights for policy 0, policy_version 64558 (0.0023) +[2024-03-29 19:12:08,839][00126] Fps is (10 sec: 42598.1, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 1057718272. Throughput: 0: 41547.3. Samples: 939909080. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:08,840][00126] Avg episode reward: [(0, '0.666')] +[2024-03-29 19:12:08,843][00476] Signal inference workers to stop experience collection... (33500 times) +[2024-03-29 19:12:08,938][00497] InferenceWorker_p0-w0: stopping experience collection (33500 times) +[2024-03-29 19:12:09,005][00476] Signal inference workers to resume experience collection... 
(33500 times) +[2024-03-29 19:12:09,006][00497] InferenceWorker_p0-w0: resuming experience collection (33500 times) +[2024-03-29 19:12:12,582][00497] Updated weights for policy 0, policy_version 64568 (0.0026) +[2024-03-29 19:12:13,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1057931264. Throughput: 0: 41984.1. Samples: 940160620. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:13,840][00126] Avg episode reward: [(0, '0.655')] +[2024-03-29 19:12:16,147][00497] Updated weights for policy 0, policy_version 64578 (0.0018) +[2024-03-29 19:12:18,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 1058127872. Throughput: 0: 41849.3. Samples: 940288800. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:18,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 19:12:20,447][00497] Updated weights for policy 0, policy_version 64588 (0.0022) +[2024-03-29 19:12:23,514][00497] Updated weights for policy 0, policy_version 64598 (0.0019) +[2024-03-29 19:12:23,839][00126] Fps is (10 sec: 44236.0, 60 sec: 42052.3, 300 sec: 41598.8). Total num frames: 1058373632. Throughput: 0: 42234.0. Samples: 940551960. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:23,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 19:12:28,052][00497] Updated weights for policy 0, policy_version 64608 (0.0020) +[2024-03-29 19:12:28,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 1058570240. Throughput: 0: 41960.1. Samples: 940793140. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:28,840][00126] Avg episode reward: [(0, '0.611')] +[2024-03-29 19:12:31,814][00497] Updated weights for policy 0, policy_version 64618 (0.0032) +[2024-03-29 19:12:33,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 1058750464. Throughput: 0: 41692.0. Samples: 940909340. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0) +[2024-03-29 19:12:33,840][00126] Avg episode reward: [(0, '0.656')] +[2024-03-29 19:12:36,121][00497] Updated weights for policy 0, policy_version 64628 (0.0026) +[2024-03-29 19:12:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 41598.7). Total num frames: 1059012608. Throughput: 0: 42544.9. Samples: 941182680. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:12:38,840][00126] Avg episode reward: [(0, '0.614')] +[2024-03-29 19:12:39,129][00497] Updated weights for policy 0, policy_version 64638 (0.0024) +[2024-03-29 19:12:41,038][00476] Signal inference workers to stop experience collection... (33550 times) +[2024-03-29 19:12:41,105][00497] InferenceWorker_p0-w0: stopping experience collection (33550 times) +[2024-03-29 19:12:41,110][00476] Signal inference workers to resume experience collection... (33550 times) +[2024-03-29 19:12:41,130][00497] InferenceWorker_p0-w0: resuming experience collection (33550 times) +[2024-03-29 19:12:43,555][00497] Updated weights for policy 0, policy_version 64648 (0.0027) +[2024-03-29 19:12:43,839][00126] Fps is (10 sec: 45875.0, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 1059209216. Throughput: 0: 42156.8. Samples: 941424700. 
Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:12:43,840][00126] Avg episode reward: [(0, '0.657')] +[2024-03-29 19:12:47,391][00497] Updated weights for policy 0, policy_version 64658 (0.0024) +[2024-03-29 19:12:48,839][00126] Fps is (10 sec: 37683.1, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 1059389440. Throughput: 0: 41798.6. Samples: 941543860. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:12:48,840][00126] Avg episode reward: [(0, '0.579')] +[2024-03-29 19:12:52,021][00497] Updated weights for policy 0, policy_version 64668 (0.0022) +[2024-03-29 19:12:53,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42325.3, 300 sec: 41543.2). Total num frames: 1059618816. Throughput: 0: 42269.7. Samples: 941811220. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:12:53,840][00126] Avg episode reward: [(0, '0.586')] +[2024-03-29 19:12:54,862][00497] Updated weights for policy 0, policy_version 64678 (0.0024) +[2024-03-29 19:12:58,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.3, 300 sec: 41654.3). Total num frames: 1059831808. Throughput: 0: 41853.8. Samples: 942044040. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:12:58,840][00126] Avg episode reward: [(0, '0.612')] +[2024-03-29 19:12:59,379][00497] Updated weights for policy 0, policy_version 64688 (0.0024) +[2024-03-29 19:13:03,112][00497] Updated weights for policy 0, policy_version 64698 (0.0023) +[2024-03-29 19:13:03,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1060012032. Throughput: 0: 41928.1. Samples: 942175560. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:13:03,840][00126] Avg episode reward: [(0, '0.620')] +[2024-03-29 19:13:03,959][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064699_1060028416.pth... +[2024-03-29 19:13:04,299][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064090_1050050560.pth +[2024-03-29 19:13:07,396][00497] Updated weights for policy 0, policy_version 64708 (0.0035) +[2024-03-29 19:13:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 41598.7). Total num frames: 1060241408. Throughput: 0: 42040.6. Samples: 942443780. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:13:08,840][00126] Avg episode reward: [(0, '0.590')] +[2024-03-29 19:13:10,304][00497] Updated weights for policy 0, policy_version 64718 (0.0026) +[2024-03-29 19:13:13,036][00476] Signal inference workers to stop experience collection... (33600 times) +[2024-03-29 19:13:13,067][00497] InferenceWorker_p0-w0: stopping experience collection (33600 times) +[2024-03-29 19:13:13,231][00476] Signal inference workers to resume experience collection... (33600 times) +[2024-03-29 19:13:13,232][00497] InferenceWorker_p0-w0: resuming experience collection (33600 times) +[2024-03-29 19:13:13,839][00126] Fps is (10 sec: 44236.1, 60 sec: 42052.2, 300 sec: 41598.7). Total num frames: 1060454400. Throughput: 0: 41841.2. Samples: 942676000. Policy #0 lag: (min: 1.0, avg: 20.4, max: 42.0) +[2024-03-29 19:13:13,840][00126] Avg episode reward: [(0, '0.641')] +[2024-03-29 19:13:15,269][00497] Updated weights for policy 0, policy_version 64728 (0.0019) +[2024-03-29 19:13:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 1060651008. Throughput: 0: 41980.1. Samples: 942798440. 
Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:18,840][00126] Avg episode reward: [(0, '0.611')]
+[2024-03-29 19:13:19,003][00497] Updated weights for policy 0, policy_version 64738 (0.0023)
+[2024-03-29 19:13:23,181][00497] Updated weights for policy 0, policy_version 64748 (0.0031)
+[2024-03-29 19:13:23,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 1060864000. Throughput: 0: 41982.2. Samples: 943071880. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:23,840][00126] Avg episode reward: [(0, '0.621')]
+[2024-03-29 19:13:26,125][00497] Updated weights for policy 0, policy_version 64758 (0.0020)
+[2024-03-29 19:13:28,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 1061076992. Throughput: 0: 41807.2. Samples: 943306020. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:28,840][00126] Avg episode reward: [(0, '0.635')]
+[2024-03-29 19:13:30,494][00497] Updated weights for policy 0, policy_version 64768 (0.0022)
+[2024-03-29 19:13:33,839][00126] Fps is (10 sec: 42599.0, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 1061289984. Throughput: 0: 42101.4. Samples: 943438420. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:33,840][00126] Avg episode reward: [(0, '0.530')]
+[2024-03-29 19:13:34,364][00497] Updated weights for policy 0, policy_version 64778 (0.0017)
+[2024-03-29 19:13:38,644][00497] Updated weights for policy 0, policy_version 64788 (0.0022)
+[2024-03-29 19:13:38,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.1, 300 sec: 41654.2). Total num frames: 1061486592. Throughput: 0: 42024.1. Samples: 943702300. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:38,840][00126] Avg episode reward: [(0, '0.547')]
+[2024-03-29 19:13:41,940][00497] Updated weights for policy 0, policy_version 64798 (0.0023)
+[2024-03-29 19:13:43,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1061715968. Throughput: 0: 42104.8. Samples: 943938760. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:43,842][00126] Avg episode reward: [(0, '0.582')]
+[2024-03-29 19:13:46,246][00497] Updated weights for policy 0, policy_version 64808 (0.0020)
+[2024-03-29 19:13:47,237][00476] Signal inference workers to stop experience collection... (33650 times)
+[2024-03-29 19:13:47,269][00497] InferenceWorker_p0-w0: stopping experience collection (33650 times)
+[2024-03-29 19:13:47,458][00476] Signal inference workers to resume experience collection... (33650 times)
+[2024-03-29 19:13:47,459][00497] InferenceWorker_p0-w0: resuming experience collection (33650 times)
+[2024-03-29 19:13:48,839][00126] Fps is (10 sec: 42598.0, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 1061912576. Throughput: 0: 41899.5. Samples: 944061040. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:48,840][00126] Avg episode reward: [(0, '0.523')]
+[2024-03-29 19:13:50,224][00497] Updated weights for policy 0, policy_version 64818 (0.0025)
+[2024-03-29 19:13:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41506.1, 300 sec: 41598.7). Total num frames: 1062109184. Throughput: 0: 41789.6. Samples: 944324320. Policy #0 lag: (min: 1.0, avg: 21.3, max: 42.0)
+[2024-03-29 19:13:53,840][00126] Avg episode reward: [(0, '0.587')]
+[2024-03-29 19:13:54,489][00497] Updated weights for policy 0, policy_version 64828 (0.0025)
+[2024-03-29 19:13:57,554][00497] Updated weights for policy 0, policy_version 64838 (0.0028)
+[2024-03-29 19:13:58,839][00126] Fps is (10 sec: 40960.3, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 1062322176. Throughput: 0: 41695.7. Samples: 944552300. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:13:58,840][00126] Avg episode reward: [(0, '0.576')]
+[2024-03-29 19:14:01,940][00497] Updated weights for policy 0, policy_version 64848 (0.0019)
+[2024-03-29 19:14:03,839][00126] Fps is (10 sec: 42599.1, 60 sec: 42052.3, 300 sec: 41710.0). Total num frames: 1062535168. Throughput: 0: 41893.8. Samples: 944683660. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:03,840][00126] Avg episode reward: [(0, '0.607')]
+[2024-03-29 19:14:06,023][00497] Updated weights for policy 0, policy_version 64858 (0.0025)
+[2024-03-29 19:14:08,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 1062715392. Throughput: 0: 41575.1. Samples: 944942760. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:08,840][00126] Avg episode reward: [(0, '0.646')]
+[2024-03-29 19:14:10,270][00497] Updated weights for policy 0, policy_version 64868 (0.0024)
+[2024-03-29 19:14:13,587][00497] Updated weights for policy 0, policy_version 64878 (0.0028)
+[2024-03-29 19:14:13,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 1062961152. Throughput: 0: 41173.8. Samples: 945158840. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:13,840][00126] Avg episode reward: [(0, '0.668')]
+[2024-03-29 19:14:17,997][00497] Updated weights for policy 0, policy_version 64888 (0.0022)
+[2024-03-29 19:14:18,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41779.1, 300 sec: 41765.3). Total num frames: 1063157760. Throughput: 0: 41326.9. Samples: 945298140. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:18,840][00126] Avg episode reward: [(0, '0.598')]
+[2024-03-29 19:14:22,123][00497] Updated weights for policy 0, policy_version 64898 (0.0018)
+[2024-03-29 19:14:23,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41233.0, 300 sec: 41654.2). Total num frames: 1063337984. Throughput: 0: 41191.0. Samples: 945555900. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:23,840][00126] Avg episode reward: [(0, '0.631')]
+[2024-03-29 19:14:25,348][00476] Signal inference workers to stop experience collection... (33700 times)
+[2024-03-29 19:14:25,373][00497] InferenceWorker_p0-w0: stopping experience collection (33700 times)
+[2024-03-29 19:14:25,535][00476] Signal inference workers to resume experience collection... (33700 times)
+[2024-03-29 19:14:25,536][00497] InferenceWorker_p0-w0: resuming experience collection (33700 times)
+[2024-03-29 19:14:26,186][00497] Updated weights for policy 0, policy_version 64908 (0.0020)
+[2024-03-29 19:14:28,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42052.2, 300 sec: 41821.1). Total num frames: 1063600128. Throughput: 0: 41275.5. Samples: 945796160. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:28,841][00126] Avg episode reward: [(0, '0.576')]
+[2024-03-29 19:14:29,207][00497] Updated weights for policy 0, policy_version 64918 (0.0023)
+[2024-03-29 19:14:33,651][00497] Updated weights for policy 0, policy_version 64928 (0.0024)
+[2024-03-29 19:14:33,839][00126] Fps is (10 sec: 44237.3, 60 sec: 41506.0, 300 sec: 41709.8). Total num frames: 1063780352. Throughput: 0: 41544.5. Samples: 945930540. Policy #0 lag: (min: 0.0, avg: 22.7, max: 42.0)
+[2024-03-29 19:14:33,840][00126] Avg episode reward: [(0, '0.558')]
+[2024-03-29 19:14:37,785][00497] Updated weights for policy 0, policy_version 64938 (0.0022)
+[2024-03-29 19:14:38,839][00126] Fps is (10 sec: 36045.4, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 1063960576. Throughput: 0: 41363.2. Samples: 946185660. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:14:38,840][00126] Avg episode reward: [(0, '0.581')]
+[2024-03-29 19:14:41,909][00497] Updated weights for policy 0, policy_version 64948 (0.0021)
+[2024-03-29 19:14:43,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 1064206336. Throughput: 0: 41652.5. Samples: 946426660. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:14:43,840][00126] Avg episode reward: [(0, '0.588')]
+[2024-03-29 19:14:45,009][00497] Updated weights for policy 0, policy_version 64958 (0.0028)
+[2024-03-29 19:14:48,839][00126] Fps is (10 sec: 45874.7, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 1064419328. Throughput: 0: 41530.1. Samples: 946552520. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:14:48,840][00126] Avg episode reward: [(0, '0.490')]
+[2024-03-29 19:14:49,053][00497] Updated weights for policy 0, policy_version 64968 (0.0026)
+[2024-03-29 19:14:53,475][00497] Updated weights for policy 0, policy_version 64978 (0.0018)
+[2024-03-29 19:14:53,839][00126] Fps is (10 sec: 39320.8, 60 sec: 41506.1, 300 sec: 41709.7). Total num frames: 1064599552. Throughput: 0: 41355.5. Samples: 946803760. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:14:53,840][00126] Avg episode reward: [(0, '0.556')]
+[2024-03-29 19:14:57,481][00497] Updated weights for policy 0, policy_version 64988 (0.0034)
+[2024-03-29 19:14:57,757][00476] Signal inference workers to stop experience collection... (33750 times)
+[2024-03-29 19:14:57,792][00497] InferenceWorker_p0-w0: stopping experience collection (33750 times)
+[2024-03-29 19:14:57,978][00476] Signal inference workers to resume experience collection... (33750 times)
+[2024-03-29 19:14:57,978][00497] InferenceWorker_p0-w0: resuming experience collection (33750 times)
+[2024-03-29 19:14:58,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 41709.8). Total num frames: 1064828928. Throughput: 0: 42261.3. Samples: 947060600. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:14:58,840][00126] Avg episode reward: [(0, '0.605')]
+[2024-03-29 19:15:00,446][00497] Updated weights for policy 0, policy_version 64998 (0.0029)
+[2024-03-29 19:15:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 42052.1, 300 sec: 41765.3). Total num frames: 1065058304. Throughput: 0: 41817.7. Samples: 947179940. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:15:03,841][00126] Avg episode reward: [(0, '0.530')]
+[2024-03-29 19:15:04,037][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000065007_1065074688.pth...
+[2024-03-29 19:15:04,335][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064394_1055031296.pth
+[2024-03-29 19:15:04,661][00497] Updated weights for policy 0, policy_version 65008 (0.0026)
+[2024-03-29 19:15:08,839][00126] Fps is (10 sec: 40960.3, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 1065238528. Throughput: 0: 41664.6. Samples: 947430800. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:15:08,840][00126] Avg episode reward: [(0, '0.619')]
+[2024-03-29 19:15:09,042][00497] Updated weights for policy 0, policy_version 65018 (0.0022)
+[2024-03-29 19:15:13,043][00497] Updated weights for policy 0, policy_version 65028 (0.0022)
+[2024-03-29 19:15:13,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 1065451520. Throughput: 0: 42334.7. Samples: 947701220. Policy #0 lag: (min: 0.0, avg: 20.5, max: 40.0)
+[2024-03-29 19:15:13,840][00126] Avg episode reward: [(0, '0.611')]
+[2024-03-29 19:15:16,047][00497] Updated weights for policy 0, policy_version 65038 (0.0023)
+[2024-03-29 19:15:18,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41779.3, 300 sec: 41765.3). Total num frames: 1065664512. Throughput: 0: 41577.8. Samples: 947801540. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:18,840][00126] Avg episode reward: [(0, '0.589')]
+[2024-03-29 19:15:20,312][00497] Updated weights for policy 0, policy_version 65048 (0.0024)
+[2024-03-29 19:15:23,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42325.4, 300 sec: 41820.9). Total num frames: 1065877504. Throughput: 0: 41663.6. Samples: 948060520. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:23,840][00126] Avg episode reward: [(0, '0.543')]
+[2024-03-29 19:15:24,967][00497] Updated weights for policy 0, policy_version 65058 (0.0018)
+[2024-03-29 19:15:28,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40960.2, 300 sec: 41654.3). Total num frames: 1066057728. Throughput: 0: 42142.3. Samples: 948323060. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:28,841][00126] Avg episode reward: [(0, '0.602')]
+[2024-03-29 19:15:28,973][00497] Updated weights for policy 0, policy_version 65068 (0.0023)
+[2024-03-29 19:15:30,399][00476] Signal inference workers to stop experience collection... (33800 times)
+[2024-03-29 19:15:30,479][00476] Signal inference workers to resume experience collection... (33800 times)
+[2024-03-29 19:15:30,481][00497] InferenceWorker_p0-w0: stopping experience collection (33800 times)
+[2024-03-29 19:15:30,512][00497] InferenceWorker_p0-w0: resuming experience collection (33800 times)
+[2024-03-29 19:15:31,972][00497] Updated weights for policy 0, policy_version 65078 (0.0030)
+[2024-03-29 19:15:33,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1066287104. Throughput: 0: 41715.6. Samples: 948429720. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:33,840][00126] Avg episode reward: [(0, '0.566')]
+[2024-03-29 19:15:36,219][00497] Updated weights for policy 0, policy_version 65088 (0.0023)
+[2024-03-29 19:15:38,839][00126] Fps is (10 sec: 45874.8, 60 sec: 42598.4, 300 sec: 41876.4). Total num frames: 1066516480. Throughput: 0: 41643.7. Samples: 948677720. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:38,840][00126] Avg episode reward: [(0, '0.701')]
+[2024-03-29 19:15:40,924][00497] Updated weights for policy 0, policy_version 65098 (0.0027)
+[2024-03-29 19:15:43,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1066680320. Throughput: 0: 42188.1. Samples: 948959060. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:43,840][00126] Avg episode reward: [(0, '0.630')]
+[2024-03-29 19:15:44,744][00497] Updated weights for policy 0, policy_version 65108 (0.0027)
+[2024-03-29 19:15:47,955][00497] Updated weights for policy 0, policy_version 65118 (0.0022)
+[2024-03-29 19:15:48,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 1066909696. Throughput: 0: 41686.3. Samples: 949055820. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:48,841][00126] Avg episode reward: [(0, '0.586')]
+[2024-03-29 19:15:51,822][00497] Updated weights for policy 0, policy_version 65128 (0.0017)
+[2024-03-29 19:15:53,839][00126] Fps is (10 sec: 45875.0, 60 sec: 42325.5, 300 sec: 41820.9). Total num frames: 1067139072. Throughput: 0: 41690.7. Samples: 949306880. Policy #0 lag: (min: 1.0, avg: 21.0, max: 41.0)
+[2024-03-29 19:15:53,840][00126] Avg episode reward: [(0, '0.542')]
+[2024-03-29 19:15:56,542][00497] Updated weights for policy 0, policy_version 65138 (0.0020)
+[2024-03-29 19:15:58,839][00126] Fps is (10 sec: 37683.5, 60 sec: 40960.0, 300 sec: 41598.7). Total num frames: 1067286528. Throughput: 0: 41656.9. Samples: 949575780. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:15:58,840][00126] Avg episode reward: [(0, '0.605')]
+[2024-03-29 19:16:00,395][00497] Updated weights for policy 0, policy_version 65148 (0.0024)
+[2024-03-29 19:16:03,726][00497] Updated weights for policy 0, policy_version 65158 (0.0021)
+[2024-03-29 19:16:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41506.2, 300 sec: 41820.8). Total num frames: 1067548672. Throughput: 0: 41849.7. Samples: 949684780. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:03,840][00126] Avg episode reward: [(0, '0.530')]
+[2024-03-29 19:16:05,303][00476] Signal inference workers to stop experience collection... (33850 times)
+[2024-03-29 19:16:05,358][00497] InferenceWorker_p0-w0: stopping experience collection (33850 times)
+[2024-03-29 19:16:05,389][00476] Signal inference workers to resume experience collection... (33850 times)
+[2024-03-29 19:16:05,392][00497] InferenceWorker_p0-w0: resuming experience collection (33850 times)
+[2024-03-29 19:16:07,916][00497] Updated weights for policy 0, policy_version 65168 (0.0031)
+[2024-03-29 19:16:08,839][00126] Fps is (10 sec: 45875.0, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1067745280. Throughput: 0: 41185.3. Samples: 949913860. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:08,840][00126] Avg episode reward: [(0, '0.629')]
+[2024-03-29 19:16:12,460][00497] Updated weights for policy 0, policy_version 65178 (0.0018)
+[2024-03-29 19:16:13,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1067925504. Throughput: 0: 41330.1. Samples: 950182920. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:13,840][00126] Avg episode reward: [(0, '0.576')]
+[2024-03-29 19:16:16,420][00497] Updated weights for policy 0, policy_version 65188 (0.0027)
+[2024-03-29 19:16:18,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1068171264. Throughput: 0: 41856.1. Samples: 950313240. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:18,840][00126] Avg episode reward: [(0, '0.658')]
+[2024-03-29 19:16:19,543][00497] Updated weights for policy 0, policy_version 65198 (0.0022)
+[2024-03-29 19:16:23,839][00126] Fps is (10 sec: 42598.7, 60 sec: 41233.1, 300 sec: 41654.3). Total num frames: 1068351488. Throughput: 0: 41302.3. Samples: 950536320. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:23,840][00126] Avg episode reward: [(0, '0.595')]
+[2024-03-29 19:16:23,857][00497] Updated weights for policy 0, policy_version 65208 (0.0020)
+[2024-03-29 19:16:28,331][00497] Updated weights for policy 0, policy_version 65218 (0.0032)
+[2024-03-29 19:16:28,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41506.0, 300 sec: 41709.8). Total num frames: 1068548096. Throughput: 0: 41043.0. Samples: 950806000. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:28,840][00126] Avg episode reward: [(0, '0.523')]
+[2024-03-29 19:16:32,098][00497] Updated weights for policy 0, policy_version 65228 (0.0022)
+[2024-03-29 19:16:33,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 41709.8). Total num frames: 1068777472. Throughput: 0: 41857.5. Samples: 950939400. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:33,841][00126] Avg episode reward: [(0, '0.602')]
+[2024-03-29 19:16:35,456][00497] Updated weights for policy 0, policy_version 65238 (0.0026)
+[2024-03-29 19:16:38,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1068990464. Throughput: 0: 41412.9. Samples: 951170460. Policy #0 lag: (min: 0.0, avg: 18.2, max: 41.0)
+[2024-03-29 19:16:38,840][00126] Avg episode reward: [(0, '0.550')]
+[2024-03-29 19:16:38,986][00476] Signal inference workers to stop experience collection... (33900 times)
+[2024-03-29 19:16:39,056][00497] InferenceWorker_p0-w0: stopping experience collection (33900 times)
+[2024-03-29 19:16:39,062][00476] Signal inference workers to resume experience collection... (33900 times)
+[2024-03-29 19:16:39,083][00497] InferenceWorker_p0-w0: resuming experience collection (33900 times)
+[2024-03-29 19:16:39,364][00497] Updated weights for policy 0, policy_version 65248 (0.0027)
+[2024-03-29 19:16:43,839][00126] Fps is (10 sec: 39320.9, 60 sec: 41506.0, 300 sec: 41654.2). Total num frames: 1069170688. Throughput: 0: 41172.3. Samples: 951428540. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:16:43,841][00126] Avg episode reward: [(0, '0.626')]
+[2024-03-29 19:16:44,031][00497] Updated weights for policy 0, policy_version 65258 (0.0020)
+[2024-03-29 19:16:48,079][00497] Updated weights for policy 0, policy_version 65268 (0.0027)
+[2024-03-29 19:16:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1069383680. Throughput: 0: 41601.3. Samples: 951556840. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:16:48,840][00126] Avg episode reward: [(0, '0.670')]
+[2024-03-29 19:16:51,429][00497] Updated weights for policy 0, policy_version 65278 (0.0026)
+[2024-03-29 19:16:53,839][00126] Fps is (10 sec: 45875.6, 60 sec: 41506.1, 300 sec: 41820.8). Total num frames: 1069629440. Throughput: 0: 41552.9. Samples: 951783740. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:16:53,840][00126] Avg episode reward: [(0, '0.482')]
+[2024-03-29 19:16:55,455][00497] Updated weights for policy 0, policy_version 65288 (0.0019)
+[2024-03-29 19:16:58,839][00126] Fps is (10 sec: 42598.7, 60 sec: 42052.3, 300 sec: 41709.8). Total num frames: 1069809664. Throughput: 0: 41336.5. Samples: 952043060. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:16:58,840][00126] Avg episode reward: [(0, '0.573')]
+[2024-03-29 19:16:59,576][00497] Updated weights for policy 0, policy_version 65298 (0.0019)
+[2024-03-29 19:17:03,518][00497] Updated weights for policy 0, policy_version 65308 (0.0027)
+[2024-03-29 19:17:03,839][00126] Fps is (10 sec: 39321.9, 60 sec: 41233.1, 300 sec: 41709.8). Total num frames: 1070022656. Throughput: 0: 41631.1. Samples: 952186640. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:17:03,840][00126] Avg episode reward: [(0, '0.557')]
+[2024-03-29 19:17:04,317][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000065311_1070055424.pth...
+[2024-03-29 19:17:04,622][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000064699_1060028416.pth
+[2024-03-29 19:17:06,977][00497] Updated weights for policy 0, policy_version 65318 (0.0030)
+[2024-03-29 19:17:08,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.2, 300 sec: 41765.3). Total num frames: 1070252032. Throughput: 0: 41788.7. Samples: 952416820. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:17:08,840][00126] Avg episode reward: [(0, '0.617')]
+[2024-03-29 19:17:10,784][00497] Updated weights for policy 0, policy_version 65328 (0.0025)
+[2024-03-29 19:17:13,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 1070448640. Throughput: 0: 41420.4. Samples: 952669920. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:17:13,840][00126] Avg episode reward: [(0, '0.624')]
+[2024-03-29 19:17:15,256][00497] Updated weights for policy 0, policy_version 65338 (0.0027)
+[2024-03-29 19:17:16,373][00476] Signal inference workers to stop experience collection... (33950 times)
+[2024-03-29 19:17:16,399][00497] InferenceWorker_p0-w0: stopping experience collection (33950 times)
+[2024-03-29 19:17:16,554][00476] Signal inference workers to resume experience collection... (33950 times)
+[2024-03-29 19:17:16,555][00497] InferenceWorker_p0-w0: resuming experience collection (33950 times)
+[2024-03-29 19:17:18,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40960.0, 300 sec: 41543.2). Total num frames: 1070628864. Throughput: 0: 41437.3. Samples: 952804080. Policy #0 lag: (min: 0.0, avg: 22.6, max: 43.0)
+[2024-03-29 19:17:18,840][00126] Avg episode reward: [(0, '0.674')]
+[2024-03-29 19:17:19,341][00497] Updated weights for policy 0, policy_version 65348 (0.0022)
+[2024-03-29 19:17:22,518][00497] Updated weights for policy 0, policy_version 65358 (0.0020)
+[2024-03-29 19:17:23,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 1070858240. Throughput: 0: 41523.4. Samples: 953039020. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:23,840][00126] Avg episode reward: [(0, '0.628')]
+[2024-03-29 19:17:26,652][00497] Updated weights for policy 0, policy_version 65368 (0.0027)
+[2024-03-29 19:17:28,839][00126] Fps is (10 sec: 44236.5, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 1071071232. Throughput: 0: 41381.4. Samples: 953290700. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:28,840][00126] Avg episode reward: [(0, '0.541')]
+[2024-03-29 19:17:31,155][00497] Updated weights for policy 0, policy_version 65378 (0.0025)
+[2024-03-29 19:17:33,840][00126] Fps is (10 sec: 39321.3, 60 sec: 41232.9, 300 sec: 41487.6). Total num frames: 1071251456. Throughput: 0: 41555.4. Samples: 953426840. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:33,840][00126] Avg episode reward: [(0, '0.558')]
+[2024-03-29 19:17:35,256][00497] Updated weights for policy 0, policy_version 65388 (0.0024)
+[2024-03-29 19:17:38,332][00497] Updated weights for policy 0, policy_version 65398 (0.0031)
+[2024-03-29 19:17:38,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1071497216. Throughput: 0: 41966.2. Samples: 953672220. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:38,840][00126] Avg episode reward: [(0, '0.649')]
+[2024-03-29 19:17:42,477][00497] Updated weights for policy 0, policy_version 65408 (0.0024)
+[2024-03-29 19:17:43,839][00126] Fps is (10 sec: 44238.0, 60 sec: 42052.4, 300 sec: 41709.8). Total num frames: 1071693824. Throughput: 0: 41404.5. Samples: 953906260. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:43,840][00126] Avg episode reward: [(0, '0.595')]
+[2024-03-29 19:17:46,953][00497] Updated weights for policy 0, policy_version 65418 (0.0017)
+[2024-03-29 19:17:48,839][00126] Fps is (10 sec: 37683.6, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 1071874048. Throughput: 0: 41409.8. Samples: 954050080. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:48,840][00126] Avg episode reward: [(0, '0.558')]
+[2024-03-29 19:17:50,959][00497] Updated weights for policy 0, policy_version 65428 (0.0023)
+[2024-03-29 19:17:50,981][00476] Signal inference workers to stop experience collection... (34000 times)
+[2024-03-29 19:17:50,982][00476] Signal inference workers to resume experience collection... (34000 times)
+[2024-03-29 19:17:51,027][00497] InferenceWorker_p0-w0: stopping experience collection (34000 times)
+[2024-03-29 19:17:51,027][00497] InferenceWorker_p0-w0: resuming experience collection (34000 times)
+[2024-03-29 19:17:53,839][00126] Fps is (10 sec: 42597.7, 60 sec: 41506.1, 300 sec: 41654.2). Total num frames: 1072119808. Throughput: 0: 41840.9. Samples: 954299660. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:53,840][00126] Avg episode reward: [(0, '0.608')]
+[2024-03-29 19:17:54,092][00497] Updated weights for policy 0, policy_version 65438 (0.0032)
+[2024-03-29 19:17:57,920][00497] Updated weights for policy 0, policy_version 65448 (0.0025)
+[2024-03-29 19:17:58,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 1072332800. Throughput: 0: 41649.8. Samples: 954544160. Policy #0 lag: (min: 0.0, avg: 18.3, max: 40.0)
+[2024-03-29 19:17:58,840][00126] Avg episode reward: [(0, '0.544')]
+[2024-03-29 19:18:02,385][00497] Updated weights for policy 0, policy_version 65458 (0.0020)
+[2024-03-29 19:18:03,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 1072529408. Throughput: 0: 41762.2. Samples: 954683380. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:03,840][00126] Avg episode reward: [(0, '0.650')]
+[2024-03-29 19:18:06,286][00497] Updated weights for policy 0, policy_version 65468 (0.0023)
+[2024-03-29 19:18:08,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 1072758784. Throughput: 0: 42345.1. Samples: 954944540. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:08,840][00126] Avg episode reward: [(0, '0.548')]
+[2024-03-29 19:18:09,586][00497] Updated weights for policy 0, policy_version 65478 (0.0024)
+[2024-03-29 19:18:13,459][00497] Updated weights for policy 0, policy_version 65488 (0.0023)
+[2024-03-29 19:18:13,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 1072955392. Throughput: 0: 41523.6. Samples: 955159260. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:13,840][00126] Avg episode reward: [(0, '0.611')]
+[2024-03-29 19:18:18,134][00497] Updated weights for policy 0, policy_version 65498 (0.0021)
+[2024-03-29 19:18:18,839][00126] Fps is (10 sec: 37683.4, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 1073135616. Throughput: 0: 41511.4. Samples: 955294840. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:18,840][00126] Avg episode reward: [(0, '0.585')]
+[2024-03-29 19:18:20,678][00476] Signal inference workers to stop experience collection... (34050 times)
+[2024-03-29 19:18:20,713][00497] InferenceWorker_p0-w0: stopping experience collection (34050 times)
+[2024-03-29 19:18:20,895][00476] Signal inference workers to resume experience collection... (34050 times)
+[2024-03-29 19:18:20,895][00497] InferenceWorker_p0-w0: resuming experience collection (34050 times)
+[2024-03-29 19:18:22,290][00497] Updated weights for policy 0, policy_version 65508 (0.0022)
+[2024-03-29 19:18:23,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 1073348608. Throughput: 0: 41929.3. Samples: 955559040. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:23,840][00126] Avg episode reward: [(0, '0.535')]
+[2024-03-29 19:18:25,391][00497] Updated weights for policy 0, policy_version 65518 (0.0029)
+[2024-03-29 19:18:28,839][00126] Fps is (10 sec: 44236.1, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 1073577984. Throughput: 0: 41620.8. Samples: 955779200. Policy #0 lag: (min: 1.0, avg: 22.3, max: 41.0)
+[2024-03-29 19:18:28,840][00126] Avg episode reward: [(0, '0.563')]
+[2024-03-29 19:18:29,465][00497] Updated weights for policy 0, policy_version 65528 (0.0025)