diff --git "a/sf_log.txt" "b/sf_log.txt"
--- "a/sf_log.txt"
+++ "b/sf_log.txt"
@@ -12625,3 +12625,1069 @@
 [2024-03-29 16:08:37,642][00497] Updated weights for policy 0, policy_version 36494 (0.0018)
 [2024-03-29 16:08:38,839][00126] Fps is (10 sec: 39327.1, 60 sec: 41233.1, 300 sec: 41931.9). Total num frames: 597966848. Throughput: 0: 41328.9. Samples: 480171700. Policy #0 lag: (min: 0.0, avg: 19.0, max: 41.0)
 [2024-03-29 16:08:38,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 16:08:40,779][00497] Updated weights for policy 0, policy_version 36504 (0.0028)
+[2024-03-29 16:08:43,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.1, 300 sec: 41987.5). Total num frames: 598196224. Throughput: 0: 41293.3. Samples: 480410400. Policy #0 lag: (min: 0.0, avg: 19.0, max: 41.0)
+[2024-03-29 16:08:43,841][00126] Avg episode reward: [(0, '0.594')]
+[2024-03-29 16:08:45,223][00497] Updated weights for policy 0, policy_version 36514 (0.0034)
+[2024-03-29 16:08:48,839][00126] Fps is (10 sec: 42599.0, 60 sec: 41506.2, 300 sec: 42043.0). Total num frames: 598392832. Throughput: 0: 41398.0. Samples: 480536940. Policy #0 lag: (min: 0.0, avg: 19.0, max: 41.0)
+[2024-03-29 16:08:48,840][00126] Avg episode reward: [(0, '0.469')]
+[2024-03-29 16:08:49,136][00497] Updated weights for policy 0, policy_version 36524 (0.0027)
+[2024-03-29 16:08:53,423][00497] Updated weights for policy 0, policy_version 36534 (0.0020)
+[2024-03-29 16:08:53,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40686.9, 300 sec: 41931.9). Total num frames: 598589440. Throughput: 0: 41536.0. Samples: 480793120. Policy #0 lag: (min: 0.0, avg: 19.0, max: 41.0)
+[2024-03-29 16:08:53,840][00126] Avg episode reward: [(0, '0.494')]
+[2024-03-29 16:08:56,599][00497] Updated weights for policy 0, policy_version 36544 (0.0031)
+[2024-03-29 16:08:56,904][00476] Signal inference workers to stop experience collection... (17150 times)
+[2024-03-29 16:08:56,905][00476] Signal inference workers to resume experience collection... (17150 times)
+[2024-03-29 16:08:56,950][00497] InferenceWorker_p0-w0: stopping experience collection (17150 times)
+[2024-03-29 16:08:56,950][00497] InferenceWorker_p0-w0: resuming experience collection (17150 times)
+[2024-03-29 16:08:58,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41779.2, 300 sec: 41987.5). Total num frames: 598818816. Throughput: 0: 41672.5. Samples: 481033360. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0)
+[2024-03-29 16:08:58,840][00126] Avg episode reward: [(0, '0.457')]
+[2024-03-29 16:09:00,699][00497] Updated weights for policy 0, policy_version 36554 (0.0024)
+[2024-03-29 16:09:03,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.2, 300 sec: 41931.9). Total num frames: 598999040. Throughput: 0: 41448.9. Samples: 481160980. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0)
+[2024-03-29 16:09:03,840][00126] Avg episode reward: [(0, '0.510')]
+[2024-03-29 16:09:04,150][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000036562_599031808.pth...
+[2024-03-29 16:09:04,446][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000035948_588972032.pth
+[2024-03-29 16:09:05,034][00497] Updated weights for policy 0, policy_version 36564 (0.0024)
+[2024-03-29 16:09:08,839][00126] Fps is (10 sec: 39321.2, 60 sec: 40960.0, 300 sec: 41987.5). Total num frames: 599212032. Throughput: 0: 41622.1. Samples: 481418540. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0)
+[2024-03-29 16:09:08,840][00126] Avg episode reward: [(0, '0.612')]
+[2024-03-29 16:09:09,170][00497] Updated weights for policy 0, policy_version 36574 (0.0021)
+[2024-03-29 16:09:12,291][00497] Updated weights for policy 0, policy_version 36584 (0.0025)
+[2024-03-29 16:09:13,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 599441408. Throughput: 0: 41459.0. Samples: 481651880. Policy #0 lag: (min: 0.0, avg: 20.9, max: 41.0)
+[2024-03-29 16:09:13,840][00126] Avg episode reward: [(0, '0.449')]
+[2024-03-29 16:09:16,453][00497] Updated weights for policy 0, policy_version 36594 (0.0030)
+[2024-03-29 16:09:18,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 599621632. Throughput: 0: 41311.1. Samples: 481780580. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0)
+[2024-03-29 16:09:18,841][00126] Avg episode reward: [(0, '0.462')]
+[2024-03-29 16:09:20,604][00497] Updated weights for policy 0, policy_version 36604 (0.0021)
+[2024-03-29 16:09:23,839][00126] Fps is (10 sec: 39322.1, 60 sec: 41233.1, 300 sec: 41876.4). Total num frames: 599834624. Throughput: 0: 41627.6. Samples: 482044940. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0)
+[2024-03-29 16:09:23,840][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:09:24,633][00497] Updated weights for policy 0, policy_version 36614 (0.0019)
+[2024-03-29 16:09:27,751][00476] Signal inference workers to stop experience collection... (17200 times)
+[2024-03-29 16:09:27,832][00497] InferenceWorker_p0-w0: stopping experience collection (17200 times)
+[2024-03-29 16:09:27,834][00476] Signal inference workers to resume experience collection... (17200 times)
+[2024-03-29 16:09:27,839][00497] Updated weights for policy 0, policy_version 36624 (0.0032)
+[2024-03-29 16:09:27,860][00497] InferenceWorker_p0-w0: resuming experience collection (17200 times)
+[2024-03-29 16:09:28,839][00126] Fps is (10 sec: 45875.6, 60 sec: 41780.2, 300 sec: 41931.9). Total num frames: 600080384. Throughput: 0: 41602.7. Samples: 482282520. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0)
+[2024-03-29 16:09:28,840][00126] Avg episode reward: [(0, '0.422')]
+[2024-03-29 16:09:32,037][00497] Updated weights for policy 0, policy_version 36634 (0.0027)
+[2024-03-29 16:09:33,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 600260608. Throughput: 0: 41670.0. Samples: 482412100. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0)
+[2024-03-29 16:09:33,840][00126] Avg episode reward: [(0, '0.535')]
+[2024-03-29 16:09:36,018][00497] Updated weights for policy 0, policy_version 36644 (0.0031)
+[2024-03-29 16:09:38,839][00126] Fps is (10 sec: 39321.2, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 600473600. Throughput: 0: 41948.4. Samples: 482680800. Policy #0 lag: (min: 0.0, avg: 18.9, max: 40.0)
+[2024-03-29 16:09:38,840][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:09:39,946][00497] Updated weights for policy 0, policy_version 36654 (0.0024)
+[2024-03-29 16:09:43,337][00497] Updated weights for policy 0, policy_version 36664 (0.0026)
+[2024-03-29 16:09:43,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 600719360. Throughput: 0: 41750.2. Samples: 482912120. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:09:43,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 16:09:47,585][00497] Updated weights for policy 0, policy_version 36674 (0.0020)
+[2024-03-29 16:09:48,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42052.2, 300 sec: 41987.5). Total num frames: 600915968. Throughput: 0: 41915.1. Samples: 483047160. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:09:48,840][00126] Avg episode reward: [(0, '0.547')]
+[2024-03-29 16:09:51,706][00497] Updated weights for policy 0, policy_version 36684 (0.0021)
+[2024-03-29 16:09:53,839][00126] Fps is (10 sec: 37683.3, 60 sec: 41779.2, 300 sec: 41932.0). Total num frames: 601096192. Throughput: 0: 41832.1. Samples: 483300980. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:09:53,840][00126] Avg episode reward: [(0, '0.406')]
+[2024-03-29 16:09:55,737][00497] Updated weights for policy 0, policy_version 36694 (0.0024)
+[2024-03-29 16:09:58,839][00126] Fps is (10 sec: 42598.5, 60 sec: 42052.3, 300 sec: 41987.5). Total num frames: 601341952. Throughput: 0: 42275.7. Samples: 483554280. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:09:58,840][00126] Avg episode reward: [(0, '0.457')]
+[2024-03-29 16:09:59,054][00497] Updated weights for policy 0, policy_version 36704 (0.0021)
+[2024-03-29 16:10:03,237][00497] Updated weights for policy 0, policy_version 36714 (0.0018)
+[2024-03-29 16:10:03,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 601538560. Throughput: 0: 42228.9. Samples: 483680880. Policy #0 lag: (min: 2.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:10:03,840][00126] Avg episode reward: [(0, '0.439')]
+[2024-03-29 16:10:07,399][00497] Updated weights for policy 0, policy_version 36724 (0.0021)
+[2024-03-29 16:10:08,657][00476] Signal inference workers to stop experience collection... (17250 times)
+[2024-03-29 16:10:08,693][00497] InferenceWorker_p0-w0: stopping experience collection (17250 times)
+[2024-03-29 16:10:08,821][00476] Signal inference workers to resume experience collection... (17250 times)
+[2024-03-29 16:10:08,821][00497] InferenceWorker_p0-w0: resuming experience collection (17250 times)
+[2024-03-29 16:10:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.4, 300 sec: 41987.5). Total num frames: 601735168. Throughput: 0: 41884.5. Samples: 483929740. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0)
+[2024-03-29 16:10:08,840][00126] Avg episode reward: [(0, '0.550')]
+[2024-03-29 16:10:11,614][00497] Updated weights for policy 0, policy_version 36734 (0.0019)
+[2024-03-29 16:10:13,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.3, 300 sec: 41931.9). Total num frames: 601948160. Throughput: 0: 42212.9. Samples: 484182100. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0)
+[2024-03-29 16:10:13,840][00126] Avg episode reward: [(0, '0.519')]
+[2024-03-29 16:10:15,029][00497] Updated weights for policy 0, policy_version 36744 (0.0017)
+[2024-03-29 16:10:18,839][00126] Fps is (10 sec: 42597.5, 60 sec: 42325.3, 300 sec: 41931.9). Total num frames: 602161152. Throughput: 0: 41680.8. Samples: 484287740. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0)
+[2024-03-29 16:10:18,842][00126] Avg episode reward: [(0, '0.508')]
+[2024-03-29 16:10:19,028][00497] Updated weights for policy 0, policy_version 36754 (0.0024)
+[2024-03-29 16:10:23,261][00497] Updated weights for policy 0, policy_version 36764 (0.0026)
+[2024-03-29 16:10:23,839][00126] Fps is (10 sec: 40959.6, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 602357760. Throughput: 0: 41888.9. Samples: 484565800. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0)
+[2024-03-29 16:10:23,840][00126] Avg episode reward: [(0, '0.533')]
+[2024-03-29 16:10:27,239][00497] Updated weights for policy 0, policy_version 36774 (0.0029)
+[2024-03-29 16:10:28,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 602570752. Throughput: 0: 42146.7. Samples: 484808720. Policy #0 lag: (min: 1.0, avg: 19.0, max: 42.0)
+[2024-03-29 16:10:28,840][00126] Avg episode reward: [(0, '0.515')]
+[2024-03-29 16:10:30,823][00497] Updated weights for policy 0, policy_version 36784 (0.0022)
+[2024-03-29 16:10:33,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.4, 300 sec: 41931.9). Total num frames: 602800128. Throughput: 0: 41524.9. Samples: 484915780. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0)
+[2024-03-29 16:10:33,840][00126] Avg episode reward: [(0, '0.456')]
+[2024-03-29 16:10:34,805][00497] Updated weights for policy 0, policy_version 36794 (0.0023)
+[2024-03-29 16:10:38,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 602963968. Throughput: 0: 41683.5. Samples: 485176740. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0)
+[2024-03-29 16:10:38,840][00126] Avg episode reward: [(0, '0.393')]
+[2024-03-29 16:10:39,154][00497] Updated weights for policy 0, policy_version 36804 (0.0020)
+[2024-03-29 16:10:43,345][00497] Updated weights for policy 0, policy_version 36814 (0.0024)
+[2024-03-29 16:10:43,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40960.0, 300 sec: 41709.8). Total num frames: 603176960. Throughput: 0: 41416.8. Samples: 485418040. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0)
+[2024-03-29 16:10:43,840][00126] Avg episode reward: [(0, '0.515')]
+[2024-03-29 16:10:44,924][00476] Signal inference workers to stop experience collection... (17300 times)
+[2024-03-29 16:10:44,957][00497] InferenceWorker_p0-w0: stopping experience collection (17300 times)
+[2024-03-29 16:10:45,134][00476] Signal inference workers to resume experience collection... (17300 times)
+[2024-03-29 16:10:45,134][00497] InferenceWorker_p0-w0: resuming experience collection (17300 times)
+[2024-03-29 16:10:46,972][00497] Updated weights for policy 0, policy_version 36824 (0.0024)
+[2024-03-29 16:10:48,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41233.0, 300 sec: 41765.3). Total num frames: 603389952. Throughput: 0: 40872.0. Samples: 485520120. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0)
+[2024-03-29 16:10:48,841][00126] Avg episode reward: [(0, '0.480')]
+[2024-03-29 16:10:50,923][00497] Updated weights for policy 0, policy_version 36834 (0.0028)
+[2024-03-29 16:10:53,839][00126] Fps is (10 sec: 37683.0, 60 sec: 40959.9, 300 sec: 41654.2). Total num frames: 603553792. Throughput: 0: 41085.6. Samples: 485778600. Policy #0 lag: (min: 1.0, avg: 21.6, max: 41.0)
+[2024-03-29 16:10:53,840][00126] Avg episode reward: [(0, '0.512')]
+[2024-03-29 16:10:55,515][00497] Updated weights for policy 0, policy_version 36844 (0.0028)
+[2024-03-29 16:10:58,839][00126] Fps is (10 sec: 39321.7, 60 sec: 40686.9, 300 sec: 41598.7). Total num frames: 603783168. Throughput: 0: 40953.3. Samples: 486025000. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:10:58,840][00126] Avg episode reward: [(0, '0.443')]
+[2024-03-29 16:10:59,426][00497] Updated weights for policy 0, policy_version 36854 (0.0019)
+[2024-03-29 16:11:02,896][00497] Updated weights for policy 0, policy_version 36864 (0.0029)
+[2024-03-29 16:11:03,839][00126] Fps is (10 sec: 47513.5, 60 sec: 41506.1, 300 sec: 41765.3). Total num frames: 604028928. Throughput: 0: 41418.2. Samples: 486151560. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:03,840][00126] Avg episode reward: [(0, '0.568')]
+[2024-03-29 16:11:04,049][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000036868_604045312.pth...
+[2024-03-29 16:11:04,374][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000036257_594034688.pth
+[2024-03-29 16:11:06,637][00497] Updated weights for policy 0, policy_version 36874 (0.0020)
+[2024-03-29 16:11:08,839][00126] Fps is (10 sec: 42598.2, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 604209152. Throughput: 0: 40475.6. Samples: 486387200. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:08,841][00126] Avg episode reward: [(0, '0.399')]
+[2024-03-29 16:11:11,622][00497] Updated weights for policy 0, policy_version 36884 (0.0018)
+[2024-03-29 16:11:13,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40960.0, 300 sec: 41598.7). Total num frames: 604405760. Throughput: 0: 40700.0. Samples: 486640220. Policy #0 lag: (min: 0.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:13,840][00126] Avg episode reward: [(0, '0.440')]
+[2024-03-29 16:11:15,589][00497] Updated weights for policy 0, policy_version 36894 (0.0027)
+[2024-03-29 16:11:18,839][00126] Fps is (10 sec: 40960.6, 60 sec: 40960.1, 300 sec: 41598.7). Total num frames: 604618752. Throughput: 0: 40945.4. Samples: 486758320. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:11:18,840][00126] Avg episode reward: [(0, '0.514')]
+[2024-03-29 16:11:19,110][00497] Updated weights for policy 0, policy_version 36904 (0.0024)
+[2024-03-29 16:11:22,981][00497] Updated weights for policy 0, policy_version 36914 (0.0021)
+[2024-03-29 16:11:23,839][00126] Fps is (10 sec: 39321.4, 60 sec: 40686.9, 300 sec: 41543.1). Total num frames: 604798976. Throughput: 0: 40528.0. Samples: 487000500. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:11:23,840][00126] Avg episode reward: [(0, '0.480')]
+[2024-03-29 16:11:24,259][00476] Signal inference workers to stop experience collection... (17350 times)
+[2024-03-29 16:11:24,292][00497] InferenceWorker_p0-w0: stopping experience collection (17350 times)
+[2024-03-29 16:11:24,463][00476] Signal inference workers to resume experience collection... (17350 times)
+[2024-03-29 16:11:24,463][00497] InferenceWorker_p0-w0: resuming experience collection (17350 times)
+[2024-03-29 16:11:27,750][00497] Updated weights for policy 0, policy_version 36924 (0.0022)
+[2024-03-29 16:11:28,839][00126] Fps is (10 sec: 39320.9, 60 sec: 40686.9, 300 sec: 41487.6). Total num frames: 605011968. Throughput: 0: 40788.4. Samples: 487253520. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:11:28,840][00126] Avg episode reward: [(0, '0.468')]
+[2024-03-29 16:11:31,591][00497] Updated weights for policy 0, policy_version 36934 (0.0030)
+[2024-03-29 16:11:33,839][00126] Fps is (10 sec: 44237.2, 60 sec: 40686.9, 300 sec: 41543.2). Total num frames: 605241344. Throughput: 0: 41419.1. Samples: 487383980. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:11:33,841][00126] Avg episode reward: [(0, '0.564')]
+[2024-03-29 16:11:34,900][00497] Updated weights for policy 0, policy_version 36944 (0.0024)
+[2024-03-29 16:11:38,730][00497] Updated weights for policy 0, policy_version 36954 (0.0029)
+[2024-03-29 16:11:38,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41506.2, 300 sec: 41543.2). Total num frames: 605454336. Throughput: 0: 40876.5. Samples: 487618040. Policy #0 lag: (min: 0.0, avg: 21.2, max: 42.0)
+[2024-03-29 16:11:38,840][00126] Avg episode reward: [(0, '0.488')]
+[2024-03-29 16:11:43,513][00497] Updated weights for policy 0, policy_version 36964 (0.0019)
+[2024-03-29 16:11:43,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40960.0, 300 sec: 41487.6). Total num frames: 605634560. Throughput: 0: 41276.0. Samples: 487882420. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:43,840][00126] Avg episode reward: [(0, '0.520')]
+[2024-03-29 16:11:47,383][00497] Updated weights for policy 0, policy_version 36974 (0.0033)
+[2024-03-29 16:11:48,839][00126] Fps is (10 sec: 37683.6, 60 sec: 40687.0, 300 sec: 41432.1). Total num frames: 605831168. Throughput: 0: 41045.9. Samples: 487998620. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:48,840][00126] Avg episode reward: [(0, '0.474')]
+[2024-03-29 16:11:50,940][00497] Updated weights for policy 0, policy_version 36984 (0.0033)
+[2024-03-29 16:11:53,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41779.3, 300 sec: 41487.6). Total num frames: 606060544. Throughput: 0: 40690.8. Samples: 488218280. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:53,840][00126] Avg episode reward: [(0, '0.545')]
+[2024-03-29 16:11:54,851][00497] Updated weights for policy 0, policy_version 36994 (0.0025)
+[2024-03-29 16:11:58,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40960.0, 300 sec: 41432.1). Total num frames: 606240768. Throughput: 0: 41179.2. Samples: 488493280. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:11:58,840][00126] Avg episode reward: [(0, '0.540')]
+[2024-03-29 16:11:59,307][00476] Signal inference workers to stop experience collection... (17400 times)
+[2024-03-29 16:11:59,350][00497] InferenceWorker_p0-w0: stopping experience collection (17400 times)
+[2024-03-29 16:11:59,529][00476] Signal inference workers to resume experience collection... (17400 times)
+[2024-03-29 16:11:59,530][00497] InferenceWorker_p0-w0: resuming experience collection (17400 times)
+[2024-03-29 16:11:59,533][00497] Updated weights for policy 0, policy_version 37004 (0.0029)
+[2024-03-29 16:12:03,409][00497] Updated weights for policy 0, policy_version 37014 (0.0031)
+[2024-03-29 16:12:03,839][00126] Fps is (10 sec: 39320.9, 60 sec: 40413.9, 300 sec: 41432.1). Total num frames: 606453760. Throughput: 0: 41290.1. Samples: 488616380. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0)
+[2024-03-29 16:12:03,840][00126] Avg episode reward: [(0, '0.438')]
+[2024-03-29 16:12:06,906][00497] Updated weights for policy 0, policy_version 37024 (0.0021)
+[2024-03-29 16:12:08,839][00126] Fps is (10 sec: 42598.4, 60 sec: 40960.1, 300 sec: 41487.6). Total num frames: 606666752. Throughput: 0: 40997.9. Samples: 488845400. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0)
+[2024-03-29 16:12:08,840][00126] Avg episode reward: [(0, '0.574')]
+[2024-03-29 16:12:10,697][00497] Updated weights for policy 0, policy_version 37034 (0.0027)
+[2024-03-29 16:12:13,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40686.9, 300 sec: 41432.1). Total num frames: 606846976. Throughput: 0: 41063.2. Samples: 489101360. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0)
+[2024-03-29 16:12:13,840][00126] Avg episode reward: [(0, '0.499')]
+[2024-03-29 16:12:15,553][00497] Updated weights for policy 0, policy_version 37044 (0.0024)
+[2024-03-29 16:12:18,839][00126] Fps is (10 sec: 39321.2, 60 sec: 40686.8, 300 sec: 41376.5). Total num frames: 607059968. Throughput: 0: 41049.3. Samples: 489231200. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0)
+[2024-03-29 16:12:18,840][00126] Avg episode reward: [(0, '0.566')]
+[2024-03-29 16:12:19,320][00497] Updated weights for policy 0, policy_version 37054 (0.0025)
+[2024-03-29 16:12:22,809][00497] Updated weights for policy 0, policy_version 37064 (0.0024)
+[2024-03-29 16:12:23,839][00126] Fps is (10 sec: 45875.4, 60 sec: 41779.3, 300 sec: 41487.6). Total num frames: 607305728. Throughput: 0: 41148.9. Samples: 489469740. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0)
+[2024-03-29 16:12:23,840][00126] Avg episode reward: [(0, '0.453')]
+[2024-03-29 16:12:26,763][00497] Updated weights for policy 0, policy_version 37074 (0.0024)
+[2024-03-29 16:12:28,839][00126] Fps is (10 sec: 42598.5, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 607485952. Throughput: 0: 40980.8. Samples: 489726560. Policy #0 lag: (min: 1.0, avg: 20.4, max: 41.0)
+[2024-03-29 16:12:28,840][00126] Avg episode reward: [(0, '0.460')]
+[2024-03-29 16:12:31,363][00497] Updated weights for policy 0, policy_version 37084 (0.0022)
+[2024-03-29 16:12:31,397][00476] Signal inference workers to stop experience collection... (17450 times)
+[2024-03-29 16:12:31,415][00497] InferenceWorker_p0-w0: stopping experience collection (17450 times)
+[2024-03-29 16:12:31,611][00476] Signal inference workers to resume experience collection... (17450 times)
+[2024-03-29 16:12:31,611][00497] InferenceWorker_p0-w0: resuming experience collection (17450 times)
+[2024-03-29 16:12:33,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40687.0, 300 sec: 41321.0). Total num frames: 607682560. Throughput: 0: 41493.8. Samples: 489865840. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0)
+[2024-03-29 16:12:33,840][00126] Avg episode reward: [(0, '0.466')]
+[2024-03-29 16:12:35,087][00497] Updated weights for policy 0, policy_version 37094 (0.0022)
+[2024-03-29 16:12:38,712][00497] Updated weights for policy 0, policy_version 37104 (0.0034)
+[2024-03-29 16:12:38,839][00126] Fps is (10 sec: 42598.2, 60 sec: 40960.0, 300 sec: 41432.1). Total num frames: 607911936. Throughput: 0: 41837.2. Samples: 490100960. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0)
+[2024-03-29 16:12:38,840][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:12:42,682][00497] Updated weights for policy 0, policy_version 37114 (0.0022)
+[2024-03-29 16:12:43,839][00126] Fps is (10 sec: 40959.9, 60 sec: 40960.0, 300 sec: 41321.0). Total num frames: 608092160. Throughput: 0: 40976.0. Samples: 490337200. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0)
+[2024-03-29 16:12:43,840][00126] Avg episode reward: [(0, '0.513')]
+[2024-03-29 16:12:47,149][00497] Updated weights for policy 0, policy_version 37124 (0.0021)
+[2024-03-29 16:12:48,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.1, 300 sec: 41265.5). Total num frames: 608321536. Throughput: 0: 41421.5. Samples: 490480340. Policy #0 lag: (min: 0.0, avg: 20.0, max: 41.0)
+[2024-03-29 16:12:48,840][00126] Avg episode reward: [(0, '0.442')]
+[2024-03-29 16:12:50,781][00497] Updated weights for policy 0, policy_version 37134 (0.0026)
+[2024-03-29 16:12:53,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41233.0, 300 sec: 41432.1). Total num frames: 608534528. Throughput: 0: 41640.4. Samples: 490719220. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:12:53,840][00126] Avg episode reward: [(0, '0.511')]
+[2024-03-29 16:12:54,177][00497] Updated weights for policy 0, policy_version 37144 (0.0024)
+[2024-03-29 16:12:57,994][00497] Updated weights for policy 0, policy_version 37154 (0.0027)
+[2024-03-29 16:12:58,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41506.2, 300 sec: 41432.1). Total num frames: 608731136. Throughput: 0: 41530.3. Samples: 490970220. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:12:58,840][00126] Avg episode reward: [(0, '0.447')]
+[2024-03-29 16:13:02,682][00497] Updated weights for policy 0, policy_version 37164 (0.0025)
+[2024-03-29 16:13:03,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41506.2, 300 sec: 41321.0). Total num frames: 608944128. Throughput: 0: 41854.3. Samples: 491114640. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:03,840][00126] Avg episode reward: [(0, '0.556')]
+[2024-03-29 16:13:04,157][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037169_608976896.pth...
+[2024-03-29 16:13:04,471][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000036562_599031808.pth
+[2024-03-29 16:13:06,227][00476] Signal inference workers to stop experience collection... (17500 times)
+[2024-03-29 16:13:06,228][00476] Signal inference workers to resume experience collection... (17500 times)
+[2024-03-29 16:13:06,264][00497] InferenceWorker_p0-w0: stopping experience collection (17500 times)
+[2024-03-29 16:13:06,265][00497] InferenceWorker_p0-w0: resuming experience collection (17500 times)
+[2024-03-29 16:13:06,527][00497] Updated weights for policy 0, policy_version 37174 (0.0025)
+[2024-03-29 16:13:08,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.1, 300 sec: 41376.6). Total num frames: 609140736. Throughput: 0: 41800.0. Samples: 491350740. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:08,840][00126] Avg episode reward: [(0, '0.540')]
+[2024-03-29 16:13:09,989][00497] Updated weights for policy 0, policy_version 37184 (0.0034)
+[2024-03-29 16:13:13,840][00126] Fps is (10 sec: 42596.2, 60 sec: 42052.0, 300 sec: 41487.6). Total num frames: 609370112. Throughput: 0: 41284.1. Samples: 491584360. Policy #0 lag: (min: 1.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:13,840][00126] Avg episode reward: [(0, '0.485')]
+[2024-03-29 16:13:14,020][00497] Updated weights for policy 0, policy_version 37194 (0.0021)
+[2024-03-29 16:13:18,693][00497] Updated weights for policy 0, policy_version 37204 (0.0025)
+[2024-03-29 16:13:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.2, 300 sec: 41321.0). Total num frames: 609550336. Throughput: 0: 41231.5. Samples: 491721260. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0)
+[2024-03-29 16:13:18,840][00126] Avg episode reward: [(0, '0.492')]
+[2024-03-29 16:13:22,456][00497] Updated weights for policy 0, policy_version 37214 (0.0022)
+[2024-03-29 16:13:23,839][00126] Fps is (10 sec: 40961.8, 60 sec: 41233.0, 300 sec: 41376.7). Total num frames: 609779712. Throughput: 0: 41569.8. Samples: 491971600. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0)
+[2024-03-29 16:13:23,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 16:13:25,960][00497] Updated weights for policy 0, policy_version 37224 (0.0022)
+[2024-03-29 16:13:28,839][00126] Fps is (10 sec: 44236.2, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 609992704. Throughput: 0: 41499.0. Samples: 492204660. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0)
+[2024-03-29 16:13:28,842][00126] Avg episode reward: [(0, '0.538')]
+[2024-03-29 16:13:30,027][00497] Updated weights for policy 0, policy_version 37234 (0.0018)
+[2024-03-29 16:13:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.1, 300 sec: 41376.5). Total num frames: 610172928. Throughput: 0: 41422.2. Samples: 492344340. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0)
+[2024-03-29 16:13:33,840][00126] Avg episode reward: [(0, '0.516')]
+[2024-03-29 16:13:34,524][00497] Updated weights for policy 0, policy_version 37244 (0.0029)
+[2024-03-29 16:13:36,133][00476] Signal inference workers to stop experience collection... (17550 times)
+[2024-03-29 16:13:36,176][00497] InferenceWorker_p0-w0: stopping experience collection (17550 times)
+[2024-03-29 16:13:36,365][00476] Signal inference workers to resume experience collection... (17550 times)
+[2024-03-29 16:13:36,366][00497] InferenceWorker_p0-w0: resuming experience collection (17550 times)
+[2024-03-29 16:13:38,148][00497] Updated weights for policy 0, policy_version 37254 (0.0024)
+[2024-03-29 16:13:38,839][00126] Fps is (10 sec: 39322.3, 60 sec: 41233.2, 300 sec: 41321.0). Total num frames: 610385920. Throughput: 0: 41502.7. Samples: 492586840. Policy #0 lag: (min: 1.0, avg: 18.7, max: 41.0)
+[2024-03-29 16:13:38,840][00126] Avg episode reward: [(0, '0.548')]
+[2024-03-29 16:13:41,881][00497] Updated weights for policy 0, policy_version 37264 (0.0026)
+[2024-03-29 16:13:43,839][00126] Fps is (10 sec: 42597.9, 60 sec: 41779.1, 300 sec: 41376.5). Total num frames: 610598912. Throughput: 0: 40958.1. Samples: 492813340. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:43,840][00126] Avg episode reward: [(0, '0.522')]
+[2024-03-29 16:13:46,058][00497] Updated weights for policy 0, policy_version 37274 (0.0020)
+[2024-03-29 16:13:48,839][00126] Fps is (10 sec: 40959.4, 60 sec: 41233.0, 300 sec: 41376.5). Total num frames: 610795520. Throughput: 0: 40930.6. Samples: 492956520. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:48,841][00126] Avg episode reward: [(0, '0.511')]
+[2024-03-29 16:13:50,467][00497] Updated weights for policy 0, policy_version 37284 (0.0025)
+[2024-03-29 16:13:53,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.0, 300 sec: 41321.0). Total num frames: 611008512. Throughput: 0: 41139.0. Samples: 493202000. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:53,840][00126] Avg episode reward: [(0, '0.505')]
+[2024-03-29 16:13:54,083][00497] Updated weights for policy 0, policy_version 37294 (0.0030)
+[2024-03-29 16:13:57,433][00497] Updated weights for policy 0, policy_version 37304 (0.0031)
+[2024-03-29 16:13:58,839][00126] Fps is (10 sec: 44236.6, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 611237888. Throughput: 0: 41331.9. Samples: 493444280. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:13:58,840][00126] Avg episode reward: [(0, '0.468')]
+[2024-03-29 16:14:01,590][00497] Updated weights for policy 0, policy_version 37314 (0.0018)
+[2024-03-29 16:14:03,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.1, 300 sec: 41432.1). Total num frames: 611434496. Throughput: 0: 41335.0. Samples: 493581340. Policy #0 lag: (min: 0.0, avg: 21.6, max: 42.0)
+[2024-03-29 16:14:03,842][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:14:05,889][00497] Updated weights for policy 0, policy_version 37324 (0.0021)
+[2024-03-29 16:14:08,839][00126] Fps is (10 sec: 40960.6, 60 sec: 41779.2, 300 sec: 41376.6). Total num frames: 611647488. Throughput: 0: 41457.4. Samples: 493837180. Policy #0 lag: (min: 1.0, avg: 18.2, max: 41.0)
+[2024-03-29 16:14:08,840][00126] Avg episode reward: [(0, '0.435')]
+[2024-03-29 16:14:09,680][00497] Updated weights for policy 0, policy_version 37334 (0.0036)
+[2024-03-29 16:14:09,776][00476] Signal inference workers to stop experience collection... (17600 times)
+[2024-03-29 16:14:09,857][00497] InferenceWorker_p0-w0: stopping experience collection (17600 times)
+[2024-03-29 16:14:09,950][00476] Signal inference workers to resume experience collection... (17600 times)
+[2024-03-29 16:14:09,950][00497] InferenceWorker_p0-w0: resuming experience collection (17600 times)
+[2024-03-29 16:14:13,028][00497] Updated weights for policy 0, policy_version 37344 (0.0026)
+[2024-03-29 16:14:13,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41779.5, 300 sec: 41543.2). Total num frames: 611876864. Throughput: 0: 41629.9. Samples: 494078000. Policy #0 lag: (min: 1.0, avg: 18.2, max: 41.0)
+[2024-03-29 16:14:13,840][00126] Avg episode reward: [(0, '0.496')]
+[2024-03-29 16:14:17,181][00497] Updated weights for policy 0, policy_version 37354 (0.0026)
+[2024-03-29 16:14:18,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.2, 300 sec: 41432.1). Total num frames: 612057088. Throughput: 0: 41381.4. Samples: 494206500. Policy #0 lag: (min: 1.0, avg: 18.2, max: 41.0)
+[2024-03-29 16:14:18,840][00126] Avg episode reward: [(0, '0.474')]
+[2024-03-29 16:14:21,563][00497] Updated weights for policy 0, policy_version 37364 (0.0022)
+[2024-03-29 16:14:23,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41506.2, 300 sec: 41321.0). Total num frames: 612270080. Throughput: 0: 42012.4. Samples: 494477400. Policy #0 lag: (min: 1.0, avg: 18.2, max: 41.0)
+[2024-03-29 16:14:23,840][00126] Avg episode reward: [(0, '0.588')]
+[2024-03-29 16:14:25,480][00497] Updated weights for policy 0, policy_version 37374 (0.0024)
+[2024-03-29 16:14:28,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 41432.1). Total num frames: 612483072. Throughput: 0: 41981.5. Samples: 494702500. Policy #0 lag: (min: 1.0, avg: 18.2, max: 41.0)
+[2024-03-29 16:14:28,840][00126] Avg episode reward: [(0, '0.488')]
+[2024-03-29 16:14:28,917][00497] Updated weights for policy 0, policy_version 37384 (0.0025)
+[2024-03-29 16:14:33,241][00497] Updated weights for policy 0, policy_version 37394 (0.0023)
+[2024-03-29 16:14:33,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41506.1, 300 sec: 41321.0). Total num frames: 612663296. Throughput: 0: 41467.6. Samples: 494822560. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0)
+[2024-03-29 16:14:33,840][00126] Avg episode reward: [(0, '0.420')]
+[2024-03-29 16:14:37,554][00497] Updated weights for policy 0, policy_version 37404 (0.0024)
+[2024-03-29 16:14:38,839][00126] Fps is (10 sec: 40959.5, 60 sec: 41779.1, 300 sec: 41265.5). Total num frames: 612892672. Throughput: 0: 42128.0. Samples: 495097760. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0)
+[2024-03-29 16:14:38,840][00126] Avg episode reward: [(0, '0.567')]
+[2024-03-29 16:14:41,430][00497] Updated weights for policy 0, policy_version 37414 (0.0020)
+[2024-03-29 16:14:43,357][00476] Signal inference workers to stop experience collection... (17650 times)
+[2024-03-29 16:14:43,378][00497] InferenceWorker_p0-w0: stopping experience collection (17650 times)
+[2024-03-29 16:14:43,579][00476] Signal inference workers to resume experience collection... (17650 times)
+[2024-03-29 16:14:43,579][00497] InferenceWorker_p0-w0: resuming experience collection (17650 times)
+[2024-03-29 16:14:43,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42052.4, 300 sec: 41376.6). Total num frames: 613122048. Throughput: 0: 41988.2. Samples: 495333740. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0)
+[2024-03-29 16:14:43,840][00126] Avg episode reward: [(0, '0.559')]
+[2024-03-29 16:14:44,760][00497] Updated weights for policy 0, policy_version 37424 (0.0024)
+[2024-03-29 16:14:48,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.3, 300 sec: 41376.5). Total num frames: 613302272. Throughput: 0: 41404.5. Samples: 495444540. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0)
+[2024-03-29 16:14:48,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 16:14:48,996][00497] Updated weights for policy 0, policy_version 37434 (0.0027)
+[2024-03-29 16:14:53,237][00497] Updated weights for policy 0, policy_version 37444 (0.0042)
+[2024-03-29 16:14:53,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41506.2, 300 sec: 41209.9). Total num frames: 613498880.
Throughput: 0: 41661.3. Samples: 495711940. Policy #0 lag: (min: 1.0, avg: 23.3, max: 43.0) +[2024-03-29 16:14:53,840][00126] Avg episode reward: [(0, '0.413')] +[2024-03-29 16:14:57,040][00497] Updated weights for policy 0, policy_version 37454 (0.0028) +[2024-03-29 16:14:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 41506.2, 300 sec: 41321.0). Total num frames: 613728256. Throughput: 0: 41790.7. Samples: 495958580. Policy #0 lag: (min: 1.0, avg: 20.0, max: 42.0) +[2024-03-29 16:14:58,840][00126] Avg episode reward: [(0, '0.419')] +[2024-03-29 16:15:00,213][00497] Updated weights for policy 0, policy_version 37464 (0.0025) +[2024-03-29 16:15:03,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41779.2, 300 sec: 41376.5). Total num frames: 613941248. Throughput: 0: 41441.3. Samples: 496071360. Policy #0 lag: (min: 1.0, avg: 20.0, max: 42.0) +[2024-03-29 16:15:03,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 16:15:04,079][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037473_613957632.pth... +[2024-03-29 16:15:04,393][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000036868_604045312.pth +[2024-03-29 16:15:04,711][00497] Updated weights for policy 0, policy_version 37474 (0.0024) +[2024-03-29 16:15:08,632][00497] Updated weights for policy 0, policy_version 37484 (0.0025) +[2024-03-29 16:15:08,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41321.0). Total num frames: 614137856. Throughput: 0: 41556.0. Samples: 496347420. Policy #0 lag: (min: 1.0, avg: 20.0, max: 42.0) +[2024-03-29 16:15:08,840][00126] Avg episode reward: [(0, '0.484')] +[2024-03-29 16:15:12,757][00497] Updated weights for policy 0, policy_version 37494 (0.0023) +[2024-03-29 16:15:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41233.1, 300 sec: 41321.0). Total num frames: 614350848. Throughput: 0: 42022.2. Samples: 496593500. 
Policy #0 lag: (min: 1.0, avg: 20.0, max: 42.0) +[2024-03-29 16:15:13,840][00126] Avg episode reward: [(0, '0.518')] +[2024-03-29 16:15:15,473][00476] Signal inference workers to stop experience collection... (17700 times) +[2024-03-29 16:15:15,546][00497] InferenceWorker_p0-w0: stopping experience collection (17700 times) +[2024-03-29 16:15:15,561][00476] Signal inference workers to resume experience collection... (17700 times) +[2024-03-29 16:15:15,579][00497] InferenceWorker_p0-w0: resuming experience collection (17700 times) +[2024-03-29 16:15:15,830][00497] Updated weights for policy 0, policy_version 37504 (0.0033) +[2024-03-29 16:15:18,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42052.3, 300 sec: 41432.1). Total num frames: 614580224. Throughput: 0: 41741.8. Samples: 496700940. Policy #0 lag: (min: 0.0, avg: 22.4, max: 43.0) +[2024-03-29 16:15:18,840][00126] Avg episode reward: [(0, '0.464')] +[2024-03-29 16:15:20,336][00497] Updated weights for policy 0, policy_version 37514 (0.0020) +[2024-03-29 16:15:23,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 41265.5). Total num frames: 614744064. Throughput: 0: 41376.1. Samples: 496959680. Policy #0 lag: (min: 0.0, avg: 22.4, max: 43.0) +[2024-03-29 16:15:23,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 16:15:24,587][00497] Updated weights for policy 0, policy_version 37524 (0.0026) +[2024-03-29 16:15:28,530][00497] Updated weights for policy 0, policy_version 37534 (0.0027) +[2024-03-29 16:15:28,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41233.1, 300 sec: 41209.9). Total num frames: 614957056. Throughput: 0: 41844.0. Samples: 497216720. Policy #0 lag: (min: 0.0, avg: 22.4, max: 43.0) +[2024-03-29 16:15:28,840][00126] Avg episode reward: [(0, '0.481')] +[2024-03-29 16:15:31,595][00497] Updated weights for policy 0, policy_version 37544 (0.0025) +[2024-03-29 16:15:33,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42325.4, 300 sec: 41487.7). Total num frames: 615202816. 
Throughput: 0: 41991.6. Samples: 497334160. Policy #0 lag: (min: 0.0, avg: 22.4, max: 43.0) +[2024-03-29 16:15:33,840][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 16:15:36,381][00497] Updated weights for policy 0, policy_version 37554 (0.0023) +[2024-03-29 16:15:38,839][00126] Fps is (10 sec: 42597.8, 60 sec: 41506.2, 300 sec: 41376.5). Total num frames: 615383040. Throughput: 0: 41800.4. Samples: 497592960. Policy #0 lag: (min: 0.0, avg: 22.4, max: 43.0) +[2024-03-29 16:15:38,840][00126] Avg episode reward: [(0, '0.426')] +[2024-03-29 16:15:40,360][00497] Updated weights for policy 0, policy_version 37564 (0.0022) +[2024-03-29 16:15:43,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41233.0, 300 sec: 41376.6). Total num frames: 615596032. Throughput: 0: 41636.4. Samples: 497832220. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 16:15:43,840][00126] Avg episode reward: [(0, '0.506')] +[2024-03-29 16:15:44,249][00497] Updated weights for policy 0, policy_version 37574 (0.0026) +[2024-03-29 16:15:47,488][00497] Updated weights for policy 0, policy_version 37584 (0.0033) +[2024-03-29 16:15:48,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42052.2, 300 sec: 41598.7). Total num frames: 615825408. Throughput: 0: 41813.3. Samples: 497952960. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 16:15:48,840][00126] Avg episode reward: [(0, '0.486')] +[2024-03-29 16:15:50,163][00476] Signal inference workers to stop experience collection... (17750 times) +[2024-03-29 16:15:50,191][00497] InferenceWorker_p0-w0: stopping experience collection (17750 times) +[2024-03-29 16:15:50,349][00476] Signal inference workers to resume experience collection... (17750 times) +[2024-03-29 16:15:50,350][00497] InferenceWorker_p0-w0: resuming experience collection (17750 times) +[2024-03-29 16:15:52,071][00497] Updated weights for policy 0, policy_version 37594 (0.0021) +[2024-03-29 16:15:53,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.2, 300 sec: 41432.1). 
Total num frames: 616005632. Throughput: 0: 41446.2. Samples: 498212500. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 16:15:53,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 16:15:56,003][00497] Updated weights for policy 0, policy_version 37604 (0.0031) +[2024-03-29 16:15:58,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41779.1, 300 sec: 41376.5). Total num frames: 616235008. Throughput: 0: 41708.4. Samples: 498470380. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 16:15:58,841][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 16:15:59,885][00497] Updated weights for policy 0, policy_version 37614 (0.0019) +[2024-03-29 16:16:02,964][00497] Updated weights for policy 0, policy_version 37624 (0.0026) +[2024-03-29 16:16:03,839][00126] Fps is (10 sec: 47513.7, 60 sec: 42325.4, 300 sec: 41598.7). Total num frames: 616480768. Throughput: 0: 41908.9. Samples: 498586840. Policy #0 lag: (min: 0.0, avg: 21.0, max: 42.0) +[2024-03-29 16:16:03,840][00126] Avg episode reward: [(0, '0.429')] +[2024-03-29 16:16:07,381][00497] Updated weights for policy 0, policy_version 37634 (0.0019) +[2024-03-29 16:16:08,839][00126] Fps is (10 sec: 40960.5, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 616644608. Throughput: 0: 41902.2. Samples: 498845280. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 16:16:08,840][00126] Avg episode reward: [(0, '0.528')] +[2024-03-29 16:16:11,560][00497] Updated weights for policy 0, policy_version 37644 (0.0019) +[2024-03-29 16:16:13,839][00126] Fps is (10 sec: 37682.7, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 616857600. Throughput: 0: 41776.7. Samples: 499096680. 
Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 16:16:13,840][00126] Avg episode reward: [(0, '0.437')] +[2024-03-29 16:16:15,452][00497] Updated weights for policy 0, policy_version 37654 (0.0027) +[2024-03-29 16:16:18,619][00497] Updated weights for policy 0, policy_version 37664 (0.0031) +[2024-03-29 16:16:18,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41779.1, 300 sec: 41654.2). Total num frames: 617086976. Throughput: 0: 42113.2. Samples: 499229260. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 16:16:18,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 16:16:23,120][00497] Updated weights for policy 0, policy_version 37674 (0.0028) +[2024-03-29 16:16:23,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42052.3, 300 sec: 41543.2). Total num frames: 617267200. Throughput: 0: 41543.6. Samples: 499462420. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 16:16:23,840][00126] Avg episode reward: [(0, '0.565')] +[2024-03-29 16:16:24,026][00476] Signal inference workers to stop experience collection... (17800 times) +[2024-03-29 16:16:24,026][00476] Signal inference workers to resume experience collection... (17800 times) +[2024-03-29 16:16:24,073][00497] InferenceWorker_p0-w0: stopping experience collection (17800 times) +[2024-03-29 16:16:24,074][00497] InferenceWorker_p0-w0: resuming experience collection (17800 times) +[2024-03-29 16:16:27,077][00497] Updated weights for policy 0, policy_version 37684 (0.0029) +[2024-03-29 16:16:28,839][00126] Fps is (10 sec: 39322.0, 60 sec: 42052.2, 300 sec: 41487.6). Total num frames: 617480192. Throughput: 0: 42152.9. Samples: 499729100. Policy #0 lag: (min: 0.0, avg: 21.4, max: 41.0) +[2024-03-29 16:16:28,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 16:16:31,042][00497] Updated weights for policy 0, policy_version 37694 (0.0020) +[2024-03-29 16:16:33,839][00126] Fps is (10 sec: 44236.3, 60 sec: 41779.1, 300 sec: 41543.2). Total num frames: 617709568. 
Throughput: 0: 42443.1. Samples: 499862900. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0) +[2024-03-29 16:16:33,841][00126] Avg episode reward: [(0, '0.584')] +[2024-03-29 16:16:34,209][00497] Updated weights for policy 0, policy_version 37704 (0.0022) +[2024-03-29 16:16:38,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41543.1). Total num frames: 617889792. Throughput: 0: 41548.4. Samples: 500082180. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0) +[2024-03-29 16:16:38,840][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 16:16:38,881][00497] Updated weights for policy 0, policy_version 37714 (0.0022) +[2024-03-29 16:16:42,926][00497] Updated weights for policy 0, policy_version 37724 (0.0023) +[2024-03-29 16:16:43,839][00126] Fps is (10 sec: 39322.0, 60 sec: 41779.2, 300 sec: 41598.7). Total num frames: 618102784. Throughput: 0: 41857.0. Samples: 500353940. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0) +[2024-03-29 16:16:43,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 16:16:46,748][00497] Updated weights for policy 0, policy_version 37734 (0.0028) +[2024-03-29 16:16:48,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41779.3, 300 sec: 41598.7). Total num frames: 618332160. Throughput: 0: 42012.4. Samples: 500477400. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0) +[2024-03-29 16:16:48,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 16:16:50,023][00497] Updated weights for policy 0, policy_version 37744 (0.0028) +[2024-03-29 16:16:53,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41654.2). Total num frames: 618528768. Throughput: 0: 41759.1. Samples: 500724440. Policy #0 lag: (min: 1.0, avg: 19.5, max: 41.0) +[2024-03-29 16:16:53,840][00126] Avg episode reward: [(0, '0.529')] +[2024-03-29 16:16:54,005][00476] Signal inference workers to stop experience collection... (17850 times) +[2024-03-29 16:16:54,009][00476] Signal inference workers to resume experience collection... 
(17850 times) +[2024-03-29 16:16:54,054][00497] InferenceWorker_p0-w0: stopping experience collection (17850 times) +[2024-03-29 16:16:54,054][00497] InferenceWorker_p0-w0: resuming experience collection (17850 times) +[2024-03-29 16:16:54,315][00497] Updated weights for policy 0, policy_version 37754 (0.0031) +[2024-03-29 16:16:58,259][00497] Updated weights for policy 0, policy_version 37764 (0.0026) +[2024-03-29 16:16:58,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41779.2, 300 sec: 41654.2). Total num frames: 618741760. Throughput: 0: 42109.8. Samples: 500991620. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 16:16:58,840][00126] Avg episode reward: [(0, '0.540')] +[2024-03-29 16:17:02,162][00497] Updated weights for policy 0, policy_version 37774 (0.0023) +[2024-03-29 16:17:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41506.1, 300 sec: 41709.8). Total num frames: 618971136. Throughput: 0: 42003.2. Samples: 501119400. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 16:17:03,840][00126] Avg episode reward: [(0, '0.450')] +[2024-03-29 16:17:03,935][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037780_618987520.pth... +[2024-03-29 16:17:04,265][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037169_608976896.pth +[2024-03-29 16:17:05,415][00497] Updated weights for policy 0, policy_version 37784 (0.0031) +[2024-03-29 16:17:08,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.3, 300 sec: 41820.9). Total num frames: 619184128. Throughput: 0: 41987.1. Samples: 501351840. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 16:17:08,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 16:17:09,851][00497] Updated weights for policy 0, policy_version 37794 (0.0019) +[2024-03-29 16:17:13,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 619364352. Throughput: 0: 41874.2. Samples: 501613440. 
Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 16:17:13,840][00126] Avg episode reward: [(0, '0.384')] +[2024-03-29 16:17:13,907][00497] Updated weights for policy 0, policy_version 37804 (0.0018) +[2024-03-29 16:17:17,742][00497] Updated weights for policy 0, policy_version 37814 (0.0019) +[2024-03-29 16:17:18,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41506.2, 300 sec: 41598.7). Total num frames: 619577344. Throughput: 0: 41819.6. Samples: 501744780. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0) +[2024-03-29 16:17:18,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 16:17:21,149][00497] Updated weights for policy 0, policy_version 37824 (0.0026) +[2024-03-29 16:17:23,839][00126] Fps is (10 sec: 44237.0, 60 sec: 42325.3, 300 sec: 41765.3). Total num frames: 619806720. Throughput: 0: 41975.2. Samples: 501971060. Policy #0 lag: (min: 1.0, avg: 21.3, max: 44.0) +[2024-03-29 16:17:23,840][00126] Avg episode reward: [(0, '0.508')] +[2024-03-29 16:17:25,561][00497] Updated weights for policy 0, policy_version 37834 (0.0027) +[2024-03-29 16:17:28,638][00476] Signal inference workers to stop experience collection... (17900 times) +[2024-03-29 16:17:28,698][00497] InferenceWorker_p0-w0: stopping experience collection (17900 times) +[2024-03-29 16:17:28,733][00476] Signal inference workers to resume experience collection... (17900 times) +[2024-03-29 16:17:28,735][00497] InferenceWorker_p0-w0: resuming experience collection (17900 times) +[2024-03-29 16:17:28,839][00126] Fps is (10 sec: 42598.6, 60 sec: 42052.2, 300 sec: 41765.3). Total num frames: 620003328. Throughput: 0: 41922.7. Samples: 502240460. 
Policy #0 lag: (min: 1.0, avg: 21.3, max: 44.0) +[2024-03-29 16:17:28,840][00126] Avg episode reward: [(0, '0.498')] +[2024-03-29 16:17:29,377][00497] Updated weights for policy 0, policy_version 37844 (0.0019) +[2024-03-29 16:17:33,383][00497] Updated weights for policy 0, policy_version 37854 (0.0020) +[2024-03-29 16:17:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41779.3, 300 sec: 41709.8). Total num frames: 620216320. Throughput: 0: 42159.1. Samples: 502374560. Policy #0 lag: (min: 1.0, avg: 21.3, max: 44.0) +[2024-03-29 16:17:33,840][00126] Avg episode reward: [(0, '0.461')] +[2024-03-29 16:17:36,404][00497] Updated weights for policy 0, policy_version 37864 (0.0030) +[2024-03-29 16:17:38,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42598.4, 300 sec: 41876.4). Total num frames: 620445696. Throughput: 0: 42028.5. Samples: 502615720. Policy #0 lag: (min: 1.0, avg: 21.3, max: 44.0) +[2024-03-29 16:17:38,840][00126] Avg episode reward: [(0, '0.457')] +[2024-03-29 16:17:41,192][00497] Updated weights for policy 0, policy_version 37874 (0.0023) +[2024-03-29 16:17:43,839][00126] Fps is (10 sec: 42597.8, 60 sec: 42325.2, 300 sec: 41765.3). Total num frames: 620642304. Throughput: 0: 42041.7. Samples: 502883500. Policy #0 lag: (min: 1.0, avg: 21.3, max: 44.0) +[2024-03-29 16:17:43,840][00126] Avg episode reward: [(0, '0.398')] +[2024-03-29 16:17:44,737][00497] Updated weights for policy 0, policy_version 37884 (0.0024) +[2024-03-29 16:17:48,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41779.2, 300 sec: 41709.8). Total num frames: 620838912. Throughput: 0: 42156.1. Samples: 503016420. 
Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 16:17:48,840][00126] Avg episode reward: [(0, '0.535')] +[2024-03-29 16:17:48,907][00497] Updated weights for policy 0, policy_version 37894 (0.0023) +[2024-03-29 16:17:52,220][00497] Updated weights for policy 0, policy_version 37904 (0.0026) +[2024-03-29 16:17:53,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42598.4, 300 sec: 41876.4). Total num frames: 621084672. Throughput: 0: 42201.8. Samples: 503250920. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 16:17:53,840][00126] Avg episode reward: [(0, '0.491')] +[2024-03-29 16:17:56,673][00497] Updated weights for policy 0, policy_version 37914 (0.0024) +[2024-03-29 16:17:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 621264896. Throughput: 0: 42257.8. Samples: 503515040. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 16:17:58,840][00126] Avg episode reward: [(0, '0.401')] +[2024-03-29 16:17:59,988][00476] Signal inference workers to stop experience collection... (17950 times) +[2024-03-29 16:17:59,988][00476] Signal inference workers to resume experience collection... (17950 times) +[2024-03-29 16:18:00,023][00497] InferenceWorker_p0-w0: stopping experience collection (17950 times) +[2024-03-29 16:18:00,029][00497] InferenceWorker_p0-w0: resuming experience collection (17950 times) +[2024-03-29 16:18:00,314][00497] Updated weights for policy 0, policy_version 37924 (0.0022) +[2024-03-29 16:18:03,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41820.9). Total num frames: 621477888. Throughput: 0: 42136.0. Samples: 503640900. 
Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 16:18:03,841][00126] Avg episode reward: [(0, '0.468')] +[2024-03-29 16:18:04,410][00497] Updated weights for policy 0, policy_version 37934 (0.0030) +[2024-03-29 16:18:07,601][00497] Updated weights for policy 0, policy_version 37944 (0.0028) +[2024-03-29 16:18:08,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42325.3, 300 sec: 41876.5). Total num frames: 621723648. Throughput: 0: 42454.6. Samples: 503881520. Policy #0 lag: (min: 1.0, avg: 20.1, max: 41.0) +[2024-03-29 16:18:08,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 16:18:12,096][00497] Updated weights for policy 0, policy_version 37954 (0.0018) +[2024-03-29 16:18:13,839][00126] Fps is (10 sec: 42597.9, 60 sec: 42325.2, 300 sec: 41876.4). Total num frames: 621903872. Throughput: 0: 42235.0. Samples: 504141040. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 16:18:13,840][00126] Avg episode reward: [(0, '0.474')] +[2024-03-29 16:18:15,907][00497] Updated weights for policy 0, policy_version 37964 (0.0027) +[2024-03-29 16:18:18,839][00126] Fps is (10 sec: 37683.4, 60 sec: 42052.3, 300 sec: 41765.3). Total num frames: 622100480. Throughput: 0: 41934.2. Samples: 504261600. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 16:18:18,840][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:18:20,161][00497] Updated weights for policy 0, policy_version 37974 (0.0022) +[2024-03-29 16:18:23,681][00497] Updated weights for policy 0, policy_version 37985 (0.0028) +[2024-03-29 16:18:23,839][00126] Fps is (10 sec: 44236.8, 60 sec: 42325.2, 300 sec: 41876.4). Total num frames: 622346240. Throughput: 0: 42119.0. Samples: 504511080. 
Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 16:18:23,840][00126] Avg episode reward: [(0, '0.501')] +[2024-03-29 16:18:28,194][00497] Updated weights for policy 0, policy_version 37995 (0.0018) +[2024-03-29 16:18:28,843][00126] Fps is (10 sec: 42581.7, 60 sec: 42049.5, 300 sec: 41875.8). Total num frames: 622526464. Throughput: 0: 41736.5. Samples: 504761800. Policy #0 lag: (min: 0.0, avg: 22.3, max: 40.0) +[2024-03-29 16:18:28,844][00126] Avg episode reward: [(0, '0.531')] +[2024-03-29 16:18:32,064][00497] Updated weights for policy 0, policy_version 38005 (0.0017) +[2024-03-29 16:18:33,839][00126] Fps is (10 sec: 39321.9, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 622739456. Throughput: 0: 41715.9. Samples: 504893640. Policy #0 lag: (min: 0.0, avg: 19.7, max: 41.0) +[2024-03-29 16:18:33,842][00126] Avg episode reward: [(0, '0.469')] +[2024-03-29 16:18:36,115][00497] Updated weights for policy 0, policy_version 38015 (0.0018) +[2024-03-29 16:18:38,839][00126] Fps is (10 sec: 44253.6, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 622968832. Throughput: 0: 42126.6. Samples: 505146620. Policy #0 lag: (min: 0.0, avg: 19.7, max: 41.0) +[2024-03-29 16:18:38,840][00126] Avg episode reward: [(0, '0.502')] +[2024-03-29 16:18:39,238][00497] Updated weights for policy 0, policy_version 38025 (0.0025) +[2024-03-29 16:18:41,881][00476] Signal inference workers to stop experience collection... (18000 times) +[2024-03-29 16:18:41,882][00476] Signal inference workers to resume experience collection... (18000 times) +[2024-03-29 16:18:41,918][00497] InferenceWorker_p0-w0: stopping experience collection (18000 times) +[2024-03-29 16:18:41,918][00497] InferenceWorker_p0-w0: resuming experience collection (18000 times) +[2024-03-29 16:18:43,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41779.3, 300 sec: 41876.4). Total num frames: 623149056. Throughput: 0: 41569.8. Samples: 505385680. 
Policy #0 lag: (min: 0.0, avg: 19.7, max: 41.0) +[2024-03-29 16:18:43,840][00126] Avg episode reward: [(0, '0.536')] +[2024-03-29 16:18:43,911][00497] Updated weights for policy 0, policy_version 38035 (0.0018) +[2024-03-29 16:18:47,977][00497] Updated weights for policy 0, policy_version 38045 (0.0020) +[2024-03-29 16:18:48,848][00126] Fps is (10 sec: 39286.0, 60 sec: 42045.8, 300 sec: 41875.1). Total num frames: 623362048. Throughput: 0: 41592.4. Samples: 505512940. Policy #0 lag: (min: 0.0, avg: 19.7, max: 41.0) +[2024-03-29 16:18:48,849][00126] Avg episode reward: [(0, '0.432')] +[2024-03-29 16:18:51,943][00497] Updated weights for policy 0, policy_version 38055 (0.0022) +[2024-03-29 16:18:53,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 623591424. Throughput: 0: 41972.9. Samples: 505770300. Policy #0 lag: (min: 0.0, avg: 19.7, max: 41.0) +[2024-03-29 16:18:53,840][00126] Avg episode reward: [(0, '0.510')] +[2024-03-29 16:18:55,097][00497] Updated weights for policy 0, policy_version 38065 (0.0026) +[2024-03-29 16:18:58,839][00126] Fps is (10 sec: 42637.5, 60 sec: 42052.2, 300 sec: 41876.4). Total num frames: 623788032. Throughput: 0: 41580.1. Samples: 506012140. Policy #0 lag: (min: 1.0, avg: 20.8, max: 42.0) +[2024-03-29 16:18:58,840][00126] Avg episode reward: [(0, '0.428')] +[2024-03-29 16:18:59,612][00497] Updated weights for policy 0, policy_version 38075 (0.0021) +[2024-03-29 16:19:03,525][00497] Updated weights for policy 0, policy_version 38085 (0.0030) +[2024-03-29 16:19:03,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42052.3, 300 sec: 41876.4). Total num frames: 624001024. Throughput: 0: 41904.9. Samples: 506147320. Policy #0 lag: (min: 1.0, avg: 20.8, max: 42.0) +[2024-03-29 16:19:03,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 16:19:03,855][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038086_624001024.pth... 
+[2024-03-29 16:19:04,205][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037473_613957632.pth
+[2024-03-29 16:19:07,488][00497] Updated weights for policy 0, policy_version 38095 (0.0032)
+[2024-03-29 16:19:08,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 624197632. Throughput: 0: 41875.2. Samples: 506395460. Policy #0 lag: (min: 1.0, avg: 20.8, max: 42.0)
+[2024-03-29 16:19:08,840][00126] Avg episode reward: [(0, '0.566')]
+[2024-03-29 16:19:10,938][00497] Updated weights for policy 0, policy_version 38105 (0.0025)
+[2024-03-29 16:19:13,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 624410624. Throughput: 0: 41402.2. Samples: 506624740. Policy #0 lag: (min: 1.0, avg: 20.8, max: 42.0)
+[2024-03-29 16:19:13,840][00126] Avg episode reward: [(0, '0.522')]
+[2024-03-29 16:19:15,256][00497] Updated weights for policy 0, policy_version 38115 (0.0018)
+[2024-03-29 16:19:18,849][00126] Fps is (10 sec: 40920.0, 60 sec: 41772.4, 300 sec: 41819.5). Total num frames: 624607232. Throughput: 0: 41399.5. Samples: 506757020. Policy #0 lag: (min: 1.0, avg: 20.8, max: 42.0)
+[2024-03-29 16:19:18,856][00126] Avg episode reward: [(0, '0.454')]
+[2024-03-29 16:19:19,525][00497] Updated weights for policy 0, policy_version 38125 (0.0025)
+[2024-03-29 16:19:21,516][00476] Signal inference workers to stop experience collection... (18050 times)
+[2024-03-29 16:19:21,581][00497] InferenceWorker_p0-w0: stopping experience collection (18050 times)
+[2024-03-29 16:19:21,595][00476] Signal inference workers to resume experience collection... (18050 times)
+[2024-03-29 16:19:21,669][00497] InferenceWorker_p0-w0: resuming experience collection (18050 times)
+[2024-03-29 16:19:23,396][00497] Updated weights for policy 0, policy_version 38135 (0.0017)
+[2024-03-29 16:19:23,839][00126] Fps is (10 sec: 40960.2, 60 sec: 41233.1, 300 sec: 41820.8). Total num frames: 624820224. Throughput: 0: 41439.2. Samples: 507011380. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0)
+[2024-03-29 16:19:23,840][00126] Avg episode reward: [(0, '0.507')]
+[2024-03-29 16:19:26,591][00497] Updated weights for policy 0, policy_version 38145 (0.0032)
+[2024-03-29 16:19:28,839][00126] Fps is (10 sec: 42640.5, 60 sec: 41782.0, 300 sec: 41931.9). Total num frames: 625033216. Throughput: 0: 41507.2. Samples: 507253500. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0)
+[2024-03-29 16:19:28,840][00126] Avg episode reward: [(0, '0.415')]
+[2024-03-29 16:19:31,006][00497] Updated weights for policy 0, policy_version 38155 (0.0024)
+[2024-03-29 16:19:33,839][00126] Fps is (10 sec: 40960.4, 60 sec: 41506.2, 300 sec: 41820.9). Total num frames: 625229824. Throughput: 0: 41591.7. Samples: 507384180. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0)
+[2024-03-29 16:19:33,840][00126] Avg episode reward: [(0, '0.433')]
+[2024-03-29 16:19:35,274][00497] Updated weights for policy 0, policy_version 38165 (0.0032)
+[2024-03-29 16:19:38,839][00126] Fps is (10 sec: 40959.6, 60 sec: 41233.1, 300 sec: 41765.3). Total num frames: 625442816. Throughput: 0: 41428.9. Samples: 507634600. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0)
+[2024-03-29 16:19:38,840][00126] Avg episode reward: [(0, '0.511')]
+[2024-03-29 16:19:38,909][00497] Updated weights for policy 0, policy_version 38175 (0.0022)
+[2024-03-29 16:19:42,501][00497] Updated weights for policy 0, policy_version 38185 (0.0034)
+[2024-03-29 16:19:43,839][00126] Fps is (10 sec: 44236.3, 60 sec: 42052.2, 300 sec: 41931.9). Total num frames: 625672192. Throughput: 0: 41180.4. Samples: 507865260. Policy #0 lag: (min: 0.0, avg: 19.2, max: 41.0)
+[2024-03-29 16:19:43,840][00126] Avg episode reward: [(0, '0.453')]
+[2024-03-29 16:19:46,993][00497] Updated weights for policy 0, policy_version 38195 (0.0025)
+[2024-03-29 16:19:48,839][00126] Fps is (10 sec: 39321.4, 60 sec: 41239.3, 300 sec: 41820.8). Total num frames: 625836032. Throughput: 0: 41049.3. Samples: 507994540. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0)
+[2024-03-29 16:19:48,840][00126] Avg episode reward: [(0, '0.514')]
+[2024-03-29 16:19:51,128][00497] Updated weights for policy 0, policy_version 38205 (0.0017)
+[2024-03-29 16:19:53,839][00126] Fps is (10 sec: 39321.8, 60 sec: 41233.1, 300 sec: 41820.9). Total num frames: 626065408. Throughput: 0: 41349.4. Samples: 508256180. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0)
+[2024-03-29 16:19:53,840][00126] Avg episode reward: [(0, '0.559')]
+[2024-03-29 16:19:54,780][00497] Updated weights for policy 0, policy_version 38215 (0.0026)
+[2024-03-29 16:19:57,531][00476] Signal inference workers to stop experience collection... (18100 times)
+[2024-03-29 16:19:57,593][00497] InferenceWorker_p0-w0: stopping experience collection (18100 times)
+[2024-03-29 16:19:57,694][00476] Signal inference workers to resume experience collection... (18100 times)
+[2024-03-29 16:19:57,695][00497] InferenceWorker_p0-w0: resuming experience collection (18100 times)
+[2024-03-29 16:19:58,301][00497] Updated weights for policy 0, policy_version 38225 (0.0024)
+[2024-03-29 16:19:58,839][00126] Fps is (10 sec: 45875.7, 60 sec: 41779.2, 300 sec: 41876.4). Total num frames: 626294784. Throughput: 0: 41673.9. Samples: 508500060. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0)
+[2024-03-29 16:19:58,840][00126] Avg episode reward: [(0, '0.498')]
+[2024-03-29 16:20:02,541][00497] Updated weights for policy 0, policy_version 38235 (0.0027)
+[2024-03-29 16:20:03,839][00126] Fps is (10 sec: 40959.7, 60 sec: 41233.0, 300 sec: 41820.8). Total num frames: 626475008. Throughput: 0: 41655.7. Samples: 508631120. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0)
+[2024-03-29 16:20:03,840][00126] Avg episode reward: [(0, '0.557')]
+[2024-03-29 16:20:06,935][00497] Updated weights for policy 0, policy_version 38245 (0.0026)
+[2024-03-29 16:20:08,839][00126] Fps is (10 sec: 39321.3, 60 sec: 41506.1, 300 sec: 41820.8). Total num frames: 626688000. Throughput: 0: 41638.7. Samples: 508885120. Policy #0 lag: (min: 0.0, avg: 20.6, max: 42.0)
+[2024-03-29 16:20:08,840][00126] Avg episode reward: [(0, '0.515')]
+[2024-03-29 16:20:10,564][00497] Updated weights for policy 0, policy_version 38255 (0.0022)
+[2024-03-29 16:20:13,839][00126] Fps is (10 sec: 42598.6, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 626900992. Throughput: 0: 41333.3. Samples: 509113500. Policy #0 lag: (min: 0.0, avg: 21.4, max: 45.0)
+[2024-03-29 16:20:13,840][00126] Avg episode reward: [(0, '0.483')]
+[2024-03-29 16:20:14,304][00497] Updated weights for policy 0, policy_version 38265 (0.0035)
+[2024-03-29 16:20:18,434][00497] Updated weights for policy 0, policy_version 38275 (0.0021)
+[2024-03-29 16:20:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41786.0, 300 sec: 41931.9). Total num frames: 627113984. Throughput: 0: 41280.4. Samples: 509241800. Policy #0 lag: (min: 0.0, avg: 21.4, max: 45.0)
+[2024-03-29 16:20:18,840][00126] Avg episode reward: [(0, '0.448')]
+[2024-03-29 16:20:22,941][00497] Updated weights for policy 0, policy_version 38285 (0.0019)
+[2024-03-29 16:20:23,839][00126] Fps is (10 sec: 40960.0, 60 sec: 41506.1, 300 sec: 41876.4). Total num frames: 627310592. Throughput: 0: 41562.7. Samples: 509504920. Policy #0 lag: (min: 0.0, avg: 21.4, max: 45.0)
+[2024-03-29 16:20:23,840][00126] Avg episode reward: [(0, '0.448')]
+[2024-03-29 16:20:26,659][00497] Updated weights for policy 0, policy_version 38295 (0.0026)
+[2024-03-29 16:20:28,839][00126] Fps is (10 sec: 40959.8, 60 sec: 41506.0, 300 sec: 41765.3). Total num frames: 627523584. Throughput: 0: 41748.0. Samples: 509743920.
Policy #0 lag: (min: 0.0, avg: 21.4, max: 45.0) +[2024-03-29 16:20:28,840][00126] Avg episode reward: [(0, '0.560')] +[2024-03-29 16:20:29,921][00476] Signal inference workers to stop experience collection... (18150 times) +[2024-03-29 16:20:29,991][00497] InferenceWorker_p0-w0: stopping experience collection (18150 times) +[2024-03-29 16:20:29,996][00476] Signal inference workers to resume experience collection... (18150 times) +[2024-03-29 16:20:30,016][00497] InferenceWorker_p0-w0: resuming experience collection (18150 times) +[2024-03-29 16:20:30,266][00497] Updated weights for policy 0, policy_version 38305 (0.0032) +[2024-03-29 16:20:33,839][00126] Fps is (10 sec: 40959.9, 60 sec: 41506.1, 300 sec: 41820.9). Total num frames: 627720192. Throughput: 0: 41188.5. Samples: 509848020. Policy #0 lag: (min: 0.0, avg: 21.4, max: 45.0) +[2024-03-29 16:20:33,840][00126] Avg episode reward: [(0, '0.490')] +[2024-03-29 16:20:34,547][00497] Updated weights for policy 0, policy_version 38315 (0.0019) +[2024-03-29 16:20:38,839][00126] Fps is (10 sec: 37683.5, 60 sec: 40960.0, 300 sec: 41709.8). Total num frames: 627900416. Throughput: 0: 41381.3. Samples: 510118340. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 16:20:38,840][00126] Avg episode reward: [(0, '0.471')] +[2024-03-29 16:20:38,921][00497] Updated weights for policy 0, policy_version 38325 (0.0021) +[2024-03-29 16:20:42,760][00497] Updated weights for policy 0, policy_version 38335 (0.0032) +[2024-03-29 16:20:43,839][00126] Fps is (10 sec: 39321.7, 60 sec: 40687.0, 300 sec: 41654.2). Total num frames: 628113408. Throughput: 0: 41021.7. Samples: 510346040. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 16:20:43,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 16:20:46,334][00497] Updated weights for policy 0, policy_version 38345 (0.0025) +[2024-03-29 16:20:48,839][00126] Fps is (10 sec: 42598.4, 60 sec: 41506.2, 300 sec: 41765.3). Total num frames: 628326400. 
Throughput: 0: 40636.9. Samples: 510459780. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 16:20:48,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 16:20:51,051][00497] Updated weights for policy 0, policy_version 38355 (0.0022) +[2024-03-29 16:20:53,839][00126] Fps is (10 sec: 37683.0, 60 sec: 40413.8, 300 sec: 41543.2). Total num frames: 628490240. Throughput: 0: 40668.4. Samples: 510715200. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 16:20:53,842][00126] Avg episode reward: [(0, '0.505')] +[2024-03-29 16:20:55,400][00497] Updated weights for policy 0, policy_version 38365 (0.0026) +[2024-03-29 16:20:58,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40413.8, 300 sec: 41487.6). Total num frames: 628719616. Throughput: 0: 40806.2. Samples: 510949780. Policy #0 lag: (min: 0.0, avg: 19.4, max: 42.0) +[2024-03-29 16:20:58,840][00126] Avg episode reward: [(0, '0.533')] +[2024-03-29 16:20:58,849][00497] Updated weights for policy 0, policy_version 38375 (0.0022) +[2024-03-29 16:21:02,626][00497] Updated weights for policy 0, policy_version 38385 (0.0028) +[2024-03-29 16:21:03,839][00126] Fps is (10 sec: 45875.1, 60 sec: 41233.0, 300 sec: 41709.8). Total num frames: 628948992. Throughput: 0: 40782.2. Samples: 511077000. Policy #0 lag: (min: 2.0, avg: 21.3, max: 41.0) +[2024-03-29 16:21:03,840][00126] Avg episode reward: [(0, '0.553')] +[2024-03-29 16:21:03,856][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038388_628948992.pth... +[2024-03-29 16:21:04,180][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000037780_618987520.pth +[2024-03-29 16:21:05,166][00476] Signal inference workers to stop experience collection... (18200 times) +[2024-03-29 16:21:05,167][00476] Signal inference workers to resume experience collection... 
(18200 times) +[2024-03-29 16:21:05,200][00497] InferenceWorker_p0-w0: stopping experience collection (18200 times) +[2024-03-29 16:21:05,201][00497] InferenceWorker_p0-w0: resuming experience collection (18200 times) +[2024-03-29 16:21:07,124][00497] Updated weights for policy 0, policy_version 38395 (0.0023) +[2024-03-29 16:21:08,839][00126] Fps is (10 sec: 39321.5, 60 sec: 40413.9, 300 sec: 41543.2). Total num frames: 629112832. Throughput: 0: 40143.5. Samples: 511311380. Policy #0 lag: (min: 2.0, avg: 21.3, max: 41.0) +[2024-03-29 16:21:08,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 16:21:11,571][00497] Updated weights for policy 0, policy_version 38405 (0.0026) +[2024-03-29 16:21:13,839][00126] Fps is (10 sec: 37683.2, 60 sec: 40413.8, 300 sec: 41487.6). Total num frames: 629325824. Throughput: 0: 40140.4. Samples: 511550240. Policy #0 lag: (min: 2.0, avg: 21.3, max: 41.0) +[2024-03-29 16:21:13,840][00126] Avg episode reward: [(0, '0.501')] +[2024-03-29 16:21:15,256][00497] Updated weights for policy 0, policy_version 38415 (0.0020) +[2024-03-29 16:21:18,709][00497] Updated weights for policy 0, policy_version 38425 (0.0031) +[2024-03-29 16:21:18,839][00126] Fps is (10 sec: 44237.0, 60 sec: 40687.0, 300 sec: 41654.2). Total num frames: 629555200. Throughput: 0: 40569.8. Samples: 511673660. Policy #0 lag: (min: 2.0, avg: 21.3, max: 41.0) +[2024-03-29 16:21:18,840][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 16:21:23,709][00497] Updated weights for policy 0, policy_version 38435 (0.0021) +[2024-03-29 16:21:23,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40140.8, 300 sec: 41487.6). Total num frames: 629719040. Throughput: 0: 39876.4. Samples: 511912780. 
Policy #0 lag: (min: 2.0, avg: 21.3, max: 41.0) +[2024-03-29 16:21:23,840][00126] Avg episode reward: [(0, '0.568')] +[2024-03-29 16:21:27,874][00497] Updated weights for policy 0, policy_version 38445 (0.0021) +[2024-03-29 16:21:28,839][00126] Fps is (10 sec: 37683.3, 60 sec: 40140.9, 300 sec: 41432.1). Total num frames: 629932032. Throughput: 0: 40479.2. Samples: 512167600. Policy #0 lag: (min: 1.0, avg: 17.1, max: 41.0) +[2024-03-29 16:21:28,840][00126] Avg episode reward: [(0, '0.424')] +[2024-03-29 16:21:31,459][00497] Updated weights for policy 0, policy_version 38455 (0.0018) +[2024-03-29 16:21:33,839][00126] Fps is (10 sec: 44237.1, 60 sec: 40687.0, 300 sec: 41598.7). Total num frames: 630161408. Throughput: 0: 40684.9. Samples: 512290600. Policy #0 lag: (min: 1.0, avg: 17.1, max: 41.0) +[2024-03-29 16:21:33,840][00126] Avg episode reward: [(0, '0.513')] +[2024-03-29 16:21:34,703][00497] Updated weights for policy 0, policy_version 38465 (0.0022) +[2024-03-29 16:21:38,839][00126] Fps is (10 sec: 40959.7, 60 sec: 40686.9, 300 sec: 41487.6). Total num frames: 630341632. Throughput: 0: 40373.4. Samples: 512532000. Policy #0 lag: (min: 1.0, avg: 17.1, max: 41.0) +[2024-03-29 16:21:38,840][00126] Avg episode reward: [(0, '0.407')] +[2024-03-29 16:21:39,739][00497] Updated weights for policy 0, policy_version 38475 (0.0020) +[2024-03-29 16:21:43,137][00476] Signal inference workers to stop experience collection... (18250 times) +[2024-03-29 16:21:43,156][00497] InferenceWorker_p0-w0: stopping experience collection (18250 times) +[2024-03-29 16:21:43,324][00476] Signal inference workers to resume experience collection... (18250 times) +[2024-03-29 16:21:43,325][00497] InferenceWorker_p0-w0: resuming experience collection (18250 times) +[2024-03-29 16:21:43,839][00126] Fps is (10 sec: 36044.7, 60 sec: 40140.8, 300 sec: 41321.0). Total num frames: 630521856. Throughput: 0: 41050.7. Samples: 512797060. 
Policy #0 lag: (min: 1.0, avg: 17.1, max: 41.0) +[2024-03-29 16:21:43,848][00126] Avg episode reward: [(0, '0.559')] +[2024-03-29 16:21:43,938][00497] Updated weights for policy 0, policy_version 38485 (0.0026) +[2024-03-29 16:21:47,293][00497] Updated weights for policy 0, policy_version 38495 (0.0022) +[2024-03-29 16:21:48,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40413.9, 300 sec: 41432.1). Total num frames: 630751232. Throughput: 0: 40389.9. Samples: 512894540. Policy #0 lag: (min: 1.0, avg: 17.1, max: 41.0) +[2024-03-29 16:21:48,840][00126] Avg episode reward: [(0, '0.530')] +[2024-03-29 16:21:50,790][00497] Updated weights for policy 0, policy_version 38505 (0.0029) +[2024-03-29 16:21:53,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 630964224. Throughput: 0: 40476.4. Samples: 513132820. Policy #0 lag: (min: 1.0, avg: 23.9, max: 43.0) +[2024-03-29 16:21:53,840][00126] Avg episode reward: [(0, '0.548')] +[2024-03-29 16:21:55,699][00497] Updated weights for policy 0, policy_version 38515 (0.0025) +[2024-03-29 16:21:58,839][00126] Fps is (10 sec: 36044.9, 60 sec: 39867.8, 300 sec: 41154.4). Total num frames: 631111680. Throughput: 0: 41130.8. Samples: 513401120. Policy #0 lag: (min: 1.0, avg: 23.9, max: 43.0) +[2024-03-29 16:21:58,841][00126] Avg episode reward: [(0, '0.453')] +[2024-03-29 16:22:00,176][00497] Updated weights for policy 0, policy_version 38525 (0.0027) +[2024-03-29 16:22:03,287][00497] Updated weights for policy 0, policy_version 38535 (0.0016) +[2024-03-29 16:22:03,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40413.9, 300 sec: 41321.0). Total num frames: 631373824. Throughput: 0: 40688.9. Samples: 513504660. 
Policy #0 lag: (min: 1.0, avg: 23.9, max: 43.0) +[2024-03-29 16:22:03,840][00126] Avg episode reward: [(0, '0.438')] +[2024-03-29 16:22:06,845][00497] Updated weights for policy 0, policy_version 38545 (0.0022) +[2024-03-29 16:22:08,839][00126] Fps is (10 sec: 47513.6, 60 sec: 41233.1, 300 sec: 41432.1). Total num frames: 631586816. Throughput: 0: 40612.1. Samples: 513740320. Policy #0 lag: (min: 1.0, avg: 23.9, max: 43.0) +[2024-03-29 16:22:08,840][00126] Avg episode reward: [(0, '0.465')] +[2024-03-29 16:22:11,859][00497] Updated weights for policy 0, policy_version 38555 (0.0022) +[2024-03-29 16:22:13,839][00126] Fps is (10 sec: 36044.7, 60 sec: 40140.8, 300 sec: 41209.9). Total num frames: 631734272. Throughput: 0: 40800.8. Samples: 514003640. Policy #0 lag: (min: 1.0, avg: 23.9, max: 43.0) +[2024-03-29 16:22:13,840][00126] Avg episode reward: [(0, '0.514')] +[2024-03-29 16:22:16,287][00497] Updated weights for policy 0, policy_version 38565 (0.0028) +[2024-03-29 16:22:16,832][00476] Signal inference workers to stop experience collection... (18300 times) +[2024-03-29 16:22:16,833][00476] Signal inference workers to resume experience collection... (18300 times) +[2024-03-29 16:22:16,873][00497] InferenceWorker_p0-w0: stopping experience collection (18300 times) +[2024-03-29 16:22:16,874][00497] InferenceWorker_p0-w0: resuming experience collection (18300 times) +[2024-03-29 16:22:18,839][00126] Fps is (10 sec: 39321.8, 60 sec: 40413.9, 300 sec: 41265.5). Total num frames: 631980032. Throughput: 0: 41029.4. Samples: 514136920. Policy #0 lag: (min: 1.0, avg: 17.6, max: 42.0) +[2024-03-29 16:22:18,840][00126] Avg episode reward: [(0, '0.484')] +[2024-03-29 16:22:19,696][00497] Updated weights for policy 0, policy_version 38575 (0.0022) +[2024-03-29 16:22:23,243][00497] Updated weights for policy 0, policy_version 38585 (0.0023) +[2024-03-29 16:22:23,839][00126] Fps is (10 sec: 47513.8, 60 sec: 41506.2, 300 sec: 41376.5). Total num frames: 632209408. 
Throughput: 0: 40428.5. Samples: 514351280. Policy #0 lag: (min: 1.0, avg: 17.6, max: 42.0) +[2024-03-29 16:22:23,840][00126] Avg episode reward: [(0, '0.503')] +[2024-03-29 16:22:28,312][00497] Updated weights for policy 0, policy_version 38595 (0.0019) +[2024-03-29 16:22:28,839][00126] Fps is (10 sec: 37683.0, 60 sec: 40413.9, 300 sec: 41154.4). Total num frames: 632356864. Throughput: 0: 40316.9. Samples: 514611320. Policy #0 lag: (min: 1.0, avg: 17.6, max: 42.0) +[2024-03-29 16:22:28,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 16:22:32,470][00497] Updated weights for policy 0, policy_version 38605 (0.0034) +[2024-03-29 16:22:33,839][00126] Fps is (10 sec: 36044.9, 60 sec: 40140.8, 300 sec: 41098.8). Total num frames: 632569856. Throughput: 0: 41313.3. Samples: 514753640. Policy #0 lag: (min: 1.0, avg: 17.6, max: 42.0) +[2024-03-29 16:22:33,840][00126] Avg episode reward: [(0, '0.537')] +[2024-03-29 16:22:35,482][00497] Updated weights for policy 0, policy_version 38615 (0.0024) +[2024-03-29 16:22:38,839][00126] Fps is (10 sec: 44236.7, 60 sec: 40960.0, 300 sec: 41209.9). Total num frames: 632799232. Throughput: 0: 40865.4. Samples: 514971760. Policy #0 lag: (min: 1.0, avg: 17.6, max: 42.0) +[2024-03-29 16:22:38,840][00126] Avg episode reward: [(0, '0.488')] +[2024-03-29 16:22:39,101][00497] Updated weights for policy 0, policy_version 38625 (0.0021) +[2024-03-29 16:22:43,839][00126] Fps is (10 sec: 39321.3, 60 sec: 40686.9, 300 sec: 41098.8). Total num frames: 632963072. Throughput: 0: 39950.1. Samples: 515198880. Policy #0 lag: (min: 0.0, avg: 23.0, max: 40.0) +[2024-03-29 16:22:43,840][00126] Avg episode reward: [(0, '0.413')] +[2024-03-29 16:22:44,364][00497] Updated weights for policy 0, policy_version 38635 (0.0019) +[2024-03-29 16:22:48,839][00126] Fps is (10 sec: 34406.3, 60 sec: 39867.7, 300 sec: 40876.7). Total num frames: 633143296. Throughput: 0: 40807.1. Samples: 515340980. 
Policy #0 lag: (min: 0.0, avg: 23.0, max: 40.0) +[2024-03-29 16:22:48,840][00126] Avg episode reward: [(0, '0.424')] +[2024-03-29 16:22:49,014][00497] Updated weights for policy 0, policy_version 38645 (0.0021) +[2024-03-29 16:22:50,043][00476] Signal inference workers to stop experience collection... (18350 times) +[2024-03-29 16:22:50,074][00497] InferenceWorker_p0-w0: stopping experience collection (18350 times) +[2024-03-29 16:22:50,256][00476] Signal inference workers to resume experience collection... (18350 times) +[2024-03-29 16:22:50,256][00497] InferenceWorker_p0-w0: resuming experience collection (18350 times) +[2024-03-29 16:22:51,898][00497] Updated weights for policy 0, policy_version 38655 (0.0029) +[2024-03-29 16:22:53,839][00126] Fps is (10 sec: 40960.2, 60 sec: 40140.8, 300 sec: 41043.3). Total num frames: 633372672. Throughput: 0: 40212.0. Samples: 515549860. Policy #0 lag: (min: 0.0, avg: 23.0, max: 40.0) +[2024-03-29 16:22:53,840][00126] Avg episode reward: [(0, '0.538')] +[2024-03-29 16:22:55,837][00497] Updated weights for policy 0, policy_version 38665 (0.0023) +[2024-03-29 16:22:58,839][00126] Fps is (10 sec: 42598.3, 60 sec: 40959.9, 300 sec: 40987.8). Total num frames: 633569280. Throughput: 0: 39810.2. Samples: 515795100. Policy #0 lag: (min: 0.0, avg: 23.0, max: 40.0) +[2024-03-29 16:22:58,840][00126] Avg episode reward: [(0, '0.447')] +[2024-03-29 16:23:01,278][00497] Updated weights for policy 0, policy_version 38675 (0.0019) +[2024-03-29 16:23:03,839][00126] Fps is (10 sec: 36044.7, 60 sec: 39321.6, 300 sec: 40710.1). Total num frames: 633733120. Throughput: 0: 40007.4. Samples: 515937260. Policy #0 lag: (min: 0.0, avg: 23.0, max: 40.0) +[2024-03-29 16:23:03,840][00126] Avg episode reward: [(0, '0.515')] +[2024-03-29 16:23:04,124][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038681_633749504.pth... 
+[2024-03-29 16:23:04,441][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038086_624001024.pth +[2024-03-29 16:23:05,511][00497] Updated weights for policy 0, policy_version 38685 (0.0020) +[2024-03-29 16:23:08,839][00126] Fps is (10 sec: 39322.0, 60 sec: 39594.7, 300 sec: 40876.7). Total num frames: 633962496. Throughput: 0: 39853.4. Samples: 516144680. Policy #0 lag: (min: 2.0, avg: 17.6, max: 40.0) +[2024-03-29 16:23:08,840][00126] Avg episode reward: [(0, '0.489')] +[2024-03-29 16:23:08,997][00497] Updated weights for policy 0, policy_version 38695 (0.0021) +[2024-03-29 16:23:12,499][00497] Updated weights for policy 0, policy_version 38705 (0.0024) +[2024-03-29 16:23:13,839][00126] Fps is (10 sec: 45875.5, 60 sec: 40960.0, 300 sec: 40987.8). Total num frames: 634191872. Throughput: 0: 39363.6. Samples: 516382680. Policy #0 lag: (min: 2.0, avg: 17.6, max: 40.0) +[2024-03-29 16:23:13,840][00126] Avg episode reward: [(0, '0.545')] +[2024-03-29 16:23:17,771][00497] Updated weights for policy 0, policy_version 38715 (0.0016) +[2024-03-29 16:23:18,839][00126] Fps is (10 sec: 39321.4, 60 sec: 39594.6, 300 sec: 40710.1). Total num frames: 634355712. Throughput: 0: 39163.5. Samples: 516516000. Policy #0 lag: (min: 2.0, avg: 17.6, max: 40.0) +[2024-03-29 16:23:18,840][00126] Avg episode reward: [(0, '0.515')] +[2024-03-29 16:23:21,727][00497] Updated weights for policy 0, policy_version 38725 (0.0019) +[2024-03-29 16:23:23,839][00126] Fps is (10 sec: 39321.3, 60 sec: 39594.6, 300 sec: 40877.2). Total num frames: 634585088. Throughput: 0: 39975.1. Samples: 516770640. Policy #0 lag: (min: 2.0, avg: 17.6, max: 40.0) +[2024-03-29 16:23:23,840][00126] Avg episode reward: [(0, '0.492')] +[2024-03-29 16:23:24,825][00497] Updated weights for policy 0, policy_version 38735 (0.0021) +[2024-03-29 16:23:27,515][00476] Signal inference workers to stop experience collection... 
(18400 times) +[2024-03-29 16:23:27,544][00497] InferenceWorker_p0-w0: stopping experience collection (18400 times) +[2024-03-29 16:23:27,690][00476] Signal inference workers to resume experience collection... (18400 times) +[2024-03-29 16:23:27,690][00497] InferenceWorker_p0-w0: resuming experience collection (18400 times) +[2024-03-29 16:23:28,565][00497] Updated weights for policy 0, policy_version 38745 (0.0024) +[2024-03-29 16:23:28,839][00126] Fps is (10 sec: 44236.8, 60 sec: 40686.9, 300 sec: 40876.7). Total num frames: 634798080. Throughput: 0: 39871.6. Samples: 516993100. Policy #0 lag: (min: 2.0, avg: 17.6, max: 40.0) +[2024-03-29 16:23:28,840][00126] Avg episode reward: [(0, '0.482')] +[2024-03-29 16:23:33,839][00126] Fps is (10 sec: 36045.0, 60 sec: 39594.6, 300 sec: 40599.0). Total num frames: 634945536. Throughput: 0: 39548.0. Samples: 517120640. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 16:23:33,840][00126] Avg episode reward: [(0, '0.497')] +[2024-03-29 16:23:33,932][00497] Updated weights for policy 0, policy_version 38755 (0.0021) +[2024-03-29 16:23:37,859][00497] Updated weights for policy 0, policy_version 38765 (0.0017) +[2024-03-29 16:23:38,839][00126] Fps is (10 sec: 37683.3, 60 sec: 39594.7, 300 sec: 40765.6). Total num frames: 635174912. Throughput: 0: 40885.8. Samples: 517389720. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 16:23:38,840][00126] Avg episode reward: [(0, '0.446')] +[2024-03-29 16:23:41,203][00497] Updated weights for policy 0, policy_version 38775 (0.0027) +[2024-03-29 16:23:43,839][00126] Fps is (10 sec: 44237.1, 60 sec: 40413.9, 300 sec: 40766.9). Total num frames: 635387904. Throughput: 0: 39991.7. Samples: 517594720. 
Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 16:23:43,840][00126] Avg episode reward: [(0, '0.512')] +[2024-03-29 16:23:44,778][00497] Updated weights for policy 0, policy_version 38785 (0.0029) +[2024-03-29 16:23:48,839][00126] Fps is (10 sec: 39321.6, 60 sec: 40413.9, 300 sec: 40599.0). Total num frames: 635568128. Throughput: 0: 39694.3. Samples: 517723500. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 16:23:48,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 16:23:50,241][00497] Updated weights for policy 0, policy_version 38795 (0.0022) +[2024-03-29 16:23:53,839][00126] Fps is (10 sec: 37682.9, 60 sec: 39867.7, 300 sec: 40599.0). Total num frames: 635764736. Throughput: 0: 41127.0. Samples: 517995400. Policy #0 lag: (min: 0.0, avg: 22.9, max: 41.0) +[2024-03-29 16:23:53,840][00126] Avg episode reward: [(0, '0.572')] +[2024-03-29 16:23:53,987][00497] Updated weights for policy 0, policy_version 38805 (0.0033) +[2024-03-29 16:23:57,196][00497] Updated weights for policy 0, policy_version 38815 (0.0021) +[2024-03-29 16:23:58,839][00126] Fps is (10 sec: 45875.0, 60 sec: 40960.0, 300 sec: 40765.6). Total num frames: 636026880. Throughput: 0: 40704.4. Samples: 518214380. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 16:23:58,840][00126] Avg episode reward: [(0, '0.543')] +[2024-03-29 16:24:00,648][00497] Updated weights for policy 0, policy_version 38825 (0.0020) +[2024-03-29 16:24:03,839][00126] Fps is (10 sec: 42597.8, 60 sec: 40959.9, 300 sec: 40654.5). Total num frames: 636190720. Throughput: 0: 40431.4. Samples: 518335420. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 16:24:03,841][00126] Avg episode reward: [(0, '0.499')] +[2024-03-29 16:24:06,485][00497] Updated weights for policy 0, policy_version 38835 (0.0018) +[2024-03-29 16:24:06,689][00476] Signal inference workers to stop experience collection... 
(18450 times) +[2024-03-29 16:24:06,757][00497] InferenceWorker_p0-w0: stopping experience collection (18450 times) +[2024-03-29 16:24:06,762][00476] Signal inference workers to resume experience collection... (18450 times) +[2024-03-29 16:24:06,782][00497] InferenceWorker_p0-w0: resuming experience collection (18450 times) +[2024-03-29 16:24:08,839][00126] Fps is (10 sec: 32768.1, 60 sec: 39867.7, 300 sec: 40487.9). Total num frames: 636354560. Throughput: 0: 40778.3. Samples: 518605660. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 16:24:08,840][00126] Avg episode reward: [(0, '0.516')] +[2024-03-29 16:24:10,064][00497] Updated weights for policy 0, policy_version 38845 (0.0039) +[2024-03-29 16:24:13,407][00497] Updated weights for policy 0, policy_version 38855 (0.0036) +[2024-03-29 16:24:13,839][00126] Fps is (10 sec: 42598.9, 60 sec: 40413.8, 300 sec: 40711.4). Total num frames: 636616704. Throughput: 0: 40727.5. Samples: 518825840. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 16:24:13,840][00126] Avg episode reward: [(0, '0.561')] +[2024-03-29 16:24:16,979][00497] Updated weights for policy 0, policy_version 38865 (0.0021) +[2024-03-29 16:24:18,839][00126] Fps is (10 sec: 45875.1, 60 sec: 40960.0, 300 sec: 40654.5). Total num frames: 636813312. Throughput: 0: 40476.4. Samples: 518942080. Policy #0 lag: (min: 1.0, avg: 20.2, max: 42.0) +[2024-03-29 16:24:18,840][00126] Avg episode reward: [(0, '0.480')] +[2024-03-29 16:24:22,665][00497] Updated weights for policy 0, policy_version 38875 (0.0021) +[2024-03-29 16:24:23,839][00126] Fps is (10 sec: 36044.7, 60 sec: 39867.7, 300 sec: 40487.9). Total num frames: 636977152. Throughput: 0: 40219.5. Samples: 519199600. 
Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 16:24:23,840][00126] Avg episode reward: [(0, '0.466')] +[2024-03-29 16:24:26,380][00497] Updated weights for policy 0, policy_version 38885 (0.0018) +[2024-03-29 16:24:28,839][00126] Fps is (10 sec: 39321.9, 60 sec: 40140.8, 300 sec: 40599.0). Total num frames: 637206528. Throughput: 0: 41004.4. Samples: 519439920. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 16:24:28,840][00126] Avg episode reward: [(0, '0.493')] +[2024-03-29 16:24:29,626][00497] Updated weights for policy 0, policy_version 38895 (0.0028) +[2024-03-29 16:24:33,082][00497] Updated weights for policy 0, policy_version 38905 (0.0028) +[2024-03-29 16:24:33,839][00126] Fps is (10 sec: 47513.5, 60 sec: 41779.1, 300 sec: 40710.1). Total num frames: 637452288. Throughput: 0: 40608.3. Samples: 519550880. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 16:24:33,840][00126] Avg episode reward: [(0, '0.495')] +[2024-03-29 16:24:38,352][00497] Updated weights for policy 0, policy_version 38915 (0.0017) +[2024-03-29 16:24:38,839][00126] Fps is (10 sec: 37682.9, 60 sec: 40140.8, 300 sec: 40376.8). Total num frames: 637583360. Throughput: 0: 39865.3. Samples: 519789340. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 16:24:38,840][00126] Avg episode reward: [(0, '0.489')] +[2024-03-29 16:24:42,090][00476] Signal inference workers to stop experience collection... (18500 times) +[2024-03-29 16:24:42,150][00497] InferenceWorker_p0-w0: stopping experience collection (18500 times) +[2024-03-29 16:24:42,162][00476] Signal inference workers to resume experience collection... (18500 times) +[2024-03-29 16:24:42,179][00497] InferenceWorker_p0-w0: resuming experience collection (18500 times) +[2024-03-29 16:24:42,709][00497] Updated weights for policy 0, policy_version 38925 (0.0019) +[2024-03-29 16:24:43,839][00126] Fps is (10 sec: 32768.0, 60 sec: 39867.6, 300 sec: 40487.9). Total num frames: 637779968. 
Throughput: 0: 40671.0. Samples: 520044580. Policy #0 lag: (min: 0.0, avg: 21.3, max: 41.0) +[2024-03-29 16:24:43,840][00126] Avg episode reward: [(0, '0.447')] +[2024-03-29 16:24:46,164][00497] Updated weights for policy 0, policy_version 38935 (0.0024) +[2024-03-29 16:24:48,839][00126] Fps is (10 sec: 44236.9, 60 sec: 40960.0, 300 sec: 40543.5). Total num frames: 638025728. Throughput: 0: 40208.1. Samples: 520144780. Policy #0 lag: (min: 1.0, avg: 22.1, max: 44.0) +[2024-03-29 16:24:48,840][00126] Avg episode reward: [(0, '0.489')] +[2024-03-29 16:24:49,764][00497] Updated weights for policy 0, policy_version 38945 (0.0035) +[2024-03-29 16:24:53,839][00126] Fps is (10 sec: 42598.5, 60 sec: 40686.9, 300 sec: 40376.8). Total num frames: 638205952. Throughput: 0: 39722.6. Samples: 520393180. Policy #0 lag: (min: 1.0, avg: 22.1, max: 44.0) +[2024-03-29 16:24:53,840][00126] Avg episode reward: [(0, '0.581')] +[2024-03-29 16:24:54,990][00497] Updated weights for policy 0, policy_version 38955 (0.0017) +[2024-03-29 16:24:58,839][00126] Fps is (10 sec: 36044.7, 60 sec: 39321.6, 300 sec: 40376.8). Total num frames: 638386176. Throughput: 0: 40553.8. Samples: 520650760. Policy #0 lag: (min: 1.0, avg: 22.1, max: 44.0) +[2024-03-29 16:24:58,840][00126] Avg episode reward: [(0, '0.526')] +[2024-03-29 16:24:59,058][00497] Updated weights for policy 0, policy_version 38965 (0.0023) +[2024-03-29 16:25:02,109][00497] Updated weights for policy 0, policy_version 38975 (0.0025) +[2024-03-29 16:25:03,839][00126] Fps is (10 sec: 44236.9, 60 sec: 40960.1, 300 sec: 40543.5). Total num frames: 638648320. Throughput: 0: 40637.8. Samples: 520770780. Policy #0 lag: (min: 1.0, avg: 22.1, max: 44.0) +[2024-03-29 16:25:03,840][00126] Avg episode reward: [(0, '0.511')] +[2024-03-29 16:25:03,901][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038981_638664704.pth... 
+[2024-03-29 16:25:04,208][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038388_628948992.pth
+[2024-03-29 16:25:05,794][00497] Updated weights for policy 0, policy_version 38985 (0.0025)
+[2024-03-29 16:25:08,839][00126] Fps is (10 sec: 44236.7, 60 sec: 41233.0, 300 sec: 40432.4). Total num frames: 638828544. Throughput: 0: 40233.8. Samples: 521010120. Policy #0 lag: (min: 1.0, avg: 22.1, max: 44.0)
+[2024-03-29 16:25:08,840][00126] Avg episode reward: [(0, '0.460')]
+[2024-03-29 16:25:11,014][00497] Updated weights for policy 0, policy_version 38995 (0.0018)
+[2024-03-29 16:25:13,839][00126] Fps is (10 sec: 36044.6, 60 sec: 39867.7, 300 sec: 40321.3). Total num frames: 639008768. Throughput: 0: 40745.2. Samples: 521273460. Policy #0 lag: (min: 0.0, avg: 19.4, max: 41.0)
+[2024-03-29 16:25:13,840][00126] Avg episode reward: [(0, '0.514')]
+[2024-03-29 16:25:14,827][00497] Updated weights for policy 0, policy_version 39005 (0.0027)
+[2024-03-29 16:25:16,574][00476] Signal inference workers to stop experience collection... (18550 times)
+[2024-03-29 16:25:16,596][00497] InferenceWorker_p0-w0: stopping experience collection (18550 times)
+[2024-03-29 16:25:16,792][00476] Signal inference workers to resume experience collection... (18550 times)
+[2024-03-29 16:25:16,793][00497] InferenceWorker_p0-w0: resuming experience collection (18550 times)
+[2024-03-29 16:25:18,144][00497] Updated weights for policy 0, policy_version 39015 (0.0027)
+[2024-03-29 16:25:18,839][00126] Fps is (10 sec: 42598.5, 60 sec: 40686.9, 300 sec: 40487.9). Total num frames: 639254528. Throughput: 0: 40947.2. Samples: 521393500. Policy #0 lag: (min: 0.0, avg: 19.4, max: 41.0)
+[2024-03-29 16:25:18,842][00126] Avg episode reward: [(0, '0.510')]
+[2024-03-29 16:25:21,764][00497] Updated weights for policy 0, policy_version 39025 (0.0021)
+[2024-03-29 16:25:23,839][00126] Fps is (10 sec: 44237.0, 60 sec: 41233.1, 300 sec: 40432.4). Total num frames: 639451136. Throughput: 0: 40745.3. Samples: 521622880. Policy #0 lag: (min: 0.0, avg: 19.4, max: 41.0)
+[2024-03-29 16:25:23,840][00126] Avg episode reward: [(0, '0.487')]
+[2024-03-29 16:25:26,866][00497] Updated weights for policy 0, policy_version 39035 (0.0029)
+[2024-03-29 16:25:28,839][00126] Fps is (10 sec: 36044.7, 60 sec: 40140.7, 300 sec: 40321.3). Total num frames: 639614976. Throughput: 0: 41228.9. Samples: 521899880. Policy #0 lag: (min: 0.0, avg: 19.4, max: 41.0)
+[2024-03-29 16:25:28,840][00126] Avg episode reward: [(0, '0.555')]
+[2024-03-29 16:25:30,595][00497] Updated weights for policy 0, policy_version 39045 (0.0023)
+[2024-03-29 16:25:33,788][00497] Updated weights for policy 0, policy_version 39055 (0.0030)
+[2024-03-29 16:25:33,839][00126] Fps is (10 sec: 42598.5, 60 sec: 40413.9, 300 sec: 40599.0). Total num frames: 639877120. Throughput: 0: 41627.1. Samples: 522018000. Policy #0 lag: (min: 0.0, avg: 19.4, max: 41.0)
+[2024-03-29 16:25:33,840][00126] Avg episode reward: [(0, '0.516')]
+[2024-03-29 16:25:37,208][00497] Updated weights for policy 0, policy_version 39065 (0.0028)
+[2024-03-29 16:25:38,839][00126] Fps is (10 sec: 45875.2, 60 sec: 41506.1, 300 sec: 40543.5). Total num frames: 640073728. Throughput: 0: 41389.3. Samples: 522255700. Policy #0 lag: (min: 1.0, avg: 23.2, max: 41.0)
+[2024-03-29 16:25:38,840][00126] Avg episode reward: [(0, '0.581')]
+[2024-03-29 16:25:42,332][00497] Updated weights for policy 0, policy_version 39075 (0.0028)
+[2024-03-29 16:25:43,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41233.1, 300 sec: 40432.4). Total num frames: 640253952. Throughput: 0: 41731.1. Samples: 522528660. Policy #0 lag: (min: 1.0, avg: 23.2, max: 41.0)
+[2024-03-29 16:25:43,840][00126] Avg episode reward: [(0, '0.562')]
+[2024-03-29 16:25:46,266][00497] Updated weights for policy 0, policy_version 39085 (0.0027)
+[2024-03-29 16:25:47,945][00476] Signal inference workers to stop experience collection... (18600 times)
+[2024-03-29 16:25:48,022][00497] InferenceWorker_p0-w0: stopping experience collection (18600 times)
+[2024-03-29 16:25:48,110][00476] Signal inference workers to resume experience collection... (18600 times)
+[2024-03-29 16:25:48,110][00497] InferenceWorker_p0-w0: resuming experience collection (18600 times)
+[2024-03-29 16:25:48,839][00126] Fps is (10 sec: 42599.1, 60 sec: 41233.2, 300 sec: 40710.1). Total num frames: 640499712. Throughput: 0: 41785.9. Samples: 522651140. Policy #0 lag: (min: 1.0, avg: 23.2, max: 41.0)
+[2024-03-29 16:25:48,840][00126] Avg episode reward: [(0, '0.575')]
+[2024-03-29 16:25:49,306][00497] Updated weights for policy 0, policy_version 39095 (0.0030)
+[2024-03-29 16:25:52,870][00497] Updated weights for policy 0, policy_version 39105 (0.0027)
+[2024-03-29 16:25:53,839][00126] Fps is (10 sec: 47513.8, 60 sec: 42052.3, 300 sec: 40710.1). Total num frames: 640729088. Throughput: 0: 41589.9. Samples: 522881660. Policy #0 lag: (min: 1.0, avg: 23.2, max: 41.0)
+[2024-03-29 16:25:53,840][00126] Avg episode reward: [(0, '0.521')]
+[2024-03-29 16:25:57,890][00497] Updated weights for policy 0, policy_version 39115 (0.0035)
+[2024-03-29 16:25:58,839][00126] Fps is (10 sec: 37682.3, 60 sec: 41506.1, 300 sec: 40432.4). Total num frames: 640876544. Throughput: 0: 41705.8. Samples: 523150220. Policy #0 lag: (min: 1.0, avg: 23.2, max: 41.0)
+[2024-03-29 16:25:58,840][00126] Avg episode reward: [(0, '0.461')]
+[2024-03-29 16:26:01,945][00497] Updated weights for policy 0, policy_version 39125 (0.0027)
+[2024-03-29 16:26:03,839][00126] Fps is (10 sec: 37683.1, 60 sec: 40960.0, 300 sec: 40654.5). Total num frames: 641105920. Throughput: 0: 41913.4. Samples: 523279600. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 16:26:03,840][00126] Avg episode reward: [(0, '0.426')]
+[2024-03-29 16:26:05,069][00497] Updated weights for policy 0, policy_version 39135 (0.0023)
+[2024-03-29 16:26:08,680][00497] Updated weights for policy 0, policy_version 39145 (0.0027)
+[2024-03-29 16:26:08,839][00126] Fps is (10 sec: 47514.4, 60 sec: 42052.3, 300 sec: 40765.6). Total num frames: 641351680. Throughput: 0: 41595.2. Samples: 523494660. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 16:26:08,840][00126] Avg episode reward: [(0, '0.523')]
+[2024-03-29 16:26:13,802][00497] Updated weights for policy 0, policy_version 39155 (0.0028)
+[2024-03-29 16:26:13,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.3, 300 sec: 40543.5). Total num frames: 641515520. Throughput: 0: 41533.4. Samples: 523768880. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 16:26:13,840][00126] Avg episode reward: [(0, '0.512')]
+[2024-03-29 16:26:17,674][00497] Updated weights for policy 0, policy_version 39165 (0.0019)
+[2024-03-29 16:26:18,839][00126] Fps is (10 sec: 37682.8, 60 sec: 41233.1, 300 sec: 40710.1). Total num frames: 641728512. Throughput: 0: 41946.2. Samples: 523905580. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 16:26:18,840][00126] Avg episode reward: [(0, '0.446')]
+[2024-03-29 16:26:19,505][00476] Signal inference workers to stop experience collection... (18650 times)
+[2024-03-29 16:26:19,542][00497] InferenceWorker_p0-w0: stopping experience collection (18650 times)
+[2024-03-29 16:26:19,736][00476] Signal inference workers to resume experience collection... (18650 times)
+[2024-03-29 16:26:19,736][00497] InferenceWorker_p0-w0: resuming experience collection (18650 times)
+[2024-03-29 16:26:20,843][00497] Updated weights for policy 0, policy_version 39175 (0.0030)
+[2024-03-29 16:26:23,839][00126] Fps is (10 sec: 45874.7, 60 sec: 42052.2, 300 sec: 40821.1). Total num frames: 641974272. Throughput: 0: 41315.5. Samples: 524114900. Policy #0 lag: (min: 0.0, avg: 19.6, max: 42.0)
+[2024-03-29 16:26:23,840][00126] Avg episode reward: [(0, '0.501')]
+[2024-03-29 16:26:24,747][00497] Updated weights for policy 0, policy_version 39185 (0.0026)
+[2024-03-29 16:26:28,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42052.3, 300 sec: 40599.0). Total num frames: 642138112. Throughput: 0: 40971.1. Samples: 524372360. Policy #0 lag: (min: 0.0, avg: 21.6, max: 40.0)
+[2024-03-29 16:26:28,840][00126] Avg episode reward: [(0, '0.461')]
+[2024-03-29 16:26:29,748][00497] Updated weights for policy 0, policy_version 39195 (0.0025)
+[2024-03-29 16:26:33,839][00126] Fps is (10 sec: 34406.4, 60 sec: 40686.9, 300 sec: 40599.0). Total num frames: 642318336. Throughput: 0: 41447.8. Samples: 524516300. Policy #0 lag: (min: 0.0, avg: 21.6, max: 40.0)
+[2024-03-29 16:26:33,842][00126] Avg episode reward: [(0, '0.419')]
+[2024-03-29 16:26:33,920][00497] Updated weights for policy 0, policy_version 39205 (0.0023)
+[2024-03-29 16:26:37,012][00497] Updated weights for policy 0, policy_version 39215 (0.0025)
+[2024-03-29 16:26:38,839][00126] Fps is (10 sec: 44237.1, 60 sec: 41779.3, 300 sec: 40876.7). Total num frames: 642580480. Throughput: 0: 41189.4. Samples: 524735180. Policy #0 lag: (min: 0.0, avg: 21.6, max: 40.0)
+[2024-03-29 16:26:38,840][00126] Avg episode reward: [(0, '0.527')]
+[2024-03-29 16:26:40,523][00497] Updated weights for policy 0, policy_version 39225 (0.0020)
+[2024-03-29 16:26:43,839][00126] Fps is (10 sec: 45875.6, 60 sec: 42052.3, 300 sec: 40765.6). Total num frames: 642777088. Throughput: 0: 40876.5. Samples: 524989660. Policy #0 lag: (min: 0.0, avg: 21.6, max: 40.0)
+[2024-03-29 16:26:43,840][00126] Avg episode reward: [(0, '0.463')]
+[2024-03-29 16:26:45,628][00497] Updated weights for policy 0, policy_version 39235 (0.0026)
+[2024-03-29 16:26:48,839][00126] Fps is (10 sec: 36044.6, 60 sec: 40686.8, 300 sec: 40599.0). Total num frames: 642940928. Throughput: 0: 41075.5. Samples: 525128000. Policy #0 lag: (min: 0.0, avg: 21.6, max: 40.0)
+[2024-03-29 16:26:48,840][00126] Avg episode reward: [(0, '0.546')]
+[2024-03-29 16:26:49,699][00497] Updated weights for policy 0, policy_version 39245 (0.0033)
+[2024-03-29 16:26:51,176][00476] Signal inference workers to stop experience collection... (18700 times)
+[2024-03-29 16:26:51,209][00497] InferenceWorker_p0-w0: stopping experience collection (18700 times)
+[2024-03-29 16:26:51,379][00476] Signal inference workers to resume experience collection... (18700 times)
+[2024-03-29 16:26:51,379][00497] InferenceWorker_p0-w0: resuming experience collection (18700 times)
+[2024-03-29 16:26:52,841][00497] Updated weights for policy 0, policy_version 39255 (0.0025)
+[2024-03-29 16:26:53,839][00126] Fps is (10 sec: 40960.3, 60 sec: 40960.0, 300 sec: 40932.2). Total num frames: 643186688. Throughput: 0: 41623.1. Samples: 525367700. Policy #0 lag: (min: 2.0, avg: 20.3, max: 43.0)
+[2024-03-29 16:26:53,840][00126] Avg episode reward: [(0, '0.510')]
+[2024-03-29 16:26:56,483][00497] Updated weights for policy 0, policy_version 39265 (0.0031)
+[2024-03-29 16:26:58,839][00126] Fps is (10 sec: 44236.5, 60 sec: 41779.2, 300 sec: 40710.1). Total num frames: 643383296. Throughput: 0: 40871.9. Samples: 525608120. Policy #0 lag: (min: 2.0, avg: 20.3, max: 43.0)
+[2024-03-29 16:26:58,840][00126] Avg episode reward: [(0, '0.606')]
+[2024-03-29 16:27:01,489][00497] Updated weights for policy 0, policy_version 39275 (0.0028)
+[2024-03-29 16:27:03,839][00126] Fps is (10 sec: 37683.0, 60 sec: 40960.0, 300 sec: 40599.0). Total num frames: 643563520. Throughput: 0: 40902.7. Samples: 525746200. Policy #0 lag: (min: 2.0, avg: 20.3, max: 43.0)
+[2024-03-29 16:27:03,840][00126] Avg episode reward: [(0, '0.573')]
+[2024-03-29 16:27:03,859][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000039280_643563520.pth...
+[2024-03-29 16:27:04,163][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038681_633749504.pth
+[2024-03-29 16:27:05,610][00497] Updated weights for policy 0, policy_version 39285 (0.0026)
+[2024-03-29 16:27:08,720][00497] Updated weights for policy 0, policy_version 39295 (0.0024)
+[2024-03-29 16:27:08,839][00126] Fps is (10 sec: 42598.3, 60 sec: 40959.9, 300 sec: 40932.2). Total num frames: 643809280. Throughput: 0: 41884.9. Samples: 525999720. Policy #0 lag: (min: 2.0, avg: 20.3, max: 43.0)
+[2024-03-29 16:27:08,840][00126] Avg episode reward: [(0, '0.548')]
+[2024-03-29 16:27:12,308][00497] Updated weights for policy 0, policy_version 39305 (0.0024)
+[2024-03-29 16:27:13,839][00126] Fps is (10 sec: 44236.9, 60 sec: 41506.1, 300 sec: 40765.6). Total num frames: 644005888. Throughput: 0: 41124.1. Samples: 526222940. Policy #0 lag: (min: 2.0, avg: 20.3, max: 43.0)
+[2024-03-29 16:27:13,840][00126] Avg episode reward: [(0, '0.507')]
+[2024-03-29 16:27:17,058][00497] Updated weights for policy 0, policy_version 39315 (0.0025)
+[2024-03-29 16:27:18,839][00126] Fps is (10 sec: 37683.4, 60 sec: 40960.0, 300 sec: 40599.0). Total num frames: 644186112. Throughput: 0: 41047.1. Samples: 526363420. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0)
+[2024-03-29 16:27:18,840][00126] Avg episode reward: [(0, '0.487')]
+[2024-03-29 16:27:21,159][00497] Updated weights for policy 0, policy_version 39325 (0.0022)
+[2024-03-29 16:27:22,749][00476] Signal inference workers to stop experience collection... (18750 times)
+[2024-03-29 16:27:22,771][00497] InferenceWorker_p0-w0: stopping experience collection (18750 times)
+[2024-03-29 16:27:22,966][00476] Signal inference workers to resume experience collection... (18750 times)
+[2024-03-29 16:27:22,966][00497] InferenceWorker_p0-w0: resuming experience collection (18750 times)
+[2024-03-29 16:27:23,839][00126] Fps is (10 sec: 44237.2, 60 sec: 41233.2, 300 sec: 40987.8). Total num frames: 644448256. Throughput: 0: 42225.8. Samples: 526635340. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0)
+[2024-03-29 16:27:23,840][00126] Avg episode reward: [(0, '0.576')]
+[2024-03-29 16:27:24,114][00497] Updated weights for policy 0, policy_version 39335 (0.0020)
+[2024-03-29 16:27:27,804][00497] Updated weights for policy 0, policy_version 39345 (0.0029)
+[2024-03-29 16:27:28,839][00126] Fps is (10 sec: 47513.9, 60 sec: 42052.3, 300 sec: 40987.8). Total num frames: 644661248. Throughput: 0: 41383.1. Samples: 526851900. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0)
+[2024-03-29 16:27:28,840][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:27:32,616][00497] Updated weights for policy 0, policy_version 39355 (0.0019)
+[2024-03-29 16:27:33,839][00126] Fps is (10 sec: 37682.9, 60 sec: 41779.3, 300 sec: 40765.6). Total num frames: 644825088. Throughput: 0: 41445.8. Samples: 526993060. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0)
+[2024-03-29 16:27:33,840][00126] Avg episode reward: [(0, '0.423')]
+[2024-03-29 16:27:36,926][00497] Updated weights for policy 0, policy_version 39365 (0.0022)
+[2024-03-29 16:27:38,839][00126] Fps is (10 sec: 39321.7, 60 sec: 41233.1, 300 sec: 40987.8). Total num frames: 645054464. Throughput: 0: 41953.7. Samples: 527255620. Policy #0 lag: (min: 1.0, avg: 19.3, max: 41.0)
+[2024-03-29 16:27:38,840][00126] Avg episode reward: [(0, '0.460')]
+[2024-03-29 16:27:39,959][00497] Updated weights for policy 0, policy_version 39375 (0.0035)
+[2024-03-29 16:27:43,327][00497] Updated weights for policy 0, policy_version 39385 (0.0028)
+[2024-03-29 16:27:43,839][00126] Fps is (10 sec: 45874.4, 60 sec: 41779.1, 300 sec: 41154.4). Total num frames: 645283840. Throughput: 0: 41572.8. Samples: 527478900. Policy #0 lag: (min: 0.0, avg: 24.0, max: 41.0)
+[2024-03-29 16:27:43,840][00126] Avg episode reward: [(0, '0.500')]
+[2024-03-29 16:27:48,182][00497] Updated weights for policy 0, policy_version 39395 (0.0024)
+[2024-03-29 16:27:48,839][00126] Fps is (10 sec: 39321.5, 60 sec: 41779.2, 300 sec: 40932.2). Total num frames: 645447680. Throughput: 0: 41633.3. Samples: 527619700. Policy #0 lag: (min: 0.0, avg: 24.0, max: 41.0)
+[2024-03-29 16:27:48,840][00126] Avg episode reward: [(0, '0.524')]
+[2024-03-29 16:27:52,559][00497] Updated weights for policy 0, policy_version 39405 (0.0023)
+[2024-03-29 16:27:53,704][00476] Signal inference workers to stop experience collection... (18800 times)
+[2024-03-29 16:27:53,704][00476] Signal inference workers to resume experience collection... (18800 times)
+[2024-03-29 16:27:53,727][00497] InferenceWorker_p0-w0: stopping experience collection (18800 times)
+[2024-03-29 16:27:53,728][00497] InferenceWorker_p0-w0: resuming experience collection (18800 times)
+[2024-03-29 16:27:53,839][00126] Fps is (10 sec: 39322.2, 60 sec: 41506.1, 300 sec: 41043.3). Total num frames: 645677056. Throughput: 0: 42088.1. Samples: 527893680. Policy #0 lag: (min: 0.0, avg: 24.0, max: 41.0)
+[2024-03-29 16:27:53,840][00126] Avg episode reward: [(0, '0.515')]
+[2024-03-29 16:27:55,707][00497] Updated weights for policy 0, policy_version 39415 (0.0021)
+[2024-03-29 16:27:58,839][00126] Fps is (10 sec: 47513.7, 60 sec: 42325.4, 300 sec: 41321.0). Total num frames: 645922816. Throughput: 0: 41797.3. Samples: 528103820. Policy #0 lag: (min: 0.0, avg: 24.0, max: 41.0)
+[2024-03-29 16:27:58,840][00126] Avg episode reward: [(0, '0.480')]
+[2024-03-29 16:27:59,021][00497] Updated weights for policy 0, policy_version 39425 (0.0022)
+[2024-03-29 16:28:03,786][00497] Updated weights for policy 0, policy_version 39435 (0.0030)
+[2024-03-29 16:28:03,839][00126] Fps is (10 sec: 42598.3, 60 sec: 42325.3, 300 sec: 41154.4). Total num frames: 646103040. Throughput: 0: 41879.2. Samples: 528247980. Policy #0 lag: (min: 0.0, avg: 24.0, max: 41.0)
+[2024-03-29 16:28:03,840][00126] Avg episode reward: [(0, '0.451')]
+[2024-03-29 16:28:08,186][00497] Updated weights for policy 0, policy_version 39445 (0.0024)
+[2024-03-29 16:28:08,839][00126] Fps is (10 sec: 37683.2, 60 sec: 41506.2, 300 sec: 41043.3). Total num frames: 646299648. Throughput: 0: 41947.9. Samples: 528523000. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0)
+[2024-03-29 16:28:08,840][00126] Avg episode reward: [(0, '0.533')]
+[2024-03-29 16:28:11,289][00497] Updated weights for policy 0, policy_version 39455 (0.0028)
+[2024-03-29 16:28:13,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.3, 300 sec: 41376.5). Total num frames: 646561792. Throughput: 0: 41871.9. Samples: 528736140. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0)
+[2024-03-29 16:28:13,842][00126] Avg episode reward: [(0, '0.511')]
+[2024-03-29 16:28:14,815][00497] Updated weights for policy 0, policy_version 39465 (0.0019)
+[2024-03-29 16:28:18,839][00126] Fps is (10 sec: 42598.4, 60 sec: 42325.4, 300 sec: 41154.4). Total num frames: 646725632. Throughput: 0: 41689.8. Samples: 528869100. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0)
+[2024-03-29 16:28:18,840][00126] Avg episode reward: [(0, '0.430')]
+[2024-03-29 16:28:19,413][00497] Updated weights for policy 0, policy_version 39475 (0.0017)
+[2024-03-29 16:28:23,752][00497] Updated weights for policy 0, policy_version 39485 (0.0019)
+[2024-03-29 16:28:23,839][00126] Fps is (10 sec: 36045.1, 60 sec: 41233.0, 300 sec: 41098.9). Total num frames: 646922240. Throughput: 0: 42172.0. Samples: 529153360. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0)
+[2024-03-29 16:28:23,840][00126] Avg episode reward: [(0, '0.529')]
+[2024-03-29 16:28:24,435][00476] Signal inference workers to stop experience collection... (18850 times)
+[2024-03-29 16:28:24,454][00497] InferenceWorker_p0-w0: stopping experience collection (18850 times)
+[2024-03-29 16:28:24,646][00476] Signal inference workers to resume experience collection... (18850 times)
+[2024-03-29 16:28:24,646][00497] InferenceWorker_p0-w0: resuming experience collection (18850 times)
+[2024-03-29 16:28:26,920][00497] Updated weights for policy 0, policy_version 39495 (0.0032)
+[2024-03-29 16:28:28,839][00126] Fps is (10 sec: 45875.7, 60 sec: 42052.4, 300 sec: 41487.6). Total num frames: 647184384. Throughput: 0: 42165.2. Samples: 529376320. Policy #0 lag: (min: 0.0, avg: 18.1, max: 41.0)
+[2024-03-29 16:28:28,840][00126] Avg episode reward: [(0, '0.542')]
+[2024-03-29 16:28:30,294][00497] Updated weights for policy 0, policy_version 39505 (0.0035)
+[2024-03-29 16:28:33,839][00126] Fps is (10 sec: 44236.4, 60 sec: 42325.3, 300 sec: 41321.0). Total num frames: 647364608. Throughput: 0: 41752.4. Samples: 529498560. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0)
+[2024-03-29 16:28:33,840][00126] Avg episode reward: [(0, '0.550')]
+[2024-03-29 16:28:34,943][00497] Updated weights for policy 0, policy_version 39515 (0.0022)
+[2024-03-29 16:28:38,839][00126] Fps is (10 sec: 36044.2, 60 sec: 41506.1, 300 sec: 41209.9). Total num frames: 647544832. Throughput: 0: 41988.9. Samples: 529783180. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0)
+[2024-03-29 16:28:38,840][00126] Avg episode reward: [(0, '0.453')]
+[2024-03-29 16:28:39,471][00497] Updated weights for policy 0, policy_version 39525 (0.0030)
+[2024-03-29 16:28:42,674][00497] Updated weights for policy 0, policy_version 39535 (0.0028)
+[2024-03-29 16:28:43,839][00126] Fps is (10 sec: 42598.8, 60 sec: 41779.3, 300 sec: 41432.1). Total num frames: 647790592. Throughput: 0: 42254.2. Samples: 530005260. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0)
+[2024-03-29 16:28:43,840][00126] Avg episode reward: [(0, '0.391')]
+[2024-03-29 16:28:45,845][00497] Updated weights for policy 0, policy_version 39545 (0.0024)
+[2024-03-29 16:28:48,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42325.4, 300 sec: 41432.1). Total num frames: 647987200. Throughput: 0: 41667.6. Samples: 530123020. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0)
+[2024-03-29 16:28:48,840][00126] Avg episode reward: [(0, '0.550')]
+[2024-03-29 16:28:50,497][00497] Updated weights for policy 0, policy_version 39555 (0.0026)
+[2024-03-29 16:28:53,839][00126] Fps is (10 sec: 39321.6, 60 sec: 41779.2, 300 sec: 41209.9). Total num frames: 648183808. Throughput: 0: 41773.3. Samples: 530402800. Policy #0 lag: (min: 0.0, avg: 20.7, max: 41.0)
+[2024-03-29 16:28:53,840][00126] Avg episode reward: [(0, '0.494')]
+[2024-03-29 16:28:54,991][00497] Updated weights for policy 0, policy_version 39565 (0.0017)
+[2024-03-29 16:28:56,102][00476] Signal inference workers to stop experience collection... (18900 times)
+[2024-03-29 16:28:56,102][00476] Signal inference workers to resume experience collection... (18900 times)
+[2024-03-29 16:28:56,140][00497] InferenceWorker_p0-w0: stopping experience collection (18900 times)
+[2024-03-29 16:28:56,141][00497] InferenceWorker_p0-w0: resuming experience collection (18900 times)
+[2024-03-29 16:28:58,074][00497] Updated weights for policy 0, policy_version 39575 (0.0025)
+[2024-03-29 16:28:58,839][00126] Fps is (10 sec: 44236.4, 60 sec: 41779.2, 300 sec: 41487.6). Total num frames: 648429568. Throughput: 0: 42488.5. Samples: 530648120. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:28:58,840][00126] Avg episode reward: [(0, '0.483')]
+[2024-03-29 16:29:01,412][00497] Updated weights for policy 0, policy_version 39585 (0.0027)
+[2024-03-29 16:29:03,839][00126] Fps is (10 sec: 45874.3, 60 sec: 42325.2, 300 sec: 41654.2). Total num frames: 648642560. Throughput: 0: 42193.2. Samples: 530767800. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:29:03,840][00126] Avg episode reward: [(0, '0.480')]
+[2024-03-29 16:29:03,860][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000039590_648642560.pth...
+[2024-03-29 16:29:04,163][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000038981_638664704.pth
+[2024-03-29 16:29:06,068][00497] Updated weights for policy 0, policy_version 39595 (0.0019)
+[2024-03-29 16:29:08,839][00126] Fps is (10 sec: 39321.8, 60 sec: 42052.3, 300 sec: 41376.6). Total num frames: 648822784. Throughput: 0: 41970.2. Samples: 531042020. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:29:08,840][00126] Avg episode reward: [(0, '0.477')]
+[2024-03-29 16:29:10,256][00497] Updated weights for policy 0, policy_version 39605 (0.0022)
+[2024-03-29 16:29:13,538][00497] Updated weights for policy 0, policy_version 39615 (0.0029)
+[2024-03-29 16:29:13,839][00126] Fps is (10 sec: 42598.9, 60 sec: 41779.2, 300 sec: 41543.2). Total num frames: 649068544. Throughput: 0: 42230.1. Samples: 531276680. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:29:13,840][00126] Avg episode reward: [(0, '0.451')]
+[2024-03-29 16:29:17,068][00497] Updated weights for policy 0, policy_version 39625 (0.0026)
+[2024-03-29 16:29:18,839][00126] Fps is (10 sec: 45875.0, 60 sec: 42598.4, 300 sec: 41709.8). Total num frames: 649281536. Throughput: 0: 42059.2. Samples: 531391220. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:29:18,840][00126] Avg episode reward: [(0, '0.518')]
+[2024-03-29 16:29:21,683][00497] Updated weights for policy 0, policy_version 39635 (0.0021)
+[2024-03-29 16:29:23,839][00126] Fps is (10 sec: 37683.4, 60 sec: 42052.3, 300 sec: 41487.6). Total num frames: 649445376. Throughput: 0: 41972.9. Samples: 531671960. Policy #0 lag: (min: 1.0, avg: 21.7, max: 42.0)
+[2024-03-29 16:29:23,840][00126] Avg episode reward: [(0, '0.432')]
+[2024-03-29 16:29:25,968][00497] Updated weights for policy 0, policy_version 39645 (0.0027)
+[2024-03-29 16:29:28,839][00126] Fps is (10 sec: 40960.1, 60 sec: 41779.1, 300 sec: 41487.6). Total num frames: 649691136. Throughput: 0: 42532.4. Samples: 531919220. Policy #0 lag: (min: 1.0, avg: 18.6, max: 41.0)
+[2024-03-29 16:29:28,840][00126] Avg episode reward: [(0, '0.526')]
+[2024-03-29 16:29:28,934][00497] Updated weights for policy 0, policy_version 39655 (0.0031)
+[2024-03-29 16:29:28,970][00476] Signal inference workers to stop experience collection... (18950 times)
+[2024-03-29 16:29:29,004][00497] InferenceWorker_p0-w0: stopping experience collection (18950 times)
+[2024-03-29 16:29:29,184][00476] Signal inference workers to resume experience collection... (18950 times)
+[2024-03-29 16:29:29,184][00497] InferenceWorker_p0-w0: resuming experience collection (18950 times)
+[2024-03-29 16:29:32,376][00497] Updated weights for policy 0, policy_version 39665 (0.0018)
+[2024-03-29 16:29:33,840][00126] Fps is (10 sec: 47512.6, 60 sec: 42598.3, 300 sec: 41820.8). Total num frames: 649920512. Throughput: 0: 42513.0. Samples: 532036120. Policy #0 lag: (min: 1.0, avg: 18.6, max: 41.0)
+[2024-03-29 16:29:33,840][00126] Avg episode reward: [(0, '0.564')]
+[2024-03-29 16:29:36,784][00497] Updated weights for policy 0, policy_version 39675 (0.0022)
+[2024-03-29 16:29:38,839][00126] Fps is (10 sec: 39321.9, 60 sec: 42325.4, 300 sec: 41709.8). Total num frames: 650084352. Throughput: 0: 42409.9. Samples: 532311240. Policy #0 lag: (min: 1.0, avg: 18.6, max: 41.0)
+[2024-03-29 16:29:38,840][00126] Avg episode reward: [(0, '0.438')]
+[2024-03-29 16:29:41,203][00497] Updated weights for policy 0, policy_version 39685 (0.0027)
+[2024-03-29 16:29:43,839][00126] Fps is (10 sec: 40960.1, 60 sec: 42325.2, 300 sec: 41709.8). Total num frames: 650330112. Throughput: 0: 42644.7. Samples: 532567140. Policy #0 lag: (min: 1.0, avg: 18.6, max: 41.0)
+[2024-03-29 16:29:43,840][00126] Avg episode reward: [(0, '0.548')]
+[2024-03-29 16:29:44,178][00497] Updated weights for policy 0, policy_version 39695 (0.0019)
+[2024-03-29 16:29:47,504][00497] Updated weights for policy 0, policy_version 39705 (0.0018)
+[2024-03-29 16:29:48,839][00126] Fps is (10 sec: 47512.5, 60 sec: 42871.3, 300 sec: 41876.4). Total num frames: 650559488. Throughput: 0: 42445.8. Samples: 532677860. Policy #0 lag: (min: 1.0, avg: 18.6, max: 41.0)
+[2024-03-29 16:29:48,840][00126] Avg episode reward: [(0, '0.541')]
+[2024-03-29 16:29:51,862][00497] Updated weights for policy 0, policy_version 39715 (0.0022)
+[2024-03-29 16:29:53,839][00126] Fps is (10 sec: 40960.5, 60 sec: 42598.3, 300 sec: 41876.4). Total num frames: 650739712. Throughput: 0: 42435.0. Samples: 532951600. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0)
+[2024-03-29 16:29:53,840][00126] Avg episode reward: [(0, '0.408')]
+[2024-03-29 16:29:56,493][00497] Updated weights for policy 0, policy_version 39725 (0.0018)
+[2024-03-29 16:29:58,839][00126] Fps is (10 sec: 40960.7, 60 sec: 42325.4, 300 sec: 41765.3). Total num frames: 650969088. Throughput: 0: 42935.6. Samples: 533208780. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0)
+[2024-03-29 16:29:58,841][00126] Avg episode reward: [(0, '0.533')]
+[2024-03-29 16:29:59,483][00497] Updated weights for policy 0, policy_version 39735 (0.0021)
+[2024-03-29 16:30:01,184][00476] Signal inference workers to stop experience collection... (19000 times)
+[2024-03-29 16:30:01,252][00497] InferenceWorker_p0-w0: stopping experience collection (19000 times)
+[2024-03-29 16:30:01,259][00476] Signal inference workers to resume experience collection... (19000 times)
+[2024-03-29 16:30:01,278][00497] InferenceWorker_p0-w0: resuming experience collection (19000 times)
+[2024-03-29 16:30:02,919][00497] Updated weights for policy 0, policy_version 39745 (0.0029)
+[2024-03-29 16:30:03,839][00126] Fps is (10 sec: 45875.4, 60 sec: 42598.5, 300 sec: 41931.9). Total num frames: 651198464. Throughput: 0: 42905.8. Samples: 533321980. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0)
+[2024-03-29 16:30:03,840][00126] Avg episode reward: [(0, '0.505')]
+[2024-03-29 16:30:07,401][00497] Updated weights for policy 0, policy_version 39755 (0.0025)
+[2024-03-29 16:30:08,839][00126] Fps is (10 sec: 40959.8, 60 sec: 42598.4, 300 sec: 41931.9). Total num frames: 651378688. Throughput: 0: 42598.2. Samples: 533588880. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0)
+[2024-03-29 16:30:08,840][00126] Avg episode reward: [(0, '0.587')]
+[2024-03-29 16:30:11,912][00497] Updated weights for policy 0, policy_version 39765 (0.0022)
+[2024-03-29 16:30:13,839][00126] Fps is (10 sec: 40960.2, 60 sec: 42325.4, 300 sec: 41876.4). Total num frames: 651608064. Throughput: 0: 42909.8. Samples: 533850160. Policy #0 lag: (min: 1.0, avg: 22.0, max: 42.0)
+[2024-03-29 16:30:13,840][00126] Avg episode reward: [(0, '0.401')]
+[2024-03-29 16:30:14,941][00497] Updated weights for policy 0, policy_version 39775 (0.0030)
+[2024-03-29 16:30:18,703][00497] Updated weights for policy 0, policy_version 39785 (0.0019)
+[2024-03-29 16:30:18,839][00126] Fps is (10 sec: 45874.9, 60 sec: 42598.3, 300 sec: 41987.5). Total num frames: 651837440. Throughput: 0: 42653.0. Samples: 533955500. Policy #0 lag: (min: 1.0, avg: 21.3, max: 43.0)
+[2024-03-29 16:30:18,840][00126] Avg episode reward: [(0, '0.499')]
+[2024-03-29 16:30:23,233][00497] Updated weights for policy 0, policy_version 39795 (0.0025)
+[2024-03-29 16:30:23,839][00126] Fps is (10 sec: 40959.4, 60 sec: 42871.4, 300 sec: 42043.0). Total num frames: 652017664. Throughput: 0: 42311.3. Samples: 534215260. Policy #0 lag: (min: 1.0, avg: 21.3, max: 43.0)
+[2024-03-29 16:30:23,840][00126] Avg episode reward: [(0, '0.539')]
+[2024-03-29 16:30:27,763][00497] Updated weights for policy 0, policy_version 39805 (0.0027)
+[2024-03-29 16:30:28,839][00126] Fps is (10 sec: 37683.6, 60 sec: 42052.3, 300 sec: 41820.9). Total num frames: 652214272. Throughput: 0: 42590.9. Samples: 534483720. Policy #0 lag: (min: 1.0, avg: 21.3, max: 43.0)
+[2024-03-29 16:30:28,840][00126] Avg episode reward: [(0, '0.497')]
+[2024-03-29 16:30:30,632][00497] Updated weights for policy 0, policy_version 39815 (0.0022)
+[2024-03-29 16:30:32,443][00476] Signal inference workers to stop experience collection... (19050 times)
+[2024-03-29 16:30:32,513][00497] InferenceWorker_p0-w0: stopping experience collection (19050 times)
+[2024-03-29 16:30:32,608][00476] Signal inference workers to resume experience collection... (19050 times)
+[2024-03-29 16:30:32,609][00497] InferenceWorker_p0-w0: resuming experience collection (19050 times)
+[2024-03-29 16:30:33,839][00126] Fps is (10 sec: 44237.3, 60 sec: 42325.5, 300 sec: 41987.5). Total num frames: 652460032. Throughput: 0: 42270.8. Samples: 534580040. Policy #0 lag: (min: 1.0, avg: 21.3, max: 43.0)
+[2024-03-29 16:30:33,840][00126] Avg episode reward: [(0, '0.478')]
+[2024-03-29 16:30:34,301][00497] Updated weights for policy 0, policy_version 39825 (0.0020)
+[2024-03-29 16:30:38,803][00497] Updated weights for policy 0, policy_version 39835 (0.0019)
+[2024-03-29 16:30:38,839][00126] Fps is (10 sec: 44236.7, 60 sec: 42871.4, 300 sec: 42043.0). Total num frames: 652656640. Throughput: 0: 42058.3. Samples: 534844220. Policy #0 lag: (min: 1.0, avg: 21.3, max: 43.0)
+[2024-03-29 16:30:38,840][00126] Avg episode reward: [(0, '0.494')]
+[2024-03-29 16:30:43,311][00497] Updated weights for policy 0, policy_version 39845 (0.0026)
+[2024-03-29 16:30:43,839][00126] Fps is (10 sec: 37683.5, 60 sec: 41779.4, 300 sec: 41820.8). Total num frames: 652836864. Throughput: 0: 42357.4. Samples: 535114860. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 16:30:43,840][00126] Avg episode reward: [(0, '0.604')]
+[2024-03-29 16:30:46,389][00497] Updated weights for policy 0, policy_version 39855 (0.0027)
+[2024-03-29 16:30:48,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.5, 300 sec: 41931.9). Total num frames: 653099008. Throughput: 0: 42182.7. Samples: 535220200. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 16:30:48,840][00126] Avg episode reward: [(0, '0.505')]
+[2024-03-29 16:30:49,956][00497] Updated weights for policy 0, policy_version 39865 (0.0020)
+[2024-03-29 16:30:53,839][00126] Fps is (10 sec: 45874.5, 60 sec: 42598.4, 300 sec: 42098.6). Total num frames: 653295616. Throughput: 0: 41719.5. Samples: 535466260. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 16:30:53,840][00126] Avg episode reward: [(0, '0.532')]
+[2024-03-29 16:30:54,463][00497] Updated weights for policy 0, policy_version 39875 (0.0018)
+[2024-03-29 16:30:58,806][00497] Updated weights for policy 0, policy_version 39885 (0.0019)
+[2024-03-29 16:30:58,839][00126] Fps is (10 sec: 37683.1, 60 sec: 41779.2, 300 sec: 41931.9). Total num frames: 653475840. Throughput: 0: 41919.5. Samples: 535736540. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 16:30:58,840][00126] Avg episode reward: [(0, '0.510')]
+[2024-03-29 16:31:01,972][00497] Updated weights for policy 0, policy_version 39895 (0.0033)
+[2024-03-29 16:31:03,839][00126] Fps is (10 sec: 44237.1, 60 sec: 42325.3, 300 sec: 41987.5). Total num frames: 653737984. Throughput: 0: 42012.5. Samples: 535846060. Policy #0 lag: (min: 0.0, avg: 18.9, max: 42.0)
+[2024-03-29 16:31:03,841][00126] Avg episode reward: [(0, '0.519')]
+[2024-03-29 16:31:04,100][00476] Saving /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000039902_653754368.pth...
+[2024-03-29 16:31:04,443][00476] Removing /workspace/metta/train_dir/b.a20.20x20_40x40.norm/checkpoint_p0/checkpoint_000039280_643563520.pth
+[2024-03-29 16:31:05,717][00497] Updated weights for policy 0, policy_version 39905 (0.0028)
+[2024-03-29 16:31:08,839][00126] Fps is (10 sec: 44236.9, 60 sec: 42325.4, 300 sec: 42043.0). Total num frames: 653918208. Throughput: 0: 41890.4. Samples: 536100320. Policy #0 lag: (min: 0.0, avg: 22.1, max: 42.0)
+[2024-03-29 16:31:08,840][00126] Avg episode reward: [(0, '0.514')]
+[2024-03-29 16:31:10,284][00497] Updated weights for policy 0, policy_version 39915 (0.0023)
+[2024-03-29 16:31:11,924][00476] Signal inference workers to stop experience collection... (19100 times)
+[2024-03-29 16:31:11,958][00497] InferenceWorker_p0-w0: stopping experience collection (19100 times)
+[2024-03-29 16:31:12,113][00476] Signal inference workers to resume experience collection... (19100 times)
+[2024-03-29 16:31:12,114][00497] InferenceWorker_p0-w0: resuming experience collection (19100 times)