[2023-05-29 03:12:00,498][02542] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-05-29 03:12:00,503][02542] Rollout worker 0 uses device cpu
[2023-05-29 03:12:00,504][02542] Rollout worker 1 uses device cpu
[2023-05-29 03:12:00,507][02542] Rollout worker 2 uses device cpu
[2023-05-29 03:12:00,508][02542] Rollout worker 3 uses device cpu
[2023-05-29 03:12:00,511][02542] Rollout worker 4 uses device cpu
[2023-05-29 03:12:00,512][02542] Rollout worker 5 uses device cpu
[2023-05-29 03:12:00,513][02542] Rollout worker 6 uses device cpu
[2023-05-29 03:12:00,514][02542] Rollout worker 7 uses device cpu
[2023-05-29 03:12:00,706][02542] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-05-29 03:12:00,711][02542] InferenceWorker_p0-w0: min num requests: 2
[2023-05-29 03:12:00,750][02542] Starting all processes...
[2023-05-29 03:12:00,751][02542] Starting process learner_proc0
[2023-05-29 03:12:00,818][02542] Starting all processes...
[2023-05-29 03:12:00,830][02542] Starting process inference_proc0-0
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc0
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc1
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc2
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc3
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc4
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc5
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc6
[2023-05-29 03:12:00,839][02542] Starting process rollout_proc7
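The block above launches Sample Factory's asynchronous process layout: one learner process that owns the GPU, one inference worker that batches policy queries, and eight rollout workers that step the environment on CPU. A minimal sketch of that layout using plain multiprocessing follows; the function names are hypothetical, and the real implementation also wires up shared-memory buffers and request queues between the processes, which are omitted here.

    import multiprocessing as mp

    # Sketch of the process layout launched above: one learner, one inference
    # worker, eight rollout workers. Sample Factory's actual implementation
    # adds shared-memory tensors and queues between these processes.
    def learner_proc():
        pass  # gradient updates on the GPU

    def inference_proc():
        pass  # batched forward passes serving all rollout workers

    def rollout_proc(worker_idx):
        pass  # steps the env on CPU, ships observations to the inference worker

    if __name__ == "__main__":
        procs = [mp.Process(target=learner_proc), mp.Process(target=inference_proc)]
        procs += [mp.Process(target=rollout_proc, args=(i,)) for i in range(8)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()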
[2023-05-29 03:12:10,206][23536] Worker 4 uses CPU cores [0]
[2023-05-29 03:12:10,684][23534] Worker 3 uses CPU cores [1]
[2023-05-29 03:12:10,797][23519] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-05-29 03:12:10,801][23519] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-05-29 03:12:10,830][23519] Num visible devices: 1
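The GPU lines above work by setting CUDA_VISIBLE_DEVICES before CUDA is initialized in the child process, so physical GPU 0 is the only visible device and appears as cuda:0, which is why the mapping reads "[0] (actually maps to GPUs [0])" and "Num visible devices: 1". A small illustration of the mechanism, assuming PyTorch:

    import os

    # Must be set before CUDA is first initialized in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    import torch

    print(torch.cuda.device_count())  # 1: only the masked-in device is visible
    device = torch.device("cuda:0")   # refers to physical GPU 0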
[2023-05-29 03:12:10,854][23519] Starting seed is not provided
[2023-05-29 03:12:10,855][23519] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-05-29 03:12:10,855][23519] Initializing actor-critic model on device cuda:0
[2023-05-29 03:12:10,856][23519] RunningMeanStd input shape: (3, 72, 128)
[2023-05-29 03:12:10,858][23519] RunningMeanStd input shape: (1,)
[2023-05-29 03:12:10,884][23539] Worker 5 uses CPU cores [1]
[2023-05-29 03:12:10,889][23532] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-05-29 03:12:10,889][23532] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-05-29 03:12:10,915][23519] ConvEncoder: input_channels=3
[2023-05-29 03:12:10,961][23532] Num visible devices: 1
[2023-05-29 03:12:11,008][23537] Worker 0 uses CPU cores [0]
[2023-05-29 03:12:11,042][23538] Worker 6 uses CPU cores [0]
[2023-05-29 03:12:11,080][23540] Worker 7 uses CPU cores [1]
[2023-05-29 03:12:11,097][23535] Worker 2 uses CPU cores [0]
[2023-05-29 03:12:11,100][23533] Worker 1 uses CPU cores [1]
[2023-05-29 03:12:11,221][23519] Conv encoder output size: 512
[2023-05-29 03:12:11,222][23519] Policy head output size: 512
[2023-05-29 03:12:11,245][23519] Created Actor Critic model with architecture:
[2023-05-29 03:12:11,246][23519] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
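The printout omits the convolution hyperparameters, so the PyTorch sketch below is a hedged reconstruction: the 32/64/128 channels with kernels 8/4/3 and strides 4/2/2 are assumptions (Sample Factory's default simple convnet spec), while the (3, 72, 128) input shape, the 512-unit encoder output and GRU core, the 1-unit critic head, and the 5 action logits come from the log above.

    import torch
    from torch import nn

    # Hypothetical re-creation of the printed model; conv filter sizes are
    # assumed (32/64/128 channels, kernels 8/4/3, strides 4/2/2).
    class ActorCriticSketch(nn.Module):
        def __init__(self, obs_shape=(3, 72, 128), num_actions=5, hidden=512):
            super().__init__()
            self.conv_head = nn.Sequential(
                nn.Conv2d(obs_shape[0], 32, kernel_size=8, stride=4), nn.ELU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
            )
            with torch.no_grad():  # infer the flattened conv output size
                n = self.conv_head(torch.zeros(1, *obs_shape)).numel()
            self.mlp_layers = nn.Sequential(nn.Linear(n, hidden), nn.ELU())
            self.core = nn.GRU(hidden, hidden)           # ModelCoreRNN
            self.critic_linear = nn.Linear(hidden, 1)    # value head
            self.distribution_linear = nn.Linear(hidden, num_actions)  # logits

        def forward(self, obs, rnn_state=None):
            x = self.mlp_layers(self.conv_head(obs).flatten(1))
            x, rnn_state = self.core(x.unsqueeze(0), rnn_state)
            x = x.squeeze(0)
            return self.critic_linear(x), self.distribution_linear(x), rnn_state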
[2023-05-29 03:12:17,500][23519] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-05-29 03:12:17,500][23519] No checkpoints found
[2023-05-29 03:12:17,501][23519] Did not load from checkpoint, starting from scratch!
[2023-05-29 03:12:17,501][23519] Initialized policy 0 weights for model version 0
[2023-05-29 03:12:17,510][23519] LearnerWorker_p0 finished initialization!
[2023-05-29 03:12:17,510][23519] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-05-29 03:12:17,831][23532] RunningMeanStd input shape: (3, 72, 128)
[2023-05-29 03:12:17,833][23532] RunningMeanStd input shape: (1,)
[2023-05-29 03:12:17,854][23532] ConvEncoder: input_channels=3
[2023-05-29 03:12:18,020][23532] Conv encoder output size: 512
[2023-05-29 03:12:18,021][23532] Policy head output size: 512
[2023-05-29 03:12:19,732][02542] Inference worker 0-0 is ready!
[2023-05-29 03:12:19,734][02542] All inference workers are ready! Signal rollout workers to start!
[2023-05-29 03:12:19,880][23538] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,895][23540] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,898][23536] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,895][23533] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,909][23537] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,910][23535] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,938][23534] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:12:19,948][23539] Doom resolution: 160x120, resize resolution: (128, 72)
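Each worker renders Doom at its native 160x120 and resizes frames to 128x72, matching the (3, 72, 128) observation shape the encoder expects. A sketch of that preprocessing, assuming an OpenCV-style resize:

    import numpy as np
    import cv2  # the resize backend is an assumption; any resampler works

    frame = np.zeros((120, 160, 3), dtype=np.uint8)  # native render, H x W x C
    obs = cv2.resize(frame, (128, 72))               # -> 72 x 128 x 3
    obs = obs.transpose(2, 0, 1)                     # -> (3, 72, 128), CHW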
[2023-05-29 03:12:19,997][02542] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-05-29 03:12:20,696][02542] Heartbeat connected on Batcher_0
[2023-05-29 03:12:20,701][02542] Heartbeat connected on LearnerWorker_p0
[2023-05-29 03:12:20,753][02542] Heartbeat connected on InferenceWorker_p0-w0
[2023-05-29 03:12:21,316][23537] Decorrelating experience for 0 frames...
[2023-05-29 03:12:21,317][23538] Decorrelating experience for 0 frames...
[2023-05-29 03:12:21,318][23536] Decorrelating experience for 0 frames...
[2023-05-29 03:12:21,317][23540] Decorrelating experience for 0 frames...
[2023-05-29 03:12:21,319][23539] Decorrelating experience for 0 frames...
[2023-05-29 03:12:21,318][23534] Decorrelating experience for 0 frames...
[2023-05-29 03:12:22,006][23533] Decorrelating experience for 0 frames...
[2023-05-29 03:12:22,035][23540] Decorrelating experience for 32 frames...
[2023-05-29 03:12:22,464][23540] Decorrelating experience for 64 frames...
[2023-05-29 03:12:22,551][23538] Decorrelating experience for 32 frames...
[2023-05-29 03:12:22,553][23536] Decorrelating experience for 32 frames...
[2023-05-29 03:12:22,559][23537] Decorrelating experience for 32 frames...
[2023-05-29 03:12:23,208][23540] Decorrelating experience for 96 frames...
[2023-05-29 03:12:23,341][02542] Heartbeat connected on RolloutWorker_w7
[2023-05-29 03:12:23,358][23533] Decorrelating experience for 32 frames...
[2023-05-29 03:12:23,380][23535] Decorrelating experience for 0 frames...
[2023-05-29 03:12:23,959][23536] Decorrelating experience for 64 frames...
[2023-05-29 03:12:23,991][23537] Decorrelating experience for 64 frames...
[2023-05-29 03:12:24,331][23539] Decorrelating experience for 32 frames...
[2023-05-29 03:12:24,518][23533] Decorrelating experience for 64 frames...
[2023-05-29 03:12:24,605][23535] Decorrelating experience for 32 frames...
[2023-05-29 03:12:24,997][02542] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-05-29 03:12:25,195][23538] Decorrelating experience for 64 frames...
[2023-05-29 03:12:25,286][23536] Decorrelating experience for 96 frames...
[2023-05-29 03:12:25,537][02542] Heartbeat connected on RolloutWorker_w4
[2023-05-29 03:12:25,624][23539] Decorrelating experience for 64 frames...
[2023-05-29 03:12:25,768][23534] Decorrelating experience for 32 frames...
[2023-05-29 03:12:26,218][23535] Decorrelating experience for 64 frames...
[2023-05-29 03:12:26,712][23533] Decorrelating experience for 96 frames...
[2023-05-29 03:12:26,880][23537] Decorrelating experience for 96 frames...
[2023-05-29 03:12:26,890][23538] Decorrelating experience for 96 frames...
[2023-05-29 03:12:27,057][02542] Heartbeat connected on RolloutWorker_w1
[2023-05-29 03:12:27,201][02542] Heartbeat connected on RolloutWorker_w0
[2023-05-29 03:12:27,218][02542] Heartbeat connected on RolloutWorker_w6
[2023-05-29 03:12:27,516][23534] Decorrelating experience for 64 frames...
[2023-05-29 03:12:28,395][23535] Decorrelating experience for 96 frames...
[2023-05-29 03:12:29,164][02542] Heartbeat connected on RolloutWorker_w2
[2023-05-29 03:12:29,633][23539] Decorrelating experience for 96 frames...
[2023-05-29 03:12:29,997][02542] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 149.4. Samples: 1494. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-05-29 03:12:30,000][02542] Avg episode reward: [(0, '2.263')]
[2023-05-29 03:12:30,078][02542] Heartbeat connected on RolloutWorker_w5
[2023-05-29 03:12:30,934][23534] Decorrelating experience for 96 frames...
[2023-05-29 03:12:31,642][23519] Signal inference workers to stop experience collection...
[2023-05-29 03:12:31,652][23532] InferenceWorker_p0-w0: stopping experience collection
[2023-05-29 03:12:31,805][02542] Heartbeat connected on RolloutWorker_w3
[2023-05-29 03:12:33,638][23519] Signal inference workers to resume experience collection...
[2023-05-29 03:12:33,639][23532] InferenceWorker_p0-w0: resuming experience collection
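Before training proper, each rollout worker steps through staggered warm-up stages (0, 32, 64, 96 frames above) so its environment copies fall out of phase and the first batches are not correlated; once the first batch arrives, the learner briefly signals the inference worker to stop and then resume collection while it consumes that batch. A conceptual illustration of the staggering, using a stand-in Gymnasium environment rather than VizDoom:

    import gymnasium as gym

    # Step each env copy a different number of warm-up frames so the copies
    # are out of phase before real collection begins (stand-in for VizDoom).
    envs = [gym.make("CartPole-v1") for _ in range(4)]
    for idx, env in enumerate(envs):
        env.reset(seed=idx)
        for _ in range(32 * idx):  # 0, 32, 64, 96 warm-up frames, as in the log
            _, _, terminated, truncated, _ = env.step(env.action_space.sample())
            if terminated or truncated:
                env.reset()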
[2023-05-29 03:12:34,997][02542] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 205.2. Samples: 3078. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-05-29 03:12:35,001][02542] Avg episode reward: [(0, '2.963')]
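These status lines repeat every five seconds: the three FPS figures are averages over trailing 10, 60, and 300 second windows, "Throughput: 0:" is the sample rate for policy 0, "Policy #0 lag" reports how many policy versions old the collected experience is (-1.0 appears as a placeholder before any data arrives), and "Avg episode reward: [(0, ...)]" pairs the policy index with its mean episode reward. A hedged sketch of trailing-window FPS computed from (timestamp, frame count) snapshots, not Sample Factory's actual code:

    import time
    from collections import deque

    class FpsMeter:
        """Report frames/sec over several trailing windows, as in the log."""

        def __init__(self, windows=(10, 60, 300)):
            self.windows = windows
            self.history = deque()  # (monotonic time, total frames) snapshots

        def record(self, total_frames):
            now = time.monotonic()
            self.history.append((now, total_frames))
            while now - self.history[0][0] > max(self.windows):  # drop stale
                self.history.popleft()

        def fps(self):
            now, frames = self.history[-1]
            result = {}
            for w in self.windows:
                # oldest snapshot still inside this window
                t0, f0 = next((t, f) for t, f in self.history if now - t <= w)
                result[w] = (frames - f0) / max(now - t0, 1e-9)
            return result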
[2023-05-29 03:12:39,998][02542] Fps is (10 sec: 2047.7, 60 sec: 1023.9, 300 sec: 1023.9). Total num frames: 20480. Throughput: 0: 225.4. Samples: 4508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 3.0)
[2023-05-29 03:12:40,006][02542] Avg episode reward: [(0, '3.352')]
[2023-05-29 03:12:44,997][02542] Fps is (10 sec: 3276.8, 60 sec: 1474.6, 300 sec: 1474.6). Total num frames: 36864. Throughput: 0: 353.8. Samples: 8846. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-05-29 03:12:44,999][02542] Avg episode reward: [(0, '3.830')]
[2023-05-29 03:12:45,826][23532] Updated weights for policy 0, policy_version 10 (0.0024)
[2023-05-29 03:12:49,997][02542] Fps is (10 sec: 3686.9, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 57344. Throughput: 0: 500.4. Samples: 15012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:12:49,999][02542] Avg episode reward: [(0, '4.454')]
[2023-05-29 03:12:54,997][02542] Fps is (10 sec: 4096.0, 60 sec: 2223.5, 300 sec: 2223.5). Total num frames: 77824. Throughput: 0: 526.8. Samples: 18438. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:12:54,999][02542] Avg episode reward: [(0, '4.392')]
[2023-05-29 03:12:55,028][23532] Updated weights for policy 0, policy_version 20 (0.0014)
[2023-05-29 03:13:00,002][02542] Fps is (10 sec: 3684.3, 60 sec: 2354.9, 300 sec: 2354.9). Total num frames: 94208. Throughput: 0: 592.0. Samples: 23684. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:00,005][02542] Avg episode reward: [(0, '4.451')]
[2023-05-29 03:13:04,997][02542] Fps is (10 sec: 2867.2, 60 sec: 2366.6, 300 sec: 2366.6). Total num frames: 106496. Throughput: 0: 621.4. Samples: 27962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:13:04,999][02542] Avg episode reward: [(0, '4.386')]
[2023-05-29 03:13:05,006][23519] Saving new best policy, reward=4.386!
[2023-05-29 03:13:09,711][23532] Updated weights for policy 0, policy_version 30 (0.0032)
[2023-05-29 03:13:09,997][02542] Fps is (10 sec: 2868.9, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 122880. Throughput: 0: 667.8. Samples: 30050. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:13:09,999][02542] Avg episode reward: [(0, '4.542')]
[2023-05-29 03:13:10,006][23519] Saving new best policy, reward=4.542!
[2023-05-29 03:13:14,997][02542] Fps is (10 sec: 3686.4, 60 sec: 2606.5, 300 sec: 2606.5). Total num frames: 143360. Throughput: 0: 749.1. Samples: 35204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:15,003][02542] Avg episode reward: [(0, '4.608')]
[2023-05-29 03:13:15,017][23519] Saving new best policy, reward=4.608!
[2023-05-29 03:13:19,294][23532] Updated weights for policy 0, policy_version 40 (0.0031)
[2023-05-29 03:13:19,997][02542] Fps is (10 sec: 4096.0, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 163840. Throughput: 0: 862.5. Samples: 41890. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:13:19,999][02542] Avg episode reward: [(0, '4.534')]
[2023-05-29 03:13:24,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 2772.7). Total num frames: 180224. Throughput: 0: 892.0. Samples: 44648. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-05-29 03:13:25,002][02542] Avg episode reward: [(0, '4.291')]
[2023-05-29 03:13:29,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 2808.7). Total num frames: 196608. Throughput: 0: 890.6. Samples: 48924. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:13:29,999][02542] Avg episode reward: [(0, '4.311')]
[2023-05-29 03:13:32,680][23532] Updated weights for policy 0, policy_version 50 (0.0022)
[2023-05-29 03:13:34,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 2785.3). Total num frames: 208896. Throughput: 0: 854.1. Samples: 53446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:35,003][02542] Avg episode reward: [(0, '4.403')]
[2023-05-29 03:13:39,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 2867.2). Total num frames: 229376. Throughput: 0: 825.5. Samples: 55586. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:39,999][02542] Avg episode reward: [(0, '4.548')]
[2023-05-29 03:13:43,489][23532] Updated weights for policy 0, policy_version 60 (0.0019)
[2023-05-29 03:13:45,005][02542] Fps is (10 sec: 4092.5, 60 sec: 3549.4, 300 sec: 2939.2). Total num frames: 249856. Throughput: 0: 857.5. Samples: 62276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:45,012][02542] Avg episode reward: [(0, '4.470')]
[2023-05-29 03:13:49,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3003.7). Total num frames: 270336. Throughput: 0: 896.3. Samples: 68294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:13:50,004][02542] Avg episode reward: [(0, '4.544')]
[2023-05-29 03:13:55,002][02542] Fps is (10 sec: 3277.8, 60 sec: 3413.0, 300 sec: 2974.8). Total num frames: 282624. Throughput: 0: 898.2. Samples: 70472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:13:55,008][02542] Avg episode reward: [(0, '4.533')]
[2023-05-29 03:13:55,018][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth...
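The checkpoint filename encodes the policy version and total environment frames: checkpoint_000000069_282624.pth is policy version 69 at 282,624 frames, and 69 x 4096 = 282,624, so each policy version corresponds to 4,096 frames in this run (likewise 171 x 4096 = 700,416 for the next checkpoint). A quick check of the naming scheme:

    # Filename encodes (policy_version, total_env_frames); the 4096 frames
    # per version is inferred from this run's checkpoints, not a constant.
    version, frames = 69, 282624
    assert frames == version * 4096
    print(f"checkpoint_{version:09d}_{frames}.pth")
    # -> checkpoint_000000069_282624.pth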
[2023-05-29 03:13:55,302][23532] Updated weights for policy 0, policy_version 70 (0.0017)
[2023-05-29 03:13:59,998][02542] Fps is (10 sec: 2866.7, 60 sec: 3413.6, 300 sec: 2990.0). Total num frames: 299008. Throughput: 0: 880.5. Samples: 74828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:14:00,006][02542] Avg episode reward: [(0, '4.434')]
[2023-05-29 03:14:04,997][02542] Fps is (10 sec: 3278.7, 60 sec: 3481.6, 300 sec: 3003.7). Total num frames: 315392. Throughput: 0: 834.3. Samples: 79432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:14:05,003][02542] Avg episode reward: [(0, '4.412')]
[2023-05-29 03:14:07,516][23532] Updated weights for policy 0, policy_version 80 (0.0025)
[2023-05-29 03:14:09,997][02542] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3053.4). Total num frames: 335872. Throughput: 0: 846.1. Samples: 82724. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:14:09,999][02542] Avg episode reward: [(0, '4.326')]
[2023-05-29 03:14:15,001][02542] Fps is (10 sec: 4094.1, 60 sec: 3549.6, 300 sec: 3098.6). Total num frames: 356352. Throughput: 0: 901.4. Samples: 89492. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:14:15,004][02542] Avg episode reward: [(0, '4.345')]
[2023-05-29 03:14:17,896][23532] Updated weights for policy 0, policy_version 90 (0.0022)
[2023-05-29 03:14:19,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3106.1). Total num frames: 372736. Throughput: 0: 899.1. Samples: 93904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:14:20,002][02542] Avg episode reward: [(0, '4.319')]
[2023-05-29 03:14:24,998][02542] Fps is (10 sec: 2868.1, 60 sec: 3413.2, 300 sec: 3080.2). Total num frames: 385024. Throughput: 0: 897.0. Samples: 95952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:14:25,001][02542] Avg episode reward: [(0, '4.598')]
[2023-05-29 03:14:29,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3087.8). Total num frames: 401408. Throughput: 0: 848.3. Samples: 100440. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:14:30,003][02542] Avg episode reward: [(0, '4.625')]
[2023-05-29 03:14:30,007][23519] Saving new best policy, reward=4.625!
[2023-05-29 03:14:31,650][23532] Updated weights for policy 0, policy_version 100 (0.0038)
[2023-05-29 03:14:34,997][02542] Fps is (10 sec: 3687.0, 60 sec: 3549.9, 300 sec: 3125.1). Total num frames: 421888. Throughput: 0: 848.5. Samples: 106478. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:14:35,004][02542] Avg episode reward: [(0, '4.697')]
[2023-05-29 03:14:35,013][23519] Saving new best policy, reward=4.697!
[2023-05-29 03:14:39,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3189.0). Total num frames: 446464. Throughput: 0: 873.9. Samples: 109794. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:14:39,999][02542] Avg episode reward: [(0, '4.668')]
[2023-05-29 03:14:40,950][23532] Updated weights for policy 0, policy_version 110 (0.0033)
[2023-05-29 03:14:44,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3482.1, 300 sec: 3163.8). Total num frames: 458752. Throughput: 0: 895.4. Samples: 115120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:14:45,001][02542] Avg episode reward: [(0, '4.594')]
[2023-05-29 03:14:49,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3167.6). Total num frames: 475136. Throughput: 0: 886.3. Samples: 119316. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:14:49,999][02542] Avg episode reward: [(0, '4.616')]
[2023-05-29 03:14:54,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.7, 300 sec: 3144.7). Total num frames: 487424. Throughput: 0: 860.0. Samples: 121426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:14:55,000][02542] Avg episode reward: [(0, '4.716')]
[2023-05-29 03:14:55,015][23519] Saving new best policy, reward=4.716!
[2023-05-29 03:14:55,531][23532] Updated weights for policy 0, policy_version 120 (0.0012)
[2023-05-29 03:14:59,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3174.4). Total num frames: 507904. Throughput: 0: 823.2. Samples: 126534. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:14:59,998][02542] Avg episode reward: [(0, '4.609')]
[2023-05-29 03:15:04,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3202.3). Total num frames: 528384. Throughput: 0: 874.4. Samples: 133250. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:15:05,003][02542] Avg episode reward: [(0, '4.563')]
[2023-05-29 03:15:05,110][23532] Updated weights for policy 0, policy_version 130 (0.0029)
[2023-05-29 03:15:10,001][02542] Fps is (10 sec: 3684.7, 60 sec: 3481.3, 300 sec: 3204.4). Total num frames: 544768. Throughput: 0: 893.0. Samples: 136138. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:15:10,004][02542] Avg episode reward: [(0, '4.607')]
[2023-05-29 03:15:14,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.6, 300 sec: 3206.6). Total num frames: 561152. Throughput: 0: 887.4. Samples: 140372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:15:15,001][02542] Avg episode reward: [(0, '4.760')]
[2023-05-29 03:15:15,022][23519] Saving new best policy, reward=4.760!
[2023-05-29 03:15:18,739][23532] Updated weights for policy 0, policy_version 140 (0.0032)
[2023-05-29 03:15:19,997][02542] Fps is (10 sec: 2868.4, 60 sec: 3345.0, 300 sec: 3185.8). Total num frames: 573440. Throughput: 0: 847.6. Samples: 144620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:15:20,007][02542] Avg episode reward: [(0, '4.665')]
[2023-05-29 03:15:24,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3210.4). Total num frames: 593920. Throughput: 0: 824.5. Samples: 146898. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:15:25,001][02542] Avg episode reward: [(0, '4.528')]
[2023-05-29 03:15:29,421][23532] Updated weights for policy 0, policy_version 150 (0.0016)
[2023-05-29 03:15:29,997][02542] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3233.7). Total num frames: 614400. Throughput: 0: 852.6. Samples: 153486. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:15:29,999][02542] Avg episode reward: [(0, '4.551')]
[2023-05-29 03:15:34,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3255.8). Total num frames: 634880. Throughput: 0: 896.5. Samples: 159658. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2023-05-29 03:15:34,999][02542] Avg episode reward: [(0, '4.696')]
[2023-05-29 03:15:39,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3235.8). Total num frames: 647168. Throughput: 0: 896.6. Samples: 161772. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:15:40,003][02542] Avg episode reward: [(0, '4.781')]
[2023-05-29 03:15:40,087][23519] Saving new best policy, reward=4.781!
[2023-05-29 03:15:41,504][23532] Updated weights for policy 0, policy_version 160 (0.0031)
[2023-05-29 03:15:44,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3236.8). Total num frames: 663552. Throughput: 0: 875.6. Samples: 165938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:15:45,004][02542] Avg episode reward: [(0, '4.895')]
[2023-05-29 03:15:45,016][23519] Saving new best policy, reward=4.895!
[2023-05-29 03:15:49,997][02542] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3237.8). Total num frames: 679936. Throughput: 0: 822.9. Samples: 170280. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:15:50,004][02542] Avg episode reward: [(0, '4.788')]
[2023-05-29 03:15:53,818][23532] Updated weights for policy 0, policy_version 170 (0.0052)
[2023-05-29 03:15:54,997][02542] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3257.7). Total num frames: 700416. Throughput: 0: 834.0. Samples: 173664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:15:55,003][02542] Avg episode reward: [(0, '4.671')]
[2023-05-29 03:15:55,014][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth...
[2023-05-29 03:15:59,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 720896. Throughput: 0: 890.8. Samples: 180460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:00,001][02542] Avg episode reward: [(0, '4.700')]
[2023-05-29 03:16:04,572][23532] Updated weights for policy 0, policy_version 180 (0.0021)
[2023-05-29 03:16:04,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3276.8). Total num frames: 737280. Throughput: 0: 895.0. Samples: 184894. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:16:04,999][02542] Avg episode reward: [(0, '4.775')]
[2023-05-29 03:16:09,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.6, 300 sec: 3259.0). Total num frames: 749568. Throughput: 0: 891.7. Samples: 187026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:16:09,999][02542] Avg episode reward: [(0, '5.057')]
[2023-05-29 03:16:10,002][23519] Saving new best policy, reward=5.057!
[2023-05-29 03:16:14,997][02542] Fps is (10 sec: 2457.4, 60 sec: 3345.0, 300 sec: 3241.9). Total num frames: 761856. Throughput: 0: 835.5. Samples: 191084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:16:15,002][02542] Avg episode reward: [(0, '5.138')]
[2023-05-29 03:16:15,017][23519] Saving new best policy, reward=5.138!
[2023-05-29 03:16:17,967][23532] Updated weights for policy 0, policy_version 190 (0.0021)
[2023-05-29 03:16:19,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 786432. Throughput: 0: 833.2. Samples: 197152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:20,005][02542] Avg episode reward: [(0, '4.992')]
[2023-05-29 03:16:24,997][02542] Fps is (10 sec: 4505.9, 60 sec: 3549.9, 300 sec: 3293.5). Total num frames: 806912. Throughput: 0: 863.6. Samples: 200634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:25,003][02542] Avg episode reward: [(0, '4.874')]
[2023-05-29 03:16:27,853][23532] Updated weights for policy 0, policy_version 200 (0.0014)
[2023-05-29 03:16:29,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3293.2). Total num frames: 823296. Throughput: 0: 889.7. Samples: 205976. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:30,005][02542] Avg episode reward: [(0, '5.039')]
[2023-05-29 03:16:34,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3276.8). Total num frames: 835584. Throughput: 0: 887.5. Samples: 210218. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:35,004][02542] Avg episode reward: [(0, '5.208')]
[2023-05-29 03:16:35,017][23519] Saving new best policy, reward=5.208!
[2023-05-29 03:16:39,997][02542] Fps is (10 sec: 2867.0, 60 sec: 3413.3, 300 sec: 3276.8). Total num frames: 851968. Throughput: 0: 859.2. Samples: 212328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:16:40,005][02542] Avg episode reward: [(0, '5.283')]
[2023-05-29 03:16:40,014][23519] Saving new best policy, reward=5.283!
[2023-05-29 03:16:42,243][23532] Updated weights for policy 0, policy_version 210 (0.0025)
[2023-05-29 03:16:44,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3292.3). Total num frames: 872448. Throughput: 0: 820.0. Samples: 217360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:16:44,999][02542] Avg episode reward: [(0, '5.237')]
[2023-05-29 03:16:49,997][02542] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3307.1). Total num frames: 892928. Throughput: 0: 874.2. Samples: 224232. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:16:50,001][02542] Avg episode reward: [(0, '5.232')]
[2023-05-29 03:16:51,203][23532] Updated weights for policy 0, policy_version 220 (0.0013)
[2023-05-29 03:16:54,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3306.6). Total num frames: 909312. Throughput: 0: 889.3. Samples: 227046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:16:55,006][02542] Avg episode reward: [(0, '5.241')]
[2023-05-29 03:16:59,997][02542] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3306.1). Total num frames: 925696. Throughput: 0: 893.7. Samples: 231302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:17:00,005][02542] Avg episode reward: [(0, '5.160')]
[2023-05-29 03:17:04,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3291.2). Total num frames: 937984. Throughput: 0: 852.5. Samples: 235514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:17:05,004][02542] Avg episode reward: [(0, '4.959')]
[2023-05-29 03:17:05,665][23532] Updated weights for policy 0, policy_version 230 (0.0019)
[2023-05-29 03:17:09,997][02542] Fps is (10 sec: 3276.9, 60 sec: 3481.6, 300 sec: 3305.0). Total num frames: 958464. Throughput: 0: 826.7. Samples: 237836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:17:09,999][02542] Avg episode reward: [(0, '5.092')]
[2023-05-29 03:17:14,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3318.5). Total num frames: 978944. Throughput: 0: 855.6. Samples: 244478. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:17:15,000][02542] Avg episode reward: [(0, '5.077')]
[2023-05-29 03:17:15,466][23532] Updated weights for policy 0, policy_version 240 (0.0017)
[2023-05-29 03:17:19,997][02542] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 999424. Throughput: 0: 890.0. Samples: 250266. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:17:20,002][02542] Avg episode reward: [(0, '5.161')]
[2023-05-29 03:17:24,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1011712. Throughput: 0: 889.6. Samples: 252360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:17:25,003][02542] Avg episode reward: [(0, '5.353')]
[2023-05-29 03:17:25,018][23519] Saving new best policy, reward=5.353!
[2023-05-29 03:17:28,990][23532] Updated weights for policy 0, policy_version 250 (0.0036)
[2023-05-29 03:17:29,997][02542] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 1024000. Throughput: 0: 869.5. Samples: 256488. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:17:30,002][02542] Avg episode reward: [(0, '5.597')]
[2023-05-29 03:17:30,009][23519] Saving new best policy, reward=5.597!
[2023-05-29 03:17:34,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 1044480. Throughput: 0: 825.6. Samples: 261382. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:17:35,002][02542] Avg episode reward: [(0, '5.525')]
[2023-05-29 03:17:39,589][23532] Updated weights for policy 0, policy_version 260 (0.0016)
[2023-05-29 03:17:39,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1064960. Throughput: 0: 838.6. Samples: 264784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:17:39,999][02542] Avg episode reward: [(0, '5.698')]
[2023-05-29 03:17:40,006][23519] Saving new best policy, reward=5.698!
[2023-05-29 03:17:44,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1085440. Throughput: 0: 886.8. Samples: 271208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:17:45,003][02542] Avg episode reward: [(0, '5.562')]
[2023-05-29 03:17:49,997][02542] Fps is (10 sec: 3276.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 1097728. Throughput: 0: 887.1. Samples: 275432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:17:50,004][02542] Avg episode reward: [(0, '5.664')]
[2023-05-29 03:17:52,101][23532] Updated weights for policy 0, policy_version 270 (0.0043)
[2023-05-29 03:17:54,997][02542] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3457.4). Total num frames: 1114112. Throughput: 0: 883.8. Samples: 277608. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:17:55,004][02542] Avg episode reward: [(0, '5.350')]
[2023-05-29 03:17:55,015][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth...
[2023-05-29 03:17:55,140][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000069_282624.pth
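Regular checkpoints are rotated: each new save is followed by removal of the oldest, so two stay on disk at any time, while the "Saving new best policy" lines track the best average episode reward separately. A minimal sketch of that rotation:

    import os

    def rotate_checkpoints(checkpoint_dir, keep=2):
        # Zero-padded version numbers make lexicographic order chronological.
        ckpts = sorted(
            f for f in os.listdir(checkpoint_dir) if f.startswith("checkpoint_")
        )
        for old in ckpts[:-keep]:
            os.remove(os.path.join(checkpoint_dir, old))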
[2023-05-29 03:17:59,997][02542] Fps is (10 sec: 2867.4, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 1126400. Throughput: 0: 829.3. Samples: 281796. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:18:00,004][02542] Avg episode reward: [(0, '5.929')]
[2023-05-29 03:18:00,008][23519] Saving new best policy, reward=5.929!
[2023-05-29 03:18:03,776][23532] Updated weights for policy 0, policy_version 280 (0.0035)
[2023-05-29 03:18:04,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1150976. Throughput: 0: 847.1. Samples: 288386. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:18:05,004][02542] Avg episode reward: [(0, '6.024')]
[2023-05-29 03:18:05,015][23519] Saving new best policy, reward=6.024!
[2023-05-29 03:18:09,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 1171456. Throughput: 0: 874.7. Samples: 291720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:18:10,003][02542] Avg episode reward: [(0, '6.141')]
[2023-05-29 03:18:10,006][23519] Saving new best policy, reward=6.141!
[2023-05-29 03:18:14,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 1183744. Throughput: 0: 887.6. Samples: 296432. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:18:15,003][02542] Avg episode reward: [(0, '6.417')]
[2023-05-29 03:18:15,018][23519] Saving new best policy, reward=6.417!
[2023-05-29 03:18:15,040][23532] Updated weights for policy 0, policy_version 290 (0.0025)
[2023-05-29 03:18:19,997][02542] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3443.4). Total num frames: 1196032. Throughput: 0: 855.5. Samples: 299878. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-05-29 03:18:20,005][02542] Avg episode reward: [(0, '6.633')]
[2023-05-29 03:18:20,016][23519] Saving new best policy, reward=6.633!
[2023-05-29 03:18:24,997][02542] Fps is (10 sec: 2048.0, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 1204224. Throughput: 0: 802.7. Samples: 300904. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-05-29 03:18:25,006][02542] Avg episode reward: [(0, '6.725')]
[2023-05-29 03:18:25,079][23519] Saving new best policy, reward=6.725!
[2023-05-29 03:18:29,858][23532] Updated weights for policy 0, policy_version 300 (0.0039)
[2023-05-29 03:18:29,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 1228800. Throughput: 0: 786.5. Samples: 306600. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:18:30,004][02542] Avg episode reward: [(0, '7.347')]
[2023-05-29 03:18:30,008][23519] Saving new best policy, reward=7.347!
[2023-05-29 03:18:34,997][02542] Fps is (10 sec: 4505.5, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 1249280. Throughput: 0: 840.7. Samples: 313264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:18:35,002][02542] Avg episode reward: [(0, '7.180')]
[2023-05-29 03:18:39,997][02542] Fps is (10 sec: 3686.3, 60 sec: 3345.1, 300 sec: 3443.5). Total num frames: 1265664. Throughput: 0: 843.1. Samples: 315548. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:18:40,004][02542] Avg episode reward: [(0, '6.921')]
[2023-05-29 03:18:41,350][23532] Updated weights for policy 0, policy_version 310 (0.0019)
[2023-05-29 03:18:44,997][02542] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 1277952. Throughput: 0: 845.3. Samples: 319834. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:18:45,004][02542] Avg episode reward: [(0, '6.770')]
[2023-05-29 03:18:49,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3429.6). Total num frames: 1294336. Throughput: 0: 796.1. Samples: 324210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:18:50,005][02542] Avg episode reward: [(0, '6.682')]
[2023-05-29 03:18:53,901][23532] Updated weights for policy 0, policy_version 320 (0.0022)
[2023-05-29 03:18:54,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 1314816. Throughput: 0: 782.2. Samples: 326918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:18:54,999][02542] Avg episode reward: [(0, '7.623')]
[2023-05-29 03:18:55,009][23519] Saving new best policy, reward=7.623!
[2023-05-29 03:18:59,997][02542] Fps is (10 sec: 4096.1, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 1335296. Throughput: 0: 828.0. Samples: 333694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:18:59,999][02542] Avg episode reward: [(0, '7.603')]
[2023-05-29 03:19:03,803][23532] Updated weights for policy 0, policy_version 330 (0.0012)
[2023-05-29 03:19:04,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 1351680. Throughput: 0: 870.8. Samples: 339062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:19:04,999][02542] Avg episode reward: [(0, '7.330')]
[2023-05-29 03:19:09,997][02542] Fps is (10 sec: 3276.6, 60 sec: 3276.8, 300 sec: 3429.6). Total num frames: 1368064. Throughput: 0: 894.8. Samples: 341172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:10,002][02542] Avg episode reward: [(0, '7.174')]
[2023-05-29 03:19:14,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 1380352. Throughput: 0: 862.8. Samples: 345428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:15,000][02542] Avg episode reward: [(0, '7.545')]
[2023-05-29 03:19:18,109][23532] Updated weights for policy 0, policy_version 340 (0.0031)
[2023-05-29 03:19:19,997][02542] Fps is (10 sec: 3277.0, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 1400832. Throughput: 0: 829.3. Samples: 350584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:20,002][02542] Avg episode reward: [(0, '7.981')]
[2023-05-29 03:19:20,005][23519] Saving new best policy, reward=7.981!
[2023-05-29 03:19:24,997][02542] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 1421312. Throughput: 0: 852.7. Samples: 353918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:25,002][02542] Avg episode reward: [(0, '7.736')]
[2023-05-29 03:19:27,135][23532] Updated weights for policy 0, policy_version 350 (0.0023)
[2023-05-29 03:19:29,997][02542] Fps is (10 sec: 4095.9, 60 sec: 3549.8, 300 sec: 3457.3). Total num frames: 1441792. Throughput: 0: 898.4. Samples: 360260. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:30,000][02542] Avg episode reward: [(0, '7.504')]
[2023-05-29 03:19:34,997][02542] Fps is (10 sec: 3276.9, 60 sec: 3413.4, 300 sec: 3415.6). Total num frames: 1454080. Throughput: 0: 894.8. Samples: 364478. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:19:35,003][02542] Avg episode reward: [(0, '7.988')]
[2023-05-29 03:19:35,027][23519] Saving new best policy, reward=7.988!
[2023-05-29 03:19:39,997][02542] Fps is (10 sec: 2457.6, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1466368. Throughput: 0: 873.2. Samples: 366212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:19:40,001][02542] Avg episode reward: [(0, '8.206')]
[2023-05-29 03:19:40,006][23519] Saving new best policy, reward=8.206!
[2023-05-29 03:19:42,161][23532] Updated weights for policy 0, policy_version 360 (0.0030)
[2023-05-29 03:19:44,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1482752. Throughput: 0: 817.0. Samples: 370460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:19:45,005][02542] Avg episode reward: [(0, '8.602')]
[2023-05-29 03:19:45,015][23519] Saving new best policy, reward=8.602!
[2023-05-29 03:19:49,997][02542] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1507328. Throughput: 0: 848.5. Samples: 377246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:49,999][02542] Avg episode reward: [(0, '9.082')]
[2023-05-29 03:19:50,006][23519] Saving new best policy, reward=9.082!
[2023-05-29 03:19:51,726][23532] Updated weights for policy 0, policy_version 370 (0.0013)
[2023-05-29 03:19:54,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1527808. Throughput: 0: 876.5. Samples: 380614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:19:55,001][02542] Avg episode reward: [(0, '10.154')]
[2023-05-29 03:19:55,017][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000373_1527808.pth...
[2023-05-29 03:19:55,166][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000171_700416.pth
[2023-05-29 03:19:55,182][23519] Saving new best policy, reward=10.154!
[2023-05-29 03:19:59,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1540096. Throughput: 0: 879.6. Samples: 385012. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:20:00,007][02542] Avg episode reward: [(0, '10.596')]
[2023-05-29 03:20:00,014][23519] Saving new best policy, reward=10.596!
[2023-05-29 03:20:04,997][02542] Fps is (10 sec: 2457.4, 60 sec: 3345.0, 300 sec: 3415.7). Total num frames: 1552384. Throughput: 0: 858.0. Samples: 389194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:20:05,008][02542] Avg episode reward: [(0, '10.507')]
[2023-05-29 03:20:05,287][23532] Updated weights for policy 0, policy_version 380 (0.0022)
[2023-05-29 03:20:09,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1568768. Throughput: 0: 830.6. Samples: 391296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:20:10,004][02542] Avg episode reward: [(0, '11.234')]
[2023-05-29 03:20:10,009][23519] Saving new best policy, reward=11.234!
[2023-05-29 03:20:14,997][02542] Fps is (10 sec: 3686.7, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 1589248. Throughput: 0: 820.3. Samples: 397174. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:20:15,003][02542] Avg episode reward: [(0, '10.615')]
[2023-05-29 03:20:16,284][23532] Updated weights for policy 0, policy_version 390 (0.0022)
[2023-05-29 03:20:19,999][02542] Fps is (10 sec: 4504.7, 60 sec: 3549.7, 300 sec: 3457.3). Total num frames: 1613824. Throughput: 0: 874.5. Samples: 403832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:20:20,005][02542] Avg episode reward: [(0, '10.726')]
[2023-05-29 03:20:24,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1626112. Throughput: 0: 883.7. Samples: 405978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:20:25,002][02542] Avg episode reward: [(0, '10.560')]
[2023-05-29 03:20:28,657][23532] Updated weights for policy 0, policy_version 400 (0.0017)
[2023-05-29 03:20:29,997][02542] Fps is (10 sec: 2458.1, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 1638400. Throughput: 0: 881.0. Samples: 410106. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:20:30,000][02542] Avg episode reward: [(0, '10.393')]
[2023-05-29 03:20:34,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.6). Total num frames: 1654784. Throughput: 0: 824.4. Samples: 414342. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:20:35,003][02542] Avg episode reward: [(0, '11.416')]
[2023-05-29 03:20:35,018][23519] Saving new best policy, reward=11.416!
[2023-05-29 03:20:39,997][02542] Fps is (10 sec: 3686.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1675264. Throughput: 0: 816.6. Samples: 417362. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:20:40,000][02542] Avg episode reward: [(0, '12.742')]
[2023-05-29 03:20:40,002][23519] Saving new best policy, reward=12.742!
[2023-05-29 03:20:40,415][23532] Updated weights for policy 0, policy_version 410 (0.0032)
[2023-05-29 03:20:44,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 1699840. Throughput: 0: 867.4. Samples: 424046. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:20:44,999][02542] Avg episode reward: [(0, '12.686')]
[2023-05-29 03:20:49,997][02542] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1712128. Throughput: 0: 884.3. Samples: 428988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:20:50,001][02542] Avg episode reward: [(0, '14.112')]
[2023-05-29 03:20:50,006][23519] Saving new best policy, reward=14.112!
[2023-05-29 03:20:52,195][23532] Updated weights for policy 0, policy_version 420 (0.0030)
[2023-05-29 03:20:54,997][02542] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 1724416. Throughput: 0: 883.4. Samples: 431050. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:20:55,005][02542] Avg episode reward: [(0, '13.655')]
[2023-05-29 03:20:59,997][02542] Fps is (10 sec: 2867.0, 60 sec: 3345.0, 300 sec: 3401.8). Total num frames: 1740800. Throughput: 0: 851.5. Samples: 435494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:00,008][02542] Avg episode reward: [(0, '13.726')]
[2023-05-29 03:21:04,778][23532] Updated weights for policy 0, policy_version 430 (0.0022)
[2023-05-29 03:21:04,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1761280. Throughput: 0: 825.2. Samples: 440966. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:21:05,003][02542] Avg episode reward: [(0, '13.074')]
[2023-05-29 03:21:09,997][02542] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 1781760. Throughput: 0: 852.8. Samples: 444354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:21:09,999][02542] Avg episode reward: [(0, '13.433')]
[2023-05-29 03:21:14,997][02542] Fps is (10 sec: 3686.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1798144. Throughput: 0: 890.7. Samples: 450188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:15,006][02542] Avg episode reward: [(0, '14.748')]
[2023-05-29 03:21:15,019][23519] Saving new best policy, reward=14.748!
[2023-05-29 03:21:15,303][23532] Updated weights for policy 0, policy_version 440 (0.0017)
[2023-05-29 03:21:19,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3415.6). Total num frames: 1814528. Throughput: 0: 888.0. Samples: 454302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:21:20,004][02542] Avg episode reward: [(0, '15.149')]
[2023-05-29 03:21:20,009][23519] Saving new best policy, reward=15.149!
[2023-05-29 03:21:24,997][02542] Fps is (10 sec: 2867.3, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1826816. Throughput: 0: 868.5. Samples: 456446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:21:25,001][02542] Avg episode reward: [(0, '16.232')]
[2023-05-29 03:21:25,010][23519] Saving new best policy, reward=16.232!
[2023-05-29 03:21:28,990][23532] Updated weights for policy 0, policy_version 450 (0.0016)
[2023-05-29 03:21:29,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1847296. Throughput: 0: 824.6. Samples: 461154. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:29,998][02542] Avg episode reward: [(0, '17.266')]
[2023-05-29 03:21:30,004][23519] Saving new best policy, reward=17.266!
[2023-05-29 03:21:34,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 1867776. Throughput: 0: 858.0. Samples: 467598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:34,999][02542] Avg episode reward: [(0, '17.662')]
[2023-05-29 03:21:35,007][23519] Saving new best policy, reward=17.662!
[2023-05-29 03:21:39,016][23532] Updated weights for policy 0, policy_version 460 (0.0018)
[2023-05-29 03:21:39,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1884160. Throughput: 0: 881.6. Samples: 470720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:40,000][02542] Avg episode reward: [(0, '17.831')]
[2023-05-29 03:21:40,004][23519] Saving new best policy, reward=17.831!
[2023-05-29 03:21:44,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 1896448. Throughput: 0: 871.3. Samples: 474702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:21:45,002][02542] Avg episode reward: [(0, '15.827')]
[2023-05-29 03:21:49,997][02542] Fps is (10 sec: 2867.0, 60 sec: 3345.0, 300 sec: 3401.8). Total num frames: 1912832. Throughput: 0: 846.3. Samples: 479052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:21:50,005][02542] Avg episode reward: [(0, '15.502')]
[2023-05-29 03:21:53,638][23532] Updated weights for policy 0, policy_version 470 (0.0034)
[2023-05-29 03:21:54,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 1929216. Throughput: 0: 818.5. Samples: 481188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:21:54,999][02542] Avg episode reward: [(0, '14.550')]
[2023-05-29 03:21:55,014][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000471_1929216.pth...
[2023-05-29 03:21:55,148][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000272_1114112.pth
[2023-05-29 03:21:59,997][02542] Fps is (10 sec: 3686.6, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1949696. Throughput: 0: 823.5. Samples: 487246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:22:00,003][02542] Avg episode reward: [(0, '13.619')]
[2023-05-29 03:22:03,103][23532] Updated weights for policy 0, policy_version 480 (0.0029)
[2023-05-29 03:22:04,997][02542] Fps is (10 sec: 4095.8, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1970176. Throughput: 0: 880.5. Samples: 493926. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:22:05,003][02542] Avg episode reward: [(0, '14.243')]
[2023-05-29 03:22:09,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1986560. Throughput: 0: 879.7. Samples: 496032. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:22:09,999][02542] Avg episode reward: [(0, '14.555')]
[2023-05-29 03:22:14,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1998848. Throughput: 0: 868.5. Samples: 500236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:22:15,000][02542] Avg episode reward: [(0, '14.507')]
[2023-05-29 03:22:16,653][23532] Updated weights for policy 0, policy_version 490 (0.0022)
[2023-05-29 03:22:19,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 2015232. Throughput: 0: 822.6. Samples: 504616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:22:20,003][02542] Avg episode reward: [(0, '14.715')]
[2023-05-29 03:22:24,997][02542] Fps is (10 sec: 3686.6, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2035712. Throughput: 0: 817.8. Samples: 507522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:22:25,000][02542] Avg episode reward: [(0, '16.045')]
[2023-05-29 03:22:27,161][23532] Updated weights for policy 0, policy_version 500 (0.0025)
[2023-05-29 03:22:29,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2060288. Throughput: 0: 883.8. Samples: 514472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:22:30,000][02542] Avg episode reward: [(0, '18.150')]
[2023-05-29 03:22:30,005][23519] Saving new best policy, reward=18.150!
[2023-05-29 03:22:35,002][02542] Fps is (10 sec: 4094.0, 60 sec: 3481.3, 300 sec: 3429.5). Total num frames: 2076672. Throughput: 0: 899.9. Samples: 519552. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:22:35,008][02542] Avg episode reward: [(0, '17.778')]
[2023-05-29 03:22:39,228][23532] Updated weights for policy 0, policy_version 510 (0.0026)
[2023-05-29 03:22:39,997][02542] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 2088960. Throughput: 0: 899.5. Samples: 521664. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:22:40,004][02542] Avg episode reward: [(0, '17.995')]
[2023-05-29 03:22:45,001][02542] Fps is (10 sec: 2867.4, 60 sec: 3481.4, 300 sec: 3415.6). Total num frames: 2105344. Throughput: 0: 858.8. Samples: 525894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:22:45,008][02542] Avg episode reward: [(0, '18.343')]
[2023-05-29 03:22:45,027][23519] Saving new best policy, reward=18.343!
[2023-05-29 03:22:49,997][02542] Fps is (10 sec: 3277.0, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 2121728. Throughput: 0: 831.4. Samples: 531340. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:22:50,001][02542] Avg episode reward: [(0, '17.606')]
[2023-05-29 03:22:51,316][23532] Updated weights for policy 0, policy_version 520 (0.0030)
[2023-05-29 03:22:54,997][02542] Fps is (10 sec: 4097.8, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 2146304. Throughput: 0: 860.0. Samples: 534734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:22:54,999][02542] Avg episode reward: [(0, '17.192')]
[2023-05-29 03:22:59,998][02542] Fps is (10 sec: 4095.4, 60 sec: 3549.8, 300 sec: 3429.5). Total num frames: 2162688. Throughput: 0: 900.4. Samples: 540754. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:23:00,001][02542] Avg episode reward: [(0, '18.926')]
[2023-05-29 03:23:00,004][23519] Saving new best policy, reward=18.926!
[2023-05-29 03:23:02,504][23532] Updated weights for policy 0, policy_version 530 (0.0016)
[2023-05-29 03:23:04,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3401.8). Total num frames: 2174976. Throughput: 0: 896.0. Samples: 544938. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:23:04,999][02542] Avg episode reward: [(0, '19.727')]
[2023-05-29 03:23:05,009][23519] Saving new best policy, reward=19.727!
[2023-05-29 03:23:09,997][02542] Fps is (10 sec: 2867.6, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 2191360. Throughput: 0: 879.1. Samples: 547082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:23:10,001][02542] Avg episode reward: [(0, '19.862')]
[2023-05-29 03:23:10,003][23519] Saving new best policy, reward=19.862!
[2023-05-29 03:23:14,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2207744. Throughput: 0: 821.6. Samples: 551444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:23:14,999][02542] Avg episode reward: [(0, '20.085')]
[2023-05-29 03:23:15,010][23519] Saving new best policy, reward=20.085!
[2023-05-29 03:23:15,728][23532] Updated weights for policy 0, policy_version 540 (0.0045)
[2023-05-29 03:23:19,998][02542] Fps is (10 sec: 3685.9, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 2228224. Throughput: 0: 857.3. Samples: 558128. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:23:20,000][02542] Avg episode reward: [(0, '20.232')]
[2023-05-29 03:23:20,011][23519] Saving new best policy, reward=20.232!
[2023-05-29 03:23:24,999][02542] Fps is (10 sec: 4094.9, 60 sec: 3549.7, 300 sec: 3457.3). Total num frames: 2248704. Throughput: 0: 883.5. Samples: 561422. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:23:25,009][02542] Avg episode reward: [(0, '19.997')]
[2023-05-29 03:23:25,757][23532] Updated weights for policy 0, policy_version 550 (0.0012)
[2023-05-29 03:23:29,999][02542] Fps is (10 sec: 3686.0, 60 sec: 3413.2, 300 sec: 3443.4). Total num frames: 2265088. Throughput: 0: 889.0. Samples: 565896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:23:30,002][02542] Avg episode reward: [(0, '18.718')]
[2023-05-29 03:23:34,997][02542] Fps is (10 sec: 2867.9, 60 sec: 3345.3, 300 sec: 3429.5). Total num frames: 2277376. Throughput: 0: 863.1. Samples: 570180. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:23:35,003][02542] Avg episode reward: [(0, '18.114')]
[2023-05-29 03:23:39,827][23532] Updated weights for policy 0, policy_version 560 (0.0045)
[2023-05-29 03:23:39,997][02542] Fps is (10 sec: 2867.8, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2293760. Throughput: 0: 834.7. Samples: 572294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:23:40,002][02542] Avg episode reward: [(0, '18.774')]
[2023-05-29 03:23:44,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.9, 300 sec: 3457.3). Total num frames: 2314240. Throughput: 0: 834.4. Samples: 578302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:23:45,003][02542] Avg episode reward: [(0, '19.503')]
[2023-05-29 03:23:48,966][23532] Updated weights for policy 0, policy_version 570 (0.0017)
[2023-05-29 03:23:49,997][02542] Fps is (10 sec: 4505.8, 60 sec: 3618.1, 300 sec: 3471.2). Total num frames: 2338816. Throughput: 0: 893.6. Samples: 585150. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:23:50,002][02542] Avg episode reward: [(0, '18.700')]
[2023-05-29 03:23:54,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3443.4). Total num frames: 2351104. Throughput: 0: 893.2. Samples: 587278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:23:55,005][02542] Avg episode reward: [(0, '18.912')]
[2023-05-29 03:23:55,018][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000574_2351104.pth...
[2023-05-29 03:23:55,160][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000373_1527808.pth
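
Note the save/remove pair above: Sample Factory rotates checkpoints, keeping only the newest few (two, judging by the pairs in this log) so the train_dir does not grow without bound. A minimal illustrative sketch of that keep-last-N pattern, not the library's actual implementation (the directory layout and N are assumptions):

from pathlib import Path

def rotate_checkpoints(ckpt_dir: str, keep: int = 2) -> None:
    # Zero-padded names like checkpoint_000000574_2351104.pth sort by step
    # lexicographically, so a plain sort puts the newest files last.
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep]:
        old.unlink()  # delete everything except the newest `keep` checkpoints
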
[2023-05-29 03:23:59,997][02542] Fps is (10 sec: 2457.6, 60 sec: 3345.2, 300 sec: 3429.5). Total num frames: 2363392. Throughput: 0: 889.2. Samples: 591456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:24:00,005][02542] Avg episode reward: [(0, '20.514')]
[2023-05-29 03:24:00,009][23519] Saving new best policy, reward=20.514!
[2023-05-29 03:24:03,009][23532] Updated weights for policy 0, policy_version 580 (0.0018)
[2023-05-29 03:24:04,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2379776. Throughput: 0: 835.9. Samples: 595742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:24:05,004][02542] Avg episode reward: [(0, '20.755')]
[2023-05-29 03:24:05,018][23519] Saving new best policy, reward=20.755!
[2023-05-29 03:24:09,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2400256. Throughput: 0: 826.3. Samples: 598604. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:24:10,000][02542] Avg episode reward: [(0, '21.949')]
[2023-05-29 03:24:10,003][23519] Saving new best policy, reward=21.949!
[2023-05-29 03:24:13,373][23532] Updated weights for policy 0, policy_version 590 (0.0013)
[2023-05-29 03:24:14,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2420736. Throughput: 0: 873.8. Samples: 605214. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:24:14,999][02542] Avg episode reward: [(0, '21.415')]
[2023-05-29 03:24:19,998][02542] Fps is (10 sec: 3685.7, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2437120. Throughput: 0: 891.6. Samples: 610302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:24:20,005][02542] Avg episode reward: [(0, '22.610')]
[2023-05-29 03:24:20,007][23519] Saving new best policy, reward=22.610!
[2023-05-29 03:24:24,999][02542] Fps is (10 sec: 3275.9, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2453504. Throughput: 0: 889.6. Samples: 612328. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:24:25,005][02542] Avg episode reward: [(0, '22.930')]
[2023-05-29 03:24:25,022][23519] Saving new best policy, reward=22.930!
[2023-05-29 03:24:26,635][23532] Updated weights for policy 0, policy_version 600 (0.0030)
[2023-05-29 03:24:29,997][02542] Fps is (10 sec: 2867.7, 60 sec: 3345.2, 300 sec: 3429.5). Total num frames: 2465792. Throughput: 0: 852.2. Samples: 616650. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:24:30,002][02542] Avg episode reward: [(0, '22.886')]
[2023-05-29 03:24:34,997][02542] Fps is (10 sec: 3277.7, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2486272. Throughput: 0: 819.2. Samples: 622016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:24:35,002][02542] Avg episode reward: [(0, '21.484')]
[2023-05-29 03:24:37,652][23532] Updated weights for policy 0, policy_version 610 (0.0017)
[2023-05-29 03:24:39,997][02542] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2506752. Throughput: 0: 847.6. Samples: 625418. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-05-29 03:24:40,000][02542] Avg episode reward: [(0, '20.864')]
[2023-05-29 03:24:44,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2527232. Throughput: 0: 891.2. Samples: 631562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:24:45,000][02542] Avg episode reward: [(0, '20.174')]
[2023-05-29 03:24:49,256][23532] Updated weights for policy 0, policy_version 620 (0.0030)
[2023-05-29 03:24:49,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3429.5). Total num frames: 2539520. Throughput: 0: 890.3. Samples: 635806. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:24:49,999][02542] Avg episode reward: [(0, '20.082')]
[2023-05-29 03:24:55,000][02542] Fps is (10 sec: 2866.1, 60 sec: 3413.1, 300 sec: 3443.4). Total num frames: 2555904. Throughput: 0: 873.6. Samples: 637918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:24:55,003][02542] Avg episode reward: [(0, '19.933')]
[2023-05-29 03:24:59,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2572288. Throughput: 0: 824.9. Samples: 642336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:25:00,003][02542] Avg episode reward: [(0, '20.416')]
[2023-05-29 03:25:01,706][23532] Updated weights for policy 0, policy_version 630 (0.0028)
[2023-05-29 03:25:04,997][02542] Fps is (10 sec: 3687.8, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2592768. Throughput: 0: 864.8. Samples: 649218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:25:04,999][02542] Avg episode reward: [(0, '20.486')]
[2023-05-29 03:25:09,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2613248. Throughput: 0: 895.7. Samples: 652630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:25:09,999][02542] Avg episode reward: [(0, '20.520')]
[2023-05-29 03:25:11,971][23532] Updated weights for policy 0, policy_version 640 (0.0018)
[2023-05-29 03:25:14,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2629632. Throughput: 0: 901.6. Samples: 657224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:25:14,999][02542] Avg episode reward: [(0, '21.030')]
[2023-05-29 03:25:19,997][02542] Fps is (10 sec: 2867.1, 60 sec: 3413.4, 300 sec: 3443.4). Total num frames: 2641920. Throughput: 0: 876.3. Samples: 661450. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:25:20,000][02542] Avg episode reward: [(0, '21.703')]
[2023-05-29 03:25:24,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.5, 300 sec: 3457.3). Total num frames: 2658304. Throughput: 0: 847.8. Samples: 663568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:25:25,002][02542] Avg episode reward: [(0, '21.332')]
[2023-05-29 03:25:26,081][23532] Updated weights for policy 0, policy_version 650 (0.0033)
[2023-05-29 03:25:29,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2678784. Throughput: 0: 839.8. Samples: 669352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:25:30,000][02542] Avg episode reward: [(0, '19.888')]
[2023-05-29 03:25:34,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2699264. Throughput: 0: 896.6. Samples: 676154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:25:35,004][02542] Avg episode reward: [(0, '19.975')]
[2023-05-29 03:25:35,017][23532] Updated weights for policy 0, policy_version 660 (0.0012)
[2023-05-29 03:25:39,998][02542] Fps is (10 sec: 3686.1, 60 sec: 3481.5, 300 sec: 3443.4). Total num frames: 2715648. Throughput: 0: 901.7. Samples: 678494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:25:40,002][02542] Avg episode reward: [(0, '19.597')]
[2023-05-29 03:25:44,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2732032. Throughput: 0: 897.7. Samples: 682732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:25:45,004][02542] Avg episode reward: [(0, '20.826')]
[2023-05-29 03:25:49,357][23532] Updated weights for policy 0, policy_version 670 (0.0021)
[2023-05-29 03:25:49,997][02542] Fps is (10 sec: 2867.4, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2744320. Throughput: 0: 838.8. Samples: 686964. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:25:49,999][02542] Avg episode reward: [(0, '21.695')]
[2023-05-29 03:25:54,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3471.2). Total num frames: 2764800. Throughput: 0: 819.5. Samples: 689508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:25:55,000][02542] Avg episode reward: [(0, '22.328')]
[2023-05-29 03:25:55,013][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000675_2764800.pth...
[2023-05-29 03:25:55,146][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000471_1929216.pth
[2023-05-29 03:25:59,295][23532] Updated weights for policy 0, policy_version 680 (0.0023)
[2023-05-29 03:25:59,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2785280. Throughput: 0: 868.0. Samples: 696282. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:25:59,999][02542] Avg episode reward: [(0, '22.609')]
[2023-05-29 03:26:04,997][02542] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2801664. Throughput: 0: 895.7. Samples: 701758. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:26:05,000][02542] Avg episode reward: [(0, '21.247')]
[2023-05-29 03:26:09,997][02542] Fps is (10 sec: 3276.6, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2818048. Throughput: 0: 896.2. Samples: 703896. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:26:10,004][02542] Avg episode reward: [(0, '23.067')]
[2023-05-29 03:26:10,006][23519] Saving new best policy, reward=23.067!
[2023-05-29 03:26:12,471][23532] Updated weights for policy 0, policy_version 690 (0.0029)
[2023-05-29 03:26:14,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 2830336. Throughput: 0: 859.0. Samples: 708006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:26:15,003][02542] Avg episode reward: [(0, '20.647')]
[2023-05-29 03:26:19,997][02542] Fps is (10 sec: 3277.0, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2850816. Throughput: 0: 819.6. Samples: 713034. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:26:19,999][02542] Avg episode reward: [(0, '20.408')]
[2023-05-29 03:26:23,553][23532] Updated weights for policy 0, policy_version 700 (0.0024)
[2023-05-29 03:26:24,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.2). Total num frames: 2871296. Throughput: 0: 841.4. Samples: 716358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:26:24,999][02542] Avg episode reward: [(0, '19.393')]
[2023-05-29 03:26:30,001][02542] Fps is (10 sec: 4094.0, 60 sec: 3549.6, 300 sec: 3471.1). Total num frames: 2891776. Throughput: 0: 895.5. Samples: 723034. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:26:30,008][02542] Avg episode reward: [(0, '20.442')]
[2023-05-29 03:26:34,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 2904064. Throughput: 0: 896.4. Samples: 727300. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:26:34,999][02542] Avg episode reward: [(0, '20.159')]
[2023-05-29 03:26:35,327][23532] Updated weights for policy 0, policy_version 710 (0.0024)
[2023-05-29 03:26:39,997][02542] Fps is (10 sec: 2868.6, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 2920448. Throughput: 0: 886.2. Samples: 729386. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:26:40,005][02542] Avg episode reward: [(0, '20.451')]
[2023-05-29 03:26:45,000][02542] Fps is (10 sec: 2866.4, 60 sec: 3344.9, 300 sec: 3457.3). Total num frames: 2932736. Throughput: 0: 829.9. Samples: 733630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:26:45,007][02542] Avg episode reward: [(0, '20.195')]
[2023-05-29 03:26:47,856][23532] Updated weights for policy 0, policy_version 720 (0.0039)
[2023-05-29 03:26:49,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2957312. Throughput: 0: 852.0. Samples: 740098. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:26:50,004][02542] Avg episode reward: [(0, '22.100')]
[2023-05-29 03:26:54,999][02542] Fps is (10 sec: 4505.7, 60 sec: 3549.7, 300 sec: 3485.0). Total num frames: 2977792. Throughput: 0: 880.6. Samples: 743526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:26:55,002][02542] Avg episode reward: [(0, '22.242')]
[2023-05-29 03:26:58,331][23532] Updated weights for policy 0, policy_version 730 (0.0034)
[2023-05-29 03:26:59,997][02542] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2994176. Throughput: 0: 899.4. Samples: 748480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:00,002][02542] Avg episode reward: [(0, '22.398')]
[2023-05-29 03:27:04,997][02542] Fps is (10 sec: 2867.9, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3006464. Throughput: 0: 879.1. Samples: 752592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:27:05,002][02542] Avg episode reward: [(0, '22.805')]
[2023-05-29 03:27:09,997][02542] Fps is (10 sec: 2867.3, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 3022848. Throughput: 0: 852.8. Samples: 754734. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:10,003][02542] Avg episode reward: [(0, '22.762')]
[2023-05-29 03:27:12,054][23532] Updated weights for policy 0, policy_version 740 (0.0034)
[2023-05-29 03:27:14,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3043328. Throughput: 0: 826.6. Samples: 760226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:27:14,999][02542] Avg episode reward: [(0, '22.491')]
[2023-05-29 03:27:19,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3063808. Throughput: 0: 885.0. Samples: 767124. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:20,003][02542] Avg episode reward: [(0, '21.293')]
[2023-05-29 03:27:21,118][23532] Updated weights for policy 0, policy_version 750 (0.0014)
[2023-05-29 03:27:24,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 3080192. Throughput: 0: 896.0. Samples: 769708. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:27:25,004][02542] Avg episode reward: [(0, '22.394')]
[2023-05-29 03:27:29,997][02542] Fps is (10 sec: 2867.1, 60 sec: 3345.3, 300 sec: 3443.5). Total num frames: 3092480. Throughput: 0: 895.5. Samples: 773926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:30,000][02542] Avg episode reward: [(0, '22.545')]
[2023-05-29 03:27:34,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3108864. Throughput: 0: 850.0. Samples: 778350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:35,001][02542] Avg episode reward: [(0, '22.485')]
[2023-05-29 03:27:35,476][23532] Updated weights for policy 0, policy_version 760 (0.0020)
[2023-05-29 03:27:39,997][02542] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3129344. Throughput: 0: 827.0. Samples: 780738. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:27:39,999][02542] Avg episode reward: [(0, '22.286')]
[2023-05-29 03:27:44,991][23532] Updated weights for policy 0, policy_version 770 (0.0015)
[2023-05-29 03:27:44,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3686.6, 300 sec: 3499.0). Total num frames: 3153920. Throughput: 0: 868.1. Samples: 787542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:27:44,999][02542] Avg episode reward: [(0, '22.798')]
[2023-05-29 03:27:49,998][02542] Fps is (10 sec: 4095.3, 60 sec: 3549.8, 300 sec: 3471.2). Total num frames: 3170304. Throughput: 0: 905.2. Samples: 793328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:27:50,005][02542] Avg episode reward: [(0, '21.735')]
[2023-05-29 03:27:54,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.5, 300 sec: 3457.3). Total num frames: 3182592. Throughput: 0: 904.7. Samples: 795446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:27:55,005][02542] Avg episode reward: [(0, '22.040')]
[2023-05-29 03:27:55,024][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth...
[2023-05-29 03:27:55,168][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000574_2351104.pth
[2023-05-29 03:27:58,187][23532] Updated weights for policy 0, policy_version 780 (0.0018)
[2023-05-29 03:27:59,997][02542] Fps is (10 sec: 2867.7, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 3198976. Throughput: 0: 877.7. Samples: 799722. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:28:00,006][02542] Avg episode reward: [(0, '22.435')]
[2023-05-29 03:28:04,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3215360. Throughput: 0: 827.2. Samples: 804350. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:28:04,999][02542] Avg episode reward: [(0, '22.665')]
[2023-05-29 03:28:09,410][23532] Updated weights for policy 0, policy_version 790 (0.0021)
[2023-05-29 03:28:09,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3235840. Throughput: 0: 845.1. Samples: 807738. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:28:09,999][02542] Avg episode reward: [(0, '22.434')]
[2023-05-29 03:28:14,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3256320. Throughput: 0: 901.5. Samples: 814492. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:28:15,008][02542] Avg episode reward: [(0, '22.436')]
[2023-05-29 03:28:19,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3272704. Throughput: 0: 898.5. Samples: 818784. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:28:20,009][02542] Avg episode reward: [(0, '22.330')]
[2023-05-29 03:28:20,966][23532] Updated weights for policy 0, policy_version 800 (0.0035)
[2023-05-29 03:28:24,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3284992. Throughput: 0: 892.5. Samples: 820902. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-05-29 03:28:24,999][02542] Avg episode reward: [(0, '22.042')]
[2023-05-29 03:28:29,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3301376. Throughput: 0: 837.2. Samples: 825214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:28:30,002][02542] Avg episode reward: [(0, '21.211')]
[2023-05-29 03:28:33,607][23532] Updated weights for policy 0, policy_version 810 (0.0019)
[2023-05-29 03:28:34,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3321856. Throughput: 0: 844.6. Samples: 831334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:28:34,999][02542] Avg episode reward: [(0, '21.826')]
[2023-05-29 03:28:39,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3499.0). Total num frames: 3346432. Throughput: 0: 873.8. Samples: 834766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-05-29 03:28:40,001][02542] Avg episode reward: [(0, '22.285')]
[2023-05-29 03:28:44,095][23532] Updated weights for policy 0, policy_version 820 (0.0014)
[2023-05-29 03:28:44,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3358720. Throughput: 0: 896.3. Samples: 840054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:28:45,002][02542] Avg episode reward: [(0, '23.541')]
[2023-05-29 03:28:45,017][23519] Saving new best policy, reward=23.541!
[2023-05-29 03:28:50,000][02542] Fps is (10 sec: 2866.4, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3375104. Throughput: 0: 888.1. Samples: 844318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:28:50,005][02542] Avg episode reward: [(0, '23.447')]
[2023-05-29 03:28:54,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3387392. Throughput: 0: 860.4. Samples: 846458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:28:54,999][02542] Avg episode reward: [(0, '24.029')]
[2023-05-29 03:28:55,013][23519] Saving new best policy, reward=24.029!
[2023-05-29 03:28:57,923][23532] Updated weights for policy 0, policy_version 830 (0.0017)
[2023-05-29 03:28:59,997][02542] Fps is (10 sec: 3277.8, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3407872. Throughput: 0: 821.7. Samples: 851470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:29:00,004][02542] Avg episode reward: [(0, '24.903')]
[2023-05-29 03:29:00,008][23519] Saving new best policy, reward=24.903!
[2023-05-29 03:29:04,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3428352. Throughput: 0: 876.0. Samples: 858206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:29:04,999][02542] Avg episode reward: [(0, '25.346')]
[2023-05-29 03:29:05,012][23519] Saving new best policy, reward=25.346!
[2023-05-29 03:29:07,405][23532] Updated weights for policy 0, policy_version 840 (0.0038)
[2023-05-29 03:29:09,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3444736. Throughput: 0: 890.0. Samples: 860950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:29:10,003][02542] Avg episode reward: [(0, '25.055')]
[2023-05-29 03:29:14,997][02542] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3461120. Throughput: 0: 887.4. Samples: 865146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:29:15,004][02542] Avg episode reward: [(0, '24.028')]
[2023-05-29 03:29:19,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3473408. Throughput: 0: 845.7. Samples: 869392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:29:19,999][02542] Avg episode reward: [(0, '24.452')]
[2023-05-29 03:29:21,980][23532] Updated weights for policy 0, policy_version 850 (0.0023)
[2023-05-29 03:29:24,998][02542] Fps is (10 sec: 3276.5, 60 sec: 3481.5, 300 sec: 3485.1). Total num frames: 3493888. Throughput: 0: 816.6. Samples: 871512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:29:25,003][02542] Avg episode reward: [(0, '24.866')]
[2023-05-29 03:29:29,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3514368. Throughput: 0: 845.8. Samples: 878116. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:29:30,002][02542] Avg episode reward: [(0, '22.847')]
[2023-05-29 03:29:31,377][23532] Updated weights for policy 0, policy_version 860 (0.0029)
[2023-05-29 03:29:34,997][02542] Fps is (10 sec: 3686.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3530752. Throughput: 0: 885.1. Samples: 884144. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:29:35,004][02542] Avg episode reward: [(0, '23.403')]
[2023-05-29 03:29:39,997][02542] Fps is (10 sec: 3276.7, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3547136. Throughput: 0: 884.7. Samples: 886270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:29:40,003][02542] Avg episode reward: [(0, '23.964')]
[2023-05-29 03:29:44,997][02542] Fps is (10 sec: 2867.0, 60 sec: 3345.0, 300 sec: 3457.3). Total num frames: 3559424. Throughput: 0: 868.9. Samples: 890572. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:29:45,008][02542] Avg episode reward: [(0, '24.599')]
[2023-05-29 03:29:45,122][23532] Updated weights for policy 0, policy_version 870 (0.0030)
[2023-05-29 03:29:49,997][02542] Fps is (10 sec: 2867.3, 60 sec: 3345.2, 300 sec: 3457.3). Total num frames: 3575808. Throughput: 0: 820.2. Samples: 895114. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:29:50,006][02542] Avg episode reward: [(0, '24.078')]
[2023-05-29 03:29:54,997][02542] Fps is (10 sec: 4096.2, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3600384. Throughput: 0: 835.8. Samples: 898562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:29:54,999][02542] Avg episode reward: [(0, '24.312')]
[2023-05-29 03:29:55,012][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000879_3600384.pth...
[2023-05-29 03:29:55,132][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000675_2764800.pth
[2023-05-29 03:29:55,696][23532] Updated weights for policy 0, policy_version 880 (0.0031)
[2023-05-29 03:29:59,997][02542] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3620864. Throughput: 0: 890.3. Samples: 905208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:00,000][02542] Avg episode reward: [(0, '24.838')]
[2023-05-29 03:30:04,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3633152. Throughput: 0: 897.4. Samples: 909776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:30:05,007][02542] Avg episode reward: [(0, '24.692')]
[2023-05-29 03:30:07,934][23532] Updated weights for policy 0, policy_version 890 (0.0012)
[2023-05-29 03:30:09,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3649536. Throughput: 0: 897.7. Samples: 911906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:10,002][02542] Avg episode reward: [(0, '24.319')]
[2023-05-29 03:30:14,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.4, 300 sec: 3471.2). Total num frames: 3665920. Throughput: 0: 846.9. Samples: 916226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:15,002][02542] Avg episode reward: [(0, '23.867')]
[2023-05-29 03:30:19,704][23532] Updated weights for policy 0, policy_version 900 (0.0045)
[2023-05-29 03:30:19,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3686400. Throughput: 0: 844.8. Samples: 922162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:19,999][02542] Avg episode reward: [(0, '24.495')]
[2023-05-29 03:30:24,997][02542] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3706880. Throughput: 0: 872.0. Samples: 925508. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-05-29 03:30:24,999][02542] Avg episode reward: [(0, '25.054')]
[2023-05-29 03:30:30,001][02542] Fps is (10 sec: 3684.6, 60 sec: 3481.3, 300 sec: 3471.1). Total num frames: 3723264. Throughput: 0: 895.6. Samples: 930876. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:30,004][02542] Avg episode reward: [(0, '25.174')]
[2023-05-29 03:30:31,013][23532] Updated weights for policy 0, policy_version 910 (0.0024)
[2023-05-29 03:30:34,997][02542] Fps is (10 sec: 2867.0, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3735552. Throughput: 0: 889.3. Samples: 935134. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:35,004][02542] Avg episode reward: [(0, '25.633')]
[2023-05-29 03:30:35,021][23519] Saving new best policy, reward=25.633!
[2023-05-29 03:30:39,997][02542] Fps is (10 sec: 2868.5, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3751936. Throughput: 0: 858.8. Samples: 937206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:40,003][02542] Avg episode reward: [(0, '26.056')]
[2023-05-29 03:30:40,005][23519] Saving new best policy, reward=26.056!
[2023-05-29 03:30:44,287][23532] Updated weights for policy 0, policy_version 920 (0.0039)
[2023-05-29 03:30:44,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3768320. Throughput: 0: 819.2. Samples: 942072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:45,000][02542] Avg episode reward: [(0, '26.673')]
[2023-05-29 03:30:45,010][23519] Saving new best policy, reward=26.673!
[2023-05-29 03:30:49,997][02542] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3485.1). Total num frames: 3792896. Throughput: 0: 866.0. Samples: 948748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:30:49,999][02542] Avg episode reward: [(0, '26.659')]
[2023-05-29 03:30:54,164][23532] Updated weights for policy 0, policy_version 930 (0.0014)
[2023-05-29 03:30:54,997][02542] Fps is (10 sec: 4096.2, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3809280. Throughput: 0: 888.3. Samples: 951878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:30:54,999][02542] Avg episode reward: [(0, '25.581')]
[2023-05-29 03:30:59,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 3821568. Throughput: 0: 885.0. Samples: 956050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-05-29 03:30:59,999][02542] Avg episode reward: [(0, '24.633')]
[2023-05-29 03:31:04,997][02542] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3837952. Throughput: 0: 846.9. Samples: 960274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:31:04,999][02542] Avg episode reward: [(0, '24.177')]
[2023-05-29 03:31:08,680][23532] Updated weights for policy 0, policy_version 940 (0.0035)
[2023-05-29 03:31:09,997][02542] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 3854336. Throughput: 0: 819.6. Samples: 962392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:31:10,005][02542] Avg episode reward: [(0, '24.127')]
[2023-05-29 03:31:14,998][02542] Fps is (10 sec: 3685.9, 60 sec: 3481.5, 300 sec: 3471.2). Total num frames: 3874816. Throughput: 0: 843.3. Samples: 968820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:31:15,006][02542] Avg episode reward: [(0, '23.967')]
[2023-05-29 03:31:17,901][23532] Updated weights for policy 0, policy_version 950 (0.0024)
[2023-05-29 03:31:19,997][02542] Fps is (10 sec: 4095.7, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3895296. Throughput: 0: 887.6. Samples: 975078. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-05-29 03:31:20,005][02542] Avg episode reward: [(0, '23.843')]
[2023-05-29 03:31:24,997][02542] Fps is (10 sec: 3686.9, 60 sec: 3413.3, 300 sec: 3457.4). Total num frames: 3911680. Throughput: 0: 889.9. Samples: 977252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-05-29 03:31:25,003][02542] Avg episode reward: [(0, '24.516')]
[2023-05-29 03:31:29,997][02542] Fps is (10 sec: 2867.4, 60 sec: 3345.3, 300 sec: 3457.3). Total num frames: 3923968. Throughput: 0: 875.8. Samples: 981482. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:31:30,004][02542] Avg episode reward: [(0, '24.229')]
[2023-05-29 03:31:31,909][23532] Updated weights for policy 0, policy_version 960 (0.0031)
[2023-05-29 03:31:34,998][02542] Fps is (10 sec: 2866.7, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 3940352. Throughput: 0: 823.7. Samples: 985816. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2023-05-29 03:31:35,000][02542] Avg episode reward: [(0, '25.362')]
[2023-05-29 03:31:39,997][02542] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 3960832. Throughput: 0: 824.8. Samples: 988996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:31:39,999][02542] Avg episode reward: [(0, '25.974')]
[2023-05-29 03:31:42,188][23532] Updated weights for policy 0, policy_version 970 (0.0019)
[2023-05-29 03:31:44,997][02542] Fps is (10 sec: 4506.4, 60 sec: 3618.2, 300 sec: 3485.1). Total num frames: 3985408. Throughput: 0: 881.3. Samples: 995708. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:31:45,004][02542] Avg episode reward: [(0, '26.018')]
[2023-05-29 03:31:49,999][02542] Fps is (10 sec: 3685.5, 60 sec: 3413.2, 300 sec: 3457.3). Total num frames: 3997696. Throughput: 0: 895.7. Samples: 1000584. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-05-29 03:31:50,006][02542] Avg episode reward: [(0, '27.159')]
[2023-05-29 03:31:50,011][23519] Saving new best policy, reward=27.159!
[2023-05-29 03:31:51,774][23519] Stopping Batcher_0...
[2023-05-29 03:31:51,775][23519] Loop batcher_evt_loop terminating...
[2023-05-29 03:31:51,775][02542] Component Batcher_0 stopped!
[2023-05-29 03:31:51,779][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-05-29 03:31:51,839][02542] Component RolloutWorker_w2 stopped!
[2023-05-29 03:31:51,842][23535] Stopping RolloutWorker_w2...
[2023-05-29 03:31:51,843][23535] Loop rollout_proc2_evt_loop terminating...
[2023-05-29 03:31:51,853][02542] Component RolloutWorker_w0 stopped!
[2023-05-29 03:31:51,857][23537] Stopping RolloutWorker_w0...
[2023-05-29 03:31:51,858][23537] Loop rollout_proc0_evt_loop terminating...
[2023-05-29 03:31:51,860][23538] Stopping RolloutWorker_w6...
[2023-05-29 03:31:51,861][23538] Loop rollout_proc6_evt_loop terminating...
[2023-05-29 03:31:51,862][02542] Component RolloutWorker_w6 stopped!
[2023-05-29 03:31:51,895][02542] Component RolloutWorker_w4 stopped!
[2023-05-29 03:31:51,898][23536] Stopping RolloutWorker_w4...
[2023-05-29 03:31:51,898][23536] Loop rollout_proc4_evt_loop terminating...
[2023-05-29 03:31:51,900][23532] Weights refcount: 2 0
[2023-05-29 03:31:51,915][02542] Component InferenceWorker_p0-w0 stopped!
[2023-05-29 03:31:51,917][23532] Stopping InferenceWorker_p0-w0...
[2023-05-29 03:31:51,917][23532] Loop inference_proc0-0_evt_loop terminating...
[2023-05-29 03:31:52,007][23533] Stopping RolloutWorker_w1...
[2023-05-29 03:31:52,008][23533] Loop rollout_proc1_evt_loop terminating...
[2023-05-29 03:31:52,009][02542] Component RolloutWorker_w1 stopped!
[2023-05-29 03:31:52,019][23534] Stopping RolloutWorker_w3...
[2023-05-29 03:31:52,019][02542] Component RolloutWorker_w3 stopped!
[2023-05-29 03:31:52,020][23534] Loop rollout_proc3_evt_loop terminating...
[2023-05-29 03:31:52,041][23519] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000777_3182592.pth
[2023-05-29 03:31:52,063][23539] Stopping RolloutWorker_w5...
[2023-05-29 03:31:52,063][23539] Loop rollout_proc5_evt_loop terminating...
[2023-05-29 03:31:52,063][02542] Component RolloutWorker_w5 stopped!
[2023-05-29 03:31:52,069][02542] Component RolloutWorker_w7 stopped!
[2023-05-29 03:31:52,069][23540] Stopping RolloutWorker_w7...
[2023-05-29 03:31:52,081][23540] Loop rollout_proc7_evt_loop terminating...
[2023-05-29 03:31:52,094][23519] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-05-29 03:31:52,318][02542] Component LearnerWorker_p0 stopped!
[2023-05-29 03:31:52,324][02542] Waiting for process learner_proc0 to stop...
[2023-05-29 03:31:52,325][23519] Stopping LearnerWorker_p0...
[2023-05-29 03:31:52,327][23519] Loop learner_proc0_evt_loop terminating...
[2023-05-29 03:31:54,541][02542] Waiting for process inference_proc0-0 to join...
[2023-05-29 03:31:54,547][02542] Waiting for process rollout_proc0 to join...
[2023-05-29 03:31:55,970][02542] Waiting for process rollout_proc1 to join...
[2023-05-29 03:31:56,249][02542] Waiting for process rollout_proc2 to join...
[2023-05-29 03:31:56,251][02542] Waiting for process rollout_proc3 to join...
[2023-05-29 03:31:56,253][02542] Waiting for process rollout_proc4 to join...
[2023-05-29 03:31:56,254][02542] Waiting for process rollout_proc5 to join...
[2023-05-29 03:31:56,257][02542] Waiting for process rollout_proc6 to join...
[2023-05-29 03:31:56,263][02542] Waiting for process rollout_proc7 to join...
[2023-05-29 03:31:56,266][02542] Batcher 0 profile tree view:
batching: 28.4645, releasing_batches: 0.0263
[2023-05-29 03:31:56,269][02542] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0001
  wait_policy_total: 515.2225
update_model: 8.4980
  weight_update: 0.0023
one_step: 0.0208
  handle_policy_step: 600.8581
    deserialize: 16.4195, stack: 3.3987, obs_to_device_normalize: 127.6553, forward: 301.8284, send_messages: 31.2079
    prepare_outputs: 90.6910
      to_cpu: 54.6687
[2023-05-29 03:31:56,271][02542] Learner 0 profile tree view:
misc: 0.0054, prepare_batch: 18.6080
train: 76.9040
  epoch_init: 0.0139, minibatch_init: 0.0068, losses_postprocess: 0.5416, kl_divergence: 0.6333, after_optimizer: 33.1495
  calculate_losses: 26.1113
    losses_init: 0.0035, forward_head: 1.7922, bptt_initial: 16.2909, tail: 1.3804, advantages_returns: 0.2870, losses: 3.3330
    bptt: 2.6872
      bptt_forward_core: 2.5986
  update: 15.7342
    clip: 1.6206
[2023-05-29 03:31:56,274][02542] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.4525, enqueue_policy_requests: 149.0933, env_step: 870.9972, overhead: 26.3125, complete_rollouts: 7.7765
save_policy_outputs: 23.7543
  split_output_tensors: 11.2826
[2023-05-29 03:31:56,275][02542] RolloutWorker_w7 profile tree view:
wait_for_trajectories: 0.4058, enqueue_policy_requests: 148.5290, env_step: 872.8197, overhead: 26.1424, complete_rollouts: 6.9500
save_policy_outputs: 24.2615
  split_output_tensors: 11.7812
[2023-05-29 03:31:56,277][02542] Loop Runner_EvtLoop terminating...
[2023-05-29 03:31:56,278][02542] Runner profile tree view:
main_loop: 1195.5290
[2023-05-29 03:31:56,280][02542] Collected {0: 4005888}, FPS: 3350.7
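
The two lines above close the training phase, and the numbers are self-consistent: 4,005,888 frames over a 1,195.53 s main loop gives 4005888 / 1195.529 ≈ 3350.7, exactly the reported average FPS. The profile trees explain where the time went: env_step dominates each rollout worker (~871 s), and the inference worker spends most of handle_policy_step in forward (~302 s). A run like this is typically launched as in the Hugging Face Deep RL course notebook for Sample Factory + ViZDoom; the sketch below reconstructs that notebook's parse_vizdoom_cfg helper and assumes its environment/model registration has already run, so treat it as an illustration rather than the exact invocation used here:

from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.train import run_rl
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults

def parse_vizdoom_cfg(argv=None, evaluation=False):
    # First pass parses generic Sample Factory args; Doom-specific args and
    # default overrides are added before the final parse.
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    return parse_full_cfg(parser, argv)

env = "doom_health_gathering_supreme"  # matches the hf_repository name later in this log
cfg = parse_vizdoom_cfg(argv=[
    f"--env={env}",
    "--num_workers=8",                # the log shows rollout workers 0-7
    "--num_envs_per_worker=4",        # notebook default; an assumption here
    "--train_for_env_steps=4000000",  # consistent with the ~4.0M frames collected
])
status = run_rl(cfg)
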
[2023-05-29 03:31:56,338][02542] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-05-29 03:31:56,340][02542] Overriding arg 'num_workers' with value 1 passed from command line
[2023-05-29 03:31:56,342][02542] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-05-29 03:31:56,345][02542] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-05-29 03:31:56,347][02542] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-05-29 03:31:56,348][02542] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-05-29 03:31:56,351][02542] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2023-05-29 03:31:56,352][02542] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-05-29 03:31:56,353][02542] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2023-05-29 03:31:56,356][02542] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2023-05-29 03:31:56,357][02542] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-05-29 03:31:56,358][02542] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-05-29 03:31:56,360][02542] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-05-29 03:31:56,361][02542] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-05-29 03:31:56,363][02542] Using frameskip 1 and render_action_repeat=4 for evaluation
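
The overrides above configure an evaluation ("enjoy") pass: a single worker replays the trained policy for up to 10 episodes with video capture and no on-screen rendering. In the per-episode lines that follow, "Avg episode reward" appears to include Sample Factory's reward shaping/scaling, while "true rewards" / "avg true_objective" is the raw environment objective. A minimal sketch of the call, reusing the parse_vizdoom_cfg helper sketched after the training summary:

from sample_factory.enjoy import enjoy

cfg = parse_vizdoom_cfg(argv=[
    f"--env={env}",
    "--num_workers=1",
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
], evaluation=True)  # evaluation=True enables the enjoy-only arguments logged above
status = enjoy(cfg)  # loads the newest checkpoint and writes replay.mp4
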
[2023-05-29 03:31:56,406][02542] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-05-29 03:31:56,410][02542] RunningMeanStd input shape: (3, 72, 128)
[2023-05-29 03:31:56,413][02542] RunningMeanStd input shape: (1,)
[2023-05-29 03:31:56,434][02542] ConvEncoder: input_channels=3
[2023-05-29 03:31:56,650][02542] Conv encoder output size: 512
[2023-05-29 03:31:56,653][02542] Policy head output size: 512
[2023-05-29 03:31:59,958][02542] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-05-29 03:32:01,126][02542] Num frames 100...
[2023-05-29 03:32:01,252][02542] Num frames 200...
[2023-05-29 03:32:01,373][02542] Num frames 300...
[2023-05-29 03:32:01,498][02542] Num frames 400...
[2023-05-29 03:32:01,574][02542] Avg episode rewards: #0: 6.160, true rewards: #0: 4.160
[2023-05-29 03:32:01,576][02542] Avg episode reward: 6.160, avg true_objective: 4.160
[2023-05-29 03:32:01,680][02542] Num frames 500...
[2023-05-29 03:32:01,810][02542] Num frames 600...
[2023-05-29 03:32:01,939][02542] Num frames 700...
[2023-05-29 03:32:02,063][02542] Num frames 800...
[2023-05-29 03:32:02,195][02542] Num frames 900...
[2023-05-29 03:32:02,324][02542] Num frames 1000...
[2023-05-29 03:32:02,454][02542] Num frames 1100...
[2023-05-29 03:32:02,575][02542] Num frames 1200...
[2023-05-29 03:32:02,694][02542] Avg episode rewards: #0: 11.240, true rewards: #0: 6.240
[2023-05-29 03:32:02,696][02542] Avg episode reward: 11.240, avg true_objective: 6.240
[2023-05-29 03:32:02,763][02542] Num frames 1300...
[2023-05-29 03:32:02,888][02542] Num frames 1400...
[2023-05-29 03:32:03,017][02542] Num frames 1500...
[2023-05-29 03:32:03,145][02542] Num frames 1600...
[2023-05-29 03:32:03,271][02542] Num frames 1700...
[2023-05-29 03:32:03,401][02542] Num frames 1800...
[2023-05-29 03:32:03,525][02542] Num frames 1900...
[2023-05-29 03:32:03,646][02542] Num frames 2000...
[2023-05-29 03:32:03,768][02542] Num frames 2100...
[2023-05-29 03:32:03,890][02542] Num frames 2200...
[2023-05-29 03:32:04,012][02542] Num frames 2300...
[2023-05-29 03:32:04,144][02542] Num frames 2400...
[2023-05-29 03:32:04,272][02542] Num frames 2500...
[2023-05-29 03:32:04,394][02542] Num frames 2600...
[2023-05-29 03:32:04,517][02542] Num frames 2700...
[2023-05-29 03:32:04,640][02542] Num frames 2800...
[2023-05-29 03:32:04,763][02542] Num frames 2900...
[2023-05-29 03:32:04,880][02542] Num frames 3000...
[2023-05-29 03:32:05,010][02542] Num frames 3100...
[2023-05-29 03:32:05,071][02542] Avg episode rewards: #0: 22.680, true rewards: #0: 10.347
[2023-05-29 03:32:05,073][02542] Avg episode reward: 22.680, avg true_objective: 10.347
[2023-05-29 03:32:05,194][02542] Num frames 3200...
[2023-05-29 03:32:05,317][02542] Num frames 3300...
[2023-05-29 03:32:05,436][02542] Num frames 3400...
[2023-05-29 03:32:05,557][02542] Num frames 3500...
[2023-05-29 03:32:05,683][02542] Num frames 3600...
[2023-05-29 03:32:05,810][02542] Num frames 3700...
[2023-05-29 03:32:05,935][02542] Num frames 3800...
[2023-05-29 03:32:06,057][02542] Num frames 3900...
[2023-05-29 03:32:06,191][02542] Num frames 4000...
[2023-05-29 03:32:06,323][02542] Num frames 4100...
[2023-05-29 03:32:06,442][02542] Num frames 4200...
[2023-05-29 03:32:06,565][02542] Num frames 4300...
[2023-05-29 03:32:06,691][02542] Num frames 4400...
[2023-05-29 03:32:06,825][02542] Num frames 4500...
[2023-05-29 03:32:06,955][02542] Num frames 4600...
[2023-05-29 03:32:07,076][02542] Num frames 4700...
[2023-05-29 03:32:07,207][02542] Num frames 4800...
[2023-05-29 03:32:07,339][02542] Num frames 4900...
[2023-05-29 03:32:07,465][02542] Num frames 5000...
[2023-05-29 03:32:07,589][02542] Num frames 5100...
[2023-05-29 03:32:07,716][02542] Num frames 5200...
[2023-05-29 03:32:07,778][02542] Avg episode rewards: #0: 31.510, true rewards: #0: 13.010
[2023-05-29 03:32:07,780][02542] Avg episode reward: 31.510, avg true_objective: 13.010
[2023-05-29 03:32:07,907][02542] Num frames 5300...
[2023-05-29 03:32:08,034][02542] Num frames 5400...
[2023-05-29 03:32:08,172][02542] Num frames 5500...
[2023-05-29 03:32:08,304][02542] Num frames 5600...
[2023-05-29 03:32:08,429][02542] Num frames 5700...
[2023-05-29 03:32:08,553][02542] Num frames 5800...
[2023-05-29 03:32:08,684][02542] Num frames 5900...
[2023-05-29 03:32:08,803][02542] Num frames 6000...
[2023-05-29 03:32:08,926][02542] Num frames 6100...
[2023-05-29 03:32:09,054][02542] Num frames 6200...
[2023-05-29 03:32:09,119][02542] Avg episode rewards: #0: 30.210, true rewards: #0: 12.410
[2023-05-29 03:32:09,121][02542] Avg episode reward: 30.210, avg true_objective: 12.410
[2023-05-29 03:32:09,250][02542] Num frames 6300...
[2023-05-29 03:32:09,384][02542] Num frames 6400...
[2023-05-29 03:32:09,507][02542] Num frames 6500...
[2023-05-29 03:32:09,647][02542] Num frames 6600...
[2023-05-29 03:32:09,812][02542] Num frames 6700...
[2023-05-29 03:32:09,988][02542] Num frames 6800...
[2023-05-29 03:32:10,152][02542] Num frames 6900...
[2023-05-29 03:32:10,341][02542] Num frames 7000...
[2023-05-29 03:32:10,532][02542] Num frames 7100...
[2023-05-29 03:32:10,704][02542] Num frames 7200...
[2023-05-29 03:32:10,885][02542] Num frames 7300...
[2023-05-29 03:32:11,068][02542] Num frames 7400...
[2023-05-29 03:32:11,242][02542] Num frames 7500...
[2023-05-29 03:32:11,431][02542] Num frames 7600...
[2023-05-29 03:32:11,600][02542] Num frames 7700...
[2023-05-29 03:32:11,678][02542] Avg episode rewards: #0: 31.681, true rewards: #0: 12.848
[2023-05-29 03:32:11,681][02542] Avg episode reward: 31.681, avg true_objective: 12.848
[2023-05-29 03:32:11,833][02542] Num frames 7800...
[2023-05-29 03:32:12,011][02542] Num frames 7900...
[2023-05-29 03:32:12,181][02542] Num frames 8000...
[2023-05-29 03:32:12,360][02542] Num frames 8100...
[2023-05-29 03:32:12,533][02542] Num frames 8200...
[2023-05-29 03:32:12,714][02542] Num frames 8300...
[2023-05-29 03:32:12,887][02542] Num frames 8400...
[2023-05-29 03:32:13,059][02542] Num frames 8500...
[2023-05-29 03:32:13,249][02542] Num frames 8600...
[2023-05-29 03:32:13,433][02542] Num frames 8700...
[2023-05-29 03:32:13,622][02542] Avg episode rewards: #0: 30.093, true rewards: #0: 12.521
[2023-05-29 03:32:13,625][02542] Avg episode reward: 30.093, avg true_objective: 12.521
[2023-05-29 03:32:13,699][02542] Num frames 8800...
[2023-05-29 03:32:13,884][02542] Num frames 8900...
[2023-05-29 03:32:14,060][02542] Num frames 9000...
[2023-05-29 03:32:14,237][02542] Num frames 9100...
[2023-05-29 03:32:14,416][02542] Num frames 9200...
[2023-05-29 03:32:14,592][02542] Num frames 9300...
[2023-05-29 03:32:14,766][02542] Num frames 9400...
[2023-05-29 03:32:14,940][02542] Num frames 9500...
[2023-05-29 03:32:15,118][02542] Num frames 9600...
[2023-05-29 03:32:15,292][02542] Num frames 9700...
[2023-05-29 03:32:15,471][02542] Num frames 9800...
[2023-05-29 03:32:15,652][02542] Num frames 9900...
[2023-05-29 03:32:15,826][02542] Num frames 10000...
[2023-05-29 03:32:16,012][02542] Num frames 10100...
[2023-05-29 03:32:16,192][02542] Num frames 10200...
[2023-05-29 03:32:16,377][02542] Num frames 10300...
[2023-05-29 03:32:16,568][02542] Num frames 10400...
[2023-05-29 03:32:16,741][02542] Num frames 10500...
[2023-05-29 03:32:16,922][02542] Num frames 10600...
[2023-05-29 03:32:17,102][02542] Num frames 10700...
[2023-05-29 03:32:17,284][02542] Num frames 10800...
[2023-05-29 03:32:17,421][02542] Avg episode rewards: #0: 33.706, true rewards: #0: 13.581
[2023-05-29 03:32:17,423][02542] Avg episode reward: 33.706, avg true_objective: 13.581
[2023-05-29 03:32:17,477][02542] Num frames 10900...
[2023-05-29 03:32:17,600][02542] Num frames 11000...
[2023-05-29 03:32:17,726][02542] Num frames 11100...
[2023-05-29 03:32:17,857][02542] Num frames 11200...
[2023-05-29 03:32:17,985][02542] Num frames 11300...
[2023-05-29 03:32:18,098][02542] Avg episode rewards: #0: 31.161, true rewards: #0: 12.606
[2023-05-29 03:32:18,099][02542] Avg episode reward: 31.161, avg true_objective: 12.606
[2023-05-29 03:32:18,168][02542] Num frames 11400...
[2023-05-29 03:32:18,292][02542] Num frames 11500...
[2023-05-29 03:32:18,419][02542] Num frames 11600...
[2023-05-29 03:32:18,552][02542] Num frames 11700...
[2023-05-29 03:32:18,677][02542] Num frames 11800...
[2023-05-29 03:32:18,796][02542] Num frames 11900...
[2023-05-29 03:32:18,921][02542] Num frames 12000...
[2023-05-29 03:32:19,046][02542] Num frames 12100...
[2023-05-29 03:32:19,183][02542] Num frames 12200...
[2023-05-29 03:32:19,290][02542] Avg episode rewards: #0: 29.941, true rewards: #0: 12.241
[2023-05-29 03:32:19,292][02542] Avg episode reward: 29.941, avg true_objective: 12.241
[2023-05-29 03:33:40,688][02542] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
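
With the replay written to disk, the usual notebook pattern is to embed it inline; a sketch assuming an IPython/Colab environment, with the path taken from the log line above:

from base64 import b64encode
from IPython.display import HTML

mp4 = open("/content/train_dir/default_experiment/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(f'<video width=640 controls><source src="{data_url}" type="video/mp4"></video>')
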
[2023-05-29 03:47:03,139][02542] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-05-29 03:47:03,144][02542] Overriding arg 'num_workers' with value 1 passed from command line
[2023-05-29 03:47:03,146][02542] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-05-29 03:47:03,149][02542] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-05-29 03:47:03,151][02542] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-05-29 03:47:03,153][02542] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-05-29 03:47:03,158][02542] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-05-29 03:47:03,159][02542] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-05-29 03:47:03,161][02542] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-05-29 03:47:03,162][02542] Adding new argument 'hf_repository'='kitrak-rev/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-05-29 03:47:03,163][02542] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-05-29 03:47:03,164][02542] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-05-29 03:47:03,165][02542] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-05-29 03:47:03,166][02542] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-05-29 03:47:03,167][02542] Using frameskip 1 and render_action_repeat=4 for evaluation
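
This second evaluation differs from the first only in its overrides: max_num_frames drops to 100000, and push_to_hub/hf_repository are set so the finished run is uploaded to kitrak-rev/rl_course_vizdoom_health_gathering_supreme. A sketch of the corresponding call, again assuming the notebook helpers:

from sample_factory.enjoy import enjoy

cfg = parse_vizdoom_cfg(argv=[
    f"--env={env}",
    "--num_workers=1",
    "--no_render",
    "--save_video",
    "--max_num_frames=100000",
    "--max_num_episodes=10",
    "--push_to_hub",
    "--hf_repository=kitrak-rev/rl_course_vizdoom_health_gathering_supreme",
], evaluation=True)
status = enjoy(cfg)
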
[2023-05-29 03:47:03,196][02542] RunningMeanStd input shape: (3, 72, 128)
[2023-05-29 03:47:03,199][02542] RunningMeanStd input shape: (1,)
[2023-05-29 03:47:03,219][02542] ConvEncoder: input_channels=3
[2023-05-29 03:47:03,275][02542] Conv encoder output size: 512
[2023-05-29 03:47:03,277][02542] Policy head output size: 512
[2023-05-29 03:47:03,304][02542] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-05-29 03:47:04,041][02542] Num frames 100...
[2023-05-29 03:47:04,218][02542] Num frames 200...
[2023-05-29 03:47:04,389][02542] Num frames 300...
[2023-05-29 03:47:04,558][02542] Num frames 400...
[2023-05-29 03:47:04,733][02542] Num frames 500...
[2023-05-29 03:47:04,906][02542] Num frames 600...
[2023-05-29 03:47:04,977][02542] Avg episode rewards: #0: 11.080, true rewards: #0: 6.080
[2023-05-29 03:47:04,979][02542] Avg episode reward: 11.080, avg true_objective: 6.080
[2023-05-29 03:47:05,140][02542] Num frames 700...
[2023-05-29 03:47:05,322][02542] Num frames 800...
[2023-05-29 03:47:05,501][02542] Num frames 900...
[2023-05-29 03:47:05,695][02542] Num frames 1000...
[2023-05-29 03:47:05,869][02542] Num frames 1100...
[2023-05-29 03:47:06,053][02542] Num frames 1200...
[2023-05-29 03:47:06,203][02542] Num frames 1300...
[2023-05-29 03:47:06,343][02542] Num frames 1400...
[2023-05-29 03:47:06,468][02542] Num frames 1500...
[2023-05-29 03:47:06,593][02542] Num frames 1600...
[2023-05-29 03:47:06,720][02542] Num frames 1700...
[2023-05-29 03:47:06,838][02542] Num frames 1800...
[2023-05-29 03:47:06,964][02542] Num frames 1900...
[2023-05-29 03:47:07,114][02542] Num frames 2000...
[2023-05-29 03:47:07,229][02542] Avg episode rewards: #0: 27.740, true rewards: #0: 10.240
[2023-05-29 03:47:07,231][02542] Avg episode reward: 27.740, avg true_objective: 10.240
[2023-05-29 03:47:07,303][02542] Num frames 2100...
[2023-05-29 03:47:07,431][02542] Num frames 2200...
[2023-05-29 03:47:07,558][02542] Num frames 2300...
[2023-05-29 03:47:07,686][02542] Num frames 2400...
[2023-05-29 03:47:07,816][02542] Num frames 2500...
[2023-05-29 03:47:07,937][02542] Num frames 2600...
[2023-05-29 03:47:08,058][02542] Num frames 2700...
[2023-05-29 03:47:08,137][02542] Avg episode rewards: #0: 23.067, true rewards: #0: 9.067
[2023-05-29 03:47:08,138][02542] Avg episode reward: 23.067, avg true_objective: 9.067
[2023-05-29 03:47:08,244][02542] Num frames 2800...
[2023-05-29 03:47:08,372][02542] Num frames 2900...
[2023-05-29 03:47:08,495][02542] Num frames 3000...
[2023-05-29 03:47:08,627][02542] Num frames 3100...
[2023-05-29 03:47:08,748][02542] Num frames 3200...
[2023-05-29 03:47:08,863][02542] Num frames 3300...
[2023-05-29 03:47:08,980][02542] Num frames 3400...
[2023-05-29 03:47:09,064][02542] Avg episode rewards: #0: 21.310, true rewards: #0: 8.560
[2023-05-29 03:47:09,065][02542] Avg episode reward: 21.310, avg true_objective: 8.560
[2023-05-29 03:47:09,165][02542] Num frames 3500...
[2023-05-29 03:47:09,294][02542] Num frames 3600...
[2023-05-29 03:47:09,416][02542] Num frames 3700...
[2023-05-29 03:47:09,546][02542] Num frames 3800...
[2023-05-29 03:47:09,673][02542] Num frames 3900...
[2023-05-29 03:47:09,795][02542] Num frames 4000...
[2023-05-29 03:47:09,924][02542] Num frames 4100...
[2023-05-29 03:47:10,049][02542] Num frames 4200...
[2023-05-29 03:47:10,179][02542] Num frames 4300...
[2023-05-29 03:47:10,305][02542] Num frames 4400...
[2023-05-29 03:47:10,426][02542] Num frames 4500...
[2023-05-29 03:47:10,551][02542] Num frames 4600...
[2023-05-29 03:47:10,680][02542] Num frames 4700...
[2023-05-29 03:47:10,801][02542] Num frames 4800...
[2023-05-29 03:47:10,923][02542] Num frames 4900...
[2023-05-29 03:47:11,031][02542] Avg episode rewards: #0: 24.676, true rewards: #0: 9.876
[2023-05-29 03:47:11,032][02542] Avg episode reward: 24.676, avg true_objective: 9.876
[2023-05-29 03:47:11,109][02542] Num frames 5000...
[2023-05-29 03:47:11,226][02542] Num frames 5100...
[2023-05-29 03:47:11,368][02542] Num frames 5200...
[2023-05-29 03:47:11,495][02542] Num frames 5300...
[2023-05-29 03:47:11,623][02542] Num frames 5400...
[2023-05-29 03:47:11,709][02542] Avg episode rewards: #0: 22.040, true rewards: #0: 9.040
[2023-05-29 03:47:11,711][02542] Avg episode reward: 22.040, avg true_objective: 9.040
[2023-05-29 03:47:11,807][02542] Num frames 5500...
[2023-05-29 03:47:11,930][02542] Num frames 5600...
[2023-05-29 03:47:12,052][02542] Num frames 5700...
[2023-05-29 03:47:12,174][02542] Num frames 5800...
[2023-05-29 03:47:12,304][02542] Num frames 5900...
[2023-05-29 03:47:12,432][02542] Num frames 6000...
[2023-05-29 03:47:12,559][02542] Num frames 6100...
[2023-05-29 03:47:12,683][02542] Num frames 6200...
[2023-05-29 03:47:12,810][02542] Num frames 6300...
[2023-05-29 03:47:12,933][02542] Avg episode rewards: #0: 22.360, true rewards: #0: 9.074
[2023-05-29 03:47:12,934][02542] Avg episode reward: 22.360, avg true_objective: 9.074
[2023-05-29 03:47:12,996][02542] Num frames 6400...
[2023-05-29 03:47:13,118][02542] Num frames 6500...
[2023-05-29 03:47:13,241][02542] Num frames 6600...
[2023-05-29 03:47:13,383][02542] Num frames 6700...
[2023-05-29 03:47:13,509][02542] Num frames 6800...
[2023-05-29 03:47:13,639][02542] Num frames 6900...
[2023-05-29 03:47:13,788][02542] Avg episode rewards: #0: 21.085, true rewards: #0: 8.710
[2023-05-29 03:47:13,790][02542] Avg episode reward: 21.085, avg true_objective: 8.710
[2023-05-29 03:47:13,836][02542] Num frames 7000...
[2023-05-29 03:47:13,962][02542] Num frames 7100...
[2023-05-29 03:47:14,090][02542] Num frames 7200...
[2023-05-29 03:47:14,209][02542] Num frames 7300...
[2023-05-29 03:47:14,333][02542] Num frames 7400...
[2023-05-29 03:47:14,477][02542] Num frames 7500...
[2023-05-29 03:47:14,601][02542] Num frames 7600...
[2023-05-29 03:47:14,724][02542] Num frames 7700...
[2023-05-29 03:47:14,845][02542] Num frames 7800...
[2023-05-29 03:47:14,971][02542] Num frames 7900...
[2023-05-29 03:47:15,114][02542] Avg episode rewards: #0: 21.067, true rewards: #0: 8.844
[2023-05-29 03:47:15,116][02542] Avg episode reward: 21.067, avg true_objective: 8.844
[2023-05-29 03:47:15,180][02542] Num frames 8000...
[2023-05-29 03:47:15,299][02542] Num frames 8100...
[2023-05-29 03:47:15,426][02542] Num frames 8200...
[2023-05-29 03:47:15,548][02542] Num frames 8300...
[2023-05-29 03:47:15,668][02542] Num frames 8400...
[2023-05-29 03:47:15,789][02542] Num frames 8500...
[2023-05-29 03:47:15,913][02542] Num frames 8600...
[2023-05-29 03:47:16,082][02542] Avg episode rewards: #0: 20.396, true rewards: #0: 8.696
[2023-05-29 03:47:16,085][02542] Avg episode reward: 20.396, avg true_objective: 8.696
[2023-05-29 03:48:16,750][02542] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
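
Once this final replay is saved, the push-to-hub step uploads the checkpoint, config, and video to the repository named in the overrides. An optional sanity check (assumes huggingface_hub is installed and the upload has finished):

from huggingface_hub import HfApi

files = HfApi().list_repo_files("kitrak-rev/rl_course_vizdoom_health_gathering_supreme")
print("\n".join(sorted(files)))  # expect config.json, checkpoint .pth files, replay.mp4, etc.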