keshan committed
Commit 3df25e6
1 Parent(s): 8c30fa5

Upload . with huggingface_hub

.summary/0/events.out.tfevents.1677123779.832f36078d7d ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f92386e29f892c1ce1ca92db4db9df41cfc8e04345ca50a35653807155d748b
+ size 383574
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
- value: 9.02 +/- 4.14
+ value: 9.66 +/- 5.26
   name: mean_reward
   verified: false
 ---
checkpoint_p0/checkpoint_000001402_5742592.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58d9cc1876366c0c4e0d0d0cdab3705eeca522e7f1b42ee7901bfb507d277873
+ size 34929220
checkpoint_p0/checkpoint_000001466_6004736.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2f39e8f7d9b7d2006faca43f016c4bcd61b3a0ea6b195ca53569cdd4292303f
+ size 34929220
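
Both added .pth files are stored with Git LFS, so the diff records only the three-line pointer (spec version, object SHA-256, byte size) rather than the ~35 MB binaries. As a minimal sketch, assuming these are ordinary torch.save archives in the form Sample Factory writes them (the checkpoint structure is an assumption, not shown in this commit), the real files can be inspected once fetched:

```python
# Minimal sketch, assumptions noted above: load a fetched checkpoint on CPU
# and list its top-level keys, no GPU required.
import torch

ckpt = torch.load(
    "checkpoint_p0/checkpoint_000001466_6004736.pth",  # one of the files added here
    map_location="cpu",
)
print(list(ckpt.keys()))
```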
config.json CHANGED
@@ -65,7 +65,7 @@
   "summaries_use_frameskip": true,
   "heartbeat_interval": 20,
   "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 4000000,
+ "train_for_env_steps": 6000000,
   "train_for_seconds": 10000000000,
   "save_every_sec": 120,
   "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d3f82d01d6b4c838683791c1e0838b37fa4182b7ebdaecbd908102d6d6e1f801
- size 17188039
+ oid sha256:301107abef1bbb1d939ba2d572eb5c32310fb448cfdb6fb8dc1c42512b2d4f25
+ size 18390189
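
Stepping back to the config.json hunk above: it raises train_for_env_steps from 4,000,000 to 6,000,000, budgeting the resumed run roughly 2M additional environment steps (the log below shows the override being passed on the command line and training resuming from env step 4,005,888). A minimal sketch of making the same change directly in the saved config, assuming the default experiment layout that appears in the log:

```python
# Minimal sketch, not part of this commit's tooling: bump the training budget
# in an existing Sample Factory experiment config before resuming the run.
import json

cfg_path = "/content/train_dir/default_experiment/config.json"  # path taken from the log below

with open(cfg_path) as f:
    cfg = json.load(f)

cfg["train_for_env_steps"] = 6_000_000  # previously 4_000_000

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```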
sf_log.txt CHANGED
@@ -1098,3 +1098,1193 @@ main_loop: 1096.3166
  [2023-02-23 03:28:12,258][11306] Avg episode rewards: #0: 20.825, true rewards: #0: 9.025
  [2023-02-23 03:28:12,259][11306] Avg episode reward: 20.825, avg true_objective: 9.025
  [2023-02-23 03:29:06,647][11306] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
1101
+ [2023-02-23 03:29:10,586][11306] The model has been pushed to https://huggingface.co/keshan/rl_course_vizdoom_health_gathering_supreme
1102
+ [2023-02-23 03:29:39,633][11306] Loading legacy config file train_dir/doom_health_gathering_supreme_2222/cfg.json instead of train_dir/doom_health_gathering_supreme_2222/config.json
1103
+ [2023-02-23 03:29:39,635][11306] Loading existing experiment configuration from train_dir/doom_health_gathering_supreme_2222/config.json
1104
+ [2023-02-23 03:29:39,637][11306] Overriding arg 'experiment' with value 'doom_health_gathering_supreme_2222' passed from command line
1105
+ [2023-02-23 03:29:39,639][11306] Overriding arg 'train_dir' with value 'train_dir' passed from command line
1106
+ [2023-02-23 03:29:39,642][11306] Overriding arg 'num_workers' with value 1 passed from command line
1107
+ [2023-02-23 03:29:39,644][11306] Adding new argument 'lr_adaptive_min'=1e-06 that is not in the saved config file!
1108
+ [2023-02-23 03:29:39,647][11306] Adding new argument 'lr_adaptive_max'=0.01 that is not in the saved config file!
1109
+ [2023-02-23 03:29:39,648][11306] Adding new argument 'env_gpu_observations'=True that is not in the saved config file!
1110
+ [2023-02-23 03:29:39,649][11306] Adding new argument 'no_render'=True that is not in the saved config file!
1111
+ [2023-02-23 03:29:39,650][11306] Adding new argument 'save_video'=True that is not in the saved config file!
1112
+ [2023-02-23 03:29:39,652][11306] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
1113
+ [2023-02-23 03:29:39,653][11306] Adding new argument 'video_name'=None that is not in the saved config file!
1114
+ [2023-02-23 03:29:39,654][11306] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
1115
+ [2023-02-23 03:29:39,655][11306] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
1116
+ [2023-02-23 03:29:39,657][11306] Adding new argument 'push_to_hub'=False that is not in the saved config file!
1117
+ [2023-02-23 03:29:39,658][11306] Adding new argument 'hf_repository'=None that is not in the saved config file!
1118
+ [2023-02-23 03:29:39,660][11306] Adding new argument 'policy_index'=0 that is not in the saved config file!
1119
+ [2023-02-23 03:29:39,661][11306] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
1120
+ [2023-02-23 03:29:39,663][11306] Adding new argument 'train_script'=None that is not in the saved config file!
1121
+ [2023-02-23 03:29:39,664][11306] Adding new argument 'enjoy_script'=None that is not in the saved config file!
1122
+ [2023-02-23 03:29:39,667][11306] Using frameskip 1 and render_action_repeat=4 for evaluation
1123
+ [2023-02-23 03:29:39,703][11306] RunningMeanStd input shape: (3, 72, 128)
1124
+ [2023-02-23 03:29:39,708][11306] RunningMeanStd input shape: (1,)
1125
+ [2023-02-23 03:29:39,730][11306] ConvEncoder: input_channels=3
1126
+ [2023-02-23 03:29:39,805][11306] Conv encoder output size: 512
1127
+ [2023-02-23 03:29:39,811][11306] Policy head output size: 512
1128
+ [2023-02-23 03:29:39,847][11306] Loading state from checkpoint train_dir/doom_health_gathering_supreme_2222/checkpoint_p0/checkpoint_000539850_4422451200.pth...
1129
+ [2023-02-23 03:29:40,440][11306] Num frames 100...
1130
+ [2023-02-23 03:29:40,552][11306] Num frames 200...
1131
+ [2023-02-23 03:29:40,681][11306] Num frames 300...
1132
+ [2023-02-23 03:29:40,795][11306] Num frames 400...
1133
+ [2023-02-23 03:29:40,933][11306] Num frames 500...
1134
+ [2023-02-23 03:29:41,049][11306] Num frames 600...
1135
+ [2023-02-23 03:29:41,172][11306] Num frames 700...
1136
+ [2023-02-23 03:29:41,290][11306] Num frames 800...
1137
+ [2023-02-23 03:29:41,405][11306] Num frames 900...
1138
+ [2023-02-23 03:29:41,527][11306] Num frames 1000...
1139
+ [2023-02-23 03:29:41,649][11306] Num frames 1100...
1140
+ [2023-02-23 03:29:41,774][11306] Num frames 1200...
1141
+ [2023-02-23 03:29:41,889][11306] Num frames 1300...
1142
+ [2023-02-23 03:29:42,003][11306] Num frames 1400...
1143
+ [2023-02-23 03:29:42,124][11306] Num frames 1500...
1144
+ [2023-02-23 03:29:42,236][11306] Num frames 1600...
1145
+ [2023-02-23 03:29:42,362][11306] Num frames 1700...
1146
+ [2023-02-23 03:29:42,476][11306] Num frames 1800...
1147
+ [2023-02-23 03:29:42,592][11306] Num frames 1900...
1148
+ [2023-02-23 03:29:42,703][11306] Num frames 2000...
1149
+ [2023-02-23 03:29:42,825][11306] Num frames 2100...
1150
+ [2023-02-23 03:29:42,877][11306] Avg episode rewards: #0: 65.999, true rewards: #0: 21.000
1151
+ [2023-02-23 03:29:42,879][11306] Avg episode reward: 65.999, avg true_objective: 21.000
1152
+ [2023-02-23 03:29:42,996][11306] Num frames 2200...
1153
+ [2023-02-23 03:29:43,115][11306] Num frames 2300...
1154
+ [2023-02-23 03:29:43,233][11306] Num frames 2400...
1155
+ [2023-02-23 03:29:43,354][11306] Num frames 2500...
1156
+ [2023-02-23 03:29:43,477][11306] Num frames 2600...
1157
+ [2023-02-23 03:29:43,595][11306] Num frames 2700...
1158
+ [2023-02-23 03:29:43,708][11306] Num frames 2800...
1159
+ [2023-02-23 03:29:43,819][11306] Num frames 2900...
1160
+ [2023-02-23 03:29:43,934][11306] Num frames 3000...
1161
+ [2023-02-23 03:29:44,049][11306] Num frames 3100...
1162
+ [2023-02-23 03:29:44,171][11306] Num frames 3200...
1163
+ [2023-02-23 03:29:44,299][11306] Num frames 3300...
1164
+ [2023-02-23 03:29:44,414][11306] Num frames 3400...
1165
+ [2023-02-23 03:29:44,534][11306] Num frames 3500...
1166
+ [2023-02-23 03:29:44,647][11306] Num frames 3600...
1167
+ [2023-02-23 03:29:44,769][11306] Num frames 3700...
1168
+ [2023-02-23 03:29:44,899][11306] Num frames 3800...
1169
+ [2023-02-23 03:29:45,013][11306] Num frames 3900...
1170
+ [2023-02-23 03:29:45,140][11306] Num frames 4000...
1171
+ [2023-02-23 03:29:45,267][11306] Num frames 4100...
1172
+ [2023-02-23 03:29:45,391][11306] Num frames 4200...
1173
+ [2023-02-23 03:29:45,446][11306] Avg episode rewards: #0: 64.999, true rewards: #0: 21.000
1174
+ [2023-02-23 03:29:45,448][11306] Avg episode reward: 64.999, avg true_objective: 21.000
1175
+ [2023-02-23 03:29:45,562][11306] Num frames 4300...
1176
+ [2023-02-23 03:29:45,677][11306] Num frames 4400...
1177
+ [2023-02-23 03:29:45,790][11306] Num frames 4500...
1178
+ [2023-02-23 03:29:45,908][11306] Num frames 4600...
1179
+ [2023-02-23 03:29:46,026][11306] Num frames 4700...
1180
+ [2023-02-23 03:29:46,141][11306] Num frames 4800...
1181
+ [2023-02-23 03:29:46,263][11306] Num frames 4900...
1182
+ [2023-02-23 03:29:46,375][11306] Num frames 5000...
1183
+ [2023-02-23 03:29:46,504][11306] Num frames 5100...
1184
+ [2023-02-23 03:29:46,622][11306] Num frames 5200...
1185
+ [2023-02-23 03:29:46,739][11306] Num frames 5300...
1186
+ [2023-02-23 03:29:46,861][11306] Num frames 5400...
1187
+ [2023-02-23 03:29:46,981][11306] Num frames 5500...
1188
+ [2023-02-23 03:29:47,107][11306] Num frames 5600...
1189
+ [2023-02-23 03:29:47,236][11306] Num frames 5700...
1190
+ [2023-02-23 03:29:47,352][11306] Num frames 5800...
1191
+ [2023-02-23 03:29:47,466][11306] Num frames 5900...
1192
+ [2023-02-23 03:29:47,581][11306] Num frames 6000...
1193
+ [2023-02-23 03:29:47,704][11306] Num frames 6100...
1194
+ [2023-02-23 03:29:47,828][11306] Num frames 6200...
1195
+ [2023-02-23 03:29:47,945][11306] Num frames 6300...
1196
+ [2023-02-23 03:29:47,997][11306] Avg episode rewards: #0: 64.332, true rewards: #0: 21.000
1197
+ [2023-02-23 03:29:47,999][11306] Avg episode reward: 64.332, avg true_objective: 21.000
1198
+ [2023-02-23 03:29:48,117][11306] Num frames 6400...
1199
+ [2023-02-23 03:29:48,240][11306] Num frames 6500...
1200
+ [2023-02-23 03:29:48,363][11306] Num frames 6600...
1201
+ [2023-02-23 03:29:48,476][11306] Num frames 6700...
1202
+ [2023-02-23 03:29:48,589][11306] Num frames 6800...
1203
+ [2023-02-23 03:29:48,712][11306] Num frames 6900...
1204
+ [2023-02-23 03:29:48,827][11306] Num frames 7000...
1205
+ [2023-02-23 03:29:48,943][11306] Num frames 7100...
1206
+ [2023-02-23 03:29:49,062][11306] Num frames 7200...
1207
+ [2023-02-23 03:29:49,180][11306] Num frames 7300...
1208
+ [2023-02-23 03:29:49,303][11306] Num frames 7400...
1209
+ [2023-02-23 03:29:49,420][11306] Num frames 7500...
1210
+ [2023-02-23 03:29:49,532][11306] Num frames 7600...
1211
+ [2023-02-23 03:29:49,647][11306] Num frames 7700...
1212
+ [2023-02-23 03:29:49,769][11306] Num frames 7800...
1213
+ [2023-02-23 03:29:49,887][11306] Num frames 7900...
1214
+ [2023-02-23 03:29:50,002][11306] Num frames 8000...
1215
+ [2023-02-23 03:29:50,124][11306] Num frames 8100...
1216
+ [2023-02-23 03:29:50,258][11306] Num frames 8200...
1217
+ [2023-02-23 03:29:50,432][11306] Num frames 8300...
1218
+ [2023-02-23 03:29:50,612][11306] Num frames 8400...
1219
+ [2023-02-23 03:29:50,669][11306] Avg episode rewards: #0: 63.999, true rewards: #0: 21.000
1220
+ [2023-02-23 03:29:50,672][11306] Avg episode reward: 63.999, avg true_objective: 21.000
1221
+ [2023-02-23 03:29:50,835][11306] Num frames 8500...
1222
+ [2023-02-23 03:29:50,999][11306] Num frames 8600...
1223
+ [2023-02-23 03:29:51,165][11306] Num frames 8700...
1224
+ [2023-02-23 03:29:51,330][11306] Num frames 8800...
1225
+ [2023-02-23 03:29:51,490][11306] Num frames 8900...
1226
+ [2023-02-23 03:29:51,651][11306] Num frames 9000...
1227
+ [2023-02-23 03:29:51,813][11306] Num frames 9100...
1228
+ [2023-02-23 03:29:51,976][11306] Num frames 9200...
1229
+ [2023-02-23 03:29:52,144][11306] Num frames 9300...
1230
+ [2023-02-23 03:29:52,310][11306] Num frames 9400...
1231
+ [2023-02-23 03:29:52,478][11306] Num frames 9500...
1232
+ [2023-02-23 03:29:52,649][11306] Num frames 9600...
1233
+ [2023-02-23 03:29:52,827][11306] Num frames 9700...
1234
+ [2023-02-23 03:29:53,002][11306] Num frames 9800...
1235
+ [2023-02-23 03:29:53,173][11306] Num frames 9900...
1236
+ [2023-02-23 03:29:53,354][11306] Num frames 10000...
1237
+ [2023-02-23 03:29:53,533][11306] Num frames 10100...
1238
+ [2023-02-23 03:29:53,713][11306] Num frames 10200...
1239
+ [2023-02-23 03:29:53,886][11306] Num frames 10300...
1240
+ [2023-02-23 03:29:54,009][11306] Num frames 10400...
1241
+ [2023-02-23 03:29:54,136][11306] Num frames 10500...
1242
+ [2023-02-23 03:29:54,188][11306] Avg episode rewards: #0: 63.999, true rewards: #0: 21.000
1243
+ [2023-02-23 03:29:54,192][11306] Avg episode reward: 63.999, avg true_objective: 21.000
1244
+ [2023-02-23 03:29:54,307][11306] Num frames 10600...
1245
+ [2023-02-23 03:29:54,438][11306] Num frames 10700...
1246
+ [2023-02-23 03:29:54,557][11306] Num frames 10800...
1247
+ [2023-02-23 03:29:54,676][11306] Num frames 10900...
1248
+ [2023-02-23 03:29:54,793][11306] Num frames 11000...
1249
+ [2023-02-23 03:29:54,907][11306] Num frames 11100...
1250
+ [2023-02-23 03:29:55,028][11306] Num frames 11200...
1251
+ [2023-02-23 03:29:55,145][11306] Num frames 11300...
1252
+ [2023-02-23 03:29:55,279][11306] Num frames 11400...
1253
+ [2023-02-23 03:29:55,402][11306] Num frames 11500...
1254
+ [2023-02-23 03:29:55,516][11306] Num frames 11600...
1255
+ [2023-02-23 03:29:55,627][11306] Num frames 11700...
1256
+ [2023-02-23 03:29:55,757][11306] Num frames 11800...
1257
+ [2023-02-23 03:29:55,877][11306] Num frames 11900...
1258
+ [2023-02-23 03:29:55,997][11306] Num frames 12000...
1259
+ [2023-02-23 03:29:56,113][11306] Num frames 12100...
1260
+ [2023-02-23 03:29:56,233][11306] Num frames 12200...
1261
+ [2023-02-23 03:29:56,353][11306] Num frames 12300...
1262
+ [2023-02-23 03:29:56,482][11306] Num frames 12400...
1263
+ [2023-02-23 03:29:56,609][11306] Num frames 12500...
1264
+ [2023-02-23 03:29:56,728][11306] Num frames 12600...
1265
+ [2023-02-23 03:29:56,780][11306] Avg episode rewards: #0: 63.665, true rewards: #0: 21.000
1266
+ [2023-02-23 03:29:56,782][11306] Avg episode reward: 63.665, avg true_objective: 21.000
1267
+ [2023-02-23 03:29:56,899][11306] Num frames 12700...
1268
+ [2023-02-23 03:29:57,016][11306] Num frames 12800...
1269
+ [2023-02-23 03:29:57,131][11306] Num frames 12900...
1270
+ [2023-02-23 03:29:57,250][11306] Num frames 13000...
1271
+ [2023-02-23 03:29:57,366][11306] Num frames 13100...
1272
+ [2023-02-23 03:29:57,495][11306] Num frames 13200...
1273
+ [2023-02-23 03:29:57,610][11306] Num frames 13300...
1274
+ [2023-02-23 03:29:57,730][11306] Num frames 13400...
1275
+ [2023-02-23 03:29:57,845][11306] Num frames 13500...
1276
+ [2023-02-23 03:29:57,964][11306] Num frames 13600...
1277
+ [2023-02-23 03:29:58,085][11306] Num frames 13700...
1278
+ [2023-02-23 03:29:58,216][11306] Num frames 13800...
1279
+ [2023-02-23 03:29:58,340][11306] Num frames 13900...
1280
+ [2023-02-23 03:29:58,463][11306] Num frames 14000...
1281
+ [2023-02-23 03:29:58,580][11306] Num frames 14100...
1282
+ [2023-02-23 03:29:58,697][11306] Num frames 14200...
1283
+ [2023-02-23 03:29:58,826][11306] Num frames 14300...
1284
+ [2023-02-23 03:29:58,932][11306] Avg episode rewards: #0: 61.914, true rewards: #0: 20.487
1285
+ [2023-02-23 03:29:58,934][11306] Avg episode reward: 61.914, avg true_objective: 20.487
1286
+ [2023-02-23 03:29:59,004][11306] Num frames 14400...
1287
+ [2023-02-23 03:29:59,122][11306] Num frames 14500...
1288
+ [2023-02-23 03:29:59,240][11306] Num frames 14600...
1289
+ [2023-02-23 03:29:59,351][11306] Num frames 14700...
1290
+ [2023-02-23 03:29:59,482][11306] Num frames 14800...
1291
+ [2023-02-23 03:29:59,611][11306] Num frames 14900...
1292
+ [2023-02-23 03:29:59,731][11306] Num frames 15000...
1293
+ [2023-02-23 03:29:59,848][11306] Num frames 15100...
1294
+ [2023-02-23 03:29:59,966][11306] Num frames 15200...
1295
+ [2023-02-23 03:30:00,082][11306] Num frames 15300...
1296
+ [2023-02-23 03:30:00,202][11306] Num frames 15400...
1297
+ [2023-02-23 03:30:00,316][11306] Num frames 15500...
1298
+ [2023-02-23 03:30:00,438][11306] Num frames 15600...
1299
+ [2023-02-23 03:30:00,567][11306] Num frames 15700...
1300
+ [2023-02-23 03:30:00,681][11306] Num frames 15800...
1301
+ [2023-02-23 03:30:00,799][11306] Num frames 15900...
1302
+ [2023-02-23 03:30:00,916][11306] Num frames 16000...
1303
+ [2023-02-23 03:30:01,035][11306] Num frames 16100...
1304
+ [2023-02-23 03:30:01,165][11306] Num frames 16200...
1305
+ [2023-02-23 03:30:01,294][11306] Num frames 16300...
1306
+ [2023-02-23 03:30:01,407][11306] Num frames 16400...
1307
+ [2023-02-23 03:30:01,511][11306] Avg episode rewards: #0: 62.300, true rewards: #0: 20.551
1308
+ [2023-02-23 03:30:01,513][11306] Avg episode reward: 62.300, avg true_objective: 20.551
1309
+ [2023-02-23 03:30:01,582][11306] Num frames 16500...
1310
+ [2023-02-23 03:30:01,698][11306] Num frames 16600...
1311
+ [2023-02-23 03:30:01,821][11306] Num frames 16700...
1312
+ [2023-02-23 03:30:01,952][11306] Num frames 16800...
1313
+ [2023-02-23 03:30:02,074][11306] Num frames 16900...
1314
+ [2023-02-23 03:30:02,193][11306] Num frames 17000...
1315
+ [2023-02-23 03:30:02,308][11306] Num frames 17100...
1316
+ [2023-02-23 03:30:02,432][11306] Num frames 17200...
1317
+ [2023-02-23 03:30:02,558][11306] Num frames 17300...
1318
+ [2023-02-23 03:30:02,676][11306] Num frames 17400...
1319
+ [2023-02-23 03:30:02,796][11306] Num frames 17500...
1320
+ [2023-02-23 03:30:02,916][11306] Num frames 17600...
1321
+ [2023-02-23 03:30:03,035][11306] Num frames 17700...
1322
+ [2023-02-23 03:30:03,157][11306] Num frames 17800...
1323
+ [2023-02-23 03:30:03,275][11306] Num frames 17900...
1324
+ [2023-02-23 03:30:03,401][11306] Num frames 18000...
1325
+ [2023-02-23 03:30:03,519][11306] Num frames 18100...
1326
+ [2023-02-23 03:30:03,641][11306] Num frames 18200...
1327
+ [2023-02-23 03:30:03,765][11306] Num frames 18300...
1328
+ [2023-02-23 03:30:03,883][11306] Num frames 18400...
1329
+ [2023-02-23 03:30:04,051][11306] Num frames 18500...
1330
+ [2023-02-23 03:30:04,194][11306] Avg episode rewards: #0: 62.378, true rewards: #0: 20.601
1331
+ [2023-02-23 03:30:04,196][11306] Avg episode reward: 62.378, avg true_objective: 20.601
1332
+ [2023-02-23 03:30:04,300][11306] Num frames 18600...
1333
+ [2023-02-23 03:30:04,461][11306] Num frames 18700...
1334
+ [2023-02-23 03:30:04,625][11306] Num frames 18800...
1335
+ [2023-02-23 03:30:04,796][11306] Num frames 18900...
1336
+ [2023-02-23 03:30:04,967][11306] Num frames 19000...
1337
+ [2023-02-23 03:30:05,133][11306] Num frames 19100...
1338
+ [2023-02-23 03:30:05,305][11306] Num frames 19200...
1339
+ [2023-02-23 03:30:05,472][11306] Num frames 19300...
1340
+ [2023-02-23 03:30:05,632][11306] Num frames 19400...
1341
+ [2023-02-23 03:30:05,803][11306] Num frames 19500...
1342
+ [2023-02-23 03:30:05,970][11306] Num frames 19600...
1343
+ [2023-02-23 03:30:06,140][11306] Num frames 19700...
1344
+ [2023-02-23 03:30:06,321][11306] Num frames 19800...
1345
+ [2023-02-23 03:30:06,502][11306] Num frames 19900...
1346
+ [2023-02-23 03:30:06,685][11306] Num frames 20000...
1347
+ [2023-02-23 03:30:06,859][11306] Num frames 20100...
1348
+ [2023-02-23 03:30:07,040][11306] Num frames 20200...
1349
+ [2023-02-23 03:30:07,219][11306] Num frames 20300...
1350
+ [2023-02-23 03:30:07,392][11306] Num frames 20400...
1351
+ [2023-02-23 03:30:07,557][11306] Num frames 20500...
1352
+ [2023-02-23 03:30:07,678][11306] Num frames 20600...
1353
+ [2023-02-23 03:30:07,793][11306] Avg episode rewards: #0: 62.740, true rewards: #0: 20.641
1354
+ [2023-02-23 03:30:07,796][11306] Avg episode reward: 62.740, avg true_objective: 20.641
1355
+ [2023-02-23 03:32:11,715][11306] Replay video saved to train_dir/doom_health_gathering_supreme_2222/replay.mp4!
1356
+ [2023-02-23 03:42:59,609][11306] Environment doom_basic already registered, overwriting...
1357
+ [2023-02-23 03:42:59,611][11306] Environment doom_two_colors_easy already registered, overwriting...
1358
+ [2023-02-23 03:42:59,613][11306] Environment doom_two_colors_hard already registered, overwriting...
1359
+ [2023-02-23 03:42:59,616][11306] Environment doom_dm already registered, overwriting...
1360
+ [2023-02-23 03:42:59,618][11306] Environment doom_dwango5 already registered, overwriting...
1361
+ [2023-02-23 03:42:59,619][11306] Environment doom_my_way_home_flat_actions already registered, overwriting...
1362
+ [2023-02-23 03:42:59,620][11306] Environment doom_defend_the_center_flat_actions already registered, overwriting...
1363
+ [2023-02-23 03:42:59,621][11306] Environment doom_my_way_home already registered, overwriting...
1364
+ [2023-02-23 03:42:59,622][11306] Environment doom_deadly_corridor already registered, overwriting...
1365
+ [2023-02-23 03:42:59,623][11306] Environment doom_defend_the_center already registered, overwriting...
1366
+ [2023-02-23 03:42:59,625][11306] Environment doom_defend_the_line already registered, overwriting...
1367
+ [2023-02-23 03:42:59,627][11306] Environment doom_health_gathering already registered, overwriting...
1368
+ [2023-02-23 03:42:59,628][11306] Environment doom_health_gathering_supreme already registered, overwriting...
1369
+ [2023-02-23 03:42:59,629][11306] Environment doom_battle already registered, overwriting...
1370
+ [2023-02-23 03:42:59,631][11306] Environment doom_battle2 already registered, overwriting...
1371
+ [2023-02-23 03:42:59,632][11306] Environment doom_duel_bots already registered, overwriting...
1372
+ [2023-02-23 03:42:59,634][11306] Environment doom_deathmatch_bots already registered, overwriting...
1373
+ [2023-02-23 03:42:59,635][11306] Environment doom_duel already registered, overwriting...
1374
+ [2023-02-23 03:42:59,636][11306] Environment doom_deathmatch_full already registered, overwriting...
1375
+ [2023-02-23 03:42:59,637][11306] Environment doom_benchmark already registered, overwriting...
1376
+ [2023-02-23 03:42:59,639][11306] register_encoder_factory: <function make_vizdoom_encoder at 0x7f3b3bb37b80>
1377
+ [2023-02-23 03:42:59,670][11306] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1378
+ [2023-02-23 03:42:59,671][11306] Overriding arg 'train_for_env_steps' with value 6000000 passed from command line
1379
+ [2023-02-23 03:42:59,681][11306] Experiment dir /content/train_dir/default_experiment already exists!
1380
+ [2023-02-23 03:42:59,682][11306] Resuming existing experiment from /content/train_dir/default_experiment...
1381
+ [2023-02-23 03:42:59,686][11306] Weights and Biases integration disabled
1382
+ [2023-02-23 03:42:59,690][11306] Environment var CUDA_VISIBLE_DEVICES is 0
1383
+
1384
+ [2023-02-23 03:43:01,066][11306] Starting experiment with the following configuration:
1385
+ help=False
1386
+ algo=APPO
1387
+ env=doom_health_gathering_supreme
1388
+ experiment=default_experiment
1389
+ train_dir=/content/train_dir
1390
+ restart_behavior=resume
1391
+ device=gpu
1392
+ seed=None
1393
+ num_policies=1
1394
+ async_rl=True
1395
+ serial_mode=False
1396
+ batched_sampling=False
1397
+ num_batches_to_accumulate=2
1398
+ worker_num_splits=2
1399
+ policy_workers_per_policy=1
1400
+ max_policy_lag=1000
1401
+ num_workers=8
1402
+ num_envs_per_worker=4
1403
+ batch_size=1024
1404
+ num_batches_per_epoch=1
1405
+ num_epochs=1
1406
+ rollout=32
1407
+ recurrence=32
1408
+ shuffle_minibatches=False
1409
+ gamma=0.99
1410
+ reward_scale=1.0
1411
+ reward_clip=1000.0
1412
+ value_bootstrap=False
1413
+ normalize_returns=True
1414
+ exploration_loss_coeff=0.001
1415
+ value_loss_coeff=0.5
1416
+ kl_loss_coeff=0.0
1417
+ exploration_loss=symmetric_kl
1418
+ gae_lambda=0.95
1419
+ ppo_clip_ratio=0.1
1420
+ ppo_clip_value=0.2
1421
+ with_vtrace=False
1422
+ vtrace_rho=1.0
1423
+ vtrace_c=1.0
1424
+ optimizer=adam
1425
+ adam_eps=1e-06
1426
+ adam_beta1=0.9
1427
+ adam_beta2=0.999
1428
+ max_grad_norm=4.0
1429
+ learning_rate=0.0001
1430
+ lr_schedule=constant
1431
+ lr_schedule_kl_threshold=0.008
1432
+ lr_adaptive_min=1e-06
1433
+ lr_adaptive_max=0.01
1434
+ obs_subtract_mean=0.0
1435
+ obs_scale=255.0
1436
+ normalize_input=True
1437
+ normalize_input_keys=None
1438
+ decorrelate_experience_max_seconds=0
1439
+ decorrelate_envs_on_one_worker=True
1440
+ actor_worker_gpus=[]
1441
+ set_workers_cpu_affinity=True
1442
+ force_envs_single_thread=False
1443
+ default_niceness=0
1444
+ log_to_file=True
1445
+ experiment_summaries_interval=10
1446
+ flush_summaries_interval=30
1447
+ stats_avg=100
1448
+ summaries_use_frameskip=True
1449
+ heartbeat_interval=20
1450
+ heartbeat_reporting_interval=600
1451
+ train_for_env_steps=6000000
1452
+ train_for_seconds=10000000000
1453
+ save_every_sec=120
1454
+ keep_checkpoints=2
1455
+ load_checkpoint_kind=latest
1456
+ save_milestones_sec=-1
1457
+ save_best_every_sec=5
1458
+ save_best_metric=reward
1459
+ save_best_after=100000
1460
+ benchmark=False
1461
+ encoder_mlp_layers=[512, 512]
1462
+ encoder_conv_architecture=convnet_simple
1463
+ encoder_conv_mlp_layers=[512]
1464
+ use_rnn=True
1465
+ rnn_size=512
1466
+ rnn_type=gru
1467
+ rnn_num_layers=1
1468
+ decoder_mlp_layers=[]
1469
+ nonlinearity=elu
1470
+ policy_initialization=orthogonal
1471
+ policy_init_gain=1.0
1472
+ actor_critic_share_weights=True
1473
+ adaptive_stddev=True
1474
+ continuous_tanh_scale=0.0
1475
+ initial_stddev=1.0
1476
+ use_env_info_cache=False
1477
+ env_gpu_actions=False
1478
+ env_gpu_observations=True
1479
+ env_frameskip=4
1480
+ env_framestack=1
1481
+ pixel_format=CHW
1482
+ use_record_episode_statistics=False
1483
+ with_wandb=False
1484
+ wandb_user=None
1485
+ wandb_project=sample_factory
1486
+ wandb_group=None
1487
+ wandb_job_type=SF
1488
+ wandb_tags=[]
1489
+ with_pbt=False
1490
+ pbt_mix_policies_in_one_env=True
1491
+ pbt_period_env_steps=5000000
1492
+ pbt_start_mutation=20000000
1493
+ pbt_replace_fraction=0.3
1494
+ pbt_mutation_rate=0.15
1495
+ pbt_replace_reward_gap=0.1
1496
+ pbt_replace_reward_gap_absolute=1e-06
1497
+ pbt_optimize_gamma=False
1498
+ pbt_target_objective=true_objective
1499
+ pbt_perturb_min=1.1
1500
+ pbt_perturb_max=1.5
1501
+ num_agents=-1
1502
+ num_humans=0
1503
+ num_bots=-1
1504
+ start_bot_difficulty=None
1505
+ timelimit=None
1506
+ res_w=128
1507
+ res_h=72
1508
+ wide_aspect_ratio=False
1509
+ eval_env_frameskip=1
1510
+ fps=35
1511
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
1512
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
1513
+ git_hash=unknown
1514
+ git_repo_name=not a git repository
1515
+ [2023-02-23 03:43:01,068][11306] Saving configuration to /content/train_dir/default_experiment/config.json...
1516
+ [2023-02-23 03:43:01,076][11306] Rollout worker 0 uses device cpu
1517
+ [2023-02-23 03:43:01,078][11306] Rollout worker 1 uses device cpu
1518
+ [2023-02-23 03:43:01,080][11306] Rollout worker 2 uses device cpu
1519
+ [2023-02-23 03:43:01,083][11306] Rollout worker 3 uses device cpu
1520
+ [2023-02-23 03:43:01,088][11306] Rollout worker 4 uses device cpu
1521
+ [2023-02-23 03:43:01,089][11306] Rollout worker 5 uses device cpu
1522
+ [2023-02-23 03:43:01,092][11306] Rollout worker 6 uses device cpu
1523
+ [2023-02-23 03:43:01,095][11306] Rollout worker 7 uses device cpu
1524
+ [2023-02-23 03:43:01,211][11306] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1525
+ [2023-02-23 03:43:01,214][11306] InferenceWorker_p0-w0: min num requests: 2
1526
+ [2023-02-23 03:43:01,245][11306] Starting all processes...
1527
+ [2023-02-23 03:43:01,246][11306] Starting process learner_proc0
1528
+ [2023-02-23 03:43:01,395][11306] Starting all processes...
1529
+ [2023-02-23 03:43:01,404][11306] Starting process inference_proc0-0
1530
+ [2023-02-23 03:43:01,404][11306] Starting process rollout_proc0
1531
+ [2023-02-23 03:43:01,406][11306] Starting process rollout_proc1
1532
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc2
1533
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc3
1534
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc4
1535
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc5
1536
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc6
1537
+ [2023-02-23 03:43:01,408][11306] Starting process rollout_proc7
1538
+ [2023-02-23 03:43:10,899][27671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1539
+ [2023-02-23 03:43:10,903][27671] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
1540
+ [2023-02-23 03:43:10,979][27671] Num visible devices: 1
1541
+ [2023-02-23 03:43:11,033][27671] Starting seed is not provided
1542
+ [2023-02-23 03:43:11,034][27671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1543
+ [2023-02-23 03:43:11,035][27671] Initializing actor-critic model on device cuda:0
1544
+ [2023-02-23 03:43:11,036][27671] RunningMeanStd input shape: (3, 72, 128)
1545
+ [2023-02-23 03:43:11,037][27671] RunningMeanStd input shape: (1,)
1546
+ [2023-02-23 03:43:11,178][27671] ConvEncoder: input_channels=3
1547
+ [2023-02-23 03:43:11,786][27689] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1548
+ [2023-02-23 03:43:11,787][27689] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
1549
+ [2023-02-23 03:43:11,928][27689] Num visible devices: 1
1550
+ [2023-02-23 03:43:12,355][27690] Worker 0 uses CPU cores [0]
1551
+ [2023-02-23 03:43:12,360][27671] Conv encoder output size: 512
1552
+ [2023-02-23 03:43:12,378][27671] Policy head output size: 512
1553
+ [2023-02-23 03:43:12,548][27671] Created Actor Critic model with architecture:
1554
+ [2023-02-23 03:43:12,552][27671] ActorCriticSharedWeights(
1555
+ (obs_normalizer): ObservationNormalizer(
1556
+ (running_mean_std): RunningMeanStdDictInPlace(
1557
+ (running_mean_std): ModuleDict(
1558
+ (obs): RunningMeanStdInPlace()
1559
+ )
1560
+ )
1561
+ )
1562
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
1563
+ (encoder): VizdoomEncoder(
1564
+ (basic_encoder): ConvEncoder(
1565
+ (enc): RecursiveScriptModule(
1566
+ original_name=ConvEncoderImpl
1567
+ (conv_head): RecursiveScriptModule(
1568
+ original_name=Sequential
1569
+ (0): RecursiveScriptModule(original_name=Conv2d)
1570
+ (1): RecursiveScriptModule(original_name=ELU)
1571
+ (2): RecursiveScriptModule(original_name=Conv2d)
1572
+ (3): RecursiveScriptModule(original_name=ELU)
1573
+ (4): RecursiveScriptModule(original_name=Conv2d)
1574
+ (5): RecursiveScriptModule(original_name=ELU)
1575
+ )
1576
+ (mlp_layers): RecursiveScriptModule(
1577
+ original_name=Sequential
1578
+ (0): RecursiveScriptModule(original_name=Linear)
1579
+ (1): RecursiveScriptModule(original_name=ELU)
1580
+ )
1581
+ )
1582
+ )
1583
+ )
1584
+ (core): ModelCoreRNN(
1585
+ (core): GRU(512, 512)
1586
+ )
1587
+ (decoder): MlpDecoder(
1588
+ (mlp): Identity()
1589
+ )
1590
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
1591
+ (action_parameterization): ActionParameterizationDefault(
1592
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
1593
+ )
1594
+ )
1595
+ [2023-02-23 03:43:12,581][27691] Worker 1 uses CPU cores [1]
1596
+ [2023-02-23 03:43:12,849][27693] Worker 2 uses CPU cores [0]
1597
+ [2023-02-23 03:43:13,522][27696] Worker 4 uses CPU cores [0]
1598
+ [2023-02-23 03:43:13,694][27698] Worker 3 uses CPU cores [1]
1599
+ [2023-02-23 03:43:13,735][27706] Worker 7 uses CPU cores [1]
1600
+ [2023-02-23 03:43:13,855][27705] Worker 5 uses CPU cores [1]
1601
+ [2023-02-23 03:43:13,880][27708] Worker 6 uses CPU cores [0]
1602
+ [2023-02-23 03:43:15,820][27671] Using optimizer <class 'torch.optim.adam.Adam'>
1603
+ [2023-02-23 03:43:15,821][27671] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
1604
+ [2023-02-23 03:43:15,852][27671] Loading model from checkpoint
1605
+ [2023-02-23 03:43:15,856][27671] Loaded experiment state at self.train_step=978, self.env_steps=4005888
1606
+ [2023-02-23 03:43:15,857][27671] Initialized policy 0 weights for model version 978
1607
+ [2023-02-23 03:43:15,860][27671] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1608
+ [2023-02-23 03:43:15,866][27671] LearnerWorker_p0 finished initialization!
1609
+ [2023-02-23 03:43:16,087][27689] RunningMeanStd input shape: (3, 72, 128)
1610
+ [2023-02-23 03:43:16,088][27689] RunningMeanStd input shape: (1,)
1611
+ [2023-02-23 03:43:16,106][27689] ConvEncoder: input_channels=3
1612
+ [2023-02-23 03:43:16,201][27689] Conv encoder output size: 512
1613
+ [2023-02-23 03:43:16,202][27689] Policy head output size: 512
1614
+ [2023-02-23 03:43:18,530][11306] Inference worker 0-0 is ready!
1615
+ [2023-02-23 03:43:18,532][11306] All inference workers are ready! Signal rollout workers to start!
1616
+ [2023-02-23 03:43:18,654][27696] Doom resolution: 160x120, resize resolution: (128, 72)
1617
+ [2023-02-23 03:43:18,651][27693] Doom resolution: 160x120, resize resolution: (128, 72)
1618
+ [2023-02-23 03:43:18,683][27690] Doom resolution: 160x120, resize resolution: (128, 72)
1619
+ [2023-02-23 03:43:18,688][27691] Doom resolution: 160x120, resize resolution: (128, 72)
1620
+ [2023-02-23 03:43:18,697][27705] Doom resolution: 160x120, resize resolution: (128, 72)
1621
+ [2023-02-23 03:43:18,693][27706] Doom resolution: 160x120, resize resolution: (128, 72)
1622
+ [2023-02-23 03:43:18,700][27698] Doom resolution: 160x120, resize resolution: (128, 72)
1623
+ [2023-02-23 03:43:18,714][27708] Doom resolution: 160x120, resize resolution: (128, 72)
1624
+ [2023-02-23 03:43:19,690][11306] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1625
+ [2023-02-23 03:43:19,867][27698] Decorrelating experience for 0 frames...
1626
+ [2023-02-23 03:43:19,869][27706] Decorrelating experience for 0 frames...
1627
+ [2023-02-23 03:43:19,872][27705] Decorrelating experience for 0 frames...
1628
+ [2023-02-23 03:43:20,182][27693] Decorrelating experience for 0 frames...
1629
+ [2023-02-23 03:43:20,185][27690] Decorrelating experience for 0 frames...
1630
+ [2023-02-23 03:43:20,187][27708] Decorrelating experience for 0 frames...
1631
+ [2023-02-23 03:43:20,190][27696] Decorrelating experience for 0 frames...
1632
+ [2023-02-23 03:43:20,776][27706] Decorrelating experience for 32 frames...
1633
+ [2023-02-23 03:43:20,787][27705] Decorrelating experience for 32 frames...
1634
+ [2023-02-23 03:43:21,186][27691] Decorrelating experience for 0 frames...
1635
+ [2023-02-23 03:43:21,204][11306] Heartbeat connected on Batcher_0
1636
+ [2023-02-23 03:43:21,207][27693] Decorrelating experience for 32 frames...
1637
+ [2023-02-23 03:43:21,212][11306] Heartbeat connected on LearnerWorker_p0
1638
+ [2023-02-23 03:43:21,216][27708] Decorrelating experience for 32 frames...
1639
+ [2023-02-23 03:43:21,218][27690] Decorrelating experience for 32 frames...
1640
+ [2023-02-23 03:43:21,249][11306] Heartbeat connected on InferenceWorker_p0-w0
1641
+ [2023-02-23 03:43:21,893][27705] Decorrelating experience for 64 frames...
1642
+ [2023-02-23 03:43:22,338][27706] Decorrelating experience for 64 frames...
1643
+ [2023-02-23 03:43:22,394][27698] Decorrelating experience for 32 frames...
1644
+ [2023-02-23 03:43:22,540][27696] Decorrelating experience for 32 frames...
1645
+ [2023-02-23 03:43:22,948][27708] Decorrelating experience for 64 frames...
1646
+ [2023-02-23 03:43:23,035][27690] Decorrelating experience for 64 frames...
1647
+ [2023-02-23 03:43:23,314][27705] Decorrelating experience for 96 frames...
1648
+ [2023-02-23 03:43:23,467][27693] Decorrelating experience for 64 frames...
1649
+ [2023-02-23 03:43:23,583][11306] Heartbeat connected on RolloutWorker_w5
1650
+ [2023-02-23 03:43:24,456][27698] Decorrelating experience for 64 frames...
1651
+ [2023-02-23 03:43:24,690][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1652
+ [2023-02-23 03:43:25,412][27696] Decorrelating experience for 64 frames...
1653
+ [2023-02-23 03:43:25,488][27706] Decorrelating experience for 96 frames...
1654
+ [2023-02-23 03:43:25,841][27690] Decorrelating experience for 96 frames...
1655
+ [2023-02-23 03:43:25,977][11306] Heartbeat connected on RolloutWorker_w7
1656
+ [2023-02-23 03:43:26,215][11306] Heartbeat connected on RolloutWorker_w0
1657
+ [2023-02-23 03:43:26,631][27708] Decorrelating experience for 96 frames...
1658
+ [2023-02-23 03:43:26,954][27698] Decorrelating experience for 96 frames...
1659
+ [2023-02-23 03:43:27,339][11306] Heartbeat connected on RolloutWorker_w3
1660
+ [2023-02-23 03:43:27,398][11306] Heartbeat connected on RolloutWorker_w6
1661
+ [2023-02-23 03:43:28,619][27693] Decorrelating experience for 96 frames...
1662
+ [2023-02-23 03:43:29,263][27696] Decorrelating experience for 96 frames...
1663
+ [2023-02-23 03:43:29,690][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 61.2. Samples: 612. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1664
+ [2023-02-23 03:43:29,699][11306] Avg episode reward: [(0, '0.640')]
1665
+ [2023-02-23 03:43:29,958][11306] Heartbeat connected on RolloutWorker_w2
1666
+ [2023-02-23 03:43:30,057][27691] Decorrelating experience for 32 frames...
1667
+ [2023-02-23 03:43:30,151][11306] Heartbeat connected on RolloutWorker_w4
1668
+ [2023-02-23 03:43:31,650][27671] Signal inference workers to stop experience collection...
1669
+ [2023-02-23 03:43:31,664][27689] InferenceWorker_p0-w0: stopping experience collection
1670
+ [2023-02-23 03:43:32,431][27691] Decorrelating experience for 64 frames...
1671
+ [2023-02-23 03:43:32,852][27691] Decorrelating experience for 96 frames...
1672
+ [2023-02-23 03:43:32,921][11306] Heartbeat connected on RolloutWorker_w1
1673
+ [2023-02-23 03:43:34,690][11306] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 160.3. Samples: 2404. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1674
+ [2023-02-23 03:43:34,697][11306] Avg episode reward: [(0, '2.240')]
1675
+ [2023-02-23 03:43:35,287][27671] Signal inference workers to resume experience collection...
1676
+ [2023-02-23 03:43:35,291][27689] InferenceWorker_p0-w0: resuming experience collection
1677
+ [2023-02-23 03:43:39,690][11306] Fps is (10 sec: 2457.6, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 4030464. Throughput: 0: 326.7. Samples: 6534. Policy #0 lag: (min: 0.0, avg: 0.8, max: 3.0)
1678
+ [2023-02-23 03:43:39,692][11306] Avg episode reward: [(0, '9.474')]
1679
+ [2023-02-23 03:43:43,034][27689] Updated weights for policy 0, policy_version 988 (0.0015)
1680
+ [2023-02-23 03:43:44,693][11306] Fps is (10 sec: 4504.6, 60 sec: 1802.1, 300 sec: 1802.1). Total num frames: 4050944. Throughput: 0: 402.0. Samples: 10052. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1681
+ [2023-02-23 03:43:44,695][11306] Avg episode reward: [(0, '16.425')]
1682
+ [2023-02-23 03:43:49,690][11306] Fps is (10 sec: 3276.8, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 4063232. Throughput: 0: 496.4. Samples: 14892. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1683
+ [2023-02-23 03:43:49,693][11306] Avg episode reward: [(0, '19.608')]
1684
+ [2023-02-23 03:43:54,690][11306] Fps is (10 sec: 2867.9, 60 sec: 2106.5, 300 sec: 2106.5). Total num frames: 4079616. Throughput: 0: 555.6. Samples: 19446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1685
+ [2023-02-23 03:43:54,697][11306] Avg episode reward: [(0, '21.699')]
1686
+ [2023-02-23 03:43:55,969][27689] Updated weights for policy 0, policy_version 998 (0.0024)
1687
+ [2023-02-23 03:43:59,690][11306] Fps is (10 sec: 4096.0, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 4104192. Throughput: 0: 577.0. Samples: 23082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1688
+ [2023-02-23 03:43:59,699][11306] Avg episode reward: [(0, '22.585')]
1689
+ [2023-02-23 03:44:04,528][27689] Updated weights for policy 0, policy_version 1008 (0.0013)
1690
+ [2023-02-23 03:44:04,690][11306] Fps is (10 sec: 4915.2, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 4128768. Throughput: 0: 671.7. Samples: 30228. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1691
+ [2023-02-23 03:44:04,699][11306] Avg episode reward: [(0, '23.757')]
1692
+ [2023-02-23 03:44:09,693][11306] Fps is (10 sec: 3685.6, 60 sec: 2703.2, 300 sec: 2703.2). Total num frames: 4141056. Throughput: 0: 780.4. Samples: 35120. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1693
+ [2023-02-23 03:44:09,695][11306] Avg episode reward: [(0, '26.310')]
1694
+ [2023-02-23 03:44:14,690][11306] Fps is (10 sec: 2867.2, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 4157440. Throughput: 0: 816.4. Samples: 37352. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1695
+ [2023-02-23 03:44:14,693][11306] Avg episode reward: [(0, '25.777')]
1696
+ [2023-02-23 03:44:16,595][27689] Updated weights for policy 0, policy_version 1018 (0.0016)
1697
+ [2023-02-23 03:44:19,690][11306] Fps is (10 sec: 4096.9, 60 sec: 2935.5, 300 sec: 2935.5). Total num frames: 4182016. Throughput: 0: 922.9. Samples: 43936. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1698
+ [2023-02-23 03:44:19,693][11306] Avg episode reward: [(0, '25.532')]
1699
+ [2023-02-23 03:44:24,696][11306] Fps is (10 sec: 4912.6, 60 sec: 3344.8, 300 sec: 3087.5). Total num frames: 4206592. Throughput: 0: 983.3. Samples: 50788. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1700
+ [2023-02-23 03:44:24,699][11306] Avg episode reward: [(0, '25.872')]
1701
+ [2023-02-23 03:44:25,771][27689] Updated weights for policy 0, policy_version 1028 (0.0023)
1702
+ [2023-02-23 03:44:29,696][11306] Fps is (10 sec: 3684.5, 60 sec: 3549.6, 300 sec: 3042.5). Total num frames: 4218880. Throughput: 0: 955.7. Samples: 53060. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1703
+ [2023-02-23 03:44:29,707][11306] Avg episode reward: [(0, '25.922')]
1704
+ [2023-02-23 03:44:34,690][11306] Fps is (10 sec: 2868.7, 60 sec: 3822.9, 300 sec: 3058.3). Total num frames: 4235264. Throughput: 0: 949.3. Samples: 57610. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1705
+ [2023-02-23 03:44:34,698][11306] Avg episode reward: [(0, '25.252')]
1706
+ [2023-02-23 03:44:37,375][27689] Updated weights for policy 0, policy_version 1038 (0.0027)
1707
+ [2023-02-23 03:44:39,690][11306] Fps is (10 sec: 4098.1, 60 sec: 3822.9, 300 sec: 3174.4). Total num frames: 4259840. Throughput: 0: 1001.2. Samples: 64500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1708
+ [2023-02-23 03:44:39,696][11306] Avg episode reward: [(0, '25.916')]
1709
+ [2023-02-23 03:44:44,690][11306] Fps is (10 sec: 4915.2, 60 sec: 3891.4, 300 sec: 3276.8). Total num frames: 4284416. Throughput: 0: 999.8. Samples: 68072. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1710
+ [2023-02-23 03:44:44,695][11306] Avg episode reward: [(0, '25.258')]
1711
+ [2023-02-23 03:44:46,905][27689] Updated weights for policy 0, policy_version 1048 (0.0013)
1712
+ [2023-02-23 03:44:49,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3276.8). Total num frames: 4300800. Throughput: 0: 961.0. Samples: 73472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1713
+ [2023-02-23 03:44:49,696][11306] Avg episode reward: [(0, '26.410')]
1714
+ [2023-02-23 03:44:54,692][11306] Fps is (10 sec: 2866.6, 60 sec: 3891.1, 300 sec: 3233.6). Total num frames: 4313088. Throughput: 0: 958.4. Samples: 78246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1715
+ [2023-02-23 03:44:54,698][11306] Avg episode reward: [(0, '26.509')]
1716
+ [2023-02-23 03:44:58,175][27689] Updated weights for policy 0, policy_version 1058 (0.0028)
1717
+ [2023-02-23 03:44:59,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3317.8). Total num frames: 4337664. Throughput: 0: 987.8. Samples: 81802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1718
+ [2023-02-23 03:44:59,698][11306] Avg episode reward: [(0, '25.949')]
1719
+ [2023-02-23 03:44:59,707][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001059_4337664.pth...
1720
+ [2023-02-23 03:44:59,936][27671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000961_3936256.pth
1721
+ [2023-02-23 03:45:04,695][11306] Fps is (10 sec: 4914.1, 60 sec: 3890.9, 300 sec: 3393.7). Total num frames: 4362240. Throughput: 0: 1000.3. Samples: 88952. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1722
+ [2023-02-23 03:45:04,703][11306] Avg episode reward: [(0, '25.473')]
1723
+ [2023-02-23 03:45:08,148][27689] Updated weights for policy 0, policy_version 1068 (0.0023)
1724
+ [2023-02-23 03:45:09,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.6, 300 sec: 3388.5). Total num frames: 4378624. Throughput: 0: 956.0. Samples: 93804. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1725
+ [2023-02-23 03:45:09,695][11306] Avg episode reward: [(0, '25.854')]
1726
+ [2023-02-23 03:45:14,690][11306] Fps is (10 sec: 3278.2, 60 sec: 3959.5, 300 sec: 3383.7). Total num frames: 4395008. Throughput: 0: 955.2. Samples: 96040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1727
+ [2023-02-23 03:45:14,698][11306] Avg episode reward: [(0, '24.897')]
1728
+ [2023-02-23 03:45:19,093][27689] Updated weights for policy 0, policy_version 1078 (0.0012)
1729
+ [2023-02-23 03:45:19,691][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3413.3). Total num frames: 4415488. Throughput: 0: 993.7. Samples: 102326. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1730
+ [2023-02-23 03:45:19,696][11306] Avg episode reward: [(0, '24.609')]
1731
+ [2023-02-23 03:45:24,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.5, 300 sec: 3473.4). Total num frames: 4440064. Throughput: 0: 992.9. Samples: 109182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1732
+ [2023-02-23 03:45:24,701][11306] Avg episode reward: [(0, '23.103')]
1733
+ [2023-02-23 03:45:29,690][11306] Fps is (10 sec: 3686.5, 60 sec: 3891.5, 300 sec: 3434.3). Total num frames: 4452352. Throughput: 0: 963.9. Samples: 111448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1734
+ [2023-02-23 03:45:29,697][11306] Avg episode reward: [(0, '23.482')]
1735
+ [2023-02-23 03:45:29,880][27689] Updated weights for policy 0, policy_version 1088 (0.0026)
1736
+ [2023-02-23 03:45:34,690][11306] Fps is (10 sec: 2867.2, 60 sec: 3891.2, 300 sec: 3428.5). Total num frames: 4468736. Throughput: 0: 943.7. Samples: 115940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1737
+ [2023-02-23 03:45:34,692][11306] Avg episode reward: [(0, '23.986')]
1738
+ [2023-02-23 03:45:39,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3481.6). Total num frames: 4493312. Throughput: 0: 990.7. Samples: 122826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1739
+ [2023-02-23 03:45:39,693][11306] Avg episode reward: [(0, '23.944')]
1740
+ [2023-02-23 03:45:40,026][27689] Updated weights for policy 0, policy_version 1098 (0.0013)
1741
+ [2023-02-23 03:45:44,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3502.8). Total num frames: 4513792. Throughput: 0: 988.0. Samples: 126260. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1742
+ [2023-02-23 03:45:44,695][11306] Avg episode reward: [(0, '23.703')]
1743
+ [2023-02-23 03:45:49,691][11306] Fps is (10 sec: 3686.0, 60 sec: 3822.9, 300 sec: 3495.2). Total num frames: 4530176. Throughput: 0: 941.2. Samples: 131304. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1744
+ [2023-02-23 03:45:49,694][11306] Avg episode reward: [(0, '23.400')]
1745
+ [2023-02-23 03:45:51,520][27689] Updated weights for policy 0, policy_version 1108 (0.0012)
1746
+ [2023-02-23 03:45:54,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3488.2). Total num frames: 4546560. Throughput: 0: 946.0. Samples: 136372. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1747
+ [2023-02-23 03:45:54,697][11306] Avg episode reward: [(0, '22.708')]
1748
+ [2023-02-23 03:45:59,690][11306] Fps is (10 sec: 4096.4, 60 sec: 3891.2, 300 sec: 3532.8). Total num frames: 4571136. Throughput: 0: 975.3. Samples: 139930. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1749
+ [2023-02-23 03:45:59,698][11306] Avg episode reward: [(0, '23.895')]
1750
+ [2023-02-23 03:46:00,693][27689] Updated weights for policy 0, policy_version 1118 (0.0012)
1751
+ [2023-02-23 03:46:04,690][11306] Fps is (10 sec: 4915.2, 60 sec: 3891.5, 300 sec: 3574.7). Total num frames: 4595712. Throughput: 0: 997.3. Samples: 147206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1752
+ [2023-02-23 03:46:04,693][11306] Avg episode reward: [(0, '22.922')]
1753
+ [2023-02-23 03:46:09,695][11306] Fps is (10 sec: 3684.8, 60 sec: 3822.7, 300 sec: 3541.7). Total num frames: 4608000. Throughput: 0: 945.4. Samples: 151730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1754
+ [2023-02-23 03:46:09,697][11306] Avg episode reward: [(0, '22.641')]
1755
+ [2023-02-23 03:46:12,514][27689] Updated weights for policy 0, policy_version 1128 (0.0016)
1756
+ [2023-02-23 03:46:14,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3557.7). Total num frames: 4628480. Throughput: 0: 945.2. Samples: 153980. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1757
+ [2023-02-23 03:46:14,692][11306] Avg episode reward: [(0, '22.677')]
1758
+ [2023-02-23 03:46:19,690][11306] Fps is (10 sec: 4507.5, 60 sec: 3959.5, 300 sec: 3595.4). Total num frames: 4653056. Throughput: 0: 1003.5. Samples: 161096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1759
+ [2023-02-23 03:46:19,692][11306] Avg episode reward: [(0, '24.514')]
1760
+ [2023-02-23 03:46:21,316][27689] Updated weights for policy 0, policy_version 1138 (0.0015)
1761
+ [2023-02-23 03:46:24,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3608.9). Total num frames: 4673536. Throughput: 0: 996.4. Samples: 167664. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
1762
+ [2023-02-23 03:46:24,695][11306] Avg episode reward: [(0, '23.697')]
1763
+ [2023-02-23 03:46:29,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3600.2). Total num frames: 4689920. Throughput: 0: 968.8. Samples: 169854. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
1764
+ [2023-02-23 03:46:29,696][11306] Avg episode reward: [(0, '23.712')]
1765
+ [2023-02-23 03:46:33,488][27689] Updated weights for policy 0, policy_version 1148 (0.0017)
1766
+ [2023-02-23 03:46:34,691][11306] Fps is (10 sec: 3276.7, 60 sec: 3959.4, 300 sec: 3591.9). Total num frames: 4706304. Throughput: 0: 960.8. Samples: 174540. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1767
+ [2023-02-23 03:46:34,700][11306] Avg episode reward: [(0, '26.236')]
1768
+ [2023-02-23 03:46:39,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3625.0). Total num frames: 4730880. Throughput: 0: 1010.4. Samples: 181838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1769
+ [2023-02-23 03:46:39,696][11306] Avg episode reward: [(0, '26.682')]
1770
+ [2023-02-23 03:46:42,109][27689] Updated weights for policy 0, policy_version 1158 (0.0017)
1771
+ [2023-02-23 03:46:44,690][11306] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3636.4). Total num frames: 4751360. Throughput: 0: 1011.6. Samples: 185450. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1772
+ [2023-02-23 03:46:44,695][11306] Avg episode reward: [(0, '25.136')]
1773
+ [2023-02-23 03:46:49,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3608.4). Total num frames: 4763648. Throughput: 0: 954.5. Samples: 190160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1774
+ [2023-02-23 03:46:49,693][11306] Avg episode reward: [(0, '24.309')]
1775
+ [2023-02-23 03:46:54,280][27689] Updated weights for policy 0, policy_version 1168 (0.0025)
1776
+ [2023-02-23 03:46:54,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3619.7). Total num frames: 4784128. Throughput: 0: 975.0. Samples: 195602. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1777
+ [2023-02-23 03:46:54,693][11306] Avg episode reward: [(0, '23.591')]
1778
+ [2023-02-23 03:46:59,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3649.2). Total num frames: 4808704. Throughput: 0: 1005.3. Samples: 199218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1779
+ [2023-02-23 03:46:59,693][11306] Avg episode reward: [(0, '20.615')]
1780
+ [2023-02-23 03:46:59,703][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001174_4808704.pth...
1781
+ [2023-02-23 03:46:59,873][27671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth
1782
+ [2023-02-23 03:47:02,830][27689] Updated weights for policy 0, policy_version 1178 (0.0022)
1783
+ [2023-02-23 03:47:04,696][11306] Fps is (10 sec: 4503.2, 60 sec: 3890.9, 300 sec: 3659.0). Total num frames: 4829184. Throughput: 0: 993.9. Samples: 205828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1784
+ [2023-02-23 03:47:04,698][11306] Avg episode reward: [(0, '20.820')]
1785
+ [2023-02-23 03:47:09,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3650.8). Total num frames: 4845568. Throughput: 0: 950.3. Samples: 210428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1786
+ [2023-02-23 03:47:09,693][11306] Avg episode reward: [(0, '21.167')]
1787
+ [2023-02-23 03:47:14,690][11306] Fps is (10 sec: 3278.5, 60 sec: 3891.2, 300 sec: 3642.8). Total num frames: 4861952. Throughput: 0: 950.9. Samples: 212644. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1788
+ [2023-02-23 03:47:14,696][11306] Avg episode reward: [(0, '22.074')]
1789
+ [2023-02-23 03:47:14,970][27689] Updated weights for policy 0, policy_version 1188 (0.0032)
1790
+ [2023-02-23 03:47:19,691][11306] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3669.3). Total num frames: 4886528. Throughput: 0: 1007.1. Samples: 219860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1791
+ [2023-02-23 03:47:19,697][11306] Avg episode reward: [(0, '21.910')]
1792
+ [2023-02-23 03:47:24,004][27689] Updated weights for policy 0, policy_version 1198 (0.0013)
+ [2023-02-23 03:47:24,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3678.0). Total num frames: 4907008. Throughput: 0: 985.3. Samples: 226176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-02-23 03:47:24,693][11306] Avg episode reward: [(0, '23.213')]
+ [2023-02-23 03:47:29,690][11306] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3670.0). Total num frames: 4923392. Throughput: 0: 956.6. Samples: 228496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:29,697][11306] Avg episode reward: [(0, '22.632')]
+ [2023-02-23 03:47:34,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3662.3). Total num frames: 4939776. Throughput: 0: 962.1. Samples: 233456. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:47:34,692][11306] Avg episode reward: [(0, '22.612')]
+ [2023-02-23 03:47:35,792][27689] Updated weights for policy 0, policy_version 1208 (0.0021)
+ [2023-02-23 03:47:39,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3686.4). Total num frames: 4964352. Throughput: 0: 1004.0. Samples: 240782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:39,693][11306] Avg episode reward: [(0, '22.580')]
+ [2023-02-23 03:47:44,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3694.1). Total num frames: 4984832. Throughput: 0: 1002.0. Samples: 244310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:44,696][11306] Avg episode reward: [(0, '24.530')]
+ [2023-02-23 03:47:45,287][27689] Updated weights for policy 0, policy_version 1218 (0.0028)
+ [2023-02-23 03:47:49,692][11306] Fps is (10 sec: 3685.8, 60 sec: 3959.4, 300 sec: 3686.4). Total num frames: 5001216. Throughput: 0: 955.6. Samples: 248826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:49,694][11306] Avg episode reward: [(0, '24.944')]
+ [2023-02-23 03:47:54,691][11306] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3693.8). Total num frames: 5021696. Throughput: 0: 977.7. Samples: 254424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:54,697][11306] Avg episode reward: [(0, '24.807')]
+ [2023-02-23 03:47:56,236][27689] Updated weights for policy 0, policy_version 1228 (0.0017)
+ [2023-02-23 03:47:59,691][11306] Fps is (10 sec: 4096.6, 60 sec: 3891.2, 300 sec: 3701.0). Total num frames: 5042176. Throughput: 0: 1009.4. Samples: 258066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:47:59,697][11306] Avg episode reward: [(0, '26.336')]
+ [2023-02-23 03:48:04,690][11306] Fps is (10 sec: 4096.1, 60 sec: 3891.5, 300 sec: 3708.0). Total num frames: 5062656. Throughput: 0: 998.4. Samples: 264786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-02-23 03:48:04,693][11306] Avg episode reward: [(0, '25.320')]
+ [2023-02-23 03:48:06,164][27689] Updated weights for policy 0, policy_version 1238 (0.0013)
+ [2023-02-23 03:48:09,694][11306] Fps is (10 sec: 3685.2, 60 sec: 3891.0, 300 sec: 3700.5). Total num frames: 5079040. Throughput: 0: 958.8. Samples: 269324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:09,698][11306] Avg episode reward: [(0, '26.082')]
+ [2023-02-23 03:48:14,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3707.2). Total num frames: 5099520. Throughput: 0: 962.4. Samples: 271806. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2023-02-23 03:48:14,693][11306] Avg episode reward: [(0, '24.646')]
+ [2023-02-23 03:48:16,804][27689] Updated weights for policy 0, policy_version 1248 (0.0012)
+ [2023-02-23 03:48:19,690][11306] Fps is (10 sec: 4507.1, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 5124096. Throughput: 0: 1013.8. Samples: 279076. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2023-02-23 03:48:19,693][11306] Avg episode reward: [(0, '24.891')]
+ [2023-02-23 03:48:24,693][11306] Fps is (10 sec: 4504.5, 60 sec: 3959.3, 300 sec: 3859.9). Total num frames: 5144576. Throughput: 0: 986.7. Samples: 285188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:24,698][11306] Avg episode reward: [(0, '24.799')]
+ [2023-02-23 03:48:27,520][27689] Updated weights for policy 0, policy_version 1258 (0.0024)
+ [2023-02-23 03:48:29,691][11306] Fps is (10 sec: 3276.6, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 5156864. Throughput: 0: 956.2. Samples: 287340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:29,699][11306] Avg episode reward: [(0, '24.728')]
+ [2023-02-23 03:48:34,690][11306] Fps is (10 sec: 3277.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 5177344. Throughput: 0: 972.3. Samples: 292578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-02-23 03:48:34,693][11306] Avg episode reward: [(0, '25.859')]
+ [2023-02-23 03:48:37,789][27689] Updated weights for policy 0, policy_version 1268 (0.0015)
+ [2023-02-23 03:48:39,690][11306] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 5201920. Throughput: 0: 1004.8. Samples: 299640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:39,701][11306] Avg episode reward: [(0, '25.291')]
+ [2023-02-23 03:48:44,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 5218304. Throughput: 0: 993.2. Samples: 302758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:48:44,693][11306] Avg episode reward: [(0, '25.657')]
+ [2023-02-23 03:48:49,306][27689] Updated weights for policy 0, policy_version 1278 (0.0014)
+ [2023-02-23 03:48:49,691][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.3, 300 sec: 3915.5). Total num frames: 5234688. Throughput: 0: 942.1. Samples: 307182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:49,696][11306] Avg episode reward: [(0, '25.421')]
+ [2023-02-23 03:48:54,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 5255168. Throughput: 0: 969.0. Samples: 312924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:48:54,694][11306] Avg episode reward: [(0, '24.518')]
+ [2023-02-23 03:48:58,990][27689] Updated weights for policy 0, policy_version 1288 (0.0023)
+ [2023-02-23 03:48:59,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5275648. Throughput: 0: 991.9. Samples: 316442. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:48:59,696][11306] Avg episode reward: [(0, '24.373')]
+ [2023-02-23 03:48:59,707][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001288_5275648.pth...
+ [2023-02-23 03:48:59,892][27671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001059_4337664.pth
+ [2023-02-23 03:49:04,691][11306] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 5296128. Throughput: 0: 962.1. Samples: 322372. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-02-23 03:49:04,696][11306] Avg episode reward: [(0, '24.066')]
+ [2023-02-23 03:49:09,692][11306] Fps is (10 sec: 3276.2, 60 sec: 3823.0, 300 sec: 3901.6). Total num frames: 5308416. Throughput: 0: 920.3. Samples: 326602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:49:09,694][11306] Avg episode reward: [(0, '23.808')]
+ [2023-02-23 03:49:11,628][27689] Updated weights for policy 0, policy_version 1298 (0.0025)
+ [2023-02-23 03:49:14,690][11306] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 5328896. Throughput: 0: 931.2. Samples: 329244. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:49:14,694][11306] Avg episode reward: [(0, '23.847')]
+ [2023-02-23 03:49:19,690][11306] Fps is (10 sec: 4096.8, 60 sec: 3754.7, 300 sec: 3873.9). Total num frames: 5349376. Throughput: 0: 968.2. Samples: 336148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:49:19,693][11306] Avg episode reward: [(0, '25.688')]
+ [2023-02-23 03:49:20,615][27689] Updated weights for policy 0, policy_version 1308 (0.0012)
+ [2023-02-23 03:49:24,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.8, 300 sec: 3901.7). Total num frames: 5369856. Throughput: 0: 930.4. Samples: 341510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:49:24,693][11306] Avg episode reward: [(0, '26.662')]
+ [2023-02-23 03:49:29,693][11306] Fps is (10 sec: 3276.1, 60 sec: 3754.6, 300 sec: 3887.7). Total num frames: 5382144. Throughput: 0: 908.9. Samples: 343660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:49:29,697][11306] Avg episode reward: [(0, '26.616')]
+ [2023-02-23 03:49:33,176][27689] Updated weights for policy 0, policy_version 1318 (0.0013)
+ [2023-02-23 03:49:34,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5402624. Throughput: 0: 931.2. Samples: 349088. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:49:34,697][11306] Avg episode reward: [(0, '29.290')]
+ [2023-02-23 03:49:39,690][11306] Fps is (10 sec: 4506.6, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5427200. Throughput: 0: 966.0. Samples: 356394. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:49:39,698][11306] Avg episode reward: [(0, '27.501')]
+ [2023-02-23 03:49:42,293][27689] Updated weights for policy 0, policy_version 1328 (0.0012)
+ [2023-02-23 03:49:44,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5443584. Throughput: 0: 951.2. Samples: 359248. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+ [2023-02-23 03:49:44,692][11306] Avg episode reward: [(0, '25.716')]
+ [2023-02-23 03:49:49,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3887.8). Total num frames: 5459968. Throughput: 0: 920.5. Samples: 363794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:49:49,696][11306] Avg episode reward: [(0, '24.501')]
+ [2023-02-23 03:49:53,870][27689] Updated weights for policy 0, policy_version 1338 (0.0030)
+ [2023-02-23 03:49:54,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5480448. Throughput: 0: 965.2. Samples: 370036. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:49:54,696][11306] Avg episode reward: [(0, '24.630')]
+ [2023-02-23 03:49:59,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3873.9). Total num frames: 5505024. Throughput: 0: 985.5. Samples: 373590. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-02-23 03:49:59,692][11306] Avg episode reward: [(0, '23.241')]
+ [2023-02-23 03:50:03,498][27689] Updated weights for policy 0, policy_version 1348 (0.0012)
+ [2023-02-23 03:50:04,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 5521408. Throughput: 0: 965.2. Samples: 379580. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:50:04,698][11306] Avg episode reward: [(0, '23.356')]
+ [2023-02-23 03:50:09,691][11306] Fps is (10 sec: 3276.6, 60 sec: 3823.0, 300 sec: 3873.8). Total num frames: 5537792. Throughput: 0: 950.4. Samples: 384278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-02-23 03:50:09,694][11306] Avg episode reward: [(0, '23.907')]
+ [2023-02-23 03:50:14,592][27689] Updated weights for policy 0, policy_version 1358 (0.0012)
+ [2023-02-23 03:50:14,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5562368. Throughput: 0: 969.5. Samples: 387286. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:50:14,697][11306] Avg episode reward: [(0, '25.374')]
+ [2023-02-23 03:50:19,691][11306] Fps is (10 sec: 4505.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5582848. Throughput: 0: 1001.6. Samples: 394160. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:50:19,694][11306] Avg episode reward: [(0, '25.742')]
+ [2023-02-23 03:50:24,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 5599232. Throughput: 0: 963.2. Samples: 399738. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:50:24,693][11306] Avg episode reward: [(0, '26.215')]
+ [2023-02-23 03:50:24,758][27689] Updated weights for policy 0, policy_version 1368 (0.0019)
+ [2023-02-23 03:50:29,692][11306] Fps is (10 sec: 3276.2, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5615616. Throughput: 0: 948.9. Samples: 401952. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2023-02-23 03:50:29,698][11306] Avg episode reward: [(0, '25.320')]
+ [2023-02-23 03:50:34,691][11306] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5636096. Throughput: 0: 974.8. Samples: 407658. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:50:34,698][11306] Avg episode reward: [(0, '26.175')]
+ [2023-02-23 03:50:35,689][27689] Updated weights for policy 0, policy_version 1378 (0.0013)
+ [2023-02-23 03:50:39,690][11306] Fps is (10 sec: 4506.6, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5660672. Throughput: 0: 996.4. Samples: 414874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:50:39,697][11306] Avg episode reward: [(0, '26.118')]
+ [2023-02-23 03:50:44,698][11306] Fps is (10 sec: 4502.5, 60 sec: 3959.0, 300 sec: 3901.5). Total num frames: 5681152. Throughput: 0: 980.1. Samples: 417700. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2023-02-23 03:50:44,710][11306] Avg episode reward: [(0, '25.024')]
+ [2023-02-23 03:50:46,099][27689] Updated weights for policy 0, policy_version 1388 (0.0017)
+ [2023-02-23 03:50:49,691][11306] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 5693440. Throughput: 0: 947.7. Samples: 422226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:50:49,697][11306] Avg episode reward: [(0, '24.419')]
+ [2023-02-23 03:50:54,690][11306] Fps is (10 sec: 3689.1, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 5718016. Throughput: 0: 985.6. Samples: 428628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:50:54,697][11306] Avg episode reward: [(0, '23.244')]
+ [2023-02-23 03:50:56,269][27689] Updated weights for policy 0, policy_version 1398 (0.0018)
+ [2023-02-23 03:50:59,690][11306] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5738496. Throughput: 0: 999.5. Samples: 432264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:50:59,696][11306] Avg episode reward: [(0, '22.594')]
+ [2023-02-23 03:50:59,716][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001402_5742592.pth...
+ [2023-02-23 03:50:59,888][27671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001174_4808704.pth
+ [2023-02-23 03:51:04,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3887.8). Total num frames: 5754880. Throughput: 0: 972.4. Samples: 437918. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:51:04,695][11306] Avg episode reward: [(0, '22.153')]
+ [2023-02-23 03:51:07,637][27689] Updated weights for policy 0, policy_version 1408 (0.0012)
+ [2023-02-23 03:51:09,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5771264. Throughput: 0: 949.4. Samples: 442462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:51:09,697][11306] Avg episode reward: [(0, '22.619')]
+ [2023-02-23 03:51:14,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5795840. Throughput: 0: 973.2. Samples: 445742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:14,697][11306] Avg episode reward: [(0, '24.487')]
+ [2023-02-23 03:51:17,042][27689] Updated weights for policy 0, policy_version 1418 (0.0020)
+ [2023-02-23 03:51:19,690][11306] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 5820416. Throughput: 0: 1005.1. Samples: 452888. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:19,698][11306] Avg episode reward: [(0, '25.470')]
+ [2023-02-23 03:51:24,694][11306] Fps is (10 sec: 3685.2, 60 sec: 3891.0, 300 sec: 3873.8). Total num frames: 5832704. Throughput: 0: 962.6. Samples: 458196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:24,701][11306] Avg episode reward: [(0, '26.519')]
+ [2023-02-23 03:51:28,690][27689] Updated weights for policy 0, policy_version 1428 (0.0013)
+ [2023-02-23 03:51:29,690][11306] Fps is (10 sec: 2867.2, 60 sec: 3891.3, 300 sec: 3873.8). Total num frames: 5849088. Throughput: 0: 949.1. Samples: 460402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2023-02-23 03:51:29,698][11306] Avg episode reward: [(0, '26.898')]
+ [2023-02-23 03:51:34,690][11306] Fps is (10 sec: 4097.3, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 5873664. Throughput: 0: 979.5. Samples: 466302. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2023-02-23 03:51:34,698][11306] Avg episode reward: [(0, '27.585')]
+ [2023-02-23 03:51:38,014][27689] Updated weights for policy 0, policy_version 1438 (0.0014)
+ [2023-02-23 03:51:39,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5894144. Throughput: 0: 994.1. Samples: 473362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:39,698][11306] Avg episode reward: [(0, '27.100')]
+ [2023-02-23 03:51:44,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3823.4, 300 sec: 3887.7). Total num frames: 5910528. Throughput: 0: 970.5. Samples: 475938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:44,697][11306] Avg episode reward: [(0, '25.292')]
+ [2023-02-23 03:51:49,690][11306] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5926912. Throughput: 0: 946.4. Samples: 480506. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+ [2023-02-23 03:51:49,693][11306] Avg episode reward: [(0, '24.780')]
+ [2023-02-23 03:51:50,256][27689] Updated weights for policy 0, policy_version 1448 (0.0025)
+ [2023-02-23 03:51:54,690][11306] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5951488. Throughput: 0: 987.1. Samples: 486882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2023-02-23 03:51:54,695][11306] Avg episode reward: [(0, '25.409')]
+ [2023-02-23 03:51:58,888][27689] Updated weights for policy 0, policy_version 1458 (0.0013)
+ [2023-02-23 03:51:59,690][11306] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 5971968. Throughput: 0: 993.0. Samples: 490426. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:51:59,697][11306] Avg episode reward: [(0, '24.940')]
+ [2023-02-23 03:52:04,690][11306] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3873.8). Total num frames: 5988352. Throughput: 0: 962.6. Samples: 496204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2023-02-23 03:52:04,692][11306] Avg episode reward: [(0, '25.256')]
+ [2023-02-23 03:52:09,029][27671] Stopping Batcher_0...
+ [2023-02-23 03:52:09,030][27671] Loop batcher_evt_loop terminating...
+ [2023-02-23 03:52:09,031][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
+ [2023-02-23 03:52:09,033][11306] Component Batcher_0 stopped!
+ [2023-02-23 03:52:09,106][27689] Weights refcount: 2 0
+ [2023-02-23 03:52:09,133][27689] Stopping InferenceWorker_p0-w0...
+ [2023-02-23 03:52:09,133][27689] Loop inference_proc0-0_evt_loop terminating...
+ [2023-02-23 03:52:09,133][11306] Component InferenceWorker_p0-w0 stopped!
+ [2023-02-23 03:52:09,201][27671] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001288_5275648.pth
+ [2023-02-23 03:52:09,215][27671] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
+ [2023-02-23 03:52:09,321][11306] Component RolloutWorker_w3 stopped!
+ [2023-02-23 03:52:09,328][11306] Component RolloutWorker_w7 stopped!
+ [2023-02-23 03:52:09,323][27698] Stopping RolloutWorker_w3...
+ [2023-02-23 03:52:09,334][27698] Loop rollout_proc3_evt_loop terminating...
+ [2023-02-23 03:52:09,333][27706] Stopping RolloutWorker_w7...
+ [2023-02-23 03:52:09,336][27706] Loop rollout_proc7_evt_loop terminating...
+ [2023-02-23 03:52:09,340][11306] Component RolloutWorker_w5 stopped!
+ [2023-02-23 03:52:09,342][27705] Stopping RolloutWorker_w5...
+ [2023-02-23 03:52:09,343][27705] Loop rollout_proc5_evt_loop terminating...
+ [2023-02-23 03:52:09,352][11306] Component RolloutWorker_w1 stopped!
+ [2023-02-23 03:52:09,355][27691] Stopping RolloutWorker_w1...
+ [2023-02-23 03:52:09,356][27696] Stopping RolloutWorker_w4...
+ [2023-02-23 03:52:09,358][11306] Component RolloutWorker_w4 stopped!
+ [2023-02-23 03:52:09,367][27691] Loop rollout_proc1_evt_loop terminating...
+ [2023-02-23 03:52:09,374][27696] Loop rollout_proc4_evt_loop terminating...
+ [2023-02-23 03:52:09,382][27690] Stopping RolloutWorker_w0...
+ [2023-02-23 03:52:09,382][11306] Component RolloutWorker_w0 stopped!
+ [2023-02-23 03:52:09,394][27693] Stopping RolloutWorker_w2...
+ [2023-02-23 03:52:09,394][11306] Component RolloutWorker_w2 stopped!
+ [2023-02-23 03:52:09,407][27693] Loop rollout_proc2_evt_loop terminating...
+ [2023-02-23 03:52:09,419][27690] Loop rollout_proc0_evt_loop terminating...
+ [2023-02-23 03:52:09,427][27708] Stopping RolloutWorker_w6...
+ [2023-02-23 03:52:09,427][27708] Loop rollout_proc6_evt_loop terminating...
+ [2023-02-23 03:52:09,426][11306] Component RolloutWorker_w6 stopped!
+ [2023-02-23 03:52:09,535][27671] Stopping LearnerWorker_p0...
+ [2023-02-23 03:52:09,536][27671] Loop learner_proc0_evt_loop terminating...
+ [2023-02-23 03:52:09,538][11306] Component LearnerWorker_p0 stopped!
+ [2023-02-23 03:52:09,544][11306] Waiting for process learner_proc0 to stop...
+ [2023-02-23 03:52:13,324][11306] Waiting for process inference_proc0-0 to join...
+ [2023-02-23 03:52:13,325][11306] Waiting for process rollout_proc0 to join...
+ [2023-02-23 03:52:13,330][11306] Waiting for process rollout_proc1 to join...
+ [2023-02-23 03:52:13,333][11306] Waiting for process rollout_proc2 to join...
+ [2023-02-23 03:52:13,335][11306] Waiting for process rollout_proc3 to join...
+ [2023-02-23 03:52:13,337][11306] Waiting for process rollout_proc4 to join...
+ [2023-02-23 03:52:13,339][11306] Waiting for process rollout_proc5 to join...
+ [2023-02-23 03:52:13,341][11306] Waiting for process rollout_proc6 to join...
+ [2023-02-23 03:52:13,344][11306] Waiting for process rollout_proc7 to join...
+ [2023-02-23 03:52:13,345][11306] Batcher 0 profile tree view:
+ batching: 12.3814, releasing_batches: 0.0152
+ [2023-02-23 03:52:13,347][11306] InferenceWorker_p0-w0 profile tree view:
+ wait_policy: 0.0068
+ wait_policy_total: 257.7903
+ update_model: 3.4659
+ weight_update: 0.0027
+ one_step: 0.0024
+ handle_policy_step: 246.8707
+ deserialize: 7.0802, stack: 1.3847, obs_to_device_normalize: 55.5051, forward: 118.3383, send_messages: 12.5359
+ prepare_outputs: 39.7805
+ to_cpu: 25.3208
+ [2023-02-23 03:52:13,348][11306] Learner 0 profile tree view:
+ misc: 0.0032, prepare_batch: 11.9498
+ train: 40.4273
+ epoch_init: 0.0044, minibatch_init: 0.0030, losses_postprocess: 0.2750, kl_divergence: 0.3179, after_optimizer: 1.5266
+ calculate_losses: 13.0858
+ losses_init: 0.0017, forward_head: 0.9362, bptt_initial: 8.5900, tail: 0.4512, advantages_returns: 0.1667, losses: 1.6820
+ bptt: 1.0902
+ bptt_forward_core: 1.0556
+ update: 24.8550
+ clip: 0.7188
+ [2023-02-23 03:52:13,349][11306] RolloutWorker_w0 profile tree view:
+ wait_for_trajectories: 0.1713, enqueue_policy_requests: 66.1793, env_step: 398.5742, overhead: 9.4720, complete_rollouts: 3.5780
+ save_policy_outputs: 9.4458
+ split_output_tensors: 4.5873
+ [2023-02-23 03:52:13,351][11306] RolloutWorker_w7 profile tree view:
+ wait_for_trajectories: 0.1998, enqueue_policy_requests: 69.9099, env_step: 394.9435, overhead: 9.7367, complete_rollouts: 3.0390
+ save_policy_outputs: 8.9794
+ split_output_tensors: 4.2412
+ [2023-02-23 03:52:13,352][11306] Loop Runner_EvtLoop terminating...
+ [2023-02-23 03:52:13,354][11306] Runner profile tree view:
+ main_loop: 552.1093
+ [2023-02-23 03:52:13,355][11306] Collected {0: 6004736}, FPS: 3620.4
+ [2023-02-23 03:52:13,413][11306] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2023-02-23 03:52:13,417][11306] Overriding arg 'num_workers' with value 1 passed from command line
+ [2023-02-23 03:52:13,418][11306] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2023-02-23 03:52:13,422][11306] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2023-02-23 03:52:13,424][11306] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2023-02-23 03:52:13,427][11306] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2023-02-23 03:52:13,428][11306] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+ [2023-02-23 03:52:13,429][11306] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2023-02-23 03:52:13,433][11306] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+ [2023-02-23 03:52:13,436][11306] Adding new argument 'hf_repository'=None that is not in the saved config file!
+ [2023-02-23 03:52:13,437][11306] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2023-02-23 03:52:13,438][11306] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2023-02-23 03:52:13,441][11306] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2023-02-23 03:52:13,445][11306] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2023-02-23 03:52:13,449][11306] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2023-02-23 03:52:13,468][11306] RunningMeanStd input shape: (3, 72, 128)
+ [2023-02-23 03:52:13,472][11306] RunningMeanStd input shape: (1,)
+ [2023-02-23 03:52:13,497][11306] ConvEncoder: input_channels=3
+ [2023-02-23 03:52:13,663][11306] Conv encoder output size: 512
+ [2023-02-23 03:52:13,665][11306] Policy head output size: 512
+ [2023-02-23 03:52:13,784][11306] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
+ [2023-02-23 03:52:14,662][11306] Num frames 100...
+ [2023-02-23 03:52:14,779][11306] Num frames 200...
+ [2023-02-23 03:52:14,890][11306] Num frames 300...
+ [2023-02-23 03:52:15,003][11306] Num frames 400...
+ [2023-02-23 03:52:15,118][11306] Num frames 500...
+ [2023-02-23 03:52:15,238][11306] Num frames 600...
+ [2023-02-23 03:52:15,354][11306] Num frames 700...
+ [2023-02-23 03:52:15,469][11306] Num frames 800...
+ [2023-02-23 03:52:15,588][11306] Num frames 900...
+ [2023-02-23 03:52:15,707][11306] Num frames 1000...
+ [2023-02-23 03:52:15,835][11306] Num frames 1100...
+ [2023-02-23 03:52:15,951][11306] Num frames 1200...
+ [2023-02-23 03:52:16,060][11306] Avg episode rewards: #0: 32.480, true rewards: #0: 12.480
+ [2023-02-23 03:52:16,064][11306] Avg episode reward: 32.480, avg true_objective: 12.480
+ [2023-02-23 03:52:16,130][11306] Num frames 1300...
+ [2023-02-23 03:52:16,242][11306] Num frames 1400...
+ [2023-02-23 03:52:16,374][11306] Num frames 1500...
+ [2023-02-23 03:52:16,518][11306] Avg episode rewards: #0: 18.880, true rewards: #0: 7.880
+ [2023-02-23 03:52:16,520][11306] Avg episode reward: 18.880, avg true_objective: 7.880
+ [2023-02-23 03:52:16,563][11306] Num frames 1600...
+ [2023-02-23 03:52:16,675][11306] Num frames 1700...
+ [2023-02-23 03:52:16,791][11306] Num frames 1800...
+ [2023-02-23 03:52:16,904][11306] Num frames 1900...
+ [2023-02-23 03:52:17,022][11306] Num frames 2000...
+ [2023-02-23 03:52:17,139][11306] Avg episode rewards: #0: 16.177, true rewards: #0: 6.843
+ [2023-02-23 03:52:17,141][11306] Avg episode reward: 16.177, avg true_objective: 6.843
+ [2023-02-23 03:52:17,197][11306] Num frames 2100...
+ [2023-02-23 03:52:17,312][11306] Num frames 2200...
+ [2023-02-23 03:52:17,425][11306] Num frames 2300...
+ [2023-02-23 03:52:17,549][11306] Num frames 2400...
+ [2023-02-23 03:52:17,667][11306] Num frames 2500...
+ [2023-02-23 03:52:17,779][11306] Num frames 2600...
+ [2023-02-23 03:52:17,899][11306] Num frames 2700...
+ [2023-02-23 03:52:18,015][11306] Num frames 2800...
+ [2023-02-23 03:52:18,140][11306] Num frames 2900...
+ [2023-02-23 03:52:18,255][11306] Avg episode rewards: #0: 17.368, true rewards: #0: 7.367
+ [2023-02-23 03:52:18,257][11306] Avg episode reward: 17.368, avg true_objective: 7.367
+ [2023-02-23 03:52:18,325][11306] Num frames 3000...
+ [2023-02-23 03:52:18,441][11306] Num frames 3100...
+ [2023-02-23 03:52:18,553][11306] Num frames 3200...
+ [2023-02-23 03:52:18,669][11306] Num frames 3300...
+ [2023-02-23 03:52:18,788][11306] Num frames 3400...
+ [2023-02-23 03:52:18,902][11306] Num frames 3500...
+ [2023-02-23 03:52:19,015][11306] Num frames 3600...
+ [2023-02-23 03:52:19,130][11306] Num frames 3700...
+ [2023-02-23 03:52:19,249][11306] Num frames 3800...
+ [2023-02-23 03:52:19,372][11306] Num frames 3900...
+ [2023-02-23 03:52:19,491][11306] Num frames 4000...
+ [2023-02-23 03:52:19,587][11306] Avg episode rewards: #0: 18.470, true rewards: #0: 8.070
+ [2023-02-23 03:52:19,591][11306] Avg episode reward: 18.470, avg true_objective: 8.070
+ [2023-02-23 03:52:19,671][11306] Num frames 4100...
+ [2023-02-23 03:52:19,782][11306] Num frames 4200...
+ [2023-02-23 03:52:19,901][11306] Num frames 4300...
+ [2023-02-23 03:52:20,017][11306] Num frames 4400...
+ [2023-02-23 03:52:20,175][11306] Num frames 4500...
+ [2023-02-23 03:52:20,332][11306] Num frames 4600...
+ [2023-02-23 03:52:20,493][11306] Num frames 4700...
+ [2023-02-23 03:52:20,654][11306] Num frames 4800...
+ [2023-02-23 03:52:20,815][11306] Num frames 4900...
+ [2023-02-23 03:52:20,978][11306] Num frames 5000...
+ [2023-02-23 03:52:21,135][11306] Num frames 5100...
+ [2023-02-23 03:52:21,293][11306] Num frames 5200...
+ [2023-02-23 03:52:21,446][11306] Num frames 5300...
+ [2023-02-23 03:52:21,605][11306] Num frames 5400...
+ [2023-02-23 03:52:21,768][11306] Num frames 5500...
+ [2023-02-23 03:52:21,930][11306] Num frames 5600...
+ [2023-02-23 03:52:22,120][11306] Avg episode rewards: #0: 22.293, true rewards: #0: 9.460
+ [2023-02-23 03:52:22,123][11306] Avg episode reward: 22.293, avg true_objective: 9.460
+ [2023-02-23 03:52:22,167][11306] Num frames 5700...
+ [2023-02-23 03:52:22,336][11306] Num frames 5800...
+ [2023-02-23 03:52:22,504][11306] Num frames 5900...
+ [2023-02-23 03:52:22,678][11306] Num frames 6000...
+ [2023-02-23 03:52:22,843][11306] Num frames 6100...
+ [2023-02-23 03:52:23,021][11306] Num frames 6200...
+ [2023-02-23 03:52:23,192][11306] Num frames 6300...
+ [2023-02-23 03:52:23,359][11306] Num frames 6400...
+ [2023-02-23 03:52:23,533][11306] Num frames 6500...
+ [2023-02-23 03:52:23,604][11306] Avg episode rewards: #0: 21.869, true rewards: #0: 9.297
+ [2023-02-23 03:52:23,606][11306] Avg episode reward: 21.869, avg true_objective: 9.297
+ [2023-02-23 03:52:23,723][11306] Num frames 6600...
+ [2023-02-23 03:52:23,853][11306] Num frames 6700...
+ [2023-02-23 03:52:23,973][11306] Num frames 6800...
+ [2023-02-23 03:52:24,086][11306] Num frames 6900...
+ [2023-02-23 03:52:24,203][11306] Num frames 7000...
+ [2023-02-23 03:52:24,315][11306] Num frames 7100...
+ [2023-02-23 03:52:24,439][11306] Num frames 7200...
+ [2023-02-23 03:52:24,553][11306] Num frames 7300...
+ [2023-02-23 03:52:24,669][11306] Num frames 7400...
+ [2023-02-23 03:52:24,792][11306] Num frames 7500...
+ [2023-02-23 03:52:24,908][11306] Num frames 7600...
+ [2023-02-23 03:52:25,025][11306] Num frames 7700...
+ [2023-02-23 03:52:25,146][11306] Num frames 7800...
+ [2023-02-23 03:52:25,258][11306] Num frames 7900...
+ [2023-02-23 03:52:25,379][11306] Num frames 8000...
+ [2023-02-23 03:52:25,491][11306] Num frames 8100...
+ [2023-02-23 03:52:25,607][11306] Num frames 8200...
+ [2023-02-23 03:52:25,722][11306] Num frames 8300...
+ [2023-02-23 03:52:25,817][11306] Avg episode rewards: #0: 26.037, true rewards: #0: 10.412
+ [2023-02-23 03:52:25,819][11306] Avg episode reward: 26.037, avg true_objective: 10.412
+ [2023-02-23 03:52:25,899][11306] Num frames 8400...
+ [2023-02-23 03:52:26,011][11306] Num frames 8500...
+ [2023-02-23 03:52:26,127][11306] Num frames 8600...
+ [2023-02-23 03:52:26,243][11306] Num frames 8700...
+ [2023-02-23 03:52:26,372][11306] Num frames 8800...
+ [2023-02-23 03:52:26,484][11306] Num frames 8900...
+ [2023-02-23 03:52:26,548][11306] Avg episode rewards: #0: 24.229, true rewards: #0: 9.896
+ [2023-02-23 03:52:26,551][11306] Avg episode reward: 24.229, avg true_objective: 9.896
+ [2023-02-23 03:52:26,660][11306] Num frames 9000...
+ [2023-02-23 03:52:26,778][11306] Num frames 9100...
+ [2023-02-23 03:52:26,897][11306] Num frames 9200...
+ [2023-02-23 03:52:27,009][11306] Num frames 9300...
+ [2023-02-23 03:52:27,126][11306] Avg episode rewards: #0: 22.754, true rewards: #0: 9.354
+ [2023-02-23 03:52:27,127][11306] Avg episode reward: 22.754, avg true_objective: 9.354
+ [2023-02-23 03:53:24,214][11306] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2023-02-23 03:53:24,859][11306] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2023-02-23 03:53:24,862][11306] Overriding arg 'num_workers' with value 1 passed from command line
+ [2023-02-23 03:53:24,864][11306] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2023-02-23 03:53:24,870][11306] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2023-02-23 03:53:24,872][11306] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2023-02-23 03:53:24,874][11306] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2023-02-23 03:53:24,875][11306] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2023-02-23 03:53:24,877][11306] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2023-02-23 03:53:24,880][11306] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2023-02-23 03:53:24,889][11306] Adding new argument 'hf_repository'='keshan/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2023-02-23 03:53:24,892][11306] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2023-02-23 03:53:24,894][11306] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2023-02-23 03:53:24,895][11306] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2023-02-23 03:53:24,897][11306] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2023-02-23 03:53:24,903][11306] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2023-02-23 03:53:24,928][11306] RunningMeanStd input shape: (3, 72, 128)
+ [2023-02-23 03:53:24,932][11306] RunningMeanStd input shape: (1,)
+ [2023-02-23 03:53:24,955][11306] ConvEncoder: input_channels=3
+ [2023-02-23 03:53:25,018][11306] Conv encoder output size: 512
+ [2023-02-23 03:53:25,021][11306] Policy head output size: 512
+ [2023-02-23 03:53:25,051][11306] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001466_6004736.pth...
+ [2023-02-23 03:53:25,732][11306] Num frames 100...
+ [2023-02-23 03:53:25,888][11306] Num frames 200...
+ [2023-02-23 03:53:26,039][11306] Num frames 300...
+ [2023-02-23 03:53:26,186][11306] Avg episode rewards: #0: 3.570, true rewards: #0: 3.570
+ [2023-02-23 03:53:26,188][11306] Avg episode reward: 3.570, avg true_objective: 3.570
+ [2023-02-23 03:53:26,260][11306] Num frames 400...
+ [2023-02-23 03:53:26,421][11306] Num frames 500...
+ [2023-02-23 03:53:26,581][11306] Num frames 600...
+ [2023-02-23 03:53:26,782][11306] Avg episode rewards: #0: 4.470, true rewards: #0: 3.470
+ [2023-02-23 03:53:26,784][11306] Avg episode reward: 4.470, avg true_objective: 3.470
+ [2023-02-23 03:53:26,795][11306] Num frames 700...
+ [2023-02-23 03:53:26,954][11306] Num frames 800...
+ [2023-02-23 03:53:27,122][11306] Num frames 900...
+ [2023-02-23 03:53:27,290][11306] Num frames 1000...
+ [2023-02-23 03:53:27,463][11306] Num frames 1100...
+ [2023-02-23 03:53:27,645][11306] Num frames 1200...
+ [2023-02-23 03:53:27,799][11306] Num frames 1300...
+ [2023-02-23 03:53:27,997][11306] Num frames 1400...
+ [2023-02-23 03:53:28,177][11306] Num frames 1500...
+ [2023-02-23 03:53:28,275][11306] Avg episode rewards: #0: 8.420, true rewards: #0: 5.087
+ [2023-02-23 03:53:28,276][11306] Avg episode reward: 8.420, avg true_objective: 5.087
+ [2023-02-23 03:53:28,430][11306] Num frames 1600...
+ [2023-02-23 03:53:28,601][11306] Num frames 1700...
+ [2023-02-23 03:53:28,759][11306] Num frames 1800...
+ [2023-02-23 03:53:28,918][11306] Num frames 1900...
+ [2023-02-23 03:53:29,086][11306] Num frames 2000...
+ [2023-02-23 03:53:29,259][11306] Num frames 2100...
+ [2023-02-23 03:53:29,425][11306] Num frames 2200...
+ [2023-02-23 03:53:29,602][11306] Num frames 2300...
+ [2023-02-23 03:53:29,728][11306] Num frames 2400...
+ [2023-02-23 03:53:29,850][11306] Num frames 2500...
+ [2023-02-23 03:53:29,974][11306] Num frames 2600...
+ [2023-02-23 03:53:30,154][11306] Num frames 2700...
+ [2023-02-23 03:53:30,317][11306] Avg episode rewards: #0: 13.658, true rewards: #0: 6.907
+ [2023-02-23 03:53:30,319][11306] Avg episode reward: 13.658, avg true_objective: 6.907
+ [2023-02-23 03:53:30,396][11306] Num frames 2800...
+ [2023-02-23 03:53:30,554][11306] Num frames 2900...
+ [2023-02-23 03:53:30,719][11306] Num frames 3000...
+ [2023-02-23 03:53:30,874][11306] Num frames 3100...
+ [2023-02-23 03:53:31,034][11306] Num frames 3200...
+ [2023-02-23 03:53:31,193][11306] Num frames 3300...
+ [2023-02-23 03:53:31,343][11306] Num frames 3400...
+ [2023-02-23 03:53:31,502][11306] Num frames 3500...
+ [2023-02-23 03:53:31,679][11306] Num frames 3600...
+ [2023-02-23 03:53:31,843][11306] Num frames 3700...
+ [2023-02-23 03:53:32,052][11306] Avg episode rewards: #0: 15.174, true rewards: #0: 7.574
+ [2023-02-23 03:53:32,055][11306] Avg episode reward: 15.174, avg true_objective: 7.574
+ [2023-02-23 03:53:32,082][11306] Num frames 3800...
+ [2023-02-23 03:53:32,244][11306] Num frames 3900...
+ [2023-02-23 03:53:32,409][11306] Num frames 4000...
+ [2023-02-23 03:53:32,573][11306] Num frames 4100...
+ [2023-02-23 03:53:32,741][11306] Num frames 4200...
+ [2023-02-23 03:53:32,914][11306] Num frames 4300...
+ [2023-02-23 03:53:33,086][11306] Num frames 4400...
+ [2023-02-23 03:53:33,256][11306] Num frames 4500...
+ [2023-02-23 03:53:33,420][11306] Num frames 4600...
+ [2023-02-23 03:53:33,557][11306] Num frames 4700...
+ [2023-02-23 03:53:33,680][11306] Num frames 4800...
+ [2023-02-23 03:53:33,801][11306] Num frames 4900...
+ [2023-02-23 03:53:33,918][11306] Num frames 5000...
+ [2023-02-23 03:53:34,054][11306] Num frames 5100...
+ [2023-02-23 03:53:34,171][11306] Num frames 5200...
+ [2023-02-23 03:53:34,294][11306] Num frames 5300...
+ [2023-02-23 03:53:34,411][11306] Num frames 5400...
+ [2023-02-23 03:53:34,525][11306] Num frames 5500...
+ [2023-02-23 03:53:34,663][11306] Num frames 5600...
+ [2023-02-23 03:53:34,798][11306] Num frames 5700...
+ [2023-02-23 03:53:34,924][11306] Num frames 5800...
+ [2023-02-23 03:53:35,084][11306] Avg episode rewards: #0: 22.312, true rewards: #0: 9.812
+ [2023-02-23 03:53:35,086][11306] Avg episode reward: 22.312, avg true_objective: 9.812
+ [2023-02-23 03:53:35,105][11306] Num frames 5900...
+ [2023-02-23 03:53:35,219][11306] Num frames 6000...
+ [2023-02-23 03:53:35,333][11306] Num frames 6100...
+ [2023-02-23 03:53:35,471][11306] Num frames 6200...
+ [2023-02-23 03:53:35,589][11306] Num frames 6300...
+ [2023-02-23 03:53:35,714][11306] Num frames 6400...
+ [2023-02-23 03:53:35,831][11306] Num frames 6500...
+ [2023-02-23 03:53:35,958][11306] Num frames 6600...
+ [2023-02-23 03:53:36,082][11306] Num frames 6700...
+ [2023-02-23 03:53:36,198][11306] Num frames 6800...
+ [2023-02-23 03:53:36,316][11306] Num frames 6900...
+ [2023-02-23 03:53:36,440][11306] Num frames 7000...
+ [2023-02-23 03:53:36,557][11306] Num frames 7100...
+ [2023-02-23 03:53:36,675][11306] Num frames 7200...
+ [2023-02-23 03:53:36,798][11306] Num frames 7300...
+ [2023-02-23 03:53:36,921][11306] Num frames 7400...
+ [2023-02-23 03:53:37,040][11306] Avg episode rewards: #0: 25.078, true rewards: #0: 10.650
+ [2023-02-23 03:53:37,041][11306] Avg episode reward: 25.078, avg true_objective: 10.650
+ [2023-02-23 03:53:37,096][11306] Num frames 7500...
+ [2023-02-23 03:53:37,211][11306] Num frames 7600...
+ [2023-02-23 03:53:37,327][11306] Num frames 7700...
+ [2023-02-23 03:53:37,443][11306] Num frames 7800...
+ [2023-02-23 03:53:37,557][11306] Num frames 7900...
+ [2023-02-23 03:53:37,671][11306] Num frames 8000...
+ [2023-02-23 03:53:37,801][11306] Avg episode rewards: #0: 23.079, true rewards: #0: 10.079
+ [2023-02-23 03:53:37,803][11306] Avg episode reward: 23.079, avg true_objective: 10.079
+ [2023-02-23 03:53:37,855][11306] Num frames 8100...
+ [2023-02-23 03:53:37,972][11306] Num frames 8200...
+ [2023-02-23 03:53:38,086][11306] Num frames 8300...
+ [2023-02-23 03:53:38,198][11306] Num frames 8400...
+ [2023-02-23 03:53:38,312][11306] Num frames 8500...
+ [2023-02-23 03:53:38,425][11306] Num frames 8600...
+ [2023-02-23 03:53:38,557][11306] Avg episode rewards: #0: 22.071, true rewards: #0: 9.627
+ [2023-02-23 03:53:38,560][11306] Avg episode reward: 22.071, avg true_objective: 9.627
+ [2023-02-23 03:53:38,605][11306] Num frames 8700...
+ [2023-02-23 03:53:38,724][11306] Num frames 8800...
+ [2023-02-23 03:53:38,843][11306] Num frames 8900...
+ [2023-02-23 03:53:38,959][11306] Num frames 9000...
+ [2023-02-23 03:53:39,080][11306] Num frames 9100...
+ [2023-02-23 03:53:39,197][11306] Num frames 9200...
+ [2023-02-23 03:53:39,312][11306] Num frames 9300...
+ [2023-02-23 03:53:39,425][11306] Num frames 9400...
+ [2023-02-23 03:53:39,542][11306] Num frames 9500...
+ [2023-02-23 03:53:39,655][11306] Num frames 9600...
+ [2023-02-23 03:53:39,773][11306] Avg episode rewards: #0: 22.156, true rewards: #0: 9.656
+ [2023-02-23 03:53:39,775][11306] Avg episode reward: 22.156, avg true_objective: 9.656
+ [2023-02-23 03:54:37,173][11306] Replay video saved to /content/train_dir/default_experiment/replay.mp4!