pszemraj committed on
Commit cf208af
1 Parent(s): f6f7ebf

End of training

Files changed (5)
  1. README.md +3 -3
  2. all_results.json +16 -0
  3. eval_results.json +10 -0
  4. train_results.json +9 -0
  5. trainer_state.json +1185 -0
README.md CHANGED
@@ -14,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # silu-griffin-1024-c3t-8layer-simple_wikipedia_LM-vN
 
-This model is a fine-tuned version of [silu-griffin-1024-c3t-8layer](https://huggingface.co/silu-griffin-1024-c3t-8layer) on an unknown dataset.
+This model is a fine-tuned version of [silu-griffin-1024-c3t-8layer](https://huggingface.co/silu-griffin-1024-c3t-8layer) on the pszemraj/simple_wikipedia_LM dataset.
 It achieves the following results on the evaluation set:
-- Loss: 4.1877
-- Accuracy: 0.4085
+- Loss: 4.0476
+- Accuracy: 0.4224
 
 ## Model description
 
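For anyone picking up the trained checkpoint, here is a minimal inference sketch. The repo id below is a placeholder (the card only links the base model), and it assumes the custom Griffin-style architecture loads through `transformers`' `AutoModelForCausalLM`, possibly requiring `trust_remote_code=True`.

```python
# Minimal sketch of loading the fine-tuned checkpoint for inference.
# NOTE: the repo id is hypothetical; adjust it to wherever the final model is pushed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pszemraj/silu-griffin-1024-c3t-8layer-simple_wikipedia_LM"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Simple English Wikipedia is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```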
all_results.json ADDED
@@ -0,0 +1,16 @@
+{
+ "epoch": 1.9972932091393998,
+ "eval_accuracy": 0.42241642228739,
+ "eval_loss": 4.047567367553711,
+ "eval_runtime": 14.4539,
+ "eval_samples": 250,
+ "eval_samples_per_second": 17.296,
+ "eval_steps_per_second": 4.359,
+ "perplexity": 57.25799982195849,
+ "total_flos": 6.247688798679859e+16,
+ "train_loss": 8.457253451250038,
+ "train_runtime": 65851.1603,
+ "train_samples": 50243,
+ "train_samples_per_second": 1.526,
+ "train_steps_per_second": 0.012
+}
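The `perplexity` field above is just the exponential of `eval_loss`; a quick sketch of that relationship, assuming the file is in the current directory:

```python
# Re-derive the reported perplexity from the evaluation loss: ppl = exp(eval_loss).
import json
import math

with open("all_results.json") as f:
    results = json.load(f)

print(math.exp(results["eval_loss"]))  # exp(4.0476) ≈ 57.26
print(results["perplexity"])           # 57.25799982195849
```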
eval_results.json ADDED
@@ -0,0 +1,10 @@
+{
+ "epoch": 1.9972932091393998,
+ "eval_accuracy": 0.42241642228739,
+ "eval_loss": 4.047567367553711,
+ "eval_runtime": 14.4539,
+ "eval_samples": 250,
+ "eval_samples_per_second": 17.296,
+ "eval_steps_per_second": 4.359,
+ "perplexity": 57.25799982195849
+}
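A couple of consistency checks on the evaluation numbers (a sketch; the per-device eval batch size is inferred from the throughput ratio, not recorded in this file):

```python
# 250 samples / 17.296 samples/s ≈ 14.45 s, matching eval_runtime, and the
# samples-per-step ratio (≈ 3.97) suggests an eval batch size of about 4.
import json

with open("eval_results.json") as f:
    ev = json.load(f)

print(ev["eval_samples"] / ev["eval_samples_per_second"])           # ≈ 14.45
print(ev["eval_samples_per_second"] / ev["eval_steps_per_second"])  # ≈ 3.97
```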
train_results.json ADDED
@@ -0,0 +1,9 @@
+{
+ "epoch": 1.9972932091393998,
+ "total_flos": 6.247688798679859e+16,
+ "train_loss": 8.457253451250038,
+ "train_runtime": 65851.1603,
+ "train_samples": 50243,
+ "train_samples_per_second": 1.526,
+ "train_steps_per_second": 0.012
+}
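The training throughput can be cross-checked the same way. Combined with the 784 optimizer steps recorded in `trainer_state.json`, it implies an effective global batch size of roughly 128; this is a back-of-the-envelope estimate, since the gradient-accumulation/world-size split is not stored here.

```python
# ~2 epochs over 50,243 samples in 784 optimizer steps ≈ 128 samples per step.
import json

with open("train_results.json") as f:
    tr = json.load(f)

samples_seen = tr["train_samples"] * tr["epoch"]   # ≈ 100,350
print(samples_seen / tr["train_runtime"])          # ≈ 1.52 samples/s (logged: 1.526)
print(samples_seen / 784)                          # ≈ 128
```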
trainer_state.json ADDED
@@ -0,0 +1,1185 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.9972932091393998,
5
+ "eval_steps": 100,
6
+ "global_step": 784,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.012737839344001274,
13
+ "grad_norm": 7.091875076293945,
14
+ "learning_rate": 3.75e-05,
15
+ "loss": 37.744,
16
+ "step": 5
17
+ },
18
+ {
19
+ "epoch": 0.02547567868800255,
20
+ "grad_norm": 2.930401563644409,
21
+ "learning_rate": 7.5e-05,
22
+ "loss": 34.0864,
23
+ "step": 10
24
+ },
25
+ {
26
+ "epoch": 0.03821351803200382,
27
+ "grad_norm": 1.8477588891983032,
28
+ "learning_rate": 0.0001125,
29
+ "loss": 31.2726,
30
+ "step": 15
31
+ },
32
+ {
33
+ "epoch": 0.0509513573760051,
34
+ "grad_norm": 1.3455390930175781,
35
+ "learning_rate": 0.00015,
36
+ "loss": 28.3763,
37
+ "step": 20
38
+ },
39
+ {
40
+ "epoch": 0.06368919672000636,
41
+ "grad_norm": 1.138717532157898,
42
+ "learning_rate": 0.00018749999999999998,
43
+ "loss": 26.957,
44
+ "step": 25
45
+ },
46
+ {
47
+ "epoch": 0.07642703606400764,
48
+ "grad_norm": 0.9747544527053833,
49
+ "learning_rate": 0.000225,
50
+ "loss": 24.4616,
51
+ "step": 30
52
+ },
53
+ {
54
+ "epoch": 0.08916487540800892,
55
+ "grad_norm": 0.9035225510597229,
56
+ "learning_rate": 0.0002625,
57
+ "loss": 22.5748,
58
+ "step": 35
59
+ },
60
+ {
61
+ "epoch": 0.1019027147520102,
62
+ "grad_norm": 0.7786006927490234,
63
+ "learning_rate": 0.0003,
64
+ "loss": 20.6574,
65
+ "step": 40
66
+ },
67
+ {
68
+ "epoch": 0.11464055409601147,
69
+ "grad_norm": 0.7649045586585999,
70
+ "learning_rate": 0.0003,
71
+ "loss": 18.9346,
72
+ "step": 45
73
+ },
74
+ {
75
+ "epoch": 0.12737839344001273,
76
+ "grad_norm": 0.6415356993675232,
77
+ "learning_rate": 0.0003,
78
+ "loss": 17.8129,
79
+ "step": 50
80
+ },
81
+ {
82
+ "epoch": 0.140116232784014,
83
+ "grad_norm": 0.5701594948768616,
84
+ "learning_rate": 0.0003,
85
+ "loss": 16.881,
86
+ "step": 55
87
+ },
88
+ {
89
+ "epoch": 0.15285407212801527,
90
+ "grad_norm": 0.49638187885284424,
91
+ "learning_rate": 0.0003,
92
+ "loss": 16.2049,
93
+ "step": 60
94
+ },
95
+ {
96
+ "epoch": 0.16559191147201657,
97
+ "grad_norm": 0.44346606731414795,
98
+ "learning_rate": 0.0003,
99
+ "loss": 15.9336,
100
+ "step": 65
101
+ },
102
+ {
103
+ "epoch": 0.17832975081601785,
104
+ "grad_norm": 0.4194740355014801,
105
+ "learning_rate": 0.0003,
106
+ "loss": 15.2473,
107
+ "step": 70
108
+ },
109
+ {
110
+ "epoch": 0.19106759016001912,
111
+ "grad_norm": 0.4130041301250458,
112
+ "learning_rate": 0.0003,
113
+ "loss": 15.1218,
114
+ "step": 75
115
+ },
116
+ {
117
+ "epoch": 0.2038054295040204,
118
+ "grad_norm": 0.40480196475982666,
119
+ "learning_rate": 0.0003,
120
+ "loss": 14.7839,
121
+ "step": 80
122
+ },
123
+ {
124
+ "epoch": 0.21654326884802166,
125
+ "grad_norm": 0.394378662109375,
126
+ "learning_rate": 0.0003,
127
+ "loss": 14.2312,
128
+ "step": 85
129
+ },
130
+ {
131
+ "epoch": 0.22928110819202294,
132
+ "grad_norm": 0.39825204014778137,
133
+ "learning_rate": 0.0003,
134
+ "loss": 13.9441,
135
+ "step": 90
136
+ },
137
+ {
138
+ "epoch": 0.2420189475360242,
139
+ "grad_norm": 0.38816991448402405,
140
+ "learning_rate": 0.0003,
141
+ "loss": 13.4799,
142
+ "step": 95
143
+ },
144
+ {
145
+ "epoch": 0.25475678688002545,
146
+ "grad_norm": 0.36586159467697144,
147
+ "learning_rate": 0.0003,
148
+ "loss": 13.3276,
149
+ "step": 100
150
+ },
151
+ {
152
+ "epoch": 0.25475678688002545,
153
+ "eval_accuracy": 0.013133919843597262,
154
+ "eval_loss": 12.040165901184082,
155
+ "eval_runtime": 14.4617,
156
+ "eval_samples_per_second": 17.287,
157
+ "eval_steps_per_second": 4.356,
158
+ "step": 100
159
+ },
160
+ {
161
+ "epoch": 0.26749462622402675,
162
+ "grad_norm": 0.40571218729019165,
163
+ "learning_rate": 0.0003,
164
+ "loss": 13.1015,
165
+ "step": 105
166
+ },
167
+ {
168
+ "epoch": 0.280232465568028,
169
+ "grad_norm": 0.3502795696258545,
170
+ "learning_rate": 0.0003,
171
+ "loss": 12.614,
172
+ "step": 110
173
+ },
174
+ {
175
+ "epoch": 0.2929703049120293,
176
+ "grad_norm": 0.33776018023490906,
177
+ "learning_rate": 0.0003,
178
+ "loss": 12.488,
179
+ "step": 115
180
+ },
181
+ {
182
+ "epoch": 0.30570814425603055,
183
+ "grad_norm": 0.3277961015701294,
184
+ "learning_rate": 0.0003,
185
+ "loss": 12.2282,
186
+ "step": 120
187
+ },
188
+ {
189
+ "epoch": 0.31844598360003185,
190
+ "grad_norm": 0.3399854898452759,
191
+ "learning_rate": 0.0003,
192
+ "loss": 12.0168,
193
+ "step": 125
194
+ },
195
+ {
196
+ "epoch": 0.33118382294403315,
197
+ "grad_norm": 0.31557145714759827,
198
+ "learning_rate": 0.0003,
199
+ "loss": 11.832,
200
+ "step": 130
201
+ },
202
+ {
203
+ "epoch": 0.3439216622880344,
204
+ "grad_norm": 0.32902857661247253,
205
+ "learning_rate": 0.0003,
206
+ "loss": 11.4818,
207
+ "step": 135
208
+ },
209
+ {
210
+ "epoch": 0.3566595016320357,
211
+ "grad_norm": 0.34518980979919434,
212
+ "learning_rate": 0.0003,
213
+ "loss": 11.3197,
214
+ "step": 140
215
+ },
216
+ {
217
+ "epoch": 0.36939734097603694,
218
+ "grad_norm": 0.32530176639556885,
219
+ "learning_rate": 0.0003,
220
+ "loss": 11.0346,
221
+ "step": 145
222
+ },
223
+ {
224
+ "epoch": 0.38213518032003824,
225
+ "grad_norm": 0.3253624141216278,
226
+ "learning_rate": 0.0003,
227
+ "loss": 10.6717,
228
+ "step": 150
229
+ },
230
+ {
231
+ "epoch": 0.3948730196640395,
232
+ "grad_norm": 0.33527347445487976,
233
+ "learning_rate": 0.0003,
234
+ "loss": 10.5302,
235
+ "step": 155
236
+ },
237
+ {
238
+ "epoch": 0.4076108590080408,
239
+ "grad_norm": 0.3164774477481842,
240
+ "learning_rate": 0.0003,
241
+ "loss": 10.2009,
242
+ "step": 160
243
+ },
244
+ {
245
+ "epoch": 0.420348698352042,
246
+ "grad_norm": 0.3047502934932709,
247
+ "learning_rate": 0.0003,
248
+ "loss": 10.1689,
249
+ "step": 165
250
+ },
251
+ {
252
+ "epoch": 0.4330865376960433,
253
+ "grad_norm": 0.31613191962242126,
254
+ "learning_rate": 0.0003,
255
+ "loss": 9.85,
256
+ "step": 170
257
+ },
258
+ {
259
+ "epoch": 0.4458243770400446,
260
+ "grad_norm": 0.3114412724971771,
261
+ "learning_rate": 0.0003,
262
+ "loss": 9.6662,
263
+ "step": 175
264
+ },
265
+ {
266
+ "epoch": 0.4585622163840459,
267
+ "grad_norm": 0.31863468885421753,
268
+ "learning_rate": 0.0003,
269
+ "loss": 9.4857,
270
+ "step": 180
271
+ },
272
+ {
273
+ "epoch": 0.4713000557280471,
274
+ "grad_norm": 0.3024883568286896,
275
+ "learning_rate": 0.0003,
276
+ "loss": 9.2409,
277
+ "step": 185
278
+ },
279
+ {
280
+ "epoch": 0.4840378950720484,
281
+ "grad_norm": 0.3118532598018646,
282
+ "learning_rate": 0.0003,
283
+ "loss": 9.156,
284
+ "step": 190
285
+ },
286
+ {
287
+ "epoch": 0.49677573441604966,
288
+ "grad_norm": 0.3026701807975769,
289
+ "learning_rate": 0.0003,
290
+ "loss": 9.0273,
291
+ "step": 195
292
+ },
293
+ {
294
+ "epoch": 0.5095135737600509,
295
+ "grad_norm": 0.3058376908302307,
296
+ "learning_rate": 0.0003,
297
+ "loss": 8.9207,
298
+ "step": 200
299
+ },
300
+ {
301
+ "epoch": 0.5095135737600509,
302
+ "eval_accuracy": 0.03601173020527859,
303
+ "eval_loss": 8.031224250793457,
304
+ "eval_runtime": 14.6886,
305
+ "eval_samples_per_second": 17.02,
306
+ "eval_steps_per_second": 4.289,
307
+ "step": 200
308
+ },
309
+ {
310
+ "epoch": 0.5222514131040522,
311
+ "grad_norm": 0.31776145100593567,
312
+ "learning_rate": 0.0003,
313
+ "loss": 8.819,
314
+ "step": 205
315
+ },
316
+ {
317
+ "epoch": 0.5349892524480535,
318
+ "grad_norm": 0.3050650656223297,
319
+ "learning_rate": 0.0003,
320
+ "loss": 8.7563,
321
+ "step": 210
322
+ },
323
+ {
324
+ "epoch": 0.5477270917920548,
325
+ "grad_norm": 0.31346216797828674,
326
+ "learning_rate": 0.0003,
327
+ "loss": 8.4781,
328
+ "step": 215
329
+ },
330
+ {
331
+ "epoch": 0.560464931136056,
332
+ "grad_norm": 0.3162192404270172,
333
+ "learning_rate": 0.0003,
334
+ "loss": 8.49,
335
+ "step": 220
336
+ },
337
+ {
338
+ "epoch": 0.5732027704800573,
339
+ "grad_norm": 0.2908290922641754,
340
+ "learning_rate": 0.0003,
341
+ "loss": 8.1487,
342
+ "step": 225
343
+ },
344
+ {
345
+ "epoch": 0.5859406098240586,
346
+ "grad_norm": 0.29553738236427307,
347
+ "learning_rate": 0.0003,
348
+ "loss": 8.2668,
349
+ "step": 230
350
+ },
351
+ {
352
+ "epoch": 0.5986784491680599,
353
+ "grad_norm": 0.288335919380188,
354
+ "learning_rate": 0.0003,
355
+ "loss": 8.1061,
356
+ "step": 235
357
+ },
358
+ {
359
+ "epoch": 0.6114162885120611,
360
+ "grad_norm": 0.30966615676879883,
361
+ "learning_rate": 0.0003,
362
+ "loss": 8.1297,
363
+ "step": 240
364
+ },
365
+ {
366
+ "epoch": 0.6241541278560624,
367
+ "grad_norm": 0.29941117763519287,
368
+ "learning_rate": 0.0003,
369
+ "loss": 7.8082,
370
+ "step": 245
371
+ },
372
+ {
373
+ "epoch": 0.6368919672000637,
374
+ "grad_norm": 0.29136765003204346,
375
+ "learning_rate": 0.0003,
376
+ "loss": 7.937,
377
+ "step": 250
378
+ },
379
+ {
380
+ "epoch": 0.649629806544065,
381
+ "grad_norm": 0.30150941014289856,
382
+ "learning_rate": 0.0003,
383
+ "loss": 7.7454,
384
+ "step": 255
385
+ },
386
+ {
387
+ "epoch": 0.6623676458880663,
388
+ "grad_norm": 0.28709036111831665,
389
+ "learning_rate": 0.0003,
390
+ "loss": 7.8069,
391
+ "step": 260
392
+ },
393
+ {
394
+ "epoch": 0.6751054852320675,
395
+ "grad_norm": 0.31939393281936646,
396
+ "learning_rate": 0.0003,
397
+ "loss": 7.631,
398
+ "step": 265
399
+ },
400
+ {
401
+ "epoch": 0.6878433245760688,
402
+ "grad_norm": 0.29692211747169495,
403
+ "learning_rate": 0.0003,
404
+ "loss": 7.6632,
405
+ "step": 270
406
+ },
407
+ {
408
+ "epoch": 0.7005811639200701,
409
+ "grad_norm": 0.3304164409637451,
410
+ "learning_rate": 0.0003,
411
+ "loss": 7.4727,
412
+ "step": 275
413
+ },
414
+ {
415
+ "epoch": 0.7133190032640714,
416
+ "grad_norm": 0.28332462906837463,
417
+ "learning_rate": 0.0003,
418
+ "loss": 7.4796,
419
+ "step": 280
420
+ },
421
+ {
422
+ "epoch": 0.7260568426080726,
423
+ "grad_norm": 0.2897827625274658,
424
+ "learning_rate": 0.0003,
425
+ "loss": 7.5389,
426
+ "step": 285
427
+ },
428
+ {
429
+ "epoch": 0.7387946819520739,
430
+ "grad_norm": 0.2887686491012573,
431
+ "learning_rate": 0.0003,
432
+ "loss": 7.382,
433
+ "step": 290
434
+ },
435
+ {
436
+ "epoch": 0.7515325212960752,
437
+ "grad_norm": 0.3093564212322235,
438
+ "learning_rate": 0.0003,
439
+ "loss": 7.2586,
440
+ "step": 295
441
+ },
442
+ {
443
+ "epoch": 0.7642703606400765,
444
+ "grad_norm": 0.2902717590332031,
445
+ "learning_rate": 0.0003,
446
+ "loss": 7.2681,
447
+ "step": 300
448
+ },
449
+ {
450
+ "epoch": 0.7642703606400765,
451
+ "eval_accuracy": 0.050643206256109484,
452
+ "eval_loss": 6.477533340454102,
453
+ "eval_runtime": 14.6327,
454
+ "eval_samples_per_second": 17.085,
455
+ "eval_steps_per_second": 4.305,
456
+ "step": 300
457
+ },
458
+ {
459
+ "epoch": 0.7770081999840777,
460
+ "grad_norm": 0.2867899239063263,
461
+ "learning_rate": 0.0003,
462
+ "loss": 7.0712,
463
+ "step": 305
464
+ },
465
+ {
466
+ "epoch": 0.789746039328079,
467
+ "grad_norm": 0.27321040630340576,
468
+ "learning_rate": 0.0003,
469
+ "loss": 7.0524,
470
+ "step": 310
471
+ },
472
+ {
473
+ "epoch": 0.8024838786720803,
474
+ "grad_norm": 0.3487064242362976,
475
+ "learning_rate": 0.0003,
476
+ "loss": 7.0939,
477
+ "step": 315
478
+ },
479
+ {
480
+ "epoch": 0.8152217180160816,
481
+ "grad_norm": 0.329608291387558,
482
+ "learning_rate": 0.0003,
483
+ "loss": 6.9997,
484
+ "step": 320
485
+ },
486
+ {
487
+ "epoch": 0.8279595573600828,
488
+ "grad_norm": 0.3154338300228119,
489
+ "learning_rate": 0.0003,
490
+ "loss": 6.9663,
491
+ "step": 325
492
+ },
493
+ {
494
+ "epoch": 0.840697396704084,
495
+ "grad_norm": 0.31021803617477417,
496
+ "learning_rate": 0.0003,
497
+ "loss": 6.7821,
498
+ "step": 330
499
+ },
500
+ {
501
+ "epoch": 0.8534352360480854,
502
+ "grad_norm": 0.388336181640625,
503
+ "learning_rate": 0.0003,
504
+ "loss": 6.7751,
505
+ "step": 335
506
+ },
507
+ {
508
+ "epoch": 0.8661730753920867,
509
+ "grad_norm": 0.31887954473495483,
510
+ "learning_rate": 0.0003,
511
+ "loss": 6.702,
512
+ "step": 340
513
+ },
514
+ {
515
+ "epoch": 0.8789109147360878,
516
+ "grad_norm": 0.31558957695961,
517
+ "learning_rate": 0.0003,
518
+ "loss": 6.6206,
519
+ "step": 345
520
+ },
521
+ {
522
+ "epoch": 0.8916487540800891,
523
+ "grad_norm": 0.30751529335975647,
524
+ "learning_rate": 0.0003,
525
+ "loss": 6.7077,
526
+ "step": 350
527
+ },
528
+ {
529
+ "epoch": 0.9043865934240904,
530
+ "grad_norm": 0.33058232069015503,
531
+ "learning_rate": 0.0003,
532
+ "loss": 6.557,
533
+ "step": 355
534
+ },
535
+ {
536
+ "epoch": 0.9171244327680917,
537
+ "grad_norm": 0.3375111222267151,
538
+ "learning_rate": 0.0003,
539
+ "loss": 6.6369,
540
+ "step": 360
541
+ },
542
+ {
543
+ "epoch": 0.9298622721120929,
544
+ "grad_norm": 0.3047392964363098,
545
+ "learning_rate": 0.0003,
546
+ "loss": 6.5796,
547
+ "step": 365
548
+ },
549
+ {
550
+ "epoch": 0.9426001114560942,
551
+ "grad_norm": 0.430053174495697,
552
+ "learning_rate": 0.0003,
553
+ "loss": 6.5548,
554
+ "step": 370
555
+ },
556
+ {
557
+ "epoch": 0.9553379508000955,
558
+ "grad_norm": 0.3610515296459198,
559
+ "learning_rate": 0.0003,
560
+ "loss": 6.4576,
561
+ "step": 375
562
+ },
563
+ {
564
+ "epoch": 0.9680757901440968,
565
+ "grad_norm": 0.32095110416412354,
566
+ "learning_rate": 0.0003,
567
+ "loss": 6.4266,
568
+ "step": 380
569
+ },
570
+ {
571
+ "epoch": 0.980813629488098,
572
+ "grad_norm": 0.32170969247817993,
573
+ "learning_rate": 0.0003,
574
+ "loss": 6.5597,
575
+ "step": 385
576
+ },
577
+ {
578
+ "epoch": 0.9935514688320993,
579
+ "grad_norm": 0.29942792654037476,
580
+ "learning_rate": 0.0003,
581
+ "loss": 6.3873,
582
+ "step": 390
583
+ },
584
+ {
585
+ "epoch": 1.0062893081761006,
586
+ "grad_norm": 0.2971299886703491,
587
+ "learning_rate": 0.0003,
588
+ "loss": 6.3915,
589
+ "step": 395
590
+ },
591
+ {
592
+ "epoch": 1.0190271475201018,
593
+ "grad_norm": 0.2800815999507904,
594
+ "learning_rate": 0.0003,
595
+ "loss": 6.3187,
596
+ "step": 400
597
+ },
598
+ {
599
+ "epoch": 1.0190271475201018,
600
+ "eval_accuracy": 0.0433822091886608,
601
+ "eval_loss": 5.622740268707275,
602
+ "eval_runtime": 14.4103,
603
+ "eval_samples_per_second": 17.349,
604
+ "eval_steps_per_second": 4.372,
605
+ "step": 400
606
+ },
607
+ {
608
+ "epoch": 1.0317649868641032,
609
+ "grad_norm": 0.28819501399993896,
610
+ "learning_rate": 0.0003,
611
+ "loss": 6.328,
612
+ "step": 405
613
+ },
614
+ {
615
+ "epoch": 1.0445028262081044,
616
+ "grad_norm": 0.3983236849308014,
617
+ "learning_rate": 0.0003,
618
+ "loss": 6.3988,
619
+ "step": 410
620
+ },
621
+ {
622
+ "epoch": 1.0572406655521058,
623
+ "grad_norm": 0.2969406545162201,
624
+ "learning_rate": 0.0003,
625
+ "loss": 6.2509,
626
+ "step": 415
627
+ },
628
+ {
629
+ "epoch": 1.069978504896107,
630
+ "grad_norm": 0.2973212003707886,
631
+ "learning_rate": 0.0003,
632
+ "loss": 6.1234,
633
+ "step": 420
634
+ },
635
+ {
636
+ "epoch": 1.0827163442401082,
637
+ "grad_norm": 0.3298945426940918,
638
+ "learning_rate": 0.0003,
639
+ "loss": 6.3219,
640
+ "step": 425
641
+ },
642
+ {
643
+ "epoch": 1.0954541835841096,
644
+ "grad_norm": 0.3493943214416504,
645
+ "learning_rate": 0.0003,
646
+ "loss": 6.0888,
647
+ "step": 430
648
+ },
649
+ {
650
+ "epoch": 1.1081920229281108,
651
+ "grad_norm": 0.3639209270477295,
652
+ "learning_rate": 0.0003,
653
+ "loss": 6.2226,
654
+ "step": 435
655
+ },
656
+ {
657
+ "epoch": 1.120929862272112,
658
+ "grad_norm": 0.43913957476615906,
659
+ "learning_rate": 0.0003,
660
+ "loss": 6.0308,
661
+ "step": 440
662
+ },
663
+ {
664
+ "epoch": 1.1336677016161134,
665
+ "grad_norm": 0.43267834186553955,
666
+ "learning_rate": 0.0003,
667
+ "loss": 6.0806,
668
+ "step": 445
669
+ },
670
+ {
671
+ "epoch": 1.1464055409601146,
672
+ "grad_norm": 0.4563148021697998,
673
+ "learning_rate": 0.0003,
674
+ "loss": 5.9703,
675
+ "step": 450
676
+ },
677
+ {
678
+ "epoch": 1.159143380304116,
679
+ "grad_norm": 0.4002761244773865,
680
+ "learning_rate": 0.0003,
681
+ "loss": 5.9163,
682
+ "step": 455
683
+ },
684
+ {
685
+ "epoch": 1.1718812196481172,
686
+ "grad_norm": 0.4359826147556305,
687
+ "learning_rate": 0.0003,
688
+ "loss": 5.8285,
689
+ "step": 460
690
+ },
691
+ {
692
+ "epoch": 1.1846190589921184,
693
+ "grad_norm": 0.5450247526168823,
694
+ "learning_rate": 0.0003,
695
+ "loss": 5.8063,
696
+ "step": 465
697
+ },
698
+ {
699
+ "epoch": 1.1973568983361198,
700
+ "grad_norm": 0.3597274422645569,
701
+ "learning_rate": 0.0003,
702
+ "loss": 5.6978,
703
+ "step": 470
704
+ },
705
+ {
706
+ "epoch": 1.210094737680121,
707
+ "grad_norm": 0.4141215980052948,
708
+ "learning_rate": 0.0003,
709
+ "loss": 5.6078,
710
+ "step": 475
711
+ },
712
+ {
713
+ "epoch": 1.2228325770241222,
714
+ "grad_norm": 0.3695543110370636,
715
+ "learning_rate": 0.0003,
716
+ "loss": 5.6728,
717
+ "step": 480
718
+ },
719
+ {
720
+ "epoch": 1.2355704163681236,
721
+ "grad_norm": 0.5060051083564758,
722
+ "learning_rate": 0.0003,
723
+ "loss": 5.6049,
724
+ "step": 485
725
+ },
726
+ {
727
+ "epoch": 1.2483082557121248,
728
+ "grad_norm": 0.5355808138847351,
729
+ "learning_rate": 0.0003,
730
+ "loss": 5.6564,
731
+ "step": 490
732
+ },
733
+ {
734
+ "epoch": 1.261046095056126,
735
+ "grad_norm": 0.4578459858894348,
736
+ "learning_rate": 0.0003,
737
+ "loss": 5.5758,
738
+ "step": 495
739
+ },
740
+ {
741
+ "epoch": 1.2737839344001274,
742
+ "grad_norm": 0.4868403673171997,
743
+ "learning_rate": 0.0003,
744
+ "loss": 5.5695,
745
+ "step": 500
746
+ },
747
+ {
748
+ "epoch": 1.2737839344001274,
749
+ "eval_accuracy": 0.36348778103616813,
750
+ "eval_loss": 4.77961540222168,
751
+ "eval_runtime": 14.5581,
752
+ "eval_samples_per_second": 17.173,
753
+ "eval_steps_per_second": 4.328,
754
+ "step": 500
755
+ },
756
+ {
757
+ "epoch": 1.2865217737441286,
758
+ "grad_norm": 0.550255298614502,
759
+ "learning_rate": 0.0003,
760
+ "loss": 5.5591,
761
+ "step": 505
762
+ },
763
+ {
764
+ "epoch": 1.29925961308813,
765
+ "grad_norm": 0.5515110492706299,
766
+ "learning_rate": 0.0003,
767
+ "loss": 5.4588,
768
+ "step": 510
769
+ },
770
+ {
771
+ "epoch": 1.3119974524321312,
772
+ "grad_norm": 0.44656914472579956,
773
+ "learning_rate": 0.0003,
774
+ "loss": 5.4336,
775
+ "step": 515
776
+ },
777
+ {
778
+ "epoch": 1.3247352917761326,
779
+ "grad_norm": 0.5925999283790588,
780
+ "learning_rate": 0.0003,
781
+ "loss": 5.5185,
782
+ "step": 520
783
+ },
784
+ {
785
+ "epoch": 1.3374731311201338,
786
+ "grad_norm": 0.632453203201294,
787
+ "learning_rate": 0.0003,
788
+ "loss": 5.325,
789
+ "step": 525
790
+ },
791
+ {
792
+ "epoch": 1.350210970464135,
793
+ "grad_norm": 0.5380024909973145,
794
+ "learning_rate": 0.0003,
795
+ "loss": 5.4005,
796
+ "step": 530
797
+ },
798
+ {
799
+ "epoch": 1.3629488098081364,
800
+ "grad_norm": 0.5659191012382507,
801
+ "learning_rate": 0.0003,
802
+ "loss": 5.3564,
803
+ "step": 535
804
+ },
805
+ {
806
+ "epoch": 1.3756866491521376,
807
+ "grad_norm": 0.8913821578025818,
808
+ "learning_rate": 0.0003,
809
+ "loss": 5.2763,
810
+ "step": 540
811
+ },
812
+ {
813
+ "epoch": 1.3884244884961388,
814
+ "grad_norm": 0.9271002411842346,
815
+ "learning_rate": 0.0003,
816
+ "loss": 5.4129,
817
+ "step": 545
818
+ },
819
+ {
820
+ "epoch": 1.4011623278401402,
821
+ "grad_norm": 0.7141408324241638,
822
+ "learning_rate": 0.0003,
823
+ "loss": 5.4437,
824
+ "step": 550
825
+ },
826
+ {
827
+ "epoch": 1.4139001671841414,
828
+ "grad_norm": 0.5360827445983887,
829
+ "learning_rate": 0.0003,
830
+ "loss": 5.3523,
831
+ "step": 555
832
+ },
833
+ {
834
+ "epoch": 1.4266380065281425,
835
+ "grad_norm": 0.6563194990158081,
836
+ "learning_rate": 0.0003,
837
+ "loss": 5.1103,
838
+ "step": 560
839
+ },
840
+ {
841
+ "epoch": 1.439375845872144,
842
+ "grad_norm": 0.6325790882110596,
843
+ "learning_rate": 0.0003,
844
+ "loss": 5.4026,
845
+ "step": 565
846
+ },
847
+ {
848
+ "epoch": 1.4521136852161451,
849
+ "grad_norm": 0.8463213443756104,
850
+ "learning_rate": 0.0003,
851
+ "loss": 5.3129,
852
+ "step": 570
853
+ },
854
+ {
855
+ "epoch": 1.4648515245601466,
856
+ "grad_norm": 0.8394812345504761,
857
+ "learning_rate": 0.0003,
858
+ "loss": 5.3415,
859
+ "step": 575
860
+ },
861
+ {
862
+ "epoch": 1.4775893639041477,
863
+ "grad_norm": 0.692244291305542,
864
+ "learning_rate": 0.0003,
865
+ "loss": 5.2649,
866
+ "step": 580
867
+ },
868
+ {
869
+ "epoch": 1.4903272032481492,
870
+ "grad_norm": 0.6197806000709534,
871
+ "learning_rate": 0.0003,
872
+ "loss": 5.112,
873
+ "step": 585
874
+ },
875
+ {
876
+ "epoch": 1.5030650425921503,
877
+ "grad_norm": 0.6573797464370728,
878
+ "learning_rate": 0.0003,
879
+ "loss": 5.1669,
880
+ "step": 590
881
+ },
882
+ {
883
+ "epoch": 1.5158028819361515,
884
+ "grad_norm": 0.795892059803009,
885
+ "learning_rate": 0.0003,
886
+ "loss": 5.1693,
887
+ "step": 595
888
+ },
889
+ {
890
+ "epoch": 1.528540721280153,
891
+ "grad_norm": 0.6279253363609314,
892
+ "learning_rate": 0.0003,
893
+ "loss": 5.2926,
894
+ "step": 600
895
+ },
896
+ {
897
+ "epoch": 1.528540721280153,
898
+ "eval_accuracy": 0.3952492668621701,
899
+ "eval_loss": 4.392324447631836,
900
+ "eval_runtime": 14.409,
901
+ "eval_samples_per_second": 17.35,
902
+ "eval_steps_per_second": 4.372,
903
+ "step": 600
904
+ },
905
+ {
906
+ "epoch": 1.5412785606241541,
907
+ "grad_norm": 0.5762287378311157,
908
+ "learning_rate": 0.0003,
909
+ "loss": 5.0475,
910
+ "step": 605
911
+ },
912
+ {
913
+ "epoch": 1.5540163999681553,
914
+ "grad_norm": 0.5149503350257874,
915
+ "learning_rate": 0.0003,
916
+ "loss": 5.1185,
917
+ "step": 610
918
+ },
919
+ {
920
+ "epoch": 1.5667542393121567,
921
+ "grad_norm": 0.581633985042572,
922
+ "learning_rate": 0.0003,
923
+ "loss": 5.1166,
924
+ "step": 615
925
+ },
926
+ {
927
+ "epoch": 1.579492078656158,
928
+ "grad_norm": 0.5910624861717224,
929
+ "learning_rate": 0.0003,
930
+ "loss": 4.9907,
931
+ "step": 620
932
+ },
933
+ {
934
+ "epoch": 1.5922299180001591,
935
+ "grad_norm": 0.8280585408210754,
936
+ "learning_rate": 0.0003,
937
+ "loss": 5.0748,
938
+ "step": 625
939
+ },
940
+ {
941
+ "epoch": 1.6049677573441605,
942
+ "grad_norm": 0.5128599405288696,
943
+ "learning_rate": 0.0003,
944
+ "loss": 4.9768,
945
+ "step": 630
946
+ },
947
+ {
948
+ "epoch": 1.6177055966881617,
949
+ "grad_norm": 0.7540919184684753,
950
+ "learning_rate": 0.0003,
951
+ "loss": 5.0806,
952
+ "step": 635
953
+ },
954
+ {
955
+ "epoch": 1.630443436032163,
956
+ "grad_norm": 0.6239334940910339,
957
+ "learning_rate": 0.0003,
958
+ "loss": 5.1277,
959
+ "step": 640
960
+ },
961
+ {
962
+ "epoch": 1.6431812753761643,
963
+ "grad_norm": 0.7787991166114807,
964
+ "learning_rate": 0.0003,
965
+ "loss": 5.0778,
966
+ "step": 645
967
+ },
968
+ {
969
+ "epoch": 1.6559191147201657,
970
+ "grad_norm": 0.6328299641609192,
971
+ "learning_rate": 0.0003,
972
+ "loss": 4.9763,
973
+ "step": 650
974
+ },
975
+ {
976
+ "epoch": 1.668656954064167,
977
+ "grad_norm": 0.5455794334411621,
978
+ "learning_rate": 0.0003,
979
+ "loss": 5.0049,
980
+ "step": 655
981
+ },
982
+ {
983
+ "epoch": 1.681394793408168,
984
+ "grad_norm": 0.7078703045845032,
985
+ "learning_rate": 0.0003,
986
+ "loss": 5.0258,
987
+ "step": 660
988
+ },
989
+ {
990
+ "epoch": 1.6941326327521695,
991
+ "grad_norm": 0.6339858770370483,
992
+ "learning_rate": 0.0003,
993
+ "loss": 5.1028,
994
+ "step": 665
995
+ },
996
+ {
997
+ "epoch": 1.7068704720961707,
998
+ "grad_norm": 0.6060242652893066,
999
+ "learning_rate": 0.0003,
1000
+ "loss": 5.0428,
1001
+ "step": 670
1002
+ },
1003
+ {
1004
+ "epoch": 1.719608311440172,
1005
+ "grad_norm": 0.9218889474868774,
1006
+ "learning_rate": 0.0003,
1007
+ "loss": 4.9891,
1008
+ "step": 675
1009
+ },
1010
+ {
1011
+ "epoch": 1.7323461507841733,
1012
+ "grad_norm": 0.6890697479248047,
1013
+ "learning_rate": 0.0003,
1014
+ "loss": 4.8921,
1015
+ "step": 680
1016
+ },
1017
+ {
1018
+ "epoch": 1.7450839901281745,
1019
+ "grad_norm": 0.9093934297561646,
1020
+ "learning_rate": 0.0003,
1021
+ "loss": 4.9385,
1022
+ "step": 685
1023
+ },
1024
+ {
1025
+ "epoch": 1.7578218294721757,
1026
+ "grad_norm": 0.5929202437400818,
1027
+ "learning_rate": 0.0003,
1028
+ "loss": 4.9376,
1029
+ "step": 690
1030
+ },
1031
+ {
1032
+ "epoch": 1.770559668816177,
1033
+ "grad_norm": 0.6317362189292908,
1034
+ "learning_rate": 0.0003,
1035
+ "loss": 4.9681,
1036
+ "step": 695
1037
+ },
1038
+ {
1039
+ "epoch": 1.7832975081601783,
1040
+ "grad_norm": 0.5537763237953186,
1041
+ "learning_rate": 0.0003,
1042
+ "loss": 4.878,
1043
+ "step": 700
1044
+ },
1045
+ {
1046
+ "epoch": 1.7832975081601783,
1047
+ "eval_accuracy": 0.40849266862170086,
1048
+ "eval_loss": 4.187656402587891,
1049
+ "eval_runtime": 14.4745,
1050
+ "eval_samples_per_second": 17.272,
1051
+ "eval_steps_per_second": 4.352,
1052
+ "step": 700
1053
+ },
1054
+ {
1055
+ "epoch": 1.7960353475041795,
1056
+ "grad_norm": 0.5984592437744141,
1057
+ "learning_rate": 0.0003,
1058
+ "loss": 4.9092,
1059
+ "step": 705
1060
+ },
1061
+ {
1062
+ "epoch": 1.808773186848181,
1063
+ "grad_norm": 0.5060558915138245,
1064
+ "learning_rate": 0.0003,
1065
+ "loss": 4.989,
1066
+ "step": 710
1067
+ },
1068
+ {
1069
+ "epoch": 1.8215110261921823,
1070
+ "grad_norm": 0.8713288903236389,
1071
+ "learning_rate": 0.0003,
1072
+ "loss": 4.8114,
1073
+ "step": 715
1074
+ },
1075
+ {
1076
+ "epoch": 1.8342488655361833,
1077
+ "grad_norm": 0.8011664748191833,
1078
+ "learning_rate": 0.0003,
1079
+ "loss": 4.8468,
1080
+ "step": 720
1081
+ },
1082
+ {
1083
+ "epoch": 1.8469867048801847,
1084
+ "grad_norm": 0.6774628758430481,
1085
+ "learning_rate": 0.0003,
1086
+ "loss": 4.8899,
1087
+ "step": 725
1088
+ },
1089
+ {
1090
+ "epoch": 1.859724544224186,
1091
+ "grad_norm": 1.05668044090271,
1092
+ "learning_rate": 0.0003,
1093
+ "loss": 4.8676,
1094
+ "step": 730
1095
+ },
1096
+ {
1097
+ "epoch": 1.8724623835681873,
1098
+ "grad_norm": 0.8638430237770081,
1099
+ "learning_rate": 0.0003,
1100
+ "loss": 4.8515,
1101
+ "step": 735
1102
+ },
1103
+ {
1104
+ "epoch": 1.8852002229121885,
1105
+ "grad_norm": 0.8210180997848511,
1106
+ "learning_rate": 0.0003,
1107
+ "loss": 4.9094,
1108
+ "step": 740
1109
+ },
1110
+ {
1111
+ "epoch": 1.8979380622561899,
1112
+ "grad_norm": 0.6894564032554626,
1113
+ "learning_rate": 0.0003,
1114
+ "loss": 4.8406,
1115
+ "step": 745
1116
+ },
1117
+ {
1118
+ "epoch": 1.910675901600191,
1119
+ "grad_norm": 0.7244303822517395,
1120
+ "learning_rate": 0.0003,
1121
+ "loss": 4.8299,
1122
+ "step": 750
1123
+ },
1124
+ {
1125
+ "epoch": 1.9234137409441923,
1126
+ "grad_norm": 0.5788025856018066,
1127
+ "learning_rate": 0.0003,
1128
+ "loss": 4.8843,
1129
+ "step": 755
1130
+ },
1131
+ {
1132
+ "epoch": 1.9361515802881937,
1133
+ "grad_norm": 0.5082942843437195,
1134
+ "learning_rate": 0.0003,
1135
+ "loss": 4.7624,
1136
+ "step": 760
1137
+ },
1138
+ {
1139
+ "epoch": 1.9488894196321949,
1140
+ "grad_norm": 0.6290297508239746,
1141
+ "learning_rate": 0.0003,
1142
+ "loss": 4.7709,
1143
+ "step": 765
1144
+ },
1145
+ {
1146
+ "epoch": 1.961627258976196,
1147
+ "grad_norm": 0.5582670569419861,
1148
+ "learning_rate": 0.0003,
1149
+ "loss": 4.7169,
1150
+ "step": 770
1151
+ },
1152
+ {
1153
+ "epoch": 1.9743650983201975,
1154
+ "grad_norm": 0.6051950454711914,
1155
+ "learning_rate": 0.0003,
1156
+ "loss": 4.7701,
1157
+ "step": 775
1158
+ },
1159
+ {
1160
+ "epoch": 1.9871029376641989,
1161
+ "grad_norm": 0.6427810788154602,
1162
+ "learning_rate": 0.0003,
1163
+ "loss": 4.7729,
1164
+ "step": 780
1165
+ },
1166
+ {
1167
+ "epoch": 1.9972932091393998,
1168
+ "step": 784,
1169
+ "total_flos": 6.247688798679859e+16,
1170
+ "train_loss": 8.457253451250038,
1171
+ "train_runtime": 65851.1603,
1172
+ "train_samples_per_second": 1.526,
1173
+ "train_steps_per_second": 0.012
1174
+ }
1175
+ ],
1176
+ "logging_steps": 5,
1177
+ "max_steps": 784,
1178
+ "num_input_tokens_seen": 0,
1179
+ "num_train_epochs": 2,
1180
+ "save_steps": 100,
1181
+ "total_flos": 6.247688798679859e+16,
1182
+ "train_batch_size": 4,
1183
+ "trial_name": null,
1184
+ "trial_params": null
1185
+ }
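The `log_history` array above holds the full loss trajectory: a training-loss entry every 5 steps and an eval entry every 100. A short sketch for pulling the curves out of the file:

```python
# Extract the train- and eval-loss curves from trainer_state.json.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

train_curve = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
eval_curve = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]

print(train_curve[0], train_curve[-1])  # (5, 37.744) ... (780, 4.7729)
print(eval_curve)  # eval loss drops from ~12.04 at step 100 to ~4.19 at step 700
```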