pszemraj committed
Commit 8c2135b
Parent: 160c7d9

End of training

Files changed (5)
  1. README.md +3 -1
  2. all_results.json +16 -0
  3. eval_results.json +12 -0
  4. train_results.json +7 -0
  5. trainer_state.json +802 -0
README.md CHANGED
@@ -2,6 +2,8 @@
  license: apache-2.0
  base_model: facebook/dinov2-base-imagenet1k-1-layer
  tags:
+ - image-classification
+ - vision
  - generated_from_trainer
  metrics:
  - accuracy
@@ -19,7 +21,7 @@ should probably proofread and complete it, then remove this comment. -->

  # dinov2-base-imagenet1k-1-layer-boulderspot-vN

- This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on an unknown dataset.
+ This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on the pszemraj/boulderspot dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.0519
  - Accuracy: 0.9810
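The updated card tags the model for image classification and names the pszemraj/boulderspot dataset. For reference, a minimal inference sketch is shown below; the Hub repo id used here is an assumption (the card only gives the working name dinov2-base-imagenet1k-1-layer-boulderspot-vN), so substitute the actual published id.

```python
# Minimal inference sketch for the fine-tuned classifier described in the card above.
# NOTE: the repo id is a placeholder assumption -- replace it with the model's real Hub id.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="pszemraj/dinov2-base-imagenet1k-1-layer-boulderspot",  # hypothetical id
)

preds = classifier("example_aerial_tile.jpg")  # accepts a local path, URL, or PIL.Image
print(preds)  # e.g. [{'label': '...', 'score': 0.98}, ...]
```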
all_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "epoch": 4.99,
+   "eval_accuracy": 0.9809941520467836,
+   "eval_f1": 0.9808994233422476,
+   "eval_loss": 0.051854074001312256,
+   "eval_matthews_correlation": 0.8500768147494288,
+   "eval_precision": 0.9808194985441887,
+   "eval_recall": 0.9809941520467836,
+   "eval_runtime": 4.1858,
+   "eval_samples_per_second": 163.41,
+   "eval_steps_per_second": 10.273,
+   "train_loss": 0.08605182834446724,
+   "train_runtime": 583.274,
+   "train_samples_per_second": 111.397,
+   "train_steps_per_second": 1.74
+ }
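all_results.json reports accuracy, F1, precision, recall, and Matthews correlation on the eval split. The training script itself is not part of this commit, but a `compute_metrics` function that yields these keys typically looks like the sketch below (the weighted averaging is an assumption, not read from the actual run).

```python
# Sketch of a Trainer-style compute_metrics producing the keys seen in all_results.json.
# The averaging mode is assumed, not taken from this repository's training code.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
        "precision": precision_score(labels, preds, average="weighted"),
        "recall": recall_score(labels, preds, average="weighted"),
        "matthews_correlation": matthews_corrcoef(labels, preds),
    }
```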
eval_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "epoch": 4.99,
+   "eval_accuracy": 0.9809941520467836,
+   "eval_f1": 0.9808994233422476,
+   "eval_loss": 0.051854074001312256,
+   "eval_matthews_correlation": 0.8500768147494288,
+   "eval_precision": 0.9808194985441887,
+   "eval_recall": 0.9809941520467836,
+   "eval_runtime": 4.1858,
+   "eval_samples_per_second": 163.41,
+   "eval_steps_per_second": 10.273
+ }
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "epoch": 4.99,
+   "train_loss": 0.08605182834446724,
+   "train_runtime": 583.274,
+   "train_samples_per_second": 111.397,
+   "train_steps_per_second": 1.74
+ }
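The throughput figures above are internally consistent and can be cross-checked with a few lines of arithmetic; the inferred effective batch size below is an estimate, not something stated anywhere in the commit.

```python
# Consistency check of the train_results.json / trainer_state.json numbers in this commit.
train_runtime = 583.274                 # seconds
samples_per_second = 111.397
steps_per_second = 1.74
global_step = 1015                      # from trainer_state.json
per_device_batch = 16                   # "train_batch_size" in trainer_state.json

total_samples = samples_per_second * train_runtime        # ~65,000 samples seen over 5 epochs
total_steps = steps_per_second * train_runtime            # ~1,015, matches global_step
effective_batch = samples_per_second / steps_per_second   # ~64 samples per optimizer step

print(f"samples seen ~{total_samples:,.0f}, steps ~{total_steps:,.0f} (logged: {global_step})")
print(f"effective batch ~{effective_batch:.0f} vs per-device batch {per_device_batch}: "
      "suggests gradient accumulation and/or multiple devices (inferred, not stated here)")
```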
trainer_state.json ADDED
@@ -0,0 +1,802 @@
+ {
+   "best_metric": 0.9808994233422476,
+   "best_model_checkpoint": "./outputs/dinov2-base-imagenet1k-1-layer-boulderspot-vN/checkpoint-1015",
+   "epoch": 4.993849938499385,
+   "eval_steps": 500,
+   "global_step": 1015,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {"epoch": 0.05, "grad_norm": 21.444149017333984, "learning_rate": 3.92156862745098e-06, "loss": 0.3077, "step": 10},
+     {"epoch": 0.1, "grad_norm": 20.83109474182129, "learning_rate": 7.84313725490196e-06, "loss": 0.0953, "step": 20},
+     {"epoch": 0.15, "grad_norm": 8.993614196777344, "learning_rate": 1.1764705882352942e-05, "loss": 0.1968, "step": 30},
+     {"epoch": 0.2, "grad_norm": 32.97685241699219, "learning_rate": 1.568627450980392e-05, "loss": 0.1265, "step": 40},
+     {"epoch": 0.25, "grad_norm": 10.921149253845215, "learning_rate": 1.9607843137254903e-05, "loss": 0.166, "step": 50},
+     {"epoch": 0.3, "grad_norm": 30.291000366210938, "learning_rate": 1.9995698998770955e-05, "loss": 0.1484, "step": 60},
+     {"epoch": 0.34, "grad_norm": 7.144667625427246, "learning_rate": 1.998083609002402e-05, "loss": 0.1529, "step": 70},
+     {"epoch": 0.39, "grad_norm": 72.38031005859375, "learning_rate": 1.995537395500004e-05, "loss": 0.1588, "step": 80},
+     {"epoch": 0.44, "grad_norm": 9.280795097351074, "learning_rate": 1.9919339633410737e-05, "loss": 0.1575, "step": 90},
+     {"epoch": 0.49, "grad_norm": 8.867383003234863, "learning_rate": 1.9872771392183334e-05, "loss": 0.1345, "step": 100},
+     {"epoch": 0.54, "grad_norm": 11.048301696777344, "learning_rate": 1.981571868482269e-05, "loss": 0.1832, "step": 110},
+     {"epoch": 0.59, "grad_norm": 18.908288955688477, "learning_rate": 1.974824209889377e-05, "loss": 0.1392, "step": 120},
+     {"epoch": 0.64, "grad_norm": 6.0579047203063965, "learning_rate": 1.9670413291680223e-05, "loss": 0.1179, "step": 130},
+     {"epoch": 0.69, "grad_norm": 6.6037774085998535, "learning_rate": 1.9582314914087344e-05, "loss": 0.1258, "step": 140},
+     {"epoch": 0.74, "grad_norm": 17.16813087463379, "learning_rate": 1.9484040522870333e-05, "loss": 0.1914, "step": 150},
+     {"epoch": 0.79, "grad_norm": 15.085983276367188, "learning_rate": 1.9375694481280965e-05, "loss": 0.1747, "step": 160},
+     {"epoch": 0.84, "grad_norm": 6.54490852355957, "learning_rate": 1.9257391848238212e-05, "loss": 0.0749, "step": 170},
+     {"epoch": 0.89, "grad_norm": 0.9332025051116943, "learning_rate": 1.9129258256140556e-05, "loss": 0.0706, "step": 180},
+     {"epoch": 0.93, "grad_norm": 26.89777946472168, "learning_rate": 1.8991429777449674e-05, "loss": 0.0852, "step": 190},
+     {"epoch": 0.98, "grad_norm": 4.798658847808838, "learning_rate": 1.884405278018722e-05, "loss": 0.1596, "step": 200},
+     {"epoch": 1.0, "eval_accuracy": 0.9766081871345029, "eval_f1": 0.9758655635300373, "eval_loss": 0.0732586681842804, "eval_matthews_correlation": 0.8078903073020254, "eval_precision": 0.9756885037132272, "eval_recall": 0.9766081871345029, "eval_runtime": 3.9698, "eval_samples_per_second": 172.303, "eval_steps_per_second": 10.832, "step": 203},
+     {"epoch": 1.03, "grad_norm": 13.787192344665527, "learning_rate": 1.8687283772498205e-05, "loss": 0.1367, "step": 210},
+     {"epoch": 1.08, "grad_norm": 5.066114902496338, "learning_rate": 1.852128923644593e-05, "loss": 0.1308, "step": 220},
+     {"epoch": 1.13, "grad_norm": 2.300729990005493, "learning_rate": 1.8346245451215068e-05, "loss": 0.1011, "step": 230},
+     {"epoch": 1.18, "grad_norm": 6.617341995239258, "learning_rate": 1.8162338305910636e-05, "loss": 0.0879, "step": 240},
+     {"epoch": 1.23, "grad_norm": 5.020946025848389, "learning_rate": 1.79697631021516e-05, "loss": 0.0969, "step": 250},
+     {"epoch": 1.28, "grad_norm": 7.580730438232422, "learning_rate": 1.776872434666882e-05, "loss": 0.0942, "step": 260},
+     {"epoch": 1.33, "grad_norm": 1.9427905082702637, "learning_rate": 1.7559435534127534e-05, "loss": 0.0745, "step": 270},
+     {"epoch": 1.38, "grad_norm": 10.397661209106445, "learning_rate": 1.7342118920405035e-05, "loss": 0.1028, "step": 280},
+     {"epoch": 1.43, "grad_norm": 8.107540130615234, "learning_rate": 1.7117005286564344e-05, "loss": 0.0941, "step": 290},
+     {"epoch": 1.48, "grad_norm": 13.503528594970703, "learning_rate": 1.688433369377444e-05, "loss": 0.1162, "step": 300},
+     {"epoch": 1.53, "grad_norm": 15.07189655303955, "learning_rate": 1.6644351229437416e-05, "loss": 0.1301, "step": 310},
+     {"epoch": 1.57, "grad_norm": 3.9105443954467773, "learning_rate": 1.63973127447921e-05, "loss": 0.1441, "step": 320},
+     {"epoch": 1.62, "grad_norm": 7.494752883911133, "learning_rate": 1.6143480584272794e-05, "loss": 0.1002, "step": 330},
+     {"epoch": 1.67, "grad_norm": 2.2176589965820312, "learning_rate": 1.5883124306910563e-05, "loss": 0.0731, "step": 340},
+     {"epoch": 1.72, "grad_norm": 5.566643714904785, "learning_rate": 1.5616520400072963e-05, "loss": 0.093, "step": 350},
+     {"epoch": 1.77, "grad_norm": 4.072403907775879, "learning_rate": 1.5343951985846096e-05, "loss": 0.0899, "step": 360},
+     {"epoch": 1.82, "grad_norm": 4.870898246765137, "learning_rate": 1.5065708520370943e-05, "loss": 0.0781, "step": 370},
+     {"epoch": 1.87, "grad_norm": 21.179044723510742, "learning_rate": 1.4782085486453155e-05, "loss": 0.0807, "step": 380},
+     {"epoch": 1.92, "grad_norm": 4.826471328735352, "learning_rate": 1.4493384079772815e-05, "loss": 0.0852, "step": 390},
+     {"epoch": 1.97, "grad_norm": 3.290867328643799, "learning_rate": 1.4199910889027335e-05, "loss": 0.0635, "step": 400},
+     {"epoch": 2.0, "eval_accuracy": 0.9473684210526315, "eval_f1": 0.9522155218554862, "eval_loss": 0.12761278450489044, "eval_matthews_correlation": 0.6844503635019795, "eval_precision": 0.9618507818612543, "eval_recall": 0.9473684210526315, "eval_runtime": 4.0112, "eval_samples_per_second": 170.521, "eval_steps_per_second": 10.72, "step": 406},
+     {"epoch": 2.02, "grad_norm": 16.522815704345703, "learning_rate": 1.390197757034721e-05, "loss": 0.0853, "step": 410},
+     {"epoch": 2.07, "grad_norm": 5.749844074249268, "learning_rate": 1.3599900516330382e-05, "loss": 0.0685, "step": 420},
+     {"epoch": 2.12, "grad_norm": 4.761476993560791, "learning_rate": 1.3294000520046666e-05, "loss": 0.086, "step": 430},
+     {"epoch": 2.16, "grad_norm": 5.712699890136719, "learning_rate": 1.2984602434369058e-05, "loss": 0.0927, "step": 440},
+     {"epoch": 2.21, "grad_norm": 7.019559860229492, "learning_rate": 1.2672034826993716e-05, "loss": 0.0771, "step": 450},
+     {"epoch": 2.26, "grad_norm": 5.033693313598633, "learning_rate": 1.235662963151493e-05, "loss": 0.065, "step": 460},
+     {"epoch": 2.31, "grad_norm": 7.356439590454102, "learning_rate": 1.2038721794925689e-05, "loss": 0.0782, "step": 470},
+     {"epoch": 2.36, "grad_norm": 13.046978950500488, "learning_rate": 1.1718648921918112e-05, "loss": 0.1074, "step": 480},
+     {"epoch": 2.41, "grad_norm": 2.489901542663574, "learning_rate": 1.1396750916361526e-05, "loss": 0.0891, "step": 490},
+     {"epoch": 2.46, "grad_norm": 8.636452674865723, "learning_rate": 1.1073369620338928e-05, "loss": 0.0922, "step": 500},
+     {"epoch": 2.51, "grad_norm": 3.626593589782715, "learning_rate": 1.074884845112512e-05, "loss": 0.0832, "step": 510},
+     {"epoch": 2.56, "grad_norm": 2.680762529373169, "learning_rate": 1.0423532036492077e-05, "loss": 0.0659, "step": 520},
+     {"epoch": 2.61, "grad_norm": 9.335197448730469, "learning_rate": 1.0097765848728825e-05, "loss": 0.0718, "step": 530},
+     {"epoch": 2.66, "grad_norm": 2.929506301879883, "learning_rate": 9.771895837764438e-06, "loss": 0.0975, "step": 540},
+     {"epoch": 2.71, "grad_norm": 1.251734733581543, "learning_rate": 9.446268063783853e-06, "loss": 0.0343, "step": 550},
+     {"epoch": 2.76, "grad_norm": 11.404159545898438, "learning_rate": 9.121228329726563e-06, "loss": 0.0488, "step": 560},
+     {"epoch": 2.8, "grad_norm": 3.3042876720428467, "learning_rate": 8.797121814058502e-06, "loss": 0.0641, "step": 570},
+     {"epoch": 2.85, "grad_norm": 10.385183334350586, "learning_rate": 8.474292704207095e-06, "loss": 0.0951, "step": 580},
+     {"epoch": 2.9, "grad_norm": 7.184572219848633, "learning_rate": 8.153083831048772e-06, "loss": 0.0591, "step": 590},
+     {"epoch": 2.95, "grad_norm": 4.226497650146484, "learning_rate": 7.833836304837022e-06, "loss": 0.1031, "step": 600},
+     {"epoch": 3.0, "eval_accuracy": 0.9751461988304093, "eval_f1": 0.9755012041745224, "eval_loss": 0.06017656996846199, "eval_matthews_correlation": 0.8118305972172924, "eval_precision": 0.9759749663327615, "eval_recall": 0.9751461988304093, "eval_runtime": 4.1933, "eval_samples_per_second": 163.119, "eval_steps_per_second": 10.255, "step": 609},
+     {"epoch": 3.0, "grad_norm": 4.557692527770996, "learning_rate": 7.516889152957744e-06, "loss": 0.0533, "step": 610},
+     {"epoch": 3.05, "grad_norm": 3.751300096511841, "learning_rate": 7.202578959896491e-06, "loss": 0.0782, "step": 620},
+     {"epoch": 3.1, "grad_norm": 4.593023777008057, "learning_rate": 6.891239509799932e-06, "loss": 0.0627, "step": 630},
+     {"epoch": 3.15, "grad_norm": 4.492953777313232, "learning_rate": 6.583201432011217e-06, "loss": 0.0564, "step": 640},
+     {"epoch": 3.2, "grad_norm": 3.022425889968872, "learning_rate": 6.278791849955583e-06, "loss": 0.0719, "step": 650},
+     {"epoch": 3.25, "grad_norm": 4.909082889556885, "learning_rate": 5.978334033749076e-06, "loss": 0.0531, "step": 660},
+     {"epoch": 3.3, "grad_norm": 2.2064766883850098, "learning_rate": 5.682147056899361e-06, "loss": 0.0628, "step": 670},
+     {"epoch": 3.35, "grad_norm": 5.478167533874512, "learning_rate": 5.390545457463134e-06, "loss": 0.0705, "step": 680},
+     {"epoch": 3.39, "grad_norm": 5.376189708709717, "learning_rate": 5.103838904019993e-06, "loss": 0.0888, "step": 690},
+     {"epoch": 3.44, "grad_norm": 3.5109596252441406, "learning_rate": 4.822331866817478e-06, "loss": 0.0577, "step": 700},
+     {"epoch": 3.49, "grad_norm": 0.5529822707176208, "learning_rate": 4.546323294436556e-06, "loss": 0.0421, "step": 710},
+     {"epoch": 3.54, "grad_norm": 10.782859802246094, "learning_rate": 4.276106296320828e-06, "loss": 0.0579, "step": 720},
+     {"epoch": 3.59, "grad_norm": 0.36797773838043213, "learning_rate": 4.0119678315067025e-06, "loss": 0.0381, "step": 730},
+     {"epoch": 3.64, "grad_norm": 3.724026679992676, "learning_rate": 3.754188403885013e-06, "loss": 0.057, "step": 740},
+     {"epoch": 3.69, "grad_norm": 6.234977722167969, "learning_rate": 3.5030417643177416e-06, "loss": 0.0556, "step": 750},
+     {"epoch": 3.74, "grad_norm": 6.538670063018799, "learning_rate": 3.258794619926159e-06, "loss": 0.0624, "step": 760},
+     {"epoch": 3.79, "grad_norm": 1.8592917919158936, "learning_rate": 3.021706350859147e-06, "loss": 0.0544, "step": 770},
+     {"epoch": 3.84, "grad_norm": 1.5543180704116821, "learning_rate": 2.792028734842418e-06, "loss": 0.0321, "step": 780},
+     {"epoch": 3.89, "grad_norm": 6.113970756530762, "learning_rate": 2.5700056798012164e-06, "loss": 0.052, "step": 790},
+     {"epoch": 3.94, "grad_norm": 6.493689060211182, "learning_rate": 2.3558729648404065e-06, "loss": 0.046, "step": 800},
+     {"epoch": 3.99, "grad_norm": 3.7167036533355713, "learning_rate": 2.1498579898570228e-06, "loss": 0.0587, "step": 810},
+     {"epoch": 4.0, "eval_accuracy": 0.9736842105263158, "eval_f1": 0.9734187929958467, "eval_loss": 0.05119941756129265, "eval_matthews_correlation": 0.7904630921052955, "eval_precision": 0.9732101510950039, "eval_recall": 0.9736842105263158, "eval_runtime": 4.2261, "eval_samples_per_second": 161.853, "eval_steps_per_second": 10.175, "step": 813},
+     {"epoch": 4.03, "grad_norm": 6.378363132476807, "learning_rate": 1.952179534051183e-06, "loss": 0.0389, "step": 820},
+     {"epoch": 4.08, "grad_norm": 5.403367042541504, "learning_rate": 1.763047523591831e-06, "loss": 0.0727, "step": 830},
+     {"epoch": 4.13, "grad_norm": 5.556785583496094, "learning_rate": 1.5826628086839968e-06, "loss": 0.0365, "step": 840},
+     {"epoch": 4.18, "grad_norm": 10.821802139282227, "learning_rate": 1.41121695027438e-06, "loss": 0.0419, "step": 850},
+     {"epoch": 4.23, "grad_norm": 2.722320795059204, "learning_rate": 1.2488920166217034e-06, "loss": 0.0404, "step": 860},
+     {"epoch": 4.28, "grad_norm": 11.169462203979492, "learning_rate": 1.095860389947928e-06, "loss": 0.0522, "step": 870},
+     {"epoch": 4.33, "grad_norm": 4.065892696380615, "learning_rate": 9.522845833756001e-07, "loss": 0.0497, "step": 880},
+     {"epoch": 4.38, "grad_norm": 11.717745780944824, "learning_rate": 8.183170683457986e-07, "loss": 0.0543, "step": 890},
+     {"epoch": 4.43, "grad_norm": 3.715181589126587, "learning_rate": 6.941001126998892e-07, "loss": 0.065, "step": 900},
+     {"epoch": 4.48, "grad_norm": 8.879691123962402, "learning_rate": 5.797656295970955e-07, "loss": 0.0546, "step": 910},
+     {"epoch": 4.53, "grad_norm": 6.542267799377441, "learning_rate": 4.754350374283001e-07, "loss": 0.0491, "step": 920},
+     {"epoch": 4.58, "grad_norm": 5.4677557945251465, "learning_rate": 3.8121913087483033e-07, "loss": 0.0434, "step": 930},
+     {"epoch": 4.62, "grad_norm": 5.965972900390625, "learning_rate": 2.972179632491989e-07, "loss": 0.0594, "step": 940},
+     {"epoch": 4.67, "grad_norm": 5.097690105438232, "learning_rate": 2.23520740242712e-07, "loss": 0.0632, "step": 950},
+     {"epoch": 4.72, "grad_norm": 5.078763961791992, "learning_rate": 1.602057251927891e-07, "loss": 0.0448, "step": 960},
+     {"epoch": 4.77, "grad_norm": 4.476650714874268, "learning_rate": 1.0734015597060222e-07, "loss": 0.0481, "step": 970},
+     {"epoch": 4.82, "grad_norm": 3.210946798324585, "learning_rate": 6.498017357731035e-08, "loss": 0.0604, "step": 980},
+     {"epoch": 4.87, "grad_norm": 6.374156951904297, "learning_rate": 3.317076252467133e-08, "loss": 0.0406, "step": 990},
+     {"epoch": 4.92, "grad_norm": 2.580352306365967, "learning_rate": 1.1945703063402925e-08, "loss": 0.0343, "step": 1000},
+     {"epoch": 4.97, "grad_norm": 1.094759464263916, "learning_rate": 1.327535309979533e-09, "loss": 0.038, "step": 1010},
+     {"epoch": 4.99, "eval_accuracy": 0.9809941520467836, "eval_f1": 0.9808994233422476, "eval_loss": 0.051854074001312256, "eval_matthews_correlation": 0.8500768147494288, "eval_precision": 0.9808194985441887, "eval_recall": 0.9809941520467836, "eval_runtime": 4.1997, "eval_samples_per_second": 162.87, "eval_steps_per_second": 10.239, "step": 1015},
+     {"epoch": 4.99, "step": 1015, "total_flos": 6.629580853384053e+18, "train_loss": 0.08605182834446724, "train_runtime": 583.274, "train_samples_per_second": 111.397, "train_steps_per_second": 1.74}
+   ],
+   "logging_steps": 10,
+   "max_steps": 1015,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 5,
+   "save_steps": 500,
+   "total_flos": 6.629580853384053e+18,
+   "train_batch_size": 16,
+   "trial_name": null,
+   "trial_params": null
+ }
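trainer_state.json carries the full log_history (training loss every 10 steps plus per-epoch eval metrics), which makes it easy to visualize the run. A minimal plotting sketch, assuming the file has been downloaded locally:

```python
# Plot training loss and per-epoch eval accuracy from the trainer_state.json added above.
import json
import matplotlib.pyplot as plt

with open("trainer_state.json") as f:
    state = json.load(f)

train_logs = [e for e in state["log_history"] if "loss" in e]          # per-10-step entries
eval_logs = [e for e in state["log_history"] if "eval_accuracy" in e]  # per-epoch entries

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot([e["step"] for e in train_logs], [e["loss"] for e in train_logs])
ax1.set(xlabel="step", ylabel="train loss")
ax2.plot([e["step"] for e in eval_logs], [e["eval_accuracy"] for e in eval_logs], marker="o")
ax2.set(xlabel="step", ylabel="eval accuracy")
plt.tight_layout()
plt.show()
```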