polejowska committed on
Commit 9da173a
1 Parent(s): f955480

End of training

Files changed (3)
  1. README.md +107 -0
  2. model.safetensors +1 -1
  3. trainer_state.json +728 -0
README.md ADDED
@@ -0,0 +1,107 @@
+ ---
+ license: apache-2.0
+ base_model: polejowska/detr-r50-cd45rb-8ah-6l
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: detr-r50-finetuned-mist1-gb-8ah-6l
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # detr-r50-finetuned-mist1-gb-8ah-6l
+
+ This model is a fine-tuned version of [polejowska/detr-r50-cd45rb-8ah-6l](https://huggingface.co/polejowska/detr-r50-cd45rb-8ah-6l) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.9224
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
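Since the description and usage sections of the card are still stubs, the sketch below shows how a DETR object-detection checkpoint like this one is typically loaded with the `transformers` Auto classes. The repository id, input image, and score threshold are assumptions for illustration, not values taken from this commit.

```python
# Minimal sketch, assuming the checkpoint is published under the committer's namespace.
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo_id = "polejowska/detr-r50-finetuned-mist1-gb-8ah-6l"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForObjectDetection.from_pretrained(repo_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits and boxes into (score, label, box) detections above a chosen threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```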
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 50
+ - mixed_precision_training: Native AMP
+
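For readers who want to reproduce this run, the sketch below maps the hyperparameter list above onto `transformers.TrainingArguments`. The output directory and the per-epoch evaluation/logging/saving strategies are assumptions inferred from the results table, not settings read from this commit; the Adam betas and epsilon listed above are the Trainer's defaults.

```python
# Minimal sketch, assuming a standard Trainer setup with the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="detr-r50-finetuned-mist1-gb-8ah-6l",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumption: matches the per-epoch validation losses
    logging_strategy="epoch",      # assumption: matches the per-epoch training losses
    save_strategy="epoch",         # assumption: consistent with per-epoch checkpoints
)
```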
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 2.5222 | 1.0 | 115 | 2.2563 |
+ | 2.3827 | 2.0 | 230 | 2.2211 |
+ | 2.3441 | 3.0 | 345 | 2.2602 |
+ | 2.2896 | 4.0 | 460 | 2.2359 |
+ | 2.2828 | 5.0 | 575 | 2.2431 |
+ | 2.2972 | 6.0 | 690 | 2.1629 |
+ | 2.3007 | 7.0 | 805 | 2.1545 |
+ | 2.2951 | 8.0 | 920 | 2.1153 |
+ | 2.2595 | 9.0 | 1035 | 2.1553 |
+ | 2.2327 | 10.0 | 1150 | 2.2060 |
+ | 2.2023 | 11.0 | 1265 | 2.0452 |
+ | 2.2117 | 12.0 | 1380 | 2.0879 |
+ | 2.1805 | 13.0 | 1495 | 2.1812 |
+ | 2.1344 | 14.0 | 1610 | 2.0992 |
+ | 2.1057 | 15.0 | 1725 | 1.9834 |
+ | 2.086 | 16.0 | 1840 | 1.9610 |
+ | 2.0591 | 17.0 | 1955 | 2.1007 |
+ | 2.053 | 18.0 | 2070 | 2.0561 |
+ | 2.0387 | 19.0 | 2185 | 2.0596 |
+ | 2.0161 | 20.0 | 2300 | 1.9885 |
+ | 2.0374 | 21.0 | 2415 | 2.0041 |
+ | 2.0233 | 22.0 | 2530 | 2.0103 |
+ | 2.0363 | 23.0 | 2645 | 2.0541 |
+ | 1.9837 | 24.0 | 2760 | 1.9924 |
+ | 1.9943 | 25.0 | 2875 | 2.0558 |
+ | 1.9846 | 26.0 | 2990 | 1.9874 |
+ | 1.9601 | 27.0 | 3105 | 1.9554 |
+ | 1.9837 | 28.0 | 3220 | 1.9989 |
+ | 1.9664 | 29.0 | 3335 | 1.9876 |
+ | 1.966 | 30.0 | 3450 | 1.9755 |
+ | 1.9226 | 31.0 | 3565 | 1.9357 |
+ | 1.9405 | 32.0 | 3680 | 1.9240 |
+ | 1.9035 | 33.0 | 3795 | 1.9411 |
+ | 1.8924 | 34.0 | 3910 | 1.9291 |
+ | 1.8801 | 35.0 | 4025 | 1.9661 |
+ | 1.8698 | 36.0 | 4140 | 1.9105 |
+ | 1.8572 | 37.0 | 4255 | 1.9448 |
+ | 1.8756 | 38.0 | 4370 | 1.9675 |
+ | 1.8593 | 39.0 | 4485 | 1.9365 |
+ | 1.8713 | 40.0 | 4600 | 1.9383 |
+ | 1.8436 | 41.0 | 4715 | 1.9671 |
+ | 1.83 | 42.0 | 4830 | 1.9527 |
+ | 1.857 | 43.0 | 4945 | 1.9448 |
+ | 1.8318 | 44.0 | 5060 | 1.9366 |
+ | 1.8177 | 45.0 | 5175 | 1.9389 |
+ | 1.8034 | 46.0 | 5290 | 1.9050 |
+ | 1.8226 | 47.0 | 5405 | 1.9226 |
+ | 1.818 | 48.0 | 5520 | 1.9150 |
+ | 1.8148 | 49.0 | 5635 | 1.9169 |
+ | 1.7984 | 50.0 | 5750 | 1.9224 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.0.0
+ - Datasets 2.1.0
+ - Tokenizers 0.14.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:024f6dacd05e7b91834780fa85d5258a18778d0c499f7c1ba7292fd35d01fc19
+ oid sha256:16625071c6b40408b117b4bd0e310612000d962e8200411ff8bf1eaf82875e69
  size 166494824
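Only the LFS object hash changed here; the file size stayed at 166494824 bytes. As a quick sanity check, the sketch below hashes a locally downloaded `model.safetensors` and compares it to the new pointer's oid (the local path is an assumption).

```python
# Minimal sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib

EXPECTED_OID = "16625071c6b40408b117b4bd0e310612000d962e8200411ff8bf1eaf82875e69"

sha256 = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # assumed local path to the downloaded file
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("oid matches pointer:", sha256.hexdigest() == EXPECTED_OID)
```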
trainer_state.json ADDED
@@ -0,0 +1,728 @@
1
+ {
2
+ "best_metric": 1.905003309249878,
3
+ "best_model_checkpoint": "detr-r50-finetuned-mist1-gb-8ah-6l/checkpoint-5290",
4
+ "epoch": 50.0,
5
+ "eval_steps": 500,
6
+ "global_step": 5750,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 1.0,
13
+ "learning_rate": 9.812173913043479e-06,
14
+ "loss": 2.5222,
15
+ "step": 115
16
+ },
17
+ {
18
+ "epoch": 1.0,
19
+ "eval_loss": 2.2562553882598877,
20
+ "eval_runtime": 5.7546,
21
+ "eval_samples_per_second": 6.951,
22
+ "eval_steps_per_second": 0.869,
23
+ "step": 115
24
+ },
25
+ {
26
+ "epoch": 2.0,
27
+ "learning_rate": 9.612173913043479e-06,
28
+ "loss": 2.3827,
29
+ "step": 230
30
+ },
31
+ {
32
+ "epoch": 2.0,
33
+ "eval_loss": 2.2210755348205566,
34
+ "eval_runtime": 5.7395,
35
+ "eval_samples_per_second": 6.969,
36
+ "eval_steps_per_second": 0.871,
37
+ "step": 230
38
+ },
39
+ {
40
+ "epoch": 3.0,
41
+ "learning_rate": 9.412173913043479e-06,
42
+ "loss": 2.3441,
43
+ "step": 345
44
+ },
45
+ {
46
+ "epoch": 3.0,
47
+ "eval_loss": 2.2602248191833496,
48
+ "eval_runtime": 5.7242,
49
+ "eval_samples_per_second": 6.988,
50
+ "eval_steps_per_second": 0.873,
51
+ "step": 345
52
+ },
53
+ {
54
+ "epoch": 4.0,
55
+ "learning_rate": 9.21217391304348e-06,
56
+ "loss": 2.2896,
57
+ "step": 460
58
+ },
59
+ {
60
+ "epoch": 4.0,
61
+ "eval_loss": 2.2359230518341064,
62
+ "eval_runtime": 5.684,
63
+ "eval_samples_per_second": 7.037,
64
+ "eval_steps_per_second": 0.88,
65
+ "step": 460
66
+ },
67
+ {
68
+ "epoch": 5.0,
69
+ "learning_rate": 9.013913043478261e-06,
70
+ "loss": 2.2828,
71
+ "step": 575
72
+ },
73
+ {
74
+ "epoch": 5.0,
75
+ "eval_loss": 2.2430644035339355,
76
+ "eval_runtime": 5.7198,
77
+ "eval_samples_per_second": 6.993,
78
+ "eval_steps_per_second": 0.874,
79
+ "step": 575
80
+ },
81
+ {
82
+ "epoch": 6.0,
83
+ "learning_rate": 8.813913043478261e-06,
84
+ "loss": 2.2972,
85
+ "step": 690
86
+ },
87
+ {
88
+ "epoch": 6.0,
89
+ "eval_loss": 2.1629228591918945,
90
+ "eval_runtime": 5.6906,
91
+ "eval_samples_per_second": 7.029,
92
+ "eval_steps_per_second": 0.879,
93
+ "step": 690
94
+ },
95
+ {
96
+ "epoch": 7.0,
97
+ "learning_rate": 8.615652173913043e-06,
98
+ "loss": 2.3007,
99
+ "step": 805
100
+ },
101
+ {
102
+ "epoch": 7.0,
103
+ "eval_loss": 2.1544721126556396,
104
+ "eval_runtime": 5.7423,
105
+ "eval_samples_per_second": 6.966,
106
+ "eval_steps_per_second": 0.871,
107
+ "step": 805
108
+ },
109
+ {
110
+ "epoch": 8.0,
111
+ "learning_rate": 8.417391304347827e-06,
112
+ "loss": 2.2951,
113
+ "step": 920
114
+ },
115
+ {
116
+ "epoch": 8.0,
117
+ "eval_loss": 2.115345001220703,
118
+ "eval_runtime": 5.7472,
119
+ "eval_samples_per_second": 6.96,
120
+ "eval_steps_per_second": 0.87,
121
+ "step": 920
122
+ },
123
+ {
124
+ "epoch": 9.0,
125
+ "learning_rate": 8.217391304347827e-06,
126
+ "loss": 2.2595,
127
+ "step": 1035
128
+ },
129
+ {
130
+ "epoch": 9.0,
131
+ "eval_loss": 2.1553213596343994,
132
+ "eval_runtime": 5.6778,
133
+ "eval_samples_per_second": 7.045,
134
+ "eval_steps_per_second": 0.881,
135
+ "step": 1035
136
+ },
137
+ {
138
+ "epoch": 10.0,
139
+ "learning_rate": 8.017391304347828e-06,
140
+ "loss": 2.2327,
141
+ "step": 1150
142
+ },
143
+ {
144
+ "epoch": 10.0,
145
+ "eval_loss": 2.205960750579834,
146
+ "eval_runtime": 5.7224,
147
+ "eval_samples_per_second": 6.99,
148
+ "eval_steps_per_second": 0.874,
149
+ "step": 1150
150
+ },
151
+ {
152
+ "epoch": 11.0,
153
+ "learning_rate": 7.817391304347826e-06,
154
+ "loss": 2.2023,
155
+ "step": 1265
156
+ },
157
+ {
158
+ "epoch": 11.0,
159
+ "eval_loss": 2.045210599899292,
160
+ "eval_runtime": 5.6947,
161
+ "eval_samples_per_second": 7.024,
162
+ "eval_steps_per_second": 0.878,
163
+ "step": 1265
164
+ },
165
+ {
166
+ "epoch": 12.0,
167
+ "learning_rate": 7.617391304347826e-06,
168
+ "loss": 2.2117,
169
+ "step": 1380
170
+ },
171
+ {
172
+ "epoch": 12.0,
173
+ "eval_loss": 2.087853193283081,
174
+ "eval_runtime": 5.7626,
175
+ "eval_samples_per_second": 6.941,
176
+ "eval_steps_per_second": 0.868,
177
+ "step": 1380
178
+ },
179
+ {
180
+ "epoch": 13.0,
181
+ "learning_rate": 7.417391304347827e-06,
182
+ "loss": 2.1805,
183
+ "step": 1495
184
+ },
185
+ {
186
+ "epoch": 13.0,
187
+ "eval_loss": 2.1812005043029785,
188
+ "eval_runtime": 5.7549,
189
+ "eval_samples_per_second": 6.951,
190
+ "eval_steps_per_second": 0.869,
191
+ "step": 1495
192
+ },
193
+ {
194
+ "epoch": 14.0,
195
+ "learning_rate": 7.217391304347827e-06,
196
+ "loss": 2.1344,
197
+ "step": 1610
198
+ },
199
+ {
200
+ "epoch": 14.0,
201
+ "eval_loss": 2.0991523265838623,
202
+ "eval_runtime": 5.7805,
203
+ "eval_samples_per_second": 6.92,
204
+ "eval_steps_per_second": 0.865,
205
+ "step": 1610
206
+ },
207
+ {
208
+ "epoch": 15.0,
209
+ "learning_rate": 7.017391304347827e-06,
210
+ "loss": 2.1057,
211
+ "step": 1725
212
+ },
213
+ {
214
+ "epoch": 15.0,
215
+ "eval_loss": 1.983435869216919,
216
+ "eval_runtime": 5.7113,
217
+ "eval_samples_per_second": 7.004,
218
+ "eval_steps_per_second": 0.875,
219
+ "step": 1725
220
+ },
221
+ {
222
+ "epoch": 16.0,
223
+ "learning_rate": 6.817391304347826e-06,
224
+ "loss": 2.086,
225
+ "step": 1840
226
+ },
227
+ {
228
+ "epoch": 16.0,
229
+ "eval_loss": 1.9609792232513428,
230
+ "eval_runtime": 5.7575,
231
+ "eval_samples_per_second": 6.947,
232
+ "eval_steps_per_second": 0.868,
233
+ "step": 1840
234
+ },
235
+ {
236
+ "epoch": 17.0,
237
+ "learning_rate": 6.617391304347827e-06,
238
+ "loss": 2.0591,
239
+ "step": 1955
240
+ },
241
+ {
242
+ "epoch": 17.0,
243
+ "eval_loss": 2.100736141204834,
244
+ "eval_runtime": 5.7633,
245
+ "eval_samples_per_second": 6.94,
246
+ "eval_steps_per_second": 0.868,
247
+ "step": 1955
248
+ },
249
+ {
250
+ "epoch": 18.0,
251
+ "learning_rate": 6.417391304347827e-06,
252
+ "loss": 2.053,
253
+ "step": 2070
254
+ },
255
+ {
256
+ "epoch": 18.0,
257
+ "eval_loss": 2.056126832962036,
258
+ "eval_runtime": 5.7709,
259
+ "eval_samples_per_second": 6.931,
260
+ "eval_steps_per_second": 0.866,
261
+ "step": 2070
262
+ },
263
+ {
264
+ "epoch": 19.0,
265
+ "learning_rate": 6.217391304347826e-06,
266
+ "loss": 2.0387,
267
+ "step": 2185
268
+ },
269
+ {
270
+ "epoch": 19.0,
271
+ "eval_loss": 2.0596375465393066,
272
+ "eval_runtime": 5.7884,
273
+ "eval_samples_per_second": 6.91,
274
+ "eval_steps_per_second": 0.864,
275
+ "step": 2185
276
+ },
277
+ {
278
+ "epoch": 20.0,
279
+ "learning_rate": 6.0173913043478264e-06,
280
+ "loss": 2.0161,
281
+ "step": 2300
282
+ },
283
+ {
284
+ "epoch": 20.0,
285
+ "eval_loss": 1.9885139465332031,
286
+ "eval_runtime": 5.7465,
287
+ "eval_samples_per_second": 6.961,
288
+ "eval_steps_per_second": 0.87,
289
+ "step": 2300
290
+ },
291
+ {
292
+ "epoch": 21.0,
293
+ "learning_rate": 5.817391304347827e-06,
294
+ "loss": 2.0374,
295
+ "step": 2415
296
+ },
297
+ {
298
+ "epoch": 21.0,
299
+ "eval_loss": 2.0041000843048096,
300
+ "eval_runtime": 5.7421,
301
+ "eval_samples_per_second": 6.966,
302
+ "eval_steps_per_second": 0.871,
303
+ "step": 2415
304
+ },
305
+ {
306
+ "epoch": 22.0,
307
+ "learning_rate": 5.617391304347827e-06,
308
+ "loss": 2.0233,
309
+ "step": 2530
310
+ },
311
+ {
312
+ "epoch": 22.0,
313
+ "eval_loss": 2.0102856159210205,
314
+ "eval_runtime": 5.7047,
315
+ "eval_samples_per_second": 7.012,
316
+ "eval_steps_per_second": 0.876,
317
+ "step": 2530
318
+ },
319
+ {
320
+ "epoch": 23.0,
321
+ "learning_rate": 5.417391304347826e-06,
322
+ "loss": 2.0363,
323
+ "step": 2645
324
+ },
325
+ {
326
+ "epoch": 23.0,
327
+ "eval_loss": 2.0540664196014404,
328
+ "eval_runtime": 5.7156,
329
+ "eval_samples_per_second": 6.998,
330
+ "eval_steps_per_second": 0.875,
331
+ "step": 2645
332
+ },
333
+ {
334
+ "epoch": 24.0,
335
+ "learning_rate": 5.2173913043478265e-06,
336
+ "loss": 1.9837,
337
+ "step": 2760
338
+ },
339
+ {
340
+ "epoch": 24.0,
341
+ "eval_loss": 1.9924190044403076,
342
+ "eval_runtime": 5.6809,
343
+ "eval_samples_per_second": 7.041,
344
+ "eval_steps_per_second": 0.88,
345
+ "step": 2760
346
+ },
347
+ {
348
+ "epoch": 25.0,
349
+ "learning_rate": 5.017391304347826e-06,
350
+ "loss": 1.9943,
351
+ "step": 2875
352
+ },
353
+ {
354
+ "epoch": 25.0,
355
+ "eval_loss": 2.0557620525360107,
356
+ "eval_runtime": 5.7087,
357
+ "eval_samples_per_second": 7.007,
358
+ "eval_steps_per_second": 0.876,
359
+ "step": 2875
360
+ },
361
+ {
362
+ "epoch": 26.0,
363
+ "learning_rate": 4.817391304347827e-06,
364
+ "loss": 1.9846,
365
+ "step": 2990
366
+ },
367
+ {
368
+ "epoch": 26.0,
369
+ "eval_loss": 1.9873688220977783,
370
+ "eval_runtime": 5.6682,
371
+ "eval_samples_per_second": 7.057,
372
+ "eval_steps_per_second": 0.882,
373
+ "step": 2990
374
+ },
375
+ {
376
+ "epoch": 27.0,
377
+ "learning_rate": 4.617391304347826e-06,
378
+ "loss": 1.9601,
379
+ "step": 3105
380
+ },
381
+ {
382
+ "epoch": 27.0,
383
+ "eval_loss": 1.9554007053375244,
384
+ "eval_runtime": 5.7979,
385
+ "eval_samples_per_second": 6.899,
386
+ "eval_steps_per_second": 0.862,
387
+ "step": 3105
388
+ },
389
+ {
390
+ "epoch": 28.0,
391
+ "learning_rate": 4.4173913043478265e-06,
392
+ "loss": 1.9837,
393
+ "step": 3220
394
+ },
395
+ {
396
+ "epoch": 28.0,
397
+ "eval_loss": 1.9988619089126587,
398
+ "eval_runtime": 5.7796,
399
+ "eval_samples_per_second": 6.921,
400
+ "eval_steps_per_second": 0.865,
401
+ "step": 3220
402
+ },
403
+ {
404
+ "epoch": 29.0,
405
+ "learning_rate": 4.217391304347827e-06,
406
+ "loss": 1.9664,
407
+ "step": 3335
408
+ },
409
+ {
410
+ "epoch": 29.0,
411
+ "eval_loss": 1.9875919818878174,
412
+ "eval_runtime": 5.7433,
413
+ "eval_samples_per_second": 6.965,
414
+ "eval_steps_per_second": 0.871,
415
+ "step": 3335
416
+ },
417
+ {
418
+ "epoch": 30.0,
419
+ "learning_rate": 4.017391304347826e-06,
420
+ "loss": 1.966,
421
+ "step": 3450
422
+ },
423
+ {
424
+ "epoch": 30.0,
425
+ "eval_loss": 1.9754610061645508,
426
+ "eval_runtime": 5.8653,
427
+ "eval_samples_per_second": 6.82,
428
+ "eval_steps_per_second": 0.852,
429
+ "step": 3450
430
+ },
431
+ {
432
+ "epoch": 31.0,
433
+ "learning_rate": 3.819130434782609e-06,
434
+ "loss": 1.9226,
435
+ "step": 3565
436
+ },
437
+ {
438
+ "epoch": 31.0,
439
+ "eval_loss": 1.9357328414916992,
440
+ "eval_runtime": 5.765,
441
+ "eval_samples_per_second": 6.938,
442
+ "eval_steps_per_second": 0.867,
443
+ "step": 3565
444
+ },
445
+ {
446
+ "epoch": 32.0,
447
+ "learning_rate": 3.6191304347826088e-06,
448
+ "loss": 1.9405,
449
+ "step": 3680
450
+ },
451
+ {
452
+ "epoch": 32.0,
453
+ "eval_loss": 1.9239734411239624,
454
+ "eval_runtime": 5.8194,
455
+ "eval_samples_per_second": 6.874,
456
+ "eval_steps_per_second": 0.859,
457
+ "step": 3680
458
+ },
459
+ {
460
+ "epoch": 33.0,
461
+ "learning_rate": 3.4191304347826086e-06,
462
+ "loss": 1.9035,
463
+ "step": 3795
464
+ },
465
+ {
466
+ "epoch": 33.0,
467
+ "eval_loss": 1.9410585165023804,
468
+ "eval_runtime": 5.8097,
469
+ "eval_samples_per_second": 6.885,
470
+ "eval_steps_per_second": 0.861,
471
+ "step": 3795
472
+ },
473
+ {
474
+ "epoch": 34.0,
475
+ "learning_rate": 3.219130434782609e-06,
476
+ "loss": 1.8924,
477
+ "step": 3910
478
+ },
479
+ {
480
+ "epoch": 34.0,
481
+ "eval_loss": 1.9291362762451172,
482
+ "eval_runtime": 5.8014,
483
+ "eval_samples_per_second": 6.895,
484
+ "eval_steps_per_second": 0.862,
485
+ "step": 3910
486
+ },
487
+ {
488
+ "epoch": 35.0,
489
+ "learning_rate": 3.019130434782609e-06,
490
+ "loss": 1.8801,
491
+ "step": 4025
492
+ },
493
+ {
494
+ "epoch": 35.0,
495
+ "eval_loss": 1.9660656452178955,
496
+ "eval_runtime": 5.7747,
497
+ "eval_samples_per_second": 6.927,
498
+ "eval_steps_per_second": 0.866,
499
+ "step": 4025
500
+ },
501
+ {
502
+ "epoch": 36.0,
503
+ "learning_rate": 2.819130434782609e-06,
504
+ "loss": 1.8698,
505
+ "step": 4140
506
+ },
507
+ {
508
+ "epoch": 36.0,
509
+ "eval_loss": 1.9104881286621094,
510
+ "eval_runtime": 5.7592,
511
+ "eval_samples_per_second": 6.945,
512
+ "eval_steps_per_second": 0.868,
513
+ "step": 4140
514
+ },
515
+ {
516
+ "epoch": 37.0,
517
+ "learning_rate": 2.619130434782609e-06,
518
+ "loss": 1.8572,
519
+ "step": 4255
520
+ },
521
+ {
522
+ "epoch": 37.0,
523
+ "eval_loss": 1.944820761680603,
524
+ "eval_runtime": 5.7796,
525
+ "eval_samples_per_second": 6.921,
526
+ "eval_steps_per_second": 0.865,
527
+ "step": 4255
528
+ },
529
+ {
530
+ "epoch": 38.0,
531
+ "learning_rate": 2.419130434782609e-06,
532
+ "loss": 1.8756,
533
+ "step": 4370
534
+ },
535
+ {
536
+ "epoch": 38.0,
537
+ "eval_loss": 1.9674819707870483,
538
+ "eval_runtime": 5.7301,
539
+ "eval_samples_per_second": 6.981,
540
+ "eval_steps_per_second": 0.873,
541
+ "step": 4370
542
+ },
543
+ {
544
+ "epoch": 39.0,
545
+ "learning_rate": 2.219130434782609e-06,
546
+ "loss": 1.8593,
547
+ "step": 4485
548
+ },
549
+ {
550
+ "epoch": 39.0,
551
+ "eval_loss": 1.9364864826202393,
552
+ "eval_runtime": 5.8116,
553
+ "eval_samples_per_second": 6.883,
554
+ "eval_steps_per_second": 0.86,
555
+ "step": 4485
556
+ },
557
+ {
558
+ "epoch": 40.0,
559
+ "learning_rate": 2.019130434782609e-06,
560
+ "loss": 1.8713,
561
+ "step": 4600
562
+ },
563
+ {
564
+ "epoch": 40.0,
565
+ "eval_loss": 1.9382976293563843,
566
+ "eval_runtime": 5.7132,
567
+ "eval_samples_per_second": 7.001,
568
+ "eval_steps_per_second": 0.875,
569
+ "step": 4600
570
+ },
571
+ {
572
+ "epoch": 41.0,
573
+ "learning_rate": 1.8191304347826088e-06,
574
+ "loss": 1.8436,
575
+ "step": 4715
576
+ },
577
+ {
578
+ "epoch": 41.0,
579
+ "eval_loss": 1.967057466506958,
580
+ "eval_runtime": 5.7284,
581
+ "eval_samples_per_second": 6.983,
582
+ "eval_steps_per_second": 0.873,
583
+ "step": 4715
584
+ },
585
+ {
586
+ "epoch": 42.0,
587
+ "learning_rate": 1.6191304347826088e-06,
588
+ "loss": 1.83,
589
+ "step": 4830
590
+ },
591
+ {
592
+ "epoch": 42.0,
593
+ "eval_loss": 1.9526548385620117,
594
+ "eval_runtime": 5.6918,
595
+ "eval_samples_per_second": 7.028,
596
+ "eval_steps_per_second": 0.878,
597
+ "step": 4830
598
+ },
599
+ {
600
+ "epoch": 43.0,
601
+ "learning_rate": 1.4191304347826089e-06,
602
+ "loss": 1.857,
603
+ "step": 4945
604
+ },
605
+ {
606
+ "epoch": 43.0,
607
+ "eval_loss": 1.944758653640747,
608
+ "eval_runtime": 5.7519,
609
+ "eval_samples_per_second": 6.954,
610
+ "eval_steps_per_second": 0.869,
611
+ "step": 4945
612
+ },
613
+ {
614
+ "epoch": 44.0,
615
+ "learning_rate": 1.2191304347826089e-06,
616
+ "loss": 1.8318,
617
+ "step": 5060
618
+ },
619
+ {
620
+ "epoch": 44.0,
621
+ "eval_loss": 1.9366220235824585,
622
+ "eval_runtime": 5.7436,
623
+ "eval_samples_per_second": 6.964,
624
+ "eval_steps_per_second": 0.871,
625
+ "step": 5060
626
+ },
627
+ {
628
+ "epoch": 45.0,
629
+ "learning_rate": 1.0191304347826089e-06,
630
+ "loss": 1.8177,
631
+ "step": 5175
632
+ },
633
+ {
634
+ "epoch": 45.0,
635
+ "eval_loss": 1.9388927221298218,
636
+ "eval_runtime": 5.8021,
637
+ "eval_samples_per_second": 6.894,
638
+ "eval_steps_per_second": 0.862,
639
+ "step": 5175
640
+ },
641
+ {
642
+ "epoch": 46.0,
643
+ "learning_rate": 8.191304347826088e-07,
644
+ "loss": 1.8034,
645
+ "step": 5290
646
+ },
647
+ {
648
+ "epoch": 46.0,
649
+ "eval_loss": 1.905003309249878,
650
+ "eval_runtime": 5.7813,
651
+ "eval_samples_per_second": 6.919,
652
+ "eval_steps_per_second": 0.865,
653
+ "step": 5290
654
+ },
655
+ {
656
+ "epoch": 47.0,
657
+ "learning_rate": 6.191304347826088e-07,
658
+ "loss": 1.8226,
659
+ "step": 5405
660
+ },
661
+ {
662
+ "epoch": 47.0,
663
+ "eval_loss": 1.9226171970367432,
664
+ "eval_runtime": 5.8014,
665
+ "eval_samples_per_second": 6.895,
666
+ "eval_steps_per_second": 0.862,
667
+ "step": 5405
668
+ },
669
+ {
670
+ "epoch": 48.0,
671
+ "learning_rate": 4.1913043478260874e-07,
672
+ "loss": 1.818,
673
+ "step": 5520
674
+ },
675
+ {
676
+ "epoch": 48.0,
677
+ "eval_loss": 1.9150111675262451,
678
+ "eval_runtime": 5.7701,
679
+ "eval_samples_per_second": 6.932,
680
+ "eval_steps_per_second": 0.867,
681
+ "step": 5520
682
+ },
683
+ {
684
+ "epoch": 49.0,
685
+ "learning_rate": 2.191304347826087e-07,
686
+ "loss": 1.8148,
687
+ "step": 5635
688
+ },
689
+ {
690
+ "epoch": 49.0,
691
+ "eval_loss": 1.9168732166290283,
692
+ "eval_runtime": 5.7338,
693
+ "eval_samples_per_second": 6.976,
694
+ "eval_steps_per_second": 0.872,
695
+ "step": 5635
696
+ },
697
+ {
698
+ "epoch": 50.0,
699
+ "learning_rate": 1.91304347826087e-08,
700
+ "loss": 1.7984,
701
+ "step": 5750
702
+ },
703
+ {
704
+ "epoch": 50.0,
705
+ "eval_loss": 1.9223819971084595,
706
+ "eval_runtime": 5.7595,
707
+ "eval_samples_per_second": 6.945,
708
+ "eval_steps_per_second": 0.868,
709
+ "step": 5750
710
+ },
711
+ {
712
+ "epoch": 50.0,
713
+ "step": 5750,
714
+ "total_flos": 1.098949102848e+19,
715
+ "train_loss": 2.026795845363451,
716
+ "train_runtime": 4682.5898,
717
+ "train_samples_per_second": 4.912,
718
+ "train_steps_per_second": 1.228
719
+ }
720
+ ],
721
+ "logging_steps": 500,
722
+ "max_steps": 5750,
723
+ "num_train_epochs": 50,
724
+ "save_steps": 500,
725
+ "total_flos": 1.098949102848e+19,
726
+ "trial_name": null,
727
+ "trial_params": null
728
+ }
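The `log_history` above is the machine-readable source of the README's results table. A small sketch for summarising it locally, assuming the file has been downloaded next to the script (the path is an assumption):

```python
# Minimal sketch: pull per-epoch eval losses and the best checkpoint out of trainer_state.json.
import json

with open("trainer_state.json") as f:  # assumed local path
    state = json.load(f)

eval_rows = [(e["epoch"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]
for epoch, loss in eval_rows:
    print(f"epoch {epoch:>4.0f}  eval_loss {loss:.4f}")

print("best eval_loss:", state["best_metric"], "from", state["best_model_checkpoint"])
```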