Safawat committed on
Commit c3e2be0
1 Parent(s): 4cdaa3f

Model save

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3677
- - Accuracy: 0.8960
+ - Loss: 0.3787
+ - Accuracy: 0.9076
 
  ## Model description
 
@@ -43,20 +43,33 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 4
+ - num_epochs: 10
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|
- | 0.7151 | 0.4651 | 100 | 0.5809 | 0.8201 |
- | 0.6882 | 0.9302 | 200 | 0.4639 | 0.8498 |
- | 0.3897 | 1.3953 | 300 | 0.4704 | 0.8465 |
- | 0.4909 | 1.8605 | 400 | 0.5023 | 0.8449 |
- | 0.2836 | 2.3256 | 500 | 0.4100 | 0.8746 |
- | 0.2669 | 2.7907 | 600 | 0.3389 | 0.8993 |
- | 0.2304 | 3.2558 | 700 | 0.3669 | 0.8927 |
- | 0.1523 | 3.7209 | 800 | 0.3677 | 0.8960 |
+ | 0.7236 | 0.4651 | 100 | 0.6396 | 0.8102 |
+ | 0.7243 | 0.9302 | 200 | 0.5124 | 0.8333 |
+ | 0.4288 | 1.3953 | 300 | 0.4514 | 0.8630 |
+ | 0.5744 | 1.8605 | 400 | 0.6154 | 0.8102 |
+ | 0.4077 | 2.3256 | 500 | 0.4612 | 0.8614 |
+ | 0.496 | 2.7907 | 600 | 0.4359 | 0.8729 |
+ | 0.3446 | 3.2558 | 700 | 0.4276 | 0.8696 |
+ | 0.3347 | 3.7209 | 800 | 0.4259 | 0.8795 |
+ | 0.3868 | 4.1860 | 900 | 0.4642 | 0.8548 |
+ | 0.36 | 4.6512 | 1000 | 0.4242 | 0.8696 |
+ | 0.295 | 5.1163 | 1100 | 0.4204 | 0.8812 |
+ | 0.2342 | 5.5814 | 1200 | 0.3933 | 0.8911 |
+ | 0.1629 | 6.0465 | 1300 | 0.3634 | 0.8977 |
+ | 0.2041 | 6.5116 | 1400 | 0.4007 | 0.8911 |
+ | 0.1668 | 6.9767 | 1500 | 0.3843 | 0.8927 |
+ | 0.0976 | 7.4419 | 1600 | 0.4062 | 0.8927 |
+ | 0.1275 | 7.9070 | 1700 | 0.3861 | 0.8894 |
+ | 0.1063 | 8.3721 | 1800 | 0.4011 | 0.8911 |
+ | 0.1658 | 8.8372 | 1900 | 0.3840 | 0.9043 |
+ | 0.1 | 9.3023 | 2000 | 0.3873 | 0.9010 |
+ | 0.1045 | 9.7674 | 2100 | 0.3787 | 0.9076 |
 
 
  ### Framework versions
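For reference, a minimal inference sketch for a checkpoint produced by this run. The checkpoint directory name `finetuned-electrical-images` is taken from `best_model_checkpoint` in the trainer_state.json below, and the image path is a placeholder, so both may need adjusting to your setup:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed local checkpoint directory and a placeholder test image.
checkpoint = "finetuned-electrical-images"
image = Image.open("example.jpg").convert("RGB")

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```
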
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 4.0,
+ "total_flos": 1.0638481718004941e+18,
+ "train_loss": 0.4333847167880036,
+ "train_runtime": 591.3634,
+ "train_samples_per_second": 23.214,
+ "train_steps_per_second": 1.454
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e5cf7f61b8d96fe997d35731089282a02cf43c700e8d8eef6b3ab437c9d18be0
+ oid sha256:26e215444bbd66d8ee275069d4f23c942f375f29a375e9b90a1b9b075b93c014
  size 343236280
runs/Apr26_01-09-42_fe7d46d3b18e/events.out.tfevents.1714093804.fe7d46d3b18e.21275.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d56bc83735dab2375b59c81c968ce11821ba7e686a5cf1bc4685725a1917639
+ size 57302
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 4.0,
+ "total_flos": 1.0638481718004941e+18,
+ "train_loss": 0.4333847167880036,
+ "train_runtime": 591.3634,
+ "train_samples_per_second": 23.214,
+ "train_steps_per_second": 1.454
+ }
trainer_state.json ADDED
@@ -0,0 +1,704 @@
+ {
+ "best_metric": 0.33891761302948,
+ "best_model_checkpoint": "finetuned-electrical-images/checkpoint-600",
+ "epoch": 4.0,
+ "eval_steps": 100,
+ "global_step": 860,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.046511627906976744,
+ "grad_norm": 2.072422742843628,
+ "learning_rate": 0.00019767441860465116,
+ "loss": 1.5929,
+ "step": 10
+ },
+ {
+ "epoch": 0.09302325581395349,
+ "grad_norm": 2.2486143112182617,
+ "learning_rate": 0.00019534883720930232,
+ "loss": 1.2056,
+ "step": 20
+ },
+ {
+ "epoch": 0.13953488372093023,
+ "grad_norm": 2.1311681270599365,
+ "learning_rate": 0.0001930232558139535,
+ "loss": 1.0947,
+ "step": 30
+ },
+ {
+ "epoch": 0.18604651162790697,
+ "grad_norm": 2.3108925819396973,
+ "learning_rate": 0.00019069767441860466,
+ "loss": 0.965,
+ "step": 40
+ },
+ {
+ "epoch": 0.23255813953488372,
+ "grad_norm": 2.6261141300201416,
+ "learning_rate": 0.00018837209302325584,
+ "loss": 0.7767,
+ "step": 50
+ },
+ {
+ "epoch": 0.27906976744186046,
+ "grad_norm": 4.883328914642334,
+ "learning_rate": 0.000186046511627907,
+ "loss": 0.8164,
+ "step": 60
+ },
+ {
+ "epoch": 0.32558139534883723,
+ "grad_norm": 2.2796030044555664,
+ "learning_rate": 0.00018372093023255815,
+ "loss": 0.8293,
+ "step": 70
+ },
+ {
+ "epoch": 0.37209302325581395,
+ "grad_norm": 2.7582902908325195,
+ "learning_rate": 0.0001813953488372093,
+ "loss": 0.7748,
+ "step": 80
+ },
+ {
+ "epoch": 0.4186046511627907,
+ "grad_norm": 3.0205562114715576,
+ "learning_rate": 0.00017906976744186048,
+ "loss": 0.8204,
+ "step": 90
+ },
+ {
+ "epoch": 0.46511627906976744,
+ "grad_norm": 1.5362632274627686,
+ "learning_rate": 0.00017674418604651164,
+ "loss": 0.7151,
+ "step": 100
+ },
+ {
+ "epoch": 0.46511627906976744,
+ "eval_accuracy": 0.8201320132013201,
+ "eval_loss": 0.5808877348899841,
+ "eval_runtime": 7.6253,
+ "eval_samples_per_second": 79.472,
+ "eval_steps_per_second": 9.967,
+ "step": 100
+ },
+ {
+ "epoch": 0.5116279069767442,
+ "grad_norm": 3.7735097408294678,
+ "learning_rate": 0.0001744186046511628,
+ "loss": 0.523,
+ "step": 110
+ },
+ {
+ "epoch": 0.5581395348837209,
+ "grad_norm": 5.240425109863281,
+ "learning_rate": 0.00017209302325581395,
+ "loss": 0.5818,
+ "step": 120
+ },
+ {
+ "epoch": 0.6046511627906976,
+ "grad_norm": 3.7434163093566895,
+ "learning_rate": 0.0001697674418604651,
+ "loss": 0.7063,
+ "step": 130
+ },
+ {
+ "epoch": 0.6511627906976745,
+ "grad_norm": 1.7690590620040894,
+ "learning_rate": 0.00016744186046511629,
+ "loss": 0.7594,
+ "step": 140
+ },
+ {
+ "epoch": 0.6976744186046512,
+ "grad_norm": 3.7053258419036865,
+ "learning_rate": 0.00016511627906976747,
+ "loss": 0.5997,
+ "step": 150
+ },
+ {
+ "epoch": 0.7441860465116279,
+ "grad_norm": 4.870126247406006,
+ "learning_rate": 0.00016279069767441862,
+ "loss": 0.5614,
+ "step": 160
+ },
+ {
+ "epoch": 0.7906976744186046,
+ "grad_norm": 2.533661365509033,
+ "learning_rate": 0.00016046511627906978,
+ "loss": 0.4965,
+ "step": 170
+ },
+ {
+ "epoch": 0.8372093023255814,
+ "grad_norm": 3.910142660140991,
+ "learning_rate": 0.00015813953488372093,
+ "loss": 0.5525,
+ "step": 180
+ },
+ {
+ "epoch": 0.8837209302325582,
+ "grad_norm": 3.5800535678863525,
+ "learning_rate": 0.0001558139534883721,
+ "loss": 0.5458,
+ "step": 190
+ },
+ {
+ "epoch": 0.9302325581395349,
+ "grad_norm": 1.315317153930664,
+ "learning_rate": 0.00015348837209302327,
+ "loss": 0.6882,
+ "step": 200
+ },
+ {
+ "epoch": 0.9302325581395349,
+ "eval_accuracy": 0.8498349834983498,
+ "eval_loss": 0.4638592302799225,
+ "eval_runtime": 7.9986,
+ "eval_samples_per_second": 75.764,
+ "eval_steps_per_second": 9.502,
+ "step": 200
+ },
+ {
+ "epoch": 0.9767441860465116,
+ "grad_norm": 3.7220730781555176,
+ "learning_rate": 0.00015116279069767442,
+ "loss": 0.5207,
+ "step": 210
+ },
+ {
+ "epoch": 1.0232558139534884,
+ "grad_norm": 4.103209018707275,
+ "learning_rate": 0.00014883720930232558,
+ "loss": 0.5282,
+ "step": 220
+ },
+ {
+ "epoch": 1.069767441860465,
+ "grad_norm": 2.3725953102111816,
+ "learning_rate": 0.00014651162790697673,
+ "loss": 0.4849,
+ "step": 230
+ },
+ {
+ "epoch": 1.1162790697674418,
+ "grad_norm": 2.339578151702881,
+ "learning_rate": 0.00014418604651162791,
+ "loss": 0.3707,
+ "step": 240
+ },
+ {
+ "epoch": 1.1627906976744187,
+ "grad_norm": 2.8100476264953613,
+ "learning_rate": 0.0001418604651162791,
+ "loss": 0.3821,
+ "step": 250
+ },
+ {
+ "epoch": 1.2093023255813953,
+ "grad_norm": 2.1530966758728027,
+ "learning_rate": 0.00013953488372093025,
+ "loss": 0.4797,
+ "step": 260
+ },
+ {
+ "epoch": 1.255813953488372,
+ "grad_norm": 1.164758324623108,
+ "learning_rate": 0.0001372093023255814,
+ "loss": 0.4341,
+ "step": 270
+ },
+ {
+ "epoch": 1.302325581395349,
+ "grad_norm": 1.5009866952896118,
+ "learning_rate": 0.00013488372093023256,
+ "loss": 0.4527,
+ "step": 280
+ },
+ {
+ "epoch": 1.3488372093023255,
+ "grad_norm": 2.4176268577575684,
+ "learning_rate": 0.00013255813953488372,
+ "loss": 0.3878,
+ "step": 290
+ },
+ {
+ "epoch": 1.3953488372093024,
+ "grad_norm": 4.717296123504639,
+ "learning_rate": 0.0001302325581395349,
+ "loss": 0.3897,
+ "step": 300
+ },
+ {
+ "epoch": 1.3953488372093024,
+ "eval_accuracy": 0.8465346534653465,
+ "eval_loss": 0.47040635347366333,
+ "eval_runtime": 8.212,
+ "eval_samples_per_second": 73.794,
+ "eval_steps_per_second": 9.255,
+ "step": 300
+ },
+ {
+ "epoch": 1.441860465116279,
+ "grad_norm": 1.026237964630127,
+ "learning_rate": 0.00012790697674418605,
+ "loss": 0.3683,
+ "step": 310
+ },
+ {
+ "epoch": 1.4883720930232558,
+ "grad_norm": 2.894584894180298,
+ "learning_rate": 0.0001255813953488372,
+ "loss": 0.4968,
+ "step": 320
+ },
+ {
+ "epoch": 1.5348837209302326,
+ "grad_norm": 1.6250619888305664,
+ "learning_rate": 0.00012325581395348836,
+ "loss": 0.4784,
+ "step": 330
+ },
+ {
+ "epoch": 1.5813953488372094,
+ "grad_norm": 2.221461296081543,
+ "learning_rate": 0.00012093023255813953,
+ "loss": 0.5513,
+ "step": 340
+ },
+ {
+ "epoch": 1.627906976744186,
+ "grad_norm": 6.982600688934326,
+ "learning_rate": 0.00011860465116279071,
+ "loss": 0.5509,
+ "step": 350
+ },
+ {
+ "epoch": 1.6744186046511627,
+ "grad_norm": 2.3711423873901367,
+ "learning_rate": 0.00011627906976744187,
+ "loss": 0.4542,
+ "step": 360
+ },
+ {
+ "epoch": 1.7209302325581395,
+ "grad_norm": 2.340607166290283,
+ "learning_rate": 0.00011395348837209304,
+ "loss": 0.3822,
+ "step": 370
+ },
+ {
+ "epoch": 1.7674418604651163,
+ "grad_norm": 3.709766387939453,
+ "learning_rate": 0.00011162790697674419,
+ "loss": 0.4252,
+ "step": 380
+ },
+ {
+ "epoch": 1.8139534883720931,
+ "grad_norm": 3.5805418491363525,
+ "learning_rate": 0.00010930232558139534,
+ "loss": 0.6467,
+ "step": 390
+ },
+ {
+ "epoch": 1.8604651162790697,
+ "grad_norm": 2.1463587284088135,
+ "learning_rate": 0.00010697674418604651,
+ "loss": 0.4909,
+ "step": 400
+ },
+ {
+ "epoch": 1.8604651162790697,
+ "eval_accuracy": 0.8448844884488449,
+ "eval_loss": 0.5023446083068848,
+ "eval_runtime": 8.1791,
+ "eval_samples_per_second": 74.091,
+ "eval_steps_per_second": 9.292,
+ "step": 400
+ },
+ {
+ "epoch": 1.9069767441860463,
+ "grad_norm": 3.8787500858306885,
+ "learning_rate": 0.00010465116279069768,
+ "loss": 0.485,
+ "step": 410
+ },
+ {
+ "epoch": 1.9534883720930232,
+ "grad_norm": 3.9055089950561523,
+ "learning_rate": 0.00010232558139534885,
+ "loss": 0.4737,
+ "step": 420
+ },
+ {
+ "epoch": 2.0,
+ "grad_norm": 1.440263032913208,
+ "learning_rate": 0.0001,
+ "loss": 0.35,
+ "step": 430
+ },
+ {
+ "epoch": 2.046511627906977,
+ "grad_norm": 1.907047986984253,
+ "learning_rate": 9.767441860465116e-05,
+ "loss": 0.3646,
+ "step": 440
+ },
+ {
+ "epoch": 2.0930232558139537,
+ "grad_norm": 2.902924060821533,
+ "learning_rate": 9.534883720930233e-05,
+ "loss": 0.3787,
+ "step": 450
+ },
+ {
+ "epoch": 2.13953488372093,
+ "grad_norm": 2.807384729385376,
+ "learning_rate": 9.30232558139535e-05,
+ "loss": 0.3122,
+ "step": 460
+ },
+ {
+ "epoch": 2.186046511627907,
+ "grad_norm": 1.7866243124008179,
+ "learning_rate": 9.069767441860465e-05,
+ "loss": 0.3379,
+ "step": 470
+ },
+ {
+ "epoch": 2.2325581395348837,
+ "grad_norm": 6.933365821838379,
+ "learning_rate": 8.837209302325582e-05,
+ "loss": 0.3066,
+ "step": 480
+ },
+ {
+ "epoch": 2.2790697674418605,
+ "grad_norm": 5.515610694885254,
+ "learning_rate": 8.604651162790697e-05,
+ "loss": 0.2632,
+ "step": 490
+ },
+ {
+ "epoch": 2.3255813953488373,
+ "grad_norm": 4.792200088500977,
+ "learning_rate": 8.372093023255814e-05,
+ "loss": 0.2836,
+ "step": 500
+ },
+ {
+ "epoch": 2.3255813953488373,
+ "eval_accuracy": 0.8745874587458746,
+ "eval_loss": 0.41001710295677185,
+ "eval_runtime": 8.2547,
+ "eval_samples_per_second": 73.413,
+ "eval_steps_per_second": 9.207,
+ "step": 500
+ },
+ {
+ "epoch": 2.3720930232558137,
+ "grad_norm": 4.973999977111816,
+ "learning_rate": 8.139534883720931e-05,
+ "loss": 0.3589,
+ "step": 510
+ },
+ {
+ "epoch": 2.4186046511627906,
+ "grad_norm": 2.9822804927825928,
+ "learning_rate": 7.906976744186047e-05,
+ "loss": 0.2937,
+ "step": 520
+ },
+ {
+ "epoch": 2.4651162790697674,
+ "grad_norm": 3.735166549682617,
+ "learning_rate": 7.674418604651163e-05,
+ "loss": 0.3345,
+ "step": 530
+ },
+ {
+ "epoch": 2.511627906976744,
+ "grad_norm": 3.042361259460449,
+ "learning_rate": 7.441860465116279e-05,
+ "loss": 0.3717,
+ "step": 540
+ },
+ {
+ "epoch": 2.558139534883721,
+ "grad_norm": 2.4927892684936523,
+ "learning_rate": 7.209302325581396e-05,
+ "loss": 0.249,
+ "step": 550
+ },
+ {
+ "epoch": 2.604651162790698,
+ "grad_norm": 1.5524264574050903,
+ "learning_rate": 6.976744186046513e-05,
+ "loss": 0.3304,
+ "step": 560
+ },
+ {
+ "epoch": 2.6511627906976747,
+ "grad_norm": 0.39165279269218445,
+ "learning_rate": 6.744186046511628e-05,
+ "loss": 0.285,
+ "step": 570
+ },
+ {
+ "epoch": 2.697674418604651,
+ "grad_norm": 1.6114171743392944,
+ "learning_rate": 6.511627906976745e-05,
+ "loss": 0.3908,
+ "step": 580
+ },
+ {
+ "epoch": 2.744186046511628,
+ "grad_norm": 2.375959634780884,
+ "learning_rate": 6.27906976744186e-05,
+ "loss": 0.2845,
+ "step": 590
+ },
+ {
+ "epoch": 2.7906976744186047,
+ "grad_norm": 3.077956199645996,
+ "learning_rate": 6.0465116279069765e-05,
+ "loss": 0.2669,
+ "step": 600
+ },
+ {
+ "epoch": 2.7906976744186047,
+ "eval_accuracy": 0.8993399339933993,
+ "eval_loss": 0.33891761302948,
+ "eval_runtime": 8.0736,
+ "eval_samples_per_second": 75.059,
+ "eval_steps_per_second": 9.413,
+ "step": 600
+ },
+ {
+ "epoch": 2.8372093023255816,
+ "grad_norm": 1.2785547971725464,
+ "learning_rate": 5.8139534883720933e-05,
+ "loss": 0.2499,
+ "step": 610
+ },
+ {
+ "epoch": 2.883720930232558,
+ "grad_norm": 2.2260217666625977,
+ "learning_rate": 5.5813953488372095e-05,
+ "loss": 0.2065,
+ "step": 620
+ },
+ {
+ "epoch": 2.9302325581395348,
+ "grad_norm": 2.7635715007781982,
+ "learning_rate": 5.348837209302326e-05,
+ "loss": 0.3334,
+ "step": 630
+ },
+ {
+ "epoch": 2.9767441860465116,
+ "grad_norm": 3.221409797668457,
+ "learning_rate": 5.1162790697674425e-05,
+ "loss": 0.2453,
+ "step": 640
+ },
+ {
+ "epoch": 3.0232558139534884,
+ "grad_norm": 0.77796870470047,
+ "learning_rate": 4.883720930232558e-05,
+ "loss": 0.2191,
+ "step": 650
+ },
+ {
+ "epoch": 3.0697674418604652,
+ "grad_norm": 1.0451290607452393,
+ "learning_rate": 4.651162790697675e-05,
+ "loss": 0.2745,
+ "step": 660
+ },
+ {
+ "epoch": 3.116279069767442,
+ "grad_norm": 4.356563091278076,
+ "learning_rate": 4.418604651162791e-05,
+ "loss": 0.2399,
+ "step": 670
+ },
+ {
+ "epoch": 3.1627906976744184,
+ "grad_norm": 2.47353458404541,
+ "learning_rate": 4.186046511627907e-05,
+ "loss": 0.3016,
+ "step": 680
+ },
+ {
+ "epoch": 3.2093023255813953,
+ "grad_norm": 1.1897259950637817,
+ "learning_rate": 3.953488372093023e-05,
+ "loss": 0.1716,
+ "step": 690
+ },
+ {
+ "epoch": 3.255813953488372,
+ "grad_norm": 2.9624576568603516,
+ "learning_rate": 3.7209302325581394e-05,
+ "loss": 0.2304,
+ "step": 700
+ },
+ {
+ "epoch": 3.255813953488372,
+ "eval_accuracy": 0.8927392739273927,
+ "eval_loss": 0.36686915159225464,
+ "eval_runtime": 8.0401,
+ "eval_samples_per_second": 75.372,
+ "eval_steps_per_second": 9.453,
+ "step": 700
+ },
+ {
+ "epoch": 3.302325581395349,
+ "grad_norm": 0.4115903675556183,
+ "learning_rate": 3.488372093023256e-05,
+ "loss": 0.2835,
+ "step": 710
+ },
+ {
+ "epoch": 3.3488372093023258,
+ "grad_norm": 3.0008704662323,
+ "learning_rate": 3.2558139534883724e-05,
+ "loss": 0.1414,
+ "step": 720
+ },
+ {
+ "epoch": 3.395348837209302,
+ "grad_norm": 3.6043615341186523,
+ "learning_rate": 3.0232558139534883e-05,
+ "loss": 0.2309,
+ "step": 730
+ },
+ {
+ "epoch": 3.441860465116279,
+ "grad_norm": 1.3581503629684448,
+ "learning_rate": 2.7906976744186048e-05,
+ "loss": 0.3342,
+ "step": 740
+ },
+ {
+ "epoch": 3.488372093023256,
+ "grad_norm": 1.7710747718811035,
+ "learning_rate": 2.5581395348837212e-05,
+ "loss": 0.2326,
+ "step": 750
+ },
+ {
+ "epoch": 3.5348837209302326,
+ "grad_norm": 3.192469835281372,
+ "learning_rate": 2.3255813953488374e-05,
+ "loss": 0.1234,
+ "step": 760
+ },
+ {
+ "epoch": 3.5813953488372094,
+ "grad_norm": 0.3328302800655365,
+ "learning_rate": 2.0930232558139536e-05,
+ "loss": 0.228,
+ "step": 770
+ },
+ {
+ "epoch": 3.6279069767441863,
+ "grad_norm": 2.4526288509368896,
+ "learning_rate": 1.8604651162790697e-05,
+ "loss": 0.1601,
+ "step": 780
+ },
+ {
+ "epoch": 3.6744186046511627,
+ "grad_norm": 1.8664888143539429,
+ "learning_rate": 1.6279069767441862e-05,
+ "loss": 0.2098,
+ "step": 790
+ },
+ {
+ "epoch": 3.7209302325581395,
+ "grad_norm": 5.262502193450928,
+ "learning_rate": 1.3953488372093024e-05,
+ "loss": 0.1523,
+ "step": 800
+ },
+ {
+ "epoch": 3.7209302325581395,
+ "eval_accuracy": 0.8960396039603961,
+ "eval_loss": 0.36768776178359985,
+ "eval_runtime": 8.208,
+ "eval_samples_per_second": 73.83,
+ "eval_steps_per_second": 9.259,
+ "step": 800
+ },
+ {
+ "epoch": 3.7674418604651163,
+ "grad_norm": 6.241558074951172,
+ "learning_rate": 1.1627906976744187e-05,
+ "loss": 0.1313,
+ "step": 810
+ },
+ {
+ "epoch": 3.813953488372093,
+ "grad_norm": 2.1938974857330322,
+ "learning_rate": 9.302325581395349e-06,
+ "loss": 0.1505,
+ "step": 820
+ },
+ {
+ "epoch": 3.8604651162790695,
+ "grad_norm": 2.1302292346954346,
+ "learning_rate": 6.976744186046512e-06,
+ "loss": 0.2306,
+ "step": 830
+ },
+ {
+ "epoch": 3.9069767441860463,
+ "grad_norm": 10.263894081115723,
+ "learning_rate": 4.651162790697674e-06,
+ "loss": 0.2138,
+ "step": 840
+ },
+ {
+ "epoch": 3.953488372093023,
+ "grad_norm": 6.005746364593506,
+ "learning_rate": 2.325581395348837e-06,
+ "loss": 0.1595,
+ "step": 850
+ },
+ {
+ "epoch": 4.0,
+ "grad_norm": 3.945340156555176,
+ "learning_rate": 0.0,
+ "loss": 0.2154,
+ "step": 860
+ },
+ {
+ "epoch": 4.0,
+ "step": 860,
+ "total_flos": 1.0638481718004941e+18,
+ "train_loss": 0.4333847167880036,
+ "train_runtime": 591.3634,
+ "train_samples_per_second": 23.214,
+ "train_steps_per_second": 1.454
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 860,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 4,
+ "save_steps": 100,
+ "total_flos": 1.0638481718004941e+18,
+ "train_batch_size": 16,
+ "trial_name": null,
+ "trial_params": null
+ }
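The log_history above interleaves a training entry every 10 steps (logging_steps) with an evaluation entry every 100 steps (eval_steps). A small sketch, using only the Python standard library, of pulling the evaluation curve out of a saved trainer_state.json:

```python
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Evaluation entries are the log_history items carrying eval_* keys.
for entry in state["log_history"]:
    if "eval_accuracy" in entry:
        print(f"step {entry['step']:>4}  "
              f"eval_loss {entry['eval_loss']:.4f}  "
              f"eval_accuracy {entry['eval_accuracy']:.4f}")

print("best:", state["best_metric"], "at", state["best_model_checkpoint"])
```
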
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b52367c3a746f38fdd995904828eeedd3588f4a00441c23d83ebf38c944cceaf
+ oid sha256:6891f0f5251ee7f7593096070fd05fa5ee6229f2dcfadc5b6253584dd201d3be
  size 4984
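training_args.bin is a pickled TrainingArguments object rather than model weights, so it can be inspected directly once transformers is importable. A sketch, assuming a recent PyTorch where unpickling requires weights_only=False:

```python
import torch

# Unpickles the saved TrainingArguments (transformers must be installed).
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```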