abdouaziiz committed on
Commit
3422368
1 Parent(s): 4522d7b

Upload 6 files

README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ tags:
+ - audio-classification
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - f1
+ model-index:
+ - name: wavlm-large
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # wavlm-large
+
+ This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the galsenai/waxal_dataset dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5936
+ - Accuracy: 0.8950
+ - Precision: 0.9789
+ - F1: 0.9334
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 12
+ - eval_batch_size: 12
+ - seed: 0
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 48
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 32.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | F1 |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
+ | 4.7405 | 1.01 | 500 | 5.1525 | 0.0 | 0.0 | 0.0 |
+ | 4.4299 | 2.02 | 1000 | 5.8969 | 0.0 | 0.0 | 0.0 |
+ | 4.2868 | 3.04 | 1500 | 4.9304 | 0.0019 | 0.0031 | 0.0023 |
+ | 3.6242 | 4.05 | 2000 | 4.3396 | 0.0409 | 0.0224 | 0.0237 |
+ | 2.686 | 5.06 | 2500 | 3.9399 | 0.0549 | 0.0320 | 0.0308 |
+ | 1.9284 | 6.07 | 3000 | 3.7736 | 0.0500 | 0.0779 | 0.0442 |
+ | 1.3936 | 7.08 | 3500 | 3.5380 | 0.0947 | 0.1381 | 0.0916 |
+ | 1.0764 | 8.1 | 4000 | 3.3281 | 0.1584 | 0.3514 | 0.1839 |
+ | 0.872 | 9.11 | 4500 | 2.9592 | 0.2755 | 0.6027 | 0.3315 |
+ | 0.7026 | 10.12 | 5000 | 2.5049 | 0.3971 | 0.6971 | 0.4587 |
+ | 0.603 | 11.13 | 5500 | 2.1485 | 0.5479 | 0.8074 | 0.6129 |
+ | 0.5042 | 12.15 | 6000 | 1.6532 | 0.7014 | 0.8604 | 0.7544 |
+ | 0.4542 | 13.16 | 6500 | 1.4057 | 0.7435 | 0.8941 | 0.7990 |
+ | 0.388 | 14.17 | 7000 | 1.2338 | 0.7802 | 0.9219 | 0.8332 |
+ | 0.3515 | 15.18 | 7500 | 0.9898 | 0.8170 | 0.9433 | 0.8681 |
+ | 0.3195 | 16.19 | 8000 | 1.1404 | 0.8067 | 0.9523 | 0.8635 |
+ | 0.2882 | 17.21 | 8500 | 0.9811 | 0.8177 | 0.9540 | 0.8746 |
+ | 0.2695 | 18.22 | 9000 | 0.9483 | 0.8318 | 0.9616 | 0.8878 |
+ | 0.2535 | 19.23 | 9500 | 0.6694 | 0.8844 | 0.9692 | 0.9198 |
+ | 0.2437 | 20.24 | 10000 | 0.7546 | 0.8700 | 0.9656 | 0.9125 |
+ | 0.2376 | 21.25 | 10500 | 0.6698 | 0.8810 | 0.9695 | 0.9202 |
+ | 0.2214 | 22.27 | 11000 | 0.7156 | 0.8727 | 0.9726 | 0.9174 |
+ | 0.2148 | 23.28 | 11500 | 0.5982 | 0.8931 | 0.9711 | 0.9286 |
+ | 0.2087 | 24.29 | 12000 | 0.7109 | 0.8814 | 0.9757 | 0.9243 |
+ | 0.2039 | 25.3 | 12500 | 0.6577 | 0.8897 | 0.9799 | 0.9306 |
+ | 0.1997 | 26.32 | 13000 | 0.7307 | 0.8746 | 0.9774 | 0.9203 |
+ | 0.1896 | 27.33 | 13500 | 0.6143 | 0.8905 | 0.9748 | 0.9290 |
+ | 0.1869 | 28.34 | 14000 | 0.6380 | 0.8909 | 0.9739 | 0.9287 |
+ | 0.185 | 29.35 | 14500 | 0.6932 | 0.8871 | 0.9791 | 0.9289 |
+ | 0.1813 | 30.36 | 15000 | 0.5936 | 0.8950 | 0.9789 | 0.9334 |
+ | 0.1801 | 31.38 | 15500 | 0.6150 | 0.8947 | 0.9801 | 0.9334 |
+
+
+ ### Framework versions
+
+ - Transformers 4.27.0.dev0
+ - Pytorch 1.11.0+cu113
+ - Datasets 2.9.1.dev0
+ - Tokenizers 0.13.2
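
The card above stops short of a usage snippet. Below is a minimal inference sketch using the `transformers` audio-classification pipeline; the repo id and audio path are placeholders, and mono 16 kHz input is assumed (the sampling rate of the WavLM feature extractor).

```python
# Minimal inference sketch (not part of the uploaded card): load the fine-tuned
# checkpoint through the audio-classification pipeline. Replace "<namespace>/wavlm-large"
# with the actual repo id (or a local path to the uploaded files) and "sample.wav"
# with a real recording.
from transformers import pipeline

classifier = pipeline("audio-classification", model="<namespace>/wavlm-large")

# The pipeline accepts a path to an audio file or a raw 16 kHz waveform array.
for prediction in classifier("sample.wav", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```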
all_results.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "epoch": 32.0,
+ "eval_accuracy": 0.8950359984842744,
+ "eval_f1": 0.9333589015381497,
+ "eval_loss": 0.5935563445091248,
+ "eval_precision": 0.9789140060741345,
+ "eval_runtime": 230.8033,
+ "eval_samples_per_second": 11.434,
+ "eval_steps_per_second": 0.953,
+ "train_loss": 1.0038429288729,
+ "train_runtime": 86553.4289,
+ "train_samples_per_second": 8.78,
+ "train_steps_per_second": 0.183
+ }
eval_results.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "epoch": 32.0,
+ "eval_accuracy": 0.8950359984842744,
+ "eval_f1": 0.9333589015381497,
+ "eval_loss": 0.5935563445091248,
+ "eval_precision": 0.9789140060741345,
+ "eval_runtime": 230.8033,
+ "eval_samples_per_second": 11.434,
+ "eval_steps_per_second": 0.953
+ }
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 32.0,
+ "train_loss": 1.0038429288729,
+ "train_runtime": 86553.4289,
+ "train_samples_per_second": 8.78,
+ "train_steps_per_second": 0.183
+ }
trainer_state.json ADDED
@@ -0,0 +1,552 @@
+ {
+ "best_metric": 0.8950359984842744,
+ "best_model_checkpoint": "wavlm-large/checkpoint-15000",
+ "epoch": 31.998484082870135,
+ "global_step": 15808,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 1.01,
+ "learning_rate": 9.487666034155598e-06,
+ "loss": 4.7405,
+ "step": 500
+ },
+ {
+ "epoch": 1.01,
+ "eval_accuracy": 0.0,
+ "eval_f1": 0.0,
+ "eval_loss": 5.152532577514648,
+ "eval_precision": 0.0,
+ "eval_runtime": 231.7392,
+ "eval_samples_per_second": 11.388,
+ "eval_steps_per_second": 0.949,
+ "step": 500
+ },
+ {
+ "epoch": 2.02,
+ "learning_rate": 1.8975332068311197e-05,
+ "loss": 4.4299,
+ "step": 1000
+ },
+ {
+ "epoch": 2.02,
+ "eval_accuracy": 0.0,
+ "eval_f1": 0.0,
+ "eval_loss": 5.896852016448975,
+ "eval_precision": 0.0,
+ "eval_runtime": 229.527,
+ "eval_samples_per_second": 11.498,
+ "eval_steps_per_second": 0.958,
+ "step": 1000
+ },
+ {
+ "epoch": 3.04,
+ "learning_rate": 2.846299810246679e-05,
+ "loss": 4.2868,
+ "step": 1500
+ },
+ {
+ "epoch": 3.04,
+ "eval_accuracy": 0.0018946570670708603,
+ "eval_f1": 0.002260186233515175,
+ "eval_loss": 4.930444717407227,
+ "eval_precision": 0.0030713084224438304,
+ "eval_runtime": 224.9204,
+ "eval_samples_per_second": 11.733,
+ "eval_steps_per_second": 0.978,
+ "step": 1500
+ },
+ {
+ "epoch": 4.05,
+ "learning_rate": 2.9116468686300697e-05,
+ "loss": 3.6242,
+ "step": 2000
+ },
+ {
+ "epoch": 4.05,
+ "eval_accuracy": 0.04092459264873058,
+ "eval_f1": 0.023734423060596616,
+ "eval_loss": 4.339611053466797,
+ "eval_precision": 0.02235030209371723,
+ "eval_runtime": 231.3364,
+ "eval_samples_per_second": 11.408,
+ "eval_steps_per_second": 0.951,
+ "step": 2000
+ },
+ {
+ "epoch": 5.06,
+ "learning_rate": 2.8062135376396992e-05,
+ "loss": 2.686,
+ "step": 2500
+ },
+ {
+ "epoch": 5.06,
+ "eval_accuracy": 0.054945054945054944,
+ "eval_f1": 0.030815499275931264,
+ "eval_loss": 3.9399423599243164,
+ "eval_precision": 0.03198760556742773,
+ "eval_runtime": 229.8015,
+ "eval_samples_per_second": 11.484,
+ "eval_steps_per_second": 0.957,
+ "step": 2500
+ },
+ {
+ "epoch": 6.07,
+ "learning_rate": 2.700780206649329e-05,
+ "loss": 1.9284,
+ "step": 3000
+ },
+ {
+ "epoch": 6.07,
+ "eval_accuracy": 0.05001894657067071,
+ "eval_f1": 0.044225493965773466,
+ "eval_loss": 3.7735581398010254,
+ "eval_precision": 0.07786075378351333,
+ "eval_runtime": 229.6709,
+ "eval_samples_per_second": 11.49,
+ "eval_steps_per_second": 0.958,
+ "step": 3000
+ },
+ {
+ "epoch": 7.08,
+ "learning_rate": 2.5953468756589585e-05,
+ "loss": 1.3936,
+ "step": 3500
+ },
+ {
+ "epoch": 7.08,
+ "eval_accuracy": 0.094732853353543,
+ "eval_f1": 0.09155559566994678,
+ "eval_loss": 3.537994146347046,
+ "eval_precision": 0.13805010974816684,
+ "eval_runtime": 222.7345,
+ "eval_samples_per_second": 11.848,
+ "eval_steps_per_second": 0.988,
+ "step": 3500
+ },
+ {
+ "epoch": 8.1,
+ "learning_rate": 2.489913544668588e-05,
+ "loss": 1.0764,
+ "step": 4000
+ },
+ {
+ "epoch": 8.1,
+ "eval_accuracy": 0.15839333080712392,
+ "eval_f1": 0.1838779712805326,
+ "eval_loss": 3.328141689300537,
+ "eval_precision": 0.35144388335355753,
+ "eval_runtime": 231.5928,
+ "eval_samples_per_second": 11.395,
+ "eval_steps_per_second": 0.95,
+ "step": 4000
+ },
+ {
+ "epoch": 9.11,
+ "learning_rate": 2.3844802136782175e-05,
+ "loss": 0.872,
+ "step": 4500
+ },
+ {
+ "epoch": 9.11,
+ "eval_accuracy": 0.2754831375521031,
+ "eval_f1": 0.33153761364278084,
+ "eval_loss": 2.959165096282959,
+ "eval_precision": 0.6026709651704474,
+ "eval_runtime": 229.3101,
+ "eval_samples_per_second": 11.508,
+ "eval_steps_per_second": 0.959,
+ "step": 4500
+ },
+ {
+ "epoch": 10.12,
+ "learning_rate": 2.279046882687847e-05,
+ "loss": 0.7026,
+ "step": 5000
+ },
+ {
+ "epoch": 10.12,
+ "eval_accuracy": 0.3971201212580523,
+ "eval_f1": 0.4587007542105457,
+ "eval_loss": 2.504917860031128,
+ "eval_precision": 0.6970861611172207,
+ "eval_runtime": 229.0193,
+ "eval_samples_per_second": 11.523,
+ "eval_steps_per_second": 0.961,
+ "step": 5000
+ },
+ {
+ "epoch": 11.13,
+ "learning_rate": 2.1736135516974768e-05,
+ "loss": 0.603,
+ "step": 5500
+ },
+ {
+ "epoch": 11.13,
+ "eval_accuracy": 0.5479348237968927,
+ "eval_f1": 0.6128796045577696,
+ "eval_loss": 2.1484670639038086,
+ "eval_precision": 0.8073946450067734,
+ "eval_runtime": 226.9343,
+ "eval_samples_per_second": 11.629,
+ "eval_steps_per_second": 0.969,
+ "step": 5500
+ },
+ {
+ "epoch": 12.15,
+ "learning_rate": 2.0681802207071063e-05,
+ "loss": 0.5042,
+ "step": 6000
+ },
+ {
+ "epoch": 12.15,
+ "eval_accuracy": 0.7014020462296324,
+ "eval_f1": 0.7543542303794953,
+ "eval_loss": 1.6532080173492432,
+ "eval_precision": 0.8604295269195706,
+ "eval_runtime": 229.9455,
+ "eval_samples_per_second": 11.477,
+ "eval_steps_per_second": 0.957,
+ "step": 6000
+ },
+ {
+ "epoch": 13.16,
+ "learning_rate": 1.9627468897167357e-05,
+ "loss": 0.4542,
+ "step": 6500
+ },
+ {
+ "epoch": 13.16,
+ "eval_accuracy": 0.7434634331186055,
+ "eval_f1": 0.7989946253783239,
+ "eval_loss": 1.4056562185287476,
+ "eval_precision": 0.8941430548214513,
+ "eval_runtime": 223.4311,
+ "eval_samples_per_second": 11.811,
+ "eval_steps_per_second": 0.985,
+ "step": 6500
+ },
+ {
+ "epoch": 14.17,
+ "learning_rate": 1.8573135587263652e-05,
+ "loss": 0.388,
+ "step": 7000
+ },
+ {
+ "epoch": 14.17,
+ "eval_accuracy": 0.7802197802197802,
+ "eval_f1": 0.8331608145111185,
+ "eval_loss": 1.233764410018921,
+ "eval_precision": 0.92185981522448,
+ "eval_runtime": 231.0616,
+ "eval_samples_per_second": 11.421,
+ "eval_steps_per_second": 0.952,
+ "step": 7000
+ },
+ {
+ "epoch": 15.18,
+ "learning_rate": 1.751880227735995e-05,
+ "loss": 0.3515,
+ "step": 7500
+ },
+ {
+ "epoch": 15.18,
+ "eval_accuracy": 0.8169761273209549,
+ "eval_f1": 0.8681431492951,
+ "eval_loss": 0.9898241758346558,
+ "eval_precision": 0.9432737435443141,
+ "eval_runtime": 229.2806,
+ "eval_samples_per_second": 11.51,
+ "eval_steps_per_second": 0.96,
+ "step": 7500
+ },
+ {
+ "epoch": 16.19,
+ "learning_rate": 1.6464468967456245e-05,
+ "loss": 0.3195,
+ "step": 8000
+ },
+ {
+ "epoch": 16.19,
+ "eval_accuracy": 0.8067449791587723,
+ "eval_f1": 0.8635483223798979,
+ "eval_loss": 1.1404353380203247,
+ "eval_precision": 0.9523356531691418,
+ "eval_runtime": 224.231,
+ "eval_samples_per_second": 11.769,
+ "eval_steps_per_second": 0.981,
+ "step": 8000
+ },
+ {
+ "epoch": 17.21,
+ "learning_rate": 1.541013565755254e-05,
+ "loss": 0.2882,
+ "step": 8500
+ },
+ {
+ "epoch": 17.21,
+ "eval_accuracy": 0.8177339901477833,
+ "eval_f1": 0.8745687675592978,
+ "eval_loss": 0.9810923933982849,
+ "eval_precision": 0.9540040162889087,
+ "eval_runtime": 230.2032,
+ "eval_samples_per_second": 11.464,
+ "eval_steps_per_second": 0.956,
+ "step": 8500
+ },
+ {
+ "epoch": 18.22,
+ "learning_rate": 1.4355802347648837e-05,
+ "loss": 0.2695,
+ "step": 9000
+ },
+ {
+ "epoch": 18.22,
+ "eval_accuracy": 0.8317544524441076,
+ "eval_f1": 0.8877909926292662,
+ "eval_loss": 0.9483387470245361,
+ "eval_precision": 0.9615575975196211,
+ "eval_runtime": 227.4444,
+ "eval_samples_per_second": 11.603,
+ "eval_steps_per_second": 0.967,
+ "step": 9000
+ },
+ {
+ "epoch": 19.23,
+ "learning_rate": 1.3301469037745133e-05,
+ "loss": 0.2535,
+ "step": 9500
+ },
+ {
+ "epoch": 19.23,
+ "eval_accuracy": 0.8844259189086775,
+ "eval_f1": 0.9198466869617786,
+ "eval_loss": 0.6694388389587402,
+ "eval_precision": 0.9692367120798446,
+ "eval_runtime": 229.887,
+ "eval_samples_per_second": 11.48,
+ "eval_steps_per_second": 0.957,
+ "step": 9500
+ },
+ {
+ "epoch": 20.24,
+ "learning_rate": 1.2247135727841428e-05,
+ "loss": 0.2437,
+ "step": 10000
+ },
+ {
+ "epoch": 20.24,
+ "eval_accuracy": 0.870026525198939,
+ "eval_f1": 0.9124563736238709,
+ "eval_loss": 0.7545726299285889,
+ "eval_precision": 0.9655808349316077,
+ "eval_runtime": 228.7826,
+ "eval_samples_per_second": 11.535,
+ "eval_steps_per_second": 0.962,
+ "step": 10000
+ },
+ {
+ "epoch": 21.25,
+ "learning_rate": 1.1192802417937724e-05,
+ "loss": 0.2376,
+ "step": 10500
+ },
+ {
+ "epoch": 21.25,
+ "eval_accuracy": 0.88101553618795,
+ "eval_f1": 0.9202319275311999,
+ "eval_loss": 0.669846773147583,
+ "eval_precision": 0.9694585382786844,
+ "eval_runtime": 225.8023,
+ "eval_samples_per_second": 11.687,
+ "eval_steps_per_second": 0.974,
+ "step": 10500
+ },
+ {
+ "epoch": 22.27,
+ "learning_rate": 1.013846910803402e-05,
+ "loss": 0.2214,
+ "step": 11000
+ },
+ {
+ "epoch": 22.27,
+ "eval_accuracy": 0.8726790450928382,
+ "eval_f1": 0.9174460541012326,
+ "eval_loss": 0.7156072854995728,
+ "eval_precision": 0.972619503520754,
+ "eval_runtime": 230.4418,
+ "eval_samples_per_second": 11.452,
+ "eval_steps_per_second": 0.955,
+ "step": 11000
+ },
+ {
+ "epoch": 23.28,
+ "learning_rate": 9.084135798130316e-06,
+ "loss": 0.2148,
+ "step": 11500
+ },
+ {
+ "epoch": 23.28,
+ "eval_accuracy": 0.8931413414172035,
+ "eval_f1": 0.9285879507962082,
+ "eval_loss": 0.5982441902160645,
+ "eval_precision": 0.9711102757838663,
+ "eval_runtime": 229.8221,
+ "eval_samples_per_second": 11.483,
+ "eval_steps_per_second": 0.957,
+ "step": 11500
+ },
+ {
+ "epoch": 24.29,
+ "learning_rate": 8.029802488226612e-06,
+ "loss": 0.2087,
+ "step": 12000
+ },
+ {
+ "epoch": 24.29,
+ "eval_accuracy": 0.8813944676013642,
+ "eval_f1": 0.9242728727643769,
+ "eval_loss": 0.7108510732650757,
+ "eval_precision": 0.975723189780374,
+ "eval_runtime": 223.4291,
+ "eval_samples_per_second": 11.811,
+ "eval_steps_per_second": 0.985,
+ "step": 12000
+ },
+ {
+ "epoch": 25.3,
+ "learning_rate": 6.975469178322908e-06,
+ "loss": 0.2039,
+ "step": 12500
+ },
+ {
+ "epoch": 25.3,
+ "eval_accuracy": 0.8897309586964759,
+ "eval_f1": 0.93059059028571,
+ "eval_loss": 0.6577169895172119,
+ "eval_precision": 0.9799317453490524,
+ "eval_runtime": 229.7405,
+ "eval_samples_per_second": 11.487,
+ "eval_steps_per_second": 0.958,
+ "step": 12500
+ },
+ {
+ "epoch": 26.32,
+ "learning_rate": 5.9211358684192026e-06,
+ "loss": 0.1997,
+ "step": 13000
+ },
+ {
+ "epoch": 26.32,
+ "eval_accuracy": 0.874573702159909,
+ "eval_f1": 0.9203192830080359,
+ "eval_loss": 0.7307356595993042,
+ "eval_precision": 0.9774472205704657,
+ "eval_runtime": 226.7422,
+ "eval_samples_per_second": 11.639,
+ "eval_steps_per_second": 0.97,
+ "step": 13000
+ },
+ {
+ "epoch": 27.33,
+ "learning_rate": 4.866802558515498e-06,
+ "loss": 0.1896,
+ "step": 13500
+ },
+ {
+ "epoch": 27.33,
+ "eval_accuracy": 0.8904888215233043,
+ "eval_f1": 0.9289821714067877,
+ "eval_loss": 0.614262044429779,
+ "eval_precision": 0.9747583516326127,
+ "eval_runtime": 226.7104,
+ "eval_samples_per_second": 11.64,
+ "eval_steps_per_second": 0.97,
+ "step": 13500
+ },
+ {
+ "epoch": 28.34,
+ "learning_rate": 3.8124692486117947e-06,
+ "loss": 0.1869,
+ "step": 14000
+ },
+ {
+ "epoch": 28.34,
+ "eval_accuracy": 0.8908677529367185,
+ "eval_f1": 0.9286976343726923,
+ "eval_loss": 0.637986958026886,
+ "eval_precision": 0.9738854766344701,
+ "eval_runtime": 229.6341,
+ "eval_samples_per_second": 11.492,
+ "eval_steps_per_second": 0.958,
+ "step": 14000
+ },
+ {
+ "epoch": 29.35,
+ "learning_rate": 2.7581359387080904e-06,
+ "loss": 0.185,
+ "step": 14500
+ },
+ {
+ "epoch": 29.35,
+ "eval_accuracy": 0.8870784388025768,
+ "eval_f1": 0.9288845844124958,
+ "eval_loss": 0.6932182908058167,
+ "eval_precision": 0.979135119670676,
+ "eval_runtime": 223.3771,
+ "eval_samples_per_second": 11.814,
+ "eval_steps_per_second": 0.985,
+ "step": 14500
+ },
+ {
+ "epoch": 30.36,
+ "learning_rate": 1.7038026288043862e-06,
+ "loss": 0.1813,
+ "step": 15000
+ },
+ {
+ "epoch": 30.36,
+ "eval_accuracy": 0.8950359984842744,
+ "eval_f1": 0.9333589015381497,
+ "eval_loss": 0.5935563445091248,
+ "eval_precision": 0.9789140060741345,
+ "eval_runtime": 231.0332,
+ "eval_samples_per_second": 11.423,
+ "eval_steps_per_second": 0.952,
+ "step": 15000
+ },
+ {
+ "epoch": 31.38,
+ "learning_rate": 6.494693189006819e-07,
+ "loss": 0.1801,
+ "step": 15500
+ },
+ {
+ "epoch": 31.38,
+ "eval_accuracy": 0.8946570670708601,
+ "eval_f1": 0.9334021717198819,
+ "eval_loss": 0.6150190234184265,
+ "eval_precision": 0.9800558320669844,
+ "eval_runtime": 224.8697,
+ "eval_samples_per_second": 11.736,
+ "eval_steps_per_second": 0.978,
+ "step": 15500
+ },
+ {
+ "epoch": 32.0,
+ "step": 15808,
+ "total_flos": 1.2697230517064026e+20,
+ "train_loss": 1.0038429288729,
+ "train_runtime": 86553.4289,
+ "train_samples_per_second": 8.78,
+ "train_steps_per_second": 0.183
+ }
+ ],
+ "max_steps": 15808,
+ "num_train_epochs": 32,
+ "total_flos": 1.2697230517064026e+20,
+ "trial_name": null,
+ "trial_params": null
+ }
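
trainer_state.json above carries the full evaluation history. A small sketch of reading the best checkpoint and the corresponding eval metrics back out of it, assuming the file has been downloaded into the working directory:

```python
# Sketch: recover the best checkpoint and its metrics from trainer_state.json.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

print(state["best_model_checkpoint"], state["best_metric"])

# Evaluation records in log_history are the entries that carry eval_* keys.
evals = [entry for entry in state["log_history"] if "eval_accuracy" in entry]
best = max(evals, key=lambda entry: entry["eval_accuracy"])
print(best["step"], best["eval_accuracy"], best["eval_f1"])
```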
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09112a064f21bd781296feb9fc4c2a0c7bed628ab5cf47097ca98c038db32a8a
+ size 3503
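
training_args.bin is the serialized `TrainingArguments` object the `Trainer` writes next to its checkpoints, stored here as a Git LFS pointer. A sketch of inspecting it after downloading, assuming `transformers` is installed so the pickled class can be resolved (on PyTorch 2.6+, `torch.load` may additionally need `weights_only=False`):

```python
# Sketch: inspect the serialized TrainingArguments from the downloaded training_args.bin.
# Requires transformers to be importable; pass weights_only=False on torch >= 2.6.
import torch

args = torch.load("training_args.bin")
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```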