Plim committed on
Commit
3ed13d0
1 Parent(s): 43ee0f8

Add evaluation results on audio dev

Files changed (18)
  1. README.md +31 -22
  2. train_results/all_results.json → all_results.json +0 -0
  3. train_results/eval_results.json → eval_results.json +0 -0
  4. test_results/log_mozilla-foundation_common_voice_7_0_fr_test_predictions.txt → log_mozilla-foundation_common_voice_7_0_fr_test_predictions.txt +0 -0
  5. test_results/log_mozilla-foundation_common_voice_7_0_fr_test_targets.txt → log_mozilla-foundation_common_voice_7_0_fr_test_targets.txt +0 -0
  6. log_speech-recognition-community-v2_dev_data_fr_validation_predictions.txt +0 -0
  7. log_speech-recognition-community-v2_dev_data_fr_validation_targets.txt +0 -0
  8. test_results/mozilla-foundation_common_voice_7_0_fr_test_eval_results.txt → mozilla-foundation_common_voice_7_0_fr_test_eval_results.txt +0 -0
  9. speech-recognition-community-v2_dev_data_fr_validation_eval_results.txt +2 -0
  10. test_results/.ipynb_checkpoints/log_mozilla-foundation_common_voice_7_0_fr_test_predictions-checkpoint.txt +0 -0
  11. test_results/.ipynb_checkpoints/log_mozilla-foundation_common_voice_7_0_fr_test_targets-checkpoint.txt +0 -0
  12. test_results/.ipynb_checkpoints/mozilla-foundation_common_voice_7_0_fr_test_eval_results-checkpoint.txt +0 -2
  13. train_results/train_results.json → train_results.json +0 -0
  14. train_results/.ipynb_checkpoints/all_results-checkpoint.json +0 -14
  15. train_results/.ipynb_checkpoints/eval_results-checkpoint.json +0 -9
  16. train_results/.ipynb_checkpoints/train_results-checkpoint.json +0 -8
  17. train_results/.ipynb_checkpoints/trainer_state-checkpoint.json +0 -499
  18. train_results/trainer_state.json → trainer_state.json +0 -0
README.md CHANGED
@@ -24,29 +24,29 @@ model-index:
     - name: Test CER
       type: cer
       value: 7.3
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: Robust Speech Event - Dev Data
+      type: speech-recognition-community-v2/dev_data
+      args: fr
+    metrics:
+    - name: Test WER
+      type: wer
+      value: 63.62
+    - name: Test CER
+      type: cer
+      value: 17.20
+---
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
-
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.2619
-- Wer: 0.2457
-
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
 
 ## Training procedure
 
@@ -82,12 +82,9 @@ The following hyperparameters were used during training:
 | 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 |
 | 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 |
 
-
-### Eval results on Common Voice 7 "test" (WER):
-
-| Without LM | With LM |
-|---|---|
-| 24.56 | To be computed |
+It achieves the best result on STEP 6000 on the validation set:
+- Loss: 0.2619
+- Wer: 0.2457
 
 ### Framework versions
 
@@ -95,3 +92,15 @@ The following hyperparameters were used during training:
 - Pytorch 1.10.2+cu102
 - Datasets 1.18.2.dev0
 - Tokenizers 0.11.0
+
+### Evaluation Commands
+1. To evaluate on `mozilla-foundation/common_voice_7` with split `test`
+
+```bash
+python eval.py --model_id Plim/xls-r-300m-fr --dataset mozilla-foundation/common_voice_7_0 --config fr --split test
+```
+
+2. To evaluate on `speech-recognition-community-v2/dev_data`
+
+```bash
+python eval.py --model_id Plim/xls-r-300m-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
train_results/all_results.json → all_results.json RENAMED
File without changes
train_results/eval_results.json → eval_results.json RENAMED
File without changes
test_results/log_mozilla-foundation_common_voice_7_0_fr_test_predictions.txt → log_mozilla-foundation_common_voice_7_0_fr_test_predictions.txt RENAMED
File without changes
test_results/log_mozilla-foundation_common_voice_7_0_fr_test_targets.txt → log_mozilla-foundation_common_voice_7_0_fr_test_targets.txt RENAMED
File without changes
log_speech-recognition-community-v2_dev_data_fr_validation_predictions.txt ADDED
The diff for this file is too large to render. See raw diff
log_speech-recognition-community-v2_dev_data_fr_validation_targets.txt ADDED
The diff for this file is too large to render. See raw diff
test_results/mozilla-foundation_common_voice_7_0_fr_test_eval_results.txt → mozilla-foundation_common_voice_7_0_fr_test_eval_results.txt RENAMED
File without changes
speech-recognition-community-v2_dev_data_fr_validation_eval_results.txt ADDED
@@ -0,0 +1,2 @@
+WER: 0.6362465106291604
+CER: 0.17202817283379465
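For context, WER and CER figures like those in this results file can be recomputed from the prediction/target log files with a plain Levenshtein edit distance. The sketch below is a minimal, dependency-free illustration, not the repo's actual `eval.py` (which loads metrics from `datasets` and may aggregate errors over the whole corpus before dividing):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (lists of words, or strings)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (free if equal)
        prev = cur
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance divided by reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note this computes a per-pair rate; corpus-level WER (as reported here) is usually the sum of edit distances over all utterances divided by the total number of reference words.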
test_results/.ipynb_checkpoints/log_mozilla-foundation_common_voice_7_0_fr_test_predictions-checkpoint.txt DELETED
The diff for this file is too large to render. See raw diff
test_results/.ipynb_checkpoints/log_mozilla-foundation_common_voice_7_0_fr_test_targets-checkpoint.txt DELETED
The diff for this file is too large to render. See raw diff
test_results/.ipynb_checkpoints/mozilla-foundation_common_voice_7_0_fr_test_eval_results-checkpoint.txt DELETED
@@ -1,2 +0,0 @@
-WER: 0.24561764914155493
-CER: 0.07285207821118034
train_results/train_results.json → train_results.json RENAMED
File without changes
train_results/.ipynb_checkpoints/all_results-checkpoint.json DELETED
@@ -1,14 +0,0 @@
-{
-  "epoch": 2.0,
-  "eval_loss": 0.26187804341316223,
-  "eval_runtime": 722.142,
-  "eval_samples": 15941,
-  "eval_samples_per_second": 22.075,
-  "eval_steps_per_second": 1.381,
-  "eval_wer": 0.24574541380398318,
-  "train_loss": 1.788894365302016,
-  "train_runtime": 52105.5599,
-  "train_samples": 395042,
-  "train_samples_per_second": 15.163,
-  "train_steps_per_second": 0.118
-}
train_results/.ipynb_checkpoints/eval_results-checkpoint.json DELETED
@@ -1,9 +0,0 @@
-{
-  "epoch": 2.0,
-  "eval_loss": 0.26187804341316223,
-  "eval_runtime": 722.142,
-  "eval_samples": 15941,
-  "eval_samples_per_second": 22.075,
-  "eval_steps_per_second": 1.381,
-  "eval_wer": 0.24574541380398318
-}
train_results/.ipynb_checkpoints/train_results-checkpoint.json DELETED
@@ -1,8 +0,0 @@
-{
-  "epoch": 2.0,
-  "train_loss": 1.788894365302016,
-  "train_runtime": 52105.5599,
-  "train_samples": 395042,
-  "train_samples_per_second": 15.163,
-  "train_steps_per_second": 0.118
-}
train_results/.ipynb_checkpoints/trainer_state-checkpoint.json DELETED
@@ -1,499 +0,0 @@
-{
-  "best_metric": 0.26187804341316223,
-  "best_model_checkpoint": "./checkpoint-6000",
-  "epoch": 1.9998784982382245,
-  "global_step": 6172,
-  "is_hyper_param_search": false,
-  "is_local_process_zero": true,
-  "is_world_process_zero": true,
-  "log_history": [
-    {
-      "epoch": 0.03,
-      "learning_rate": 3.7499999999999997e-06,
-      "loss": 12.1043,
-      "step": 100
-    },
-    {
-      "epoch": 0.06,
-      "learning_rate": 7.499999999999999e-06,
-      "loss": 6.4771,
-      "step": 200
-    },
-    {
-      "epoch": 0.1,
-      "learning_rate": 1.1249999999999999e-05,
-      "loss": 4.4866,
-      "step": 300
-    },
-    {
-      "epoch": 0.13,
-      "learning_rate": 1.4999999999999999e-05,
-      "loss": 3.8842,
-      "step": 400
-    },
-    {
-      "epoch": 0.16,
-      "learning_rate": 1.8712499999999997e-05,
-      "loss": 3.495,
-      "step": 500
-    },
-    {
-      "epoch": 0.16,
-      "eval_loss": 3.3882696628570557,
-      "eval_runtime": 721.337,
-      "eval_samples_per_second": 22.099,
-      "eval_steps_per_second": 1.382,
-      "eval_wer": 1.0,
-      "step": 500
-    },
-    {
-      "epoch": 0.19,
-      "learning_rate": 2.2462499999999997e-05,
-      "loss": 3.171,
-      "step": 600
-    },
-    {
-      "epoch": 0.23,
-      "learning_rate": 2.6212499999999997e-05,
-      "loss": 3.0275,
-      "step": 700
-    },
-    {
-      "epoch": 0.26,
-      "learning_rate": 2.99625e-05,
-      "loss": 2.9681,
-      "step": 800
-    },
-    {
-      "epoch": 0.29,
-      "learning_rate": 3.37125e-05,
-      "loss": 2.9347,
-      "step": 900
-    },
-    {
-      "epoch": 0.32,
-      "learning_rate": 3.7462499999999996e-05,
-      "loss": 2.9095,
-      "step": 1000
-    },
-    {
-      "epoch": 0.32,
-      "eval_loss": 2.9152133464813232,
-      "eval_runtime": 718.1623,
-      "eval_samples_per_second": 22.197,
-      "eval_steps_per_second": 1.388,
-      "eval_wer": 0.9999871219487068,
-      "step": 1000
-    },
-    {
-      "epoch": 0.36,
-      "learning_rate": 4.12125e-05,
-      "loss": 2.8888,
-      "step": 1100
-    },
-    {
-      "epoch": 0.39,
-      "learning_rate": 4.4962499999999995e-05,
-      "loss": 2.8347,
-      "step": 1200
-    },
-    {
-      "epoch": 0.42,
-      "learning_rate": 4.871249999999999e-05,
-      "loss": 2.5318,
-      "step": 1300
-    },
-    {
-      "epoch": 0.45,
-      "learning_rate": 5.2462499999999994e-05,
-      "loss": 2.0502,
-      "step": 1400
-    },
-    {
-      "epoch": 0.49,
-      "learning_rate": 5.62125e-05,
-      "loss": 1.8434,
-      "step": 1500
-    },
-    {
-      "epoch": 0.49,
-      "eval_loss": 1.0473320484161377,
-      "eval_runtime": 720.1235,
-      "eval_samples_per_second": 22.136,
-      "eval_steps_per_second": 1.384,
-      "eval_wer": 0.7446153648029981,
-      "step": 1500
-    },
-    {
-      "epoch": 0.52,
-      "learning_rate": 5.9962499999999994e-05,
-      "loss": 1.7339,
-      "step": 1600
-    },
-    {
-      "epoch": 0.55,
-      "learning_rate": 6.367499999999999e-05,
-      "loss": 1.6535,
-      "step": 1700
-    },
-    {
-      "epoch": 0.58,
-      "learning_rate": 6.7425e-05,
-      "loss": 1.5793,
-      "step": 1800
-    },
-    {
-      "epoch": 0.62,
-      "learning_rate": 7.1175e-05,
-      "loss": 1.5056,
-      "step": 1900
-    },
-    {
-      "epoch": 0.65,
-      "learning_rate": 7.492499999999999e-05,
-      "loss": 1.4298,
-      "step": 2000
-    },
-    {
-      "epoch": 0.65,
-      "eval_loss": 0.5728740692138672,
-      "eval_runtime": 712.5783,
-      "eval_samples_per_second": 22.371,
-      "eval_steps_per_second": 1.399,
-      "eval_wer": 0.5129521000882147,
-      "step": 2000
-    },
-    {
-      "epoch": 0.68,
-      "learning_rate": 7.325623202301054e-05,
-      "loss": 1.3592,
-      "step": 2100
-    },
-    {
-      "epoch": 0.71,
-      "learning_rate": 7.145853307766058e-05,
-      "loss": 1.2917,
-      "step": 2200
-    },
-    {
-      "epoch": 0.75,
-      "learning_rate": 6.966083413231063e-05,
-      "loss": 1.2536,
-      "step": 2300
-    },
-    {
-      "epoch": 0.78,
-      "learning_rate": 6.788111217641419e-05,
-      "loss": 1.2345,
-      "step": 2400
-    },
-    {
-      "epoch": 0.81,
-      "learning_rate": 6.608341323106423e-05,
-      "loss": 1.1937,
-      "step": 2500
-    },
-    {
-      "epoch": 0.81,
-      "eval_loss": 0.3795304000377655,
-      "eval_runtime": 716.4435,
-      "eval_samples_per_second": 22.25,
-      "eval_steps_per_second": 1.392,
-      "eval_wer": 0.34504806732645216,
-      "step": 2500
-    },
-    {
-      "epoch": 0.84,
-      "learning_rate": 6.428571428571427e-05,
-      "loss": 1.1806,
-      "step": 2600
-    },
-    {
-      "epoch": 0.87,
-      "learning_rate": 6.248801534036433e-05,
-      "loss": 1.1651,
-      "step": 2700
-    },
-    {
-      "epoch": 0.91,
-      "learning_rate": 6.069031639501438e-05,
-      "loss": 1.1455,
-      "step": 2800
-    },
-    {
-      "epoch": 0.94,
-      "learning_rate": 5.889261744966442e-05,
-      "loss": 1.1312,
-      "step": 2900
-    },
-    {
-      "epoch": 0.97,
-      "learning_rate": 5.709491850431447e-05,
-      "loss": 1.1248,
-      "step": 3000
-    },
-    {
-      "epoch": 0.97,
-      "eval_loss": 0.3320523500442505,
-      "eval_runtime": 716.2808,
-      "eval_samples_per_second": 22.255,
-      "eval_steps_per_second": 1.392,
-      "eval_wer": 0.30515830344552264,
-      "step": 3000
-    },
-    {
-      "epoch": 1.0,
-      "learning_rate": 5.5297219558964525e-05,
-      "loss": 1.1017,
-      "step": 3100
-    },
-    {
-      "epoch": 1.04,
-      "learning_rate": 5.3499520613614567e-05,
-      "loss": 1.0978,
-      "step": 3200
-    },
-    {
-      "epoch": 1.07,
-      "learning_rate": 5.1701821668264615e-05,
-      "loss": 1.0954,
-      "step": 3300
-    },
-    {
-      "epoch": 1.1,
-      "learning_rate": 4.990412272291467e-05,
-      "loss": 1.0867,
-      "step": 3400
-    },
-    {
-      "epoch": 1.13,
-      "learning_rate": 4.812440076701821e-05,
-      "loss": 1.0835,
-      "step": 3500
-    },
-    {
-      "epoch": 1.13,
-      "eval_loss": 0.3037940561771393,
-      "eval_runtime": 714.1597,
-      "eval_samples_per_second": 22.321,
-      "eval_steps_per_second": 1.396,
-      "eval_wer": 0.2805032742445413,
-      "step": 3500
-    },
-    {
-      "epoch": 1.17,
-      "learning_rate": 4.632670182166826e-05,
-      "loss": 1.0808,
-      "step": 3600
-    },
-    {
-      "epoch": 1.2,
-      "learning_rate": 4.4529002876318304e-05,
-      "loss": 1.0648,
-      "step": 3700
-    },
-    {
-      "epoch": 1.23,
-      "learning_rate": 4.273130393096836e-05,
-      "loss": 1.0541,
-      "step": 3800
-    },
-    {
-      "epoch": 1.26,
-      "learning_rate": 4.093360498561841e-05,
-      "loss": 1.0621,
-      "step": 3900
-    },
-    {
-      "epoch": 1.3,
-      "learning_rate": 3.913590604026845e-05,
-      "loss": 1.0479,
-      "step": 4000
-    },
-    {
-      "epoch": 1.3,
-      "eval_loss": 0.2910499572753906,
-      "eval_runtime": 718.1665,
-      "eval_samples_per_second": 22.197,
-      "eval_steps_per_second": 1.388,
-      "eval_wer": 0.26888727197800427,
-      "step": 4000
-    },
-    {
-      "epoch": 1.33,
-      "learning_rate": 3.7338207094918506e-05,
-      "loss": 1.0428,
-      "step": 4100
-    },
-    {
-      "epoch": 1.36,
-      "learning_rate": 3.555848513902205e-05,
-      "loss": 1.047,
-      "step": 4200
-    },
-    {
-      "epoch": 1.39,
-      "learning_rate": 3.37607861936721e-05,
-      "loss": 1.0397,
-      "step": 4300
-    },
-    {
-      "epoch": 1.43,
-      "learning_rate": 3.1963087248322145e-05,
-      "loss": 1.0347,
-      "step": 4400
-    },
-    {
-      "epoch": 1.46,
-      "learning_rate": 3.0165388302972194e-05,
-      "loss": 1.0413,
-      "step": 4500
-    },
-    {
-      "epoch": 1.46,
-      "eval_loss": 0.27976545691490173,
-      "eval_runtime": 713.7382,
-      "eval_samples_per_second": 22.335,
-      "eval_steps_per_second": 1.397,
-      "eval_wer": 0.2592995627901586,
-      "step": 4500
-    },
-    {
-      "epoch": 1.49,
-      "learning_rate": 2.836768935762224e-05,
-      "loss": 1.0238,
-      "step": 4600
-    },
-    {
-      "epoch": 1.52,
-      "learning_rate": 2.656999041227229e-05,
-      "loss": 1.0269,
-      "step": 4700
-    },
-    {
-      "epoch": 1.56,
-      "learning_rate": 2.4772291466922337e-05,
-      "loss": 1.021,
-      "step": 4800
-    },
-    {
-      "epoch": 1.59,
-      "learning_rate": 2.2974592521572386e-05,
-      "loss": 1.0186,
-      "step": 4900
-    },
-    {
-      "epoch": 1.62,
-      "learning_rate": 2.1176893576222434e-05,
-      "loss": 1.014,
-      "step": 5000
-    },
-    {
-      "epoch": 1.62,
-      "eval_loss": 0.27265554666519165,
-      "eval_runtime": 707.3075,
-      "eval_samples_per_second": 22.538,
-      "eval_steps_per_second": 1.41,
-      "eval_wer": 0.25117351242409997,
-      "step": 5000
-    },
-    {
-      "epoch": 1.65,
-      "learning_rate": 1.9379194630872483e-05,
-      "loss": 1.0074,
-      "step": 5100
-    },
-    {
-      "epoch": 1.68,
-      "learning_rate": 1.759947267497603e-05,
-      "loss": 1.0193,
-      "step": 5200
-    },
-    {
-      "epoch": 1.72,
-      "learning_rate": 1.5801773729626078e-05,
-      "loss": 1.0044,
-      "step": 5300
-    },
-    {
-      "epoch": 1.75,
-      "learning_rate": 1.4004074784276125e-05,
-      "loss": 1.0005,
-      "step": 5400
-    },
-    {
-      "epoch": 1.78,
-      "learning_rate": 1.2206375838926173e-05,
-      "loss": 1.004,
-      "step": 5500
-    },
-    {
-      "epoch": 1.78,
-      "eval_loss": 0.26460376381874084,
-      "eval_runtime": 719.6956,
-      "eval_samples_per_second": 22.15,
-      "eval_steps_per_second": 1.385,
-      "eval_wer": 0.2470782921128375,
-      "step": 5500
-    },
-    {
-      "epoch": 1.81,
-      "learning_rate": 1.0408676893576222e-05,
-      "loss": 1.0048,
-      "step": 5600
-    },
-    {
-      "epoch": 1.85,
-      "learning_rate": 8.610977948226269e-06,
-      "loss": 0.9988,
-      "step": 5700
-    },
-    {
-      "epoch": 1.88,
-      "learning_rate": 6.813279002876318e-06,
-      "loss": 0.9919,
-      "step": 5800
-    },
-    {
-      "epoch": 1.91,
-      "learning_rate": 5.015580057526366e-06,
-      "loss": 0.9886,
-      "step": 5900
-    },
-    {
-      "epoch": 1.94,
-      "learning_rate": 3.217881112176414e-06,
-      "loss": 0.9949,
-      "step": 6000
-    },
-    {
-      "epoch": 1.94,
-      "eval_loss": 0.26187804341316223,
-      "eval_runtime": 717.4473,
-      "eval_samples_per_second": 22.219,
-      "eval_steps_per_second": 1.39,
-      "eval_wer": 0.24574541380398318,
-      "step": 6000
-    },
-    {
-      "epoch": 1.98,
-      "learning_rate": 1.4201821668264622e-06,
-      "loss": 0.9931,
-      "step": 6100
-    },
-    {
-      "epoch": 2.0,
-      "step": 6172,
-      "total_flos": 1.1573983785360925e+20,
-      "train_loss": 1.788894365302016,
-      "train_runtime": 52105.5599,
-      "train_samples_per_second": 15.163,
-      "train_steps_per_second": 0.118
-    }
-  ],
-  "max_steps": 6172,
-  "num_train_epochs": 2,
-  "total_flos": 1.1573983785360925e+20,
-  "trial_name": null,
-  "trial_params": null
-}
train_results/trainer_state.json → trainer_state.json RENAMED
File without changes