# Model_S_D_Wav2Vec2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0464
- WER: 0.2319
- CER: 0.0598
## Model description
More information needed
## Intended uses & limitations
More information needed
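Although usage details are not documented on this card, the checkpoint can be loaded with the standard `transformers` Wav2Vec2 classes. Below is a minimal inference sketch, assuming the model is published as `rossevine/Model_S_D_Wav2Vec2` (the repository this card belongs to) and that the input audio is 16 kHz mono; the file path is a placeholder.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "rossevine/Model_S_D_Wav2Vec2"  # repository this card describes
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# "sample.wav" is a placeholder path; XLS-R expects 16 kHz mono input.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most probable token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```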
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
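As a point of reference, here is a hedged sketch of how the values above map onto `TrainingArguments` in Transformers 4.31. The output directory, evaluation cadence, and fp16 flag are illustrative assumptions, and the Adam betas/epsilon listed above are the optimizer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Model_S_D_Wav2Vec2",  # assumption: output path not stated on the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,    # 16 * 2 = 32 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    evaluation_strategy="steps",      # assumption: matches the 400-step eval cadence below
    eval_steps=400,
    fp16=True,                        # assumption: common for wav2vec2 fine-tuning on GPU
)
```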
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
3.5768 | 0.85 | 400 | 0.6152 | 0.5812 | 0.1905 |
0.3226 | 1.71 | 800 | 0.1026 | 0.3195 | 0.0722 |
0.1827 | 2.56 | 1200 | 0.0725 | 0.2048 | 0.0454 |
0.129 | 3.41 | 1600 | 0.0671 | 0.2393 | 0.0525 |
0.1075 | 4.26 | 2000 | 0.0556 | 0.2312 | 0.0497 |
0.0924 | 5.12 | 2400 | 0.0572 | 0.2040 | 0.0478 |
0.076 | 5.97 | 2800 | 0.0596 | 0.1472 | 0.0346 |
0.0695 | 6.82 | 3200 | 0.0608 | 0.2274 | 0.0510 |
0.0707 | 7.68 | 3600 | 0.0490 | 0.2665 | 0.0660 |
0.0597 | 8.53 | 4000 | 0.0509 | 0.2442 | 0.0593 |
0.0557 | 9.38 | 4400 | 0.0501 | 0.2533 | 0.0610 |
0.0503 | 10.23 | 4800 | 0.0519 | 0.2534 | 0.0622 |
0.0471 | 11.09 | 5200 | 0.0512 | 0.2585 | 0.0638 |
0.0417 | 11.94 | 5600 | 0.0497 | 0.2522 | 0.0610 |
0.0415 | 12.79 | 6000 | 0.0508 | 0.2547 | 0.0629 |
0.0372 | 13.65 | 6400 | 0.0497 | 0.2580 | 0.0643 |
0.0364 | 14.5 | 6800 | 0.0448 | 0.2498 | 0.0600 |
0.034 | 15.35 | 7200 | 0.0522 | 0.2419 | 0.0593 |
0.0306 | 16.2 | 7600 | 0.0510 | 0.2433 | 0.0560 |
0.0345 | 17.06 | 8000 | 0.0503 | 0.2610 | 0.0657 |
0.0266 | 17.91 | 8400 | 0.0462 | 0.2434 | 0.0620 |
0.0273 | 18.76 | 8800 | 0.0507 | 0.2456 | 0.0622 |
0.0216 | 19.62 | 9200 | 0.0466 | 0.2214 | 0.0531 |
0.0208 | 20.47 | 9600 | 0.0497 | 0.2396 | 0.0598 |
0.0201 | 21.32 | 10000 | 0.0470 | 0.2332 | 0.0559 |
0.0174 | 22.17 | 10400 | 0.0418 | 0.2346 | 0.0590 |
0.0198 | 23.03 | 10800 | 0.0472 | 0.2386 | 0.0602 |
0.0149 | 23.88 | 11200 | 0.0490 | 0.2446 | 0.0638 |
0.0133 | 24.73 | 11600 | 0.0497 | 0.2430 | 0.0632 |
0.0118 | 25.59 | 12000 | 0.0498 | 0.2368 | 0.0620 |
0.0106 | 26.44 | 12400 | 0.0453 | 0.2309 | 0.0590 |
0.0104 | 27.29 | 12800 | 0.0452 | 0.2296 | 0.0583 |
0.0085 | 28.14 | 13200 | 0.0467 | 0.2352 | 0.0604 |
0.0081 | 29.0 | 13600 | 0.0470 | 0.2310 | 0.0592 |
0.0079 | 29.85 | 14000 | 0.0464 | 0.2319 | 0.0598 |
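For context on the WER and CER columns: WER is the word-level edit distance divided by the number of reference words, and CER is the same ratio computed over characters. A tiny illustration using the `jiwer` package (not listed among this card's dependencies, purely for illustration):

```python
import jiwer

reference = "the quick brown fox"
hypothesis = "the quick brown box"

# Word error rate: word-level edit distance / number of reference words.
print("WER:", jiwer.wer(reference, hypothesis))  # 0.25 (1 of 4 words wrong)
# Character error rate: same ratio at the character level.
print("CER:", jiwer.cer(reference, hypothesis))  # ~0.05 (1 of 19 characters wrong)
```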
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 1.18.3
- Tokenizers 0.13.3