---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: minds14
      type: minds14
      config: en-US
      split: None
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 1.0
---
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):
- Loss: 3.0295
- Wer: 1.0
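The sketch below shows how this checkpoint could be loaded for transcription. It is a minimal example, not part of the original card: `"my_awesome_asr_mind_model"` is a placeholder for the local output directory or Hub repo id where the checkpoint is stored, and `"path/to/audio.wav"` stands in for any audio file.

```python
# Minimal inference sketch. The model path below is a placeholder: point it at
# the local output directory or the Hub repo id holding this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="my_awesome_asr_mind_model",  # placeholder path / repo id
)

# For a local audio file, the pipeline decodes and resamples the audio to the
# model's expected 16 kHz sampling rate before transcribing it.
print(asr("path/to/audio.wav")["text"])
```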
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows this list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
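For reference, this is how the values above map onto `transformers.TrainingArguments`. It is a sketch under assumptions, not the original training script: `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the `Trainer` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above. output_dir is a
# placeholder; no claim is made that this reproduces the original script.
training_args = TrainingArguments(
    output_dir="my_awesome_asr_mind_model",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 2 * 2 = 4
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
)
```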
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 36.0013 | 0.71 | 50 | 54.8783 | 1.0231 |
| 29.2024 | 1.43 | 100 | 43.3910 | 1.0 |
| 18.6027 | 2.14 | 150 | 24.3702 | 1.0 |
| 6.5918 | 2.86 | 200 | 7.0226 | 1.0 |
| 4.0512 | 3.57 | 250 | 4.3558 | 1.0 |
| 3.6216 | 4.29 | 300 | 4.0701 | 1.0 |
| 3.4344 | 5.0 | 350 | 3.8139 | 1.0 |
| 3.5985 | 5.71 | 400 | 3.6550 | 1.0 |
| 3.5004 | 6.43 | 450 | 3.5076 | 1.0 |
| 3.3916 | 7.14 | 500 | 3.4591 | 1.0 |
| 3.1966 | 7.86 | 550 | 3.3332 | 1.0 |
| 3.2384 | 8.57 | 600 | 3.2828 | 1.0 |
| 3.1981 | 9.29 | 650 | 3.2563 | 1.0 |
| 3.1743 | 10.0 | 700 | 3.2011 | 1.0 |
| 3.1251 | 10.71 | 750 | 3.1600 | 1.0 |
| 3.0371 | 11.43 | 800 | 3.1436 | 1.0 |
| 3.0702 | 12.14 | 850 | 3.1633 | 1.0 |
| 3.0748 | 12.86 | 900 | 3.1194 | 1.0 |
| 3.0459 | 13.57 | 950 | 3.1797 | 1.0 |
| 3.0496 | 14.29 | 1000 | 3.1073 | 1.0 |
| 3.0744 | 15.0 | 1050 | 3.1033 | 1.0 |
| 3.0342 | 15.71 | 1100 | 3.0702 | 1.0 |
| 3.0469 | 16.43 | 1150 | 3.0680 | 1.0 |
| 3.0234 | 17.14 | 1200 | 3.0650 | 1.0 |
| 3.0739 | 17.86 | 1250 | 3.0586 | 1.0 |
| 2.9964 | 18.57 | 1300 | 3.0542 | 1.0 |
| 3.0906 | 19.29 | 1350 | 3.0519 | 1.0 |
| 2.9823 | 20.0 | 1400 | 3.0456 | 1.0 |
| 3.038 | 20.71 | 1450 | 3.0399 | 1.0 |
| 2.9952 | 21.43 | 1500 | 3.0357 | 1.0 |
| 3.0092 | 22.14 | 1550 | 3.0571 | 1.0 |
| 2.9838 | 22.86 | 1600 | 3.0354 | 1.0 |
| 3.0611 | 23.57 | 1650 | 3.0435 | 1.0 |
| 2.9924 | 24.29 | 1700 | 3.0368 | 1.0 |
| 2.9854 | 25.0 | 1750 | 3.0580 | 1.0 |
| 3.0193 | 25.71 | 1800 | 3.0347 | 1.0 |
| 2.9694 | 26.43 | 1850 | 3.0335 | 1.0 |
| 3.0039 | 27.14 | 1900 | 3.0318 | 1.0 |
| 2.9789 | 27.86 | 1950 | 3.0322 | 1.0 |
| 2.9828 | 28.57 | 2000 | 3.0295 | 1.0 |
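A Wer of 1.0 corresponds to a 100% word error rate, i.e. the model recovers none of the reference words on the evaluation split. As a reference for how the metric in the table is defined, the sketch below computes WER with the `evaluate` library; the strings are made up for illustration and are not taken from the MInDS-14 evaluation data.

```python
import evaluate

# Illustrative only: these strings are invented, not actual MInDS-14 transcripts.
wer_metric = evaluate.load("wer")
predictions = ["uh"]                                  # one wrong word
references = ["i would like to check my balance"]     # seven reference words

# 1 substitution + 6 deletions = 7 errors over 7 reference words -> WER = 1.0
print(wer_metric.compute(predictions=predictions, references=references))
```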
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3