# torgo_xlsr_finetune_M01
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3034
- Wer: 0.2292
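
For illustration, a minimal transcription sketch using the standard `transformers` Wav2Vec2 API is shown below. The repo id and audio file name are placeholders, not confirmed by this card:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id; replace with this model's actual Hub path.
MODEL_ID = "torgo_xlsr_finetune_M01"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load an audio file and resample to the 16 kHz rate XLSR expects.
waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```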
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
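
As a rough guide, these settings map onto the following `transformers` configuration. This is a sketch reconstructed from the list above, not the original training script; `output_dir` is a placeholder, and Adam's betas and epsilon are left at their defaults, which match the reported values:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="torgo_xlsr_finetune_M01",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",    # the results below log eval every 1000 steps
    eval_steps=1000,
    logging_steps=1000,
)
```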
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4693        | 0.85  | 1000  | 3.2808          | 1.0    |
| 1.4742        | 1.7   | 2000  | 1.3840          | 0.7581 |
| 0.7802        | 2.55  | 3000  | 1.2332          | 0.5535 |
| 0.5771        | 3.4   | 4000  | 1.3305          | 0.4423 |
| 0.4685        | 4.25  | 5000  | 1.2289          | 0.4032 |
| 0.4235        | 5.1   | 6000  | 1.3615          | 0.3540 |
| 0.3593        | 5.95  | 7000  | 1.1796          | 0.3311 |
| 0.3319        | 6.8   | 8000  | 1.2863          | 0.3336 |
| 0.298         | 7.65  | 9000  | 1.2067          | 0.3022 |
| 0.2729        | 8.5   | 10000 | 1.5681          | 0.3090 |
| 0.24          | 9.35  | 11000 | 1.3628          | 0.3022 |
| 0.2104        | 10.2  | 12000 | 1.6944          | 0.3022 |
| 0.2285        | 11.05 | 13000 | 1.6160          | 0.2997 |
| 0.2027        | 11.89 | 14000 | 1.6614          | 0.3081 |
| 0.2013        | 12.74 | 15000 | 1.3976          | 0.2683 |
| 0.1945        | 13.59 | 16000 | 1.0957          | 0.2317 |
| 0.1644        | 14.44 | 17000 | 1.4140          | 0.2699 |
| 0.163         | 15.29 | 18000 | 1.2615          | 0.2436 |
| 0.1414        | 16.14 | 19000 | 1.4278          | 0.2640 |
| 0.1476        | 16.99 | 20000 | 1.3421          | 0.2360 |
| 0.1415        | 17.84 | 21000 | 1.3527          | 0.2402 |
| 0.1217        | 18.69 | 22000 | 1.3593          | 0.2377 |
| 0.1353        | 19.54 | 23000 | 1.3034          | 0.2292 |
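
The Wer column above is word error rate (0.2292 means roughly 23% of reference words were substituted, inserted, or deleted). A minimal sketch of computing it with the Hugging Face `evaluate` library follows; this is an assumption, as the original metric code is not part of this card:

```python
import evaluate

wer_metric = evaluate.load("wer")  # requires the jiwer package

# Toy example: the prediction drops one of five reference words.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]
print(wer_metric.compute(predictions=predictions, references=references))  # 0.2
```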
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3