# Whisper Tiny Portuguese
This model is a fine-tuned version of openai/whisper-tiny on the Portuguese (pt) subset of the mozilla-foundation/common_voice_13_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.9091
- Wer: 28.2309
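The card does not include a usage snippet; below is a minimal inference sketch using the 🤗 Transformers `pipeline` API, assuming the checkpoint is published under the repository id `zuazo/whisper-tiny-pt` (the audio file path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Portuguese checkpoint for speech recognition.
# "zuazo/whisper-tiny-pt" is the repository id this card describes.
asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-tiny-pt",
)

# Transcribe a local audio file (the path is a placeholder).
prediction = asr("audio_pt.wav")
print(prediction["text"])
```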
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch in code follows the list):
- learning_rate: 3.75e-05
- train_batch_size: 256
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
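These values map directly onto 🤗 Transformers training arguments. The following is a minimal sketch of how they might be expressed with `Seq2SeqTrainingArguments`; the output directory is a placeholder, and the actual training script is not part of this card:

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
# "./whisper-tiny-pt" is a placeholder output directory.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-pt",
    learning_rate=3.75e-5,
    per_device_train_batch_size=256,  # reported as train_batch_size; may be the total across devices
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=20000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```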
### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0172        | 14.08  | 1000  | 0.6518          | 29.2084 |
| 0.0054        | 28.17  | 2000  | 0.7505          | 29.3431 |
| 0.0018        | 42.25  | 3000  | 0.7833          | 28.7944 |
| 0.0008        | 56.34  | 4000  | 0.8186          | 28.9702 |
| 0.0012        | 70.42  | 5000  | 0.8409          | 29.6241 |
| 0.0004        | 84.51  | 6000  | 0.8530          | 29.0014 |
| 0.0002        | 98.59  | 7000  | 0.8743          | 28.3163 |
| 0.0001        | 112.68 | 8000  | 0.8918          | 28.2506 |
| 0.0001        | 126.76 | 9000  | 0.9091          | 28.2309 |
| 0.0001        | 140.85 | 10000 | 0.9273          | 28.4001 |
| 0.0           | 154.93 | 11000 | 0.9460          | 28.6005 |
| 0.0           | 169.01 | 12000 | 0.9660          | 28.5496 |
| 0.0           | 183.1  | 13000 | 0.9863          | 28.5151 |
| 0.0           | 197.18 | 14000 | 1.0075          | 28.5660 |
| 0.0           | 211.27 | 15000 | 1.0271          | 28.5907 |
| 0.0           | 225.35 | 16000 | 1.0469          | 28.5003 |
| 0.0           | 239.44 | 17000 | 1.0647          | 28.5200 |
| 0.0           | 253.52 | 18000 | 1.0800          | 28.5036 |
| 0.0           | 267.61 | 19000 | 1.0910          | 28.6071 |
| 0.0           | 281.69 | 20000 | 1.0960          | 28.6071 |

The evaluation results reported at the top of this card (loss 0.9091, WER 28.2309) match the step 9000 row, which achieved the lowest WER of any checkpoint.
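The Wer column reports word error rate as a percentage. A minimal sketch of computing it with the 🤗 `evaluate` library follows; the reference and prediction strings are illustrative, not from the actual evaluation set:

```python
import evaluate

# Word error rate = (substitutions + insertions + deletions) / reference words.
wer_metric = evaluate.load("wer")

references = ["o tempo está bom hoje"]
predictions = ["o tempo esta bom hoje"]

# One substitution ("esta" vs. "está") over five reference words -> 20.0
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.1f}")
```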
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.15.1