# Whisper Small - Slovenian
Note: you will probably want to use the newer version of this model, which was trained on the Artur 1.0 dataset.
This model is a fine-tuned version of `openai/whisper-small` on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.4640
- Wer: 28.2530
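Here WER is the word error rate in percent (word-level substitutions, deletions, and insertions divided by the number of reference words). A minimal sketch of computing it with the `evaluate` library, using placeholder transcripts rather than the actual evaluation pipeline:

```python
# A minimal sketch of computing WER with the `evaluate` library;
# the transcripts below are placeholders, not model outputs.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["danes je lep dan"]   # hypothesis transcripts (placeholder)
references = ["danes je lep dan"]    # reference transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```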
## Model description
This is a speech-to-text model specialized for the Slovenian language. A minimal usage sketch follows.
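The sketch below uses the standard transformers ASR pipeline (it assumes a recent transformers version; `audio.wav` is a placeholder for your own Slovenian recording):

```python
# A minimal usage sketch, assuming the standard transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="samolego/whisper-small-sl-mozilla",
)

# Whisper is multilingual, so pinning the language and task
# keeps the decoder from having to detect the language itself.
result = asr(
    "audio.wav",  # placeholder path to your recording
    generate_kwargs={"language": "slovenian", "task": "transcribe"},
)
print(result["text"])
```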
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on Mozilla's Common Voice 11.0 dataset. The train and validation splits were merged for training, and the model was evaluated on the test split (see the sketch below).
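A sketch of this split, assuming the gated `mozilla-foundation/common_voice_11_0` dataset on the Hugging Face Hub (you must accept its terms and authenticate to download it):

```python
# A sketch of the train/validation merge described above.
from datasets import load_dataset, concatenate_datasets

train = load_dataset("mozilla-foundation/common_voice_11_0", "sl", split="train")
valid = load_dataset("mozilla-foundation/common_voice_11_0", "sl", split="validation")

train_data = concatenate_datasets([train, valid])  # merged train + validation
test_data = load_dataset("mozilla-foundation/common_voice_11_0", "sl", split="test")
```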
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
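A sketch of how these hyperparameters map onto transformers' `Seq2SeqTrainingArguments` (the `output_dir` is a placeholder; the Adam betas and epsilon listed above match the library defaults, so they need not be set explicitly):

```python
# A configuration sketch mirroring the hyperparameter list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-sl",   # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                         # "Native AMP" mixed precision
)
```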
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0125        | 6.06  | 1000 | 0.4145          | 29.7100 |
| 0.0006        | 12.12 | 2000 | 0.4312          | 28.1364 |
| 0.0003        | 18.18 | 3000 | 0.4560          | 28.0927 |
| 0.0003        | 24.24 | 4000 | 0.4640          | 28.2530 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2