---
library_name: peft
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mesolitica/IMDA-TTS
metrics:
- wer
model-index:
- name: Whisper Small NSC small (1000 steps) - Jarrett Er
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: NSC Small section
      type: mesolitica/IMDA-TTS
      config: default
      split: train
      args: 'config: en, split: train'
    metrics:
    - type: wer
      value: 3.123272526257601
      name: Wer
---
# Whisper Small NSC small (1000 steps) - Jarrett Er

This model is a fine-tuned version of [Thecoder3281f/whisper-small-hi-commonvoice17-1000](https://huggingface.co/Thecoder3281f/whisper-small-hi-commonvoice17-1000) on the NSC Small section dataset. It achieves the following results on the evaluation set:
- Loss: 0.0676
- Wer: 3.1233
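
Since this repository ships a PEFT adapter (see `library_name: peft` above), inference requires loading the base checkpoint and attaching the adapter on top. The sketch below is a minimal example, not the card author's confirmed usage: the adapter repo id is hypothetical (substitute this repository's actual Hub id), and the base checkpoint is the one named in the card.

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_model_id = "Thecoder3281f/whisper-small-hi-commonvoice17-1000"  # base per the card
adapter_id = "Thecoder3281f/whisper-small-nsc-1000"  # hypothetical; use this repo's id

processor = WhisperProcessor.from_pretrained(base_model_id, language="en", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter
model.eval()

# 16 kHz mono float audio; replace this one second of silence with real speech.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```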
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
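
For reference, these values map onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch under stated assumptions, not the exact configuration used: `output_dir` is invented, and the evaluation cadence is inferred from the results table below (one evaluation every 100 steps).

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-nsc",     # assumed, not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    fp16=True,                            # "Native AMP" mixed precision
    eval_strategy="steps",                # assumed from the 100-step eval cadence
    eval_steps=100,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults,
# so they need not be set explicitly.
```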
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0806        | 0.2941 | 100  | 0.0737          | 3.4549 |
| 0.0618        | 0.5882 | 200  | 0.0690          | 3.2062 |
| 0.0689        | 0.8824 | 300  | 0.0655          | 3.0265 |
| 0.0385        | 1.1765 | 400  | 0.0652          | 3.1509 |
| 0.0441        | 1.4706 | 500  | 0.0653          | 3.1647 |
| 0.0389        | 1.7647 | 600  | 0.0652          | 3.0404 |
| 0.032         | 2.0588 | 700  | 0.0646          | 3.1786 |
| 0.0264        | 2.3529 | 800  | 0.0672          | 3.1095 |
| 0.0307        | 2.6471 | 900  | 0.0672          | 3.1647 |
| 0.0266        | 2.9412 | 1000 | 0.0676          | 3.1233 |
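
The WER column is a percentage. A quick sketch of how such a figure is typically computed with the `evaluate` library (the exact text normalization applied for this card is an assumption):

```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]
# One substitution over six reference words -> 16.67 when expressed in percent.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```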
### Framework versions
- PEFT 0.14.0
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.1.dev0
- Tokenizers 0.20.3