---
language:
- en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ./whisper-large-cit-synth-do0.15-wd0-lr1e-06-mask-1000
  results: []
---
# ./whisper-large-cit-synth-do0.15-wd0-lr1e-06-mask-1000
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 1000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3877
- Wer: 24.9123
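
The fine-tuned checkpoint can be loaded for transcription like any Whisper model. A minimal sketch, assuming the checkpoint is available at the local path from the model name; `sample.wav` is a hypothetical input file:

```python
# Minimal inference sketch; the checkpoint path comes from this card,
# "sample.wav" is a hypothetical audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="./whisper-large-cit-synth-do0.15-wd0-lr1e-06-mask-1000",
)
print(asr("sample.wav")["text"])
```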
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300
- mixed_precision_training: Native AMP
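
These settings map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. It assumes preprocessed datasets `train_ds` and `eval_ds` (hypothetical names) and omits the data collator and feature extraction that Whisper fine-tuning also requires:

```python
# Sketch of the hyperparameters above; train_ds / eval_ds are hypothetical
# preprocessed datasets, and the Whisper data collator is omitted.
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-cit-synth-do0.15-wd0-lr1e-06-mask-1000",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=300,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,  # Native AMP mixed-precision training
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```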
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.0947        | 0.3556 | 20   | 0.8311          | 36.7251 |
| 0.8896        | 0.7111 | 40   | 0.7202          | 34.7368 |
| 0.8418        | 1.0667 | 60   | 0.6216          | 32.0078 |
| 0.6567        | 1.4222 | 80   | 0.5254          | 30.7212 |
| 0.5491        | 1.7778 | 100  | 0.4690          | 27.6803 |
| 0.5497        | 2.1333 | 120  | 0.4368          | 26.6667 |
| 0.4875        | 2.4889 | 140  | 0.4211          | 25.7310 |
| 0.4721        | 2.8444 | 160  | 0.4124          | 25.3801 |
| 0.46          | 3.2    | 180  | 0.4026          | 25.3801 |
| 0.4342        | 3.5556 | 200  | 0.3960          | 24.9513 |
| 0.4248        | 3.9111 | 220  | 0.3945          | 24.8733 |
| 0.4249        | 4.2667 | 240  | 0.3916          | 24.9123 |
| 0.4192        | 4.6222 | 260  | 0.3899          | 24.7953 |
| 0.3823        | 4.9778 | 280  | 0.3884          | 24.6004 |
| 0.4176        | 5.3333 | 300  | 0.3877          | 24.9123 |
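
The Wer column is word error rate in percent. A sketch of how such a score can be computed with the `evaluate` library; the example strings are placeholders:

```python
# WER as reported above is a percentage, so the metric value is scaled by 100.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]  # hypothetical model transcripts
references = ["the cat sat on a mat"]     # hypothetical ground-truth transcripts
print(100 * wer_metric.compute(predictions=predictions, references=references))
```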
### Framework versions
- Transformers 4.42.3
- Pytorch 1.13.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1