
openai/whisper-tiny

This model is a fine-tuned version of openai/whisper-tiny on the Hanhpt23/GermanMed-full dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9014
  • Wer: 29.3839
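
WER (word error rate) is the word-level edit distance between the model's transcript and the reference, divided by the number of reference words, reported here as a percentage (so 29.3839 means roughly 29 errors per 100 reference words). A minimal sketch of the computation; the German sentences are invented for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: edit distance over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("der patient hat fieber", "der patient hatte fieber"))  # 25.0: one substitution among four reference words
```

In practice the Trainer pipeline computes this with a library such as jiwer or evaluate rather than by hand; the sketch just makes the metric concrete.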

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
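
With a linear scheduler and 100 warmup steps, the learning rate ramps from 0 to 0.0001 over the first 100 steps, then decays linearly back to 0 at the final step (3880, per the results below). A minimal sketch of that schedule, mirroring (not calling) transformers' get_linear_schedule_with_warmup:

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-4,
                     warmup_steps: int = 100,
                     total_steps: int = 3880) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(50))    # mid-warmup: half of base_lr
print(linear_warmup_lr(100))   # peak: base_lr
print(linear_warmup_lr(3880))  # final step: 0.0
```

The defaults here are taken from the hyperparameters above (3880 = 194 steps/epoch x 20 epochs, from the training table).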

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
0.747           1.0     194    0.7466            40.1728
0.4004          2.0     388    0.7261            37.1285
0.2012          3.0     582    0.7231            34.3721
0.1134          4.0     776    0.7465            34.0327
0.0606          5.0     970    0.7888            37.7044
0.0525          6.0     1164   0.8147            33.9813
0.0323          7.0     1358   0.8244            31.6569
0.027           8.0     1552   0.8383            31.8215
0.0149          9.0     1746   0.8643            32.1094
0.0119          10.0    1940   0.8747            31.7495
0.009           11.0    2134   0.8765            30.9781
0.0037          12.0    2328   0.8875            29.8879
0.0021          13.0    2522   0.8832            30.0936
0.0011          14.0    2716   0.8943            29.9496
0.0013          15.0    2910   0.8906            29.5485
0.0006          16.0    3104   0.8944            29.5999
0.0006          17.0    3298   0.8968            29.3942
0.0005          18.0    3492   0.8997            29.3839
0.0006          19.0    3686   0.9010            29.4251
0.0005          20.0    3880   0.9014            29.3839

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model size: 37.8M parameters (F32, safetensors)