Whisper Tiny Taiwanese (vanilla)

This model is a fine-tuned version of openai/whisper-tiny on the TAT ASR Aligned dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3924
  • CER: 32.8471
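
A minimal transcription sketch using the Transformers pipeline, assuming the checkpoint is published as jethrowang/whisper-tiny_tat-esc_vanilla and that audio.wav stands in for a local 16 kHz mono recording:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (model id assumed from this repository).
asr = pipeline(
    task="automatic-speech-recognition",
    model="jethrowang/whisper-tiny_tat-esc_vanilla",
)

# Transcribe a local audio file; the path is illustrative.
result = asr("audio.wav")
print(result["text"])
```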

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 681
  • training_steps: 6810
  • mixed_precision_training: Native AMP
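
A hedged sketch of how these values could map onto Hugging Face Seq2SeqTrainingArguments; the actual training script is not included in this card, so the mapping below is an assumption based on the standard Whisper fine-tuning recipe:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny_tat-esc_vanilla",
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=681,
    max_steps=6810,
    fp16=True,  # "Native AMP" mixed-precision training
)
```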

Training results

| Training Loss | Epoch  | Step | Validation Loss | CER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3116        | 0.9985 | 681  | 0.9744          | 57.5703 |
| 0.1801        | 1.9971 | 1362 | 0.9761          | 37.9992 |
| 0.1094        | 2.9956 | 2043 | 1.0098          | 36.0103 |
| 0.0642        | 3.9941 | 2724 | 1.0710          | 34.1475 |
| 0.0353        | 4.9927 | 3405 | 1.1779          | 34.8229 |
| 0.0194        | 5.9912 | 4086 | 1.2733          | 34.6312 |
| 0.0086        | 6.9897 | 4767 | 1.3132          | 34.7455 |
| 0.0027        | 7.9883 | 5448 | 1.3640          | 33.1173 |
| 0.0009        | 8.9868 | 6129 | 1.3809          | 32.4291 |
| 0.0005        | 9.9853 | 6810 | 1.3924          | 32.8471 |
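
The CER column appears to be reported as a percentage. A minimal sketch of computing CER with the evaluate library (this card does not specify the exact evaluation code, so treat this as an illustration rather than the authors' script):

```python
import evaluate

# Character error rate between reference transcripts and model predictions.
cer_metric = evaluate.load("cer")

references = ["sample reference transcript"]
predictions = ["sample model transcript"]  # replace with real model output

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.4f}")  # scaled to match the percentage-style values above
```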

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.0.0.post304
  • Datasets 3.3.2
  • Tokenizers 0.21.0