
whisper-large-cit-synth-do015-wd0-lr1e-06-1000

This model is a fine-tuned version of openai/whisper-large-v3 on the SF 1000 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3706
  • WER: 23.6647
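
For reference, the model can be loaded for inference with the transformers speech-recognition pipeline. This is a minimal sketch, assuming the checkpoint is hosted on the Hub under Makkoen/whisper-large-cit-synth-do015-wd0-lr1e-06-1000; the audio path is a placeholder.

```python
from transformers import pipeline

# Hypothetical usage: the repo id is taken from this card; "audio.wav"
# is a placeholder path for a local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-cit-synth-do015-wd0-lr1e-06-1000",
)

# chunk_length_s lets the pipeline transcribe audio longer than
# Whisper's 30-second receptive window.
print(asr("audio.wav", chunk_length_s=30)["text"])
```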

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 500
  • mixed_precision_training: Native AMP
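
As a reproducibility aid, here is a hedged sketch of how the settings above map onto transformers.Seq2SeqTrainingArguments; the output_dir mirrors the model name, and anything not listed in the card is left at library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above. Multi-GPU
# distribution is handled by the launcher (e.g. torchrun), not here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-cit-synth-do015-wd0-lr1e-06-1000",
    learning_rate=1e-06,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 4 per device x 4 steps = total batch 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=500,
    fp16=True,  # "Native AMP" mixed-precision training
)
```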

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| No log        | 0.4444 | 25   | 0.7983          | 35.9064 |
| 0.967         | 0.8889 | 50   | 0.6724          | 32.3977 |
| 0.967         | 1.3333 | 75   | 0.5459          | 30.7602 |
| 0.6804        | 1.7778 | 100  | 0.4692          | 27.4854 |
| 0.6804        | 2.2222 | 125  | 0.4341          | 26.3548 |
| 0.5145        | 2.6667 | 150  | 0.4143          | 25.5361 |
| 0.5145        | 3.1111 | 175  | 0.4019          | 25.4191 |
| 0.4614        | 3.5556 | 200  | 0.3914          | 25.0292 |
| 0.4614        | 4.0    | 225  | 0.3879          | 24.4444 |
| 0.3891        | 4.4444 | 250  | 0.3835          | 24.6784 |
| 0.3891        | 4.8889 | 275  | 0.3794          | 24.6004 |
| 0.3765        | 5.3333 | 300  | 0.3772          | 24.0156 |
| 0.3765        | 5.7778 | 325  | 0.3745          | 23.4308 |
| 0.3511        | 6.2222 | 350  | 0.3726          | 23.5478 |
| 0.3511        | 6.6667 | 375  | 0.3713          | 23.5867 |
| 0.3307        | 7.1111 | 400  | 0.3706          | 23.4308 |
| 0.3307        | 7.5556 | 425  | 0.3699          | 23.1189 |
| 0.3176        | 8.0    | 450  | 0.3706          | 23.3918 |
| 0.3176        | 8.4444 | 475  | 0.3708          | 23.6647 |
| 0.31          | 8.8889 | 500  | 0.3706          | 23.6647 |
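
The WER column above is reported as a percentage. A minimal sketch of how such scores are typically computed with the Hugging Face evaluate library follows; the example strings are placeholders, not drawn from the SF 1000 dataset.

```python
import evaluate

# Placeholder transcripts, not SF 1000 data.
wer = evaluate.load("wer")
score = wer.compute(
    predictions=["the quick brown fox"],
    references=["the quick brown fox jumps"],
)

# compute() returns a fraction; scale by 100 to match the table's values.
print(100 * score)  # 20.0 (one deletion over five reference words)
```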

Framework versions

  • Transformers 4.42.3
  • PyTorch 1.13.1+cu117
  • Datasets 2.20.0
  • Tokenizers 0.19.1
