
Whisper Small Es - Sanchit Gandhi

This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5129
  • WER: 56.4413

Model description

More information needed

Intended uses & limitations

More information needed
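
Since the card does not yet document usage, here is a minimal inference sketch. It assumes this repository hosts a PEFT (LoRA-style) adapter on top of openai/whisper-small, consistent with the PEFT framework version listed below; the repo id `your-username/whisper-small-es-peft` is a placeholder, not the actual model id.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the frozen base model and wrap it with the fine-tuned adapter.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "your-username/whisper-small-es-peft")  # placeholder repo id
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model.eval()

# `audio` is a 1-D float array of speech sampled at 16 kHz.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```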

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 750
  • mixed_precision_training: Native AMP
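
These settings map onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a reconstruction from the list above, not the author's actual training script, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters. On a single device, a per-device
# batch size of 16 with 16 gradient-accumulation steps yields the effective
# train batch size of 256.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-es",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=750,
    fp16=True,                         # "Native AMP" mixed precision
)
```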

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 6.3605        | 2.3   | 50   | 6.2660          | 55.7247 |
| 5.3113        | 4.6   | 100  | 5.1187          | 56.4590 |
| 4.2749        | 6.9   | 150  | 4.2391          | 55.6185 |
| 3.5266        | 9.2   | 200  | 3.4143          | 53.6719 |
| 3.0671        | 11.49 | 250  | 3.1045          | 49.2037 |
| 2.8716        | 13.79 | 300  | 2.9260          | 50.7786 |
| 2.7263        | 16.09 | 350  | 2.7987          | 53.5746 |
| 2.6467        | 18.39 | 400  | 2.7079          | 55.0787 |
| 2.5624        | 20.69 | 450  | 2.6443          | 55.6008 |
| 2.5087        | 22.99 | 500  | 2.5989          | 57.3881 |
| 2.4922        | 25.29 | 550  | 2.5660          | 55.9370 |
| 2.4274        | 27.59 | 600  | 2.5421          | 56.4325 |
| 2.4337        | 29.89 | 650  | 2.5257          | 57.4058 |
| 2.3991        | 32.18 | 700  | 2.5165          | 57.0165 |
| 2.4211        | 34.48 | 750  | 2.5129          | 56.4413 |
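
The WER values above are percentages. As a generic illustration (not the card's actual evaluation code), such figures can be computed with the evaluate library; the prediction and reference strings below are invented examples:

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hola mundo"]        # model transcriptions (illustrative)
references = ["hola mundo amigo"]   # ground-truth transcripts (illustrative)

# evaluate's WER is a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```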

Framework versions

  • PEFT 0.7.1
  • Transformers 4.37.0.dev0
  • Pytorch 2.1.0
  • Datasets 2.16.2.dev0
  • Tokenizers 0.15.0