whisper-medium-named-e_v231115

This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset (a minimal inference sketch follows the results below). It achieves the following results on the evaluation set:

  • Loss: 0.0089
  • Wer: 0.0
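
As a quick usage reference, the snippet below shows one way to run transcription with this checkpoint via the transformers pipeline. It is a minimal sketch, not taken from this repository: the repo id kujirahand/whisper-small-named-e follows the model tree at the bottom of this card, and the audio file path is a placeholder.

```python
# Minimal inference sketch; repo id taken from the model tree on this card,
# audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kujirahand/whisper-small-named-e",
)

# Whisper expects 16 kHz audio; with ffmpeg installed, the pipeline decodes
# and resamples common audio formats automatically.
result = asr("sample.wav")
print(result["text"])
```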

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5
  • training_steps: 120
  • mixed_precision_training: Native AMP
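
For orientation, here is a hedged sketch of how these values map onto Seq2SeqTrainingArguments in transformers; output_dir is a placeholder, and the Adam betas/epsilon listed above match the library defaults, so they are left unset.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# output_dir is a placeholder; Adam betas=(0.9, 0.999) and epsilon=1e-08
# are the transformers defaults and therefore not set explicitly.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-named-e",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=120,
    fp16=True,                              # "Native AMP" mixed precision
)
```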

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
4.4712          5.0     10     2.8412            15.0
1.3896          10.0    20     0.7323            95.0
0.5329          15.0    30     0.5708            10.0
0.4335          20.0    40     0.4734            35.0
0.3684          25.0    50     0.4014            15.0
0.3109          30.0    60     0.3379            15.0
0.2567          35.0    70     0.2760            0.0
0.2065          40.0    80     0.2132            0.0
0.1534          45.0    90     0.1557            0.0
0.1050          50.0    100    0.0955            0.0
0.0497          55.0    110    0.0311            0.0
0.0093          60.0    120    0.0089            0.0
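
The Wer values above appear to be on a 0–100 scale. As a hedged sketch (not the author's training script), word error rate for Whisper fine-tunes is typically computed with the Hugging Face evaluate library:

```python
# Hedged sketch of the standard WER computation for Whisper fine-tunes,
# using the Hugging Face `evaluate` library; predictions and references
# are placeholders, not data from this run.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the named entity was found"]   # decoded model outputs
references = ["the named entity was found"]    # ground-truth transcripts

# compute() returns a fraction; model cards like this one usually report
# it scaled by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.1f}")
```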

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.8.dev0
  • Tokenizers 0.15.0

Model tree for kujirahand/whisper-small-named-e: fine-tuned from openai/whisper-small.