Whisper Tiny Tatar - Kirill Milintsevich

This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5106
  • Wer: 49.2285
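
Since the usage sections below are not yet filled in, here is a minimal inference sketch, assuming the repository id 501Good/whisper-tiny-tt (the id this card is published under) and a local audio file as input:

```python
# Minimal inference sketch for this checkpoint; "audio.wav" is a
# placeholder path, and 501Good/whisper-tiny-tt is the repository id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="501Good/whisper-tiny-tt",
)

# The ASR pipeline decodes the file with ffmpeg and resamples it to the
# feature extractor's expected rate (16 kHz for Whisper).
result = asr("audio.wav")
print(result["text"])
```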

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
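
For reference, a sketch of how these settings would map onto transformers.Seq2SeqTrainingArguments. The actual training script is not shown on this card, so output_dir and the per-device reading of the batch sizes are assumptions:

```python
# Hypothetical mapping of the hyperparameters listed above onto
# Seq2SeqTrainingArguments; the real training script may differ.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-tt",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,  # assumes single-device training
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                  # the listed Adam settings are the
    adam_beta2=0.999,                # Trainer defaults (AdamW optimizer)
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                       # "Native AMP" mixed precision
)
```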

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4268        | 2.49  | 500  | 0.6232          | 63.6537 |
| 0.2331        | 4.98  | 1000 | 0.5044          | 52.3818 |
| 0.1332        | 7.46  | 1500 | 0.4927          | 50.2300 |
| 0.09          | 9.95  | 2000 | 0.5106          | 49.2285 |
| 0.048         | 12.44 | 2500 | 0.5526          | 49.7806 |
| 0.0346        | 14.93 | 3000 | 0.5850          | 50.0319 |
| 0.0181        | 17.41 | 3500 | 0.6276          | 50.5592 |
| 0.0122        | 19.9  | 4000 | 0.6494          | 50.3327 |
| 0.0086        | 22.39 | 4500 | 0.6737          | 50.6688 |
| 0.0077        | 24.88 | 5000 | 0.6777          | 50.6724 |
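
The evaluation numbers reported at the top of this card (loss 0.5106, Wer 49.2285) match the step-2000 row, which suggests the checkpoint with the lowest Wer was kept. Wer is word error rate in percent: (substitutions + deletions + insertions) divided by the number of reference words, times 100. A minimal sketch of computing it with the Hugging Face evaluate library, using placeholder texts:

```python
# Sketch of the Wer metric reported above, via the `evaluate` library.
# The prediction/reference strings are illustrative placeholders only.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world"]  # hypothetical model output
references = ["hello word"]    # hypothetical ground-truth transcript

# WER = (substitutions + deletions + insertions) / reference word count,
# scaled to a percentage to match the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"Wer: {wer:.2f}")  # 50.00 for this toy pair: one substitution in two words
```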