
Whisper Base Hindi

This model is a fine-tuned version of arun100/whisper-base-hi-2 on the google/fleurs hi_in dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4468
  • WER: 27.7206
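
For reference, the checkpoint can be loaded through the 🤗 Transformers ASR pipeline as sketched below. This is illustrative only: the audio file path is a placeholder, and the language/task generation arguments are standard Whisper decoding options rather than settings taken from this card.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-base-hi-3",
)

# "example_hindi.wav" is a placeholder path, not a file shipped with this model.
result = asr(
    "example_hindi.wav",
    generate_kwargs={"language": "hindi", "task": "transcribe"},
)
print(result["text"])
```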

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged training-arguments sketch follows the list):

  • learning_rate: 5e-07
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
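
As a rough guide, the settings above map onto `Seq2SeqTrainingArguments` as sketched below. This assumes the standard `Seq2SeqTrainer` setup; the `output_dir` is a placeholder, and the listed Adam betas and epsilon are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the listed hyperparameters under the assumption that
# a standard Seq2SeqTrainer setup was used. output_dir is a hypothetical path.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-hi-finetune",  # placeholder output directory
    learning_rate=5e-7,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # 32 x 2 = total train batch size of 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                       # native AMP mixed-precision training
)
```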

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4805        | 33.0  | 250  | 0.4868          | 30.4186 |
| 0.3559        | 66.0  | 500  | 0.4417          | 29.0909 |
| 0.2655        | 99.0  | 750  | 0.4307          | 28.2165 |
| 0.1987        | 133.0 | 1000 | 0.4350          | 27.8326 |
| 0.1472        | 166.0 | 1250 | 0.4468          | 27.7206 |
| 0.1061        | 199.0 | 1500 | 0.4640          | 28.0992 |
| 0.0767        | 233.0 | 1750 | 0.4835          | 28.5737 |
| 0.0541        | 266.0 | 2000 | 0.5032          | 28.6857 |
| 0.0396        | 299.0 | 2250 | 0.5202          | 28.7763 |
| 0.03          | 333.0 | 2500 | 0.5353          | 29.2029 |
| 0.0237        | 366.0 | 2750 | 0.5479          | 28.9096 |
| 0.0195        | 399.0 | 3000 | 0.5587          | 28.9096 |
| 0.0163        | 433.0 | 3250 | 0.5683          | 28.9469 |
| 0.014         | 466.0 | 3500 | 0.5767          | 29.1336 |
| 0.0121        | 499.0 | 3750 | 0.5838          | 29.3415 |
| 0.0108        | 533.0 | 4000 | 0.5900          | 29.2775 |
| 0.01          | 566.0 | 4250 | 0.5951          | 29.6081 |
| 0.0093        | 599.0 | 4500 | 0.5988          | 29.4855 |
| 0.0088        | 633.0 | 4750 | 0.6012          | 29.5281 |
| 0.0087        | 666.0 | 5000 | 0.6020          | 29.4268 |
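
The WER values above are percentages. A minimal sketch of how such a score can be computed with the `evaluate` library is shown below; this is an assumption about tooling (the card does not state how the metric was produced), and the transcripts are placeholder strings rather than FLEURS data.

```python
import evaluate

# Sketch only: the strings below are placeholders, not FLEURS hi_in transcripts.
wer_metric = evaluate.load("wer")
predictions = ["यह एक उदाहरण वाक्य है"]
references = ["यह एक उदाहरण का वाक्य है"]

# evaluate returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```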

Framework versions

  • Transformers 4.37.0.dev0
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.2.dev0
  • Tokenizers 0.15.0