
whisper-a-nomimose-again

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0252
  • WER: 192.9941 (word error rate on a percentage scale; values above 100 indicate more word errors, including insertions, than reference words)

Model description

More information needed

Intended uses & limitations

More information needed
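
While the card does not yet document usage, a minimal inference sketch follows, assuming the checkpoint is loaded through the standard transformers automatic-speech-recognition pipeline. The audio file path is a placeholder.

```python
# Minimal inference sketch (not from the original card): transcribe a
# local audio file with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="susmitabhatt/whisper-a-nomimose-again",
)

result = asr("sample.wav")  # placeholder path to an audio file
print(result["text"])
```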

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch mirroring them follows the list):

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 132
  • num_epochs: 9
  • mixed_precision_training: Native AMP
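
A sketch, not the original training script, of a transformers `Seq2SeqTrainingArguments` configuration that mirrors the values above; `output_dir` is a placeholder.

```python
# Training-arguments sketch mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-a-nomimose-again",  # placeholder
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=132,
    num_train_epochs=9,
    fp16=True,  # native AMP mixed-precision training
)
```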

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2606        | 0.9217 | 100  | 0.2230          | 39.7493  |
| 0.2758        | 1.8387 | 200  | 0.1259          | 41.2979  |
| 0.0716        | 2.7558 | 300  | 0.0491          | 24.1888  |
| 0.0433        | 3.6728 | 400  | 0.0618          | 25.7375  |
| 0.0339        | 4.5899 | 500  | 0.0383          | 184.0708 |
| 0.0211        | 5.5069 | 600  | 0.0418          | 139.3068 |
| 0.0154        | 6.4240 | 700  | 0.0424          | 195.4277 |
| 0.0095        | 7.3410 | 800  | 0.0251          | 186.2832 |
| 0.0059        | 8.2581 | 900  | 0.0252          | 192.9941 |
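
The WER column is on a percentage scale, and because inserted words count as errors, scores above 100 are possible, as seen from epoch 5 onward. A sketch of how such a score is typically computed with the `evaluate` library, using illustrative strings:

```python
# WER computation sketch; the prediction/reference pairs are illustrative.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate's "wer" returns a fraction; multiply by 100 for the
# percentage scale used in the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```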

Framework versions

  • Transformers 4.47.0.dev0
  • Pytorch 2.4.0
  • Datasets 3.0.1
  • Tokenizers 0.20.0