
Whisper Large UME-ERJ V2

This model is a fine-tuned version of openai/whisper-large on the UME-ERJ dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0568
  • WER: 0.0496

Model description

More information needed

Intended uses & limitations

More information needed
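The card does not yet document intended uses, but as a fine-tuned Whisper checkpoint the model can be loaded for automatic speech recognition with the 🤗 Transformers pipeline. A minimal inference sketch follows; the audio file path and the chunking setting are placeholders/assumptions, not documented properties of this model.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
# The model ID matches this card; "audio.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="sage-bergerson/whisper-large-ume-erj-v2",
)

# Transcribe a local audio file (chunking helps with long recordings).
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```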

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
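
These values map directly onto 🤗 Transformers Seq2SeqTrainingArguments. The sketch below is one plausible reconstruction of the configuration from the hyperparameters listed above; names not listed on this card (output_dir, the AdamW optimizer variant, fp16 for Native AMP, the evaluation cadence inferred from the results table) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration listed above;
# output_dir and evaluation settings are assumed, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-ume-erj-v2",
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # Adam-style optimizer, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                    # Native AMP mixed-precision training
    eval_strategy="steps",
    eval_steps=200,               # matches the 200-step evaluation interval in the results table
)
```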

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER    |
|---------------|--------|------|-----------------|--------|
| 0.7362        | 0.1143 | 200  | 0.1780          | 0.1274 |
| 0.1670        | 0.2286 | 400  | 0.1095          | 0.0852 |
| 0.1248        | 0.3429 | 600  | 0.0959          | 0.0776 |
| 0.0999        | 0.4571 | 800  | 0.0833          | 0.0669 |
| 0.0919        | 0.5714 | 1000 | 0.0821          | 0.0703 |
| 0.0839        | 0.6857 | 1200 | 0.0703          | 0.0623 |
| 0.0749        | 0.8000 | 1400 | 0.0686          | 0.0611 |
| 0.0747        | 0.9143 | 1600 | 0.0689          | 0.0597 |
| 0.0624        | 1.0286 | 1800 | 0.0646          | 0.0586 |
| 0.0516        | 1.1429 | 2000 | 0.0638          | 0.0553 |
| 0.0497        | 1.2571 | 2200 | 0.0593          | 0.0521 |
| 0.0462        | 1.3714 | 2400 | 0.0634          | 0.0556 |
| 0.0454        | 1.4857 | 2600 | 0.0588          | 0.0516 |
| 0.0455        | 1.6000 | 2800 | 0.0596          | 0.0540 |
| 0.0432        | 1.7143 | 3000 | 0.0622          | 0.0526 |
| 0.0401        | 1.8286 | 3200 | 0.0572          | 0.0524 |
| 0.0437        | 1.9429 | 3400 | 0.0569          | 0.0529 |
| 0.0344        | 2.0571 | 3600 | 0.0568          | 0.0496 |
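
WER values like those above are typically computed with the evaluate library by comparing predicted transcripts against references. A minimal sketch, assuming two toy example lists rather than the actual UME-ERJ evaluation split:

```python
import evaluate

# Word error rate metric, as used in most Whisper fine-tuning recipes.
wer_metric = evaluate.load("wer")

# Toy references and predictions; the real evaluation uses the UME-ERJ split.
references = ["this is a test utterance", "speech recognition example"]
predictions = ["this is a test utterance", "speech recognition sample"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```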

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1