
Demo Model Whisper Large

This model is a fine-tuned version of openai/whisper-large-v3 on the b-brave/speech_disorders_voice dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3289

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 2
  • mixed_precision_training: Native AMP
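The linear schedule with 50 warmup steps can be sketched in plain Python. This is an illustrative reimplementation, not the card's training code; the total step count (~252 for 2 epochs) is inferred from the training-results table below and should be treated as an assumption.

```python
# Sketch of the linear warmup + linear decay schedule implied by the
# hyperparameters above (lr=2e-4, 50 warmup steps). The total of 252
# steps for 2 epochs is an assumption inferred from the results table.

def linear_lr(step, base_lr=2e-4, warmup=50, total=252):
    """Learning rate at a given optimizer step."""
    if step < warmup:
        # linear warmup from 0 up to base_lr
        return base_lr * step / warmup
    # linear decay from base_lr at the end of warmup down to 0 at the final step
    return base_lr * max(0, total - step) / (total - warmup)

peak = linear_lr(50)  # base_lr = 2e-4, reached at the end of warmup
```

This mirrors the behavior of `get_linear_schedule_with_warmup` in Transformers, which is what `lr_scheduler_type: linear` selects.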

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6981        | 0.0794 | 10   | 3.0679          |
| 2.7279        | 0.1587 | 20   | 2.9277          |
| 2.4063        | 0.2381 | 30   | 2.5832          |
| 2.0624        | 0.3175 | 40   | 1.9876          |
| 1.5946        | 0.3968 | 50   | 1.4144          |
| 1.032         | 0.4762 | 60   | 0.8543          |
| 0.6495        | 0.5556 | 70   | 0.5918          |
| 0.4086        | 0.6349 | 80   | 0.5795          |
| 0.6702        | 0.7143 | 90   | 1.3734          |
| 0.7216        | 0.7937 | 100  | 1.2499          |
| 0.7264        | 0.8730 | 110  | 0.4231          |
| 0.442         | 0.9524 | 120  | 0.3969          |
| 0.2848        | 1.0317 | 130  | 0.3757          |
| 0.2054        | 1.1111 | 140  | 0.3697          |
| 0.307         | 1.1905 | 150  | 0.3584          |
| 0.3914        | 1.2698 | 160  | 0.3579          |
| 0.175         | 1.3492 | 170  | 0.3515          |
| 0.2399        | 1.4286 | 180  | 0.3414          |
| 0.3367        | 1.5079 | 190  | 0.3360          |
| 0.2457        | 1.5873 | 200  | 0.3315          |
| 0.3358        | 1.6667 | 210  | 0.3351          |
| 0.1335        | 1.7460 | 220  | 0.3314          |
| 0.1844        | 1.8254 | 230  | 0.3308          |
| 0.3878        | 1.9048 | 240  | 0.3282          |
| 0.3594        | 1.9841 | 250  | 0.3289          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
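Since this is a PEFT adapter rather than a full checkpoint, it needs to be loaded on top of the base model. The following is a minimal sketch of the usual PEFT loading pattern, assuming the adapter repository id miosipof/asr_michael_test2.1 from this card; it downloads several GB of weights, so it is wrapped in a function rather than run at import time.

```python
# Hedged sketch: load this LoRA adapter on top of openai/whisper-large-v3
# using PEFT. Requires the `transformers` and `peft` packages and network
# access to the Hugging Face Hub.

def load_model():
    from transformers import WhisperForConditionalGeneration, WhisperProcessor
    from peft import PeftModel

    # Load the full base model first, then attach the adapter weights
    base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
    model = PeftModel.from_pretrained(base, "miosipof/asr_michael_test2.1")
    processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
    return model, processor
```

For faster inference the adapter can optionally be folded into the base weights with `model.merge_and_unload()`.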
Model tree for miosipof/asr_michael_test2.1

  • Adapter of openai/whisper-large-v3

Dataset used to train miosipof/asr_michael_test2.1

  • b-brave/speech_disorders_voice