
salmon-whisper-medium-smj

This model is a fine-tuned version of openai/whisper-medium on the NbAiLab/salmon-asr-smj dataset. It achieves the following results on the evaluation set:

  • step: 9999
  • validation_loss: 1.4491
  • train_loss: 0.2105
  • validation_wer: 15.6915
  • validation_cer: 4.6710
  • validation_exact_wer: 18.0851
  • validation_exact_cer: 4.9990
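The validation_wer and validation_cer figures above are percentages, following the standard word/character error rate definitions (edit distance over reference length). A minimal pure-Python sketch of these metrics; the example strings below are hypothetical and not drawn from the salmon-asr-smj evaluation set:

```python
def levenshtein(ref, hyp):
    """Token-level edit distance via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate as a percentage of reference words."""
    ref = reference.split()
    return 100.0 * levenshtein(ref, hypothesis.split()) / len(ref)

def cer(reference, hypothesis):
    """Character error rate as a percentage of reference characters."""
    return 100.0 * levenshtein(list(reference), list(hypothesis)) / len(reference)

print(round(wer("the cat sat on the mat", "the cat sat mat"), 2))  # 33.33
```

The "exact" variants reported above likely differ in text normalization before scoring; the card does not specify the normalization used.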

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2.5e-05
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 8
  • total_train_batch_size_per_node: 64
  • total_train_batch_size: 64
  • total_optimization_steps: 10,000
  • starting_optimization_step: None
  • finishing_optimization_step: 10,000
  • num_train_dataset_workers: 32
  • num_hosts: 1
  • total_num_training_examples: 640,000
  • steps_per_epoch: 287
  • num_beams: None
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.98
  • adam_epsilon: 1e-06
  • dropout: True
  • bpe_dropout_probability: 0.2
  • activation_dropout_probability: 0.1
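With a linear scheduler over 10,000 optimization steps, the peak learning rate of 2.5e-05 decays toward zero during training. A small sketch of that schedule; the card does not state a warmup phase, so this assumes none:

```python
PEAK_LR = 2.5e-5      # learning_rate from the card
TOTAL_STEPS = 10_000  # total_optimization_steps

def linear_lr(step):
    """Linearly decayed learning rate, assuming decay to zero and no warmup."""
    return PEAK_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

print(linear_lr(0))       # 2.5e-05 at the start
print(linear_lr(5_000))   # half the peak at the midpoint
print(linear_lr(10_000))  # 0.0 at the final step
```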

Training results

step    validation_loss   train_loss   validation_wer   validation_cer   validation_exact_wer   validation_exact_cer
0       6.7175            4.8002       101.9947         49.5329          103.4574               50.6274
1000    1.8389            0.4886       22.2074          6.0723           26.1968                6.6321
2000    1.2170            0.2845       18.7500          5.6661           22.8723                6.1741
3000    1.3422            0.3153       15.5585          4.7116           18.0851                5.0588
4000    1.2924            0.2716       16.4894          5.1178           20.7447                5.7558
5000    1.4563            0.2408       17.1543          5.1381           20.0798                5.5567
6000    1.4318            0.2653       16.2234          4.9147           19.4149                5.3774
7000    1.2483            0.2359       15.2926          4.8335           18.6170                5.2579
8000    1.4402            0.2701       15.4255          4.5695           18.2181                4.9592
9000    1.5363            0.2244       15.5585          4.6304           18.3511                5.0189
9999    1.4491            0.2105       15.6915          4.6710           18.0851                4.9990
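Note that the final checkpoint (step 9999) is not the one with the lowest validation WER; by the table above, step 7000 scores best. A quick sketch for picking the best checkpoint from these numbers (values copied verbatim from the table):

```python
# validation_wer per checkpoint, from the training-results table
validation_wer = {
    0: 101.9947, 1000: 22.2074, 2000: 18.7500, 3000: 15.5585,
    4000: 16.4894, 5000: 17.1543, 6000: 16.2234, 7000: 15.2926,
    8000: 15.4255, 9000: 15.5585, 9999: 15.6915,
}

# checkpoint with the lowest validation WER
best_step = min(validation_wer, key=validation_wer.get)
print(best_step, validation_wer[best_step])  # 7000 15.2926
```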

Framework versions

  • Transformers 4.35.0.dev0
  • Datasets 2.14.6
  • Tokenizers 0.14.1