salmon-whisper-large-smj-lr7e-5-test1

This model is a fine-tuned version of openai/whisper-large-v2 on the NbAiLab/salmon-asr-smj dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • step: 999
  • validation_loss: 0.9447
  • train_loss: 0.3067
  • validation_wer: 21.6755
  • validation_cer: 5.6661
  • validation_exact_wer: 25.0
  • validation_exact_cer: 6.1940
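
As a minimal usage sketch, the model can be loaded through the transformers ASR pipeline. The repo id below is assumed from the card title, and the audio path is illustrative:

```python
# Minimal usage sketch. The repo id is assumed from the card title and the
# audio path is illustrative. Requires: pip install transformers torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/salmon-whisper-large-smj-lr7e-5-test1",
)

result = asr("audio.wav")  # path to a local audio file
print(result["text"])
```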

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to Hugging Face training arguments follows the list):

  • learning_rate: 7e-05
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 6
  • total_train_batch_size_per_node: 48
  • total_train_batch_size: 48
  • total_optimization_steps: 1,000
  • starting_optimization_step: None
  • finishing_optimization_step: 1,000
  • num_train_dataset_workers: 32
  • num_hosts: 1
  • total_num_training_examples: 48,000
  • steps_per_epoch: 385
  • num_beams: None
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.98
  • adam_epsilon: 1e-06
  • dropout: True
  • bpe_dropout_probability: 0.2
  • activation_dropout_probability: 0.1
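
As a rough illustration only, these values might map onto transformers' Seq2SeqTrainingArguments as below. The original run was likely driven by a different (Flax/TPU-style) script, and fields such as num_hosts, bpe_dropout_probability, and activation_dropout_probability have no direct equivalent here:

```python
# A hedged sketch, not the original training configuration: it maps the listed
# hyperparameters onto transformers' Seq2SeqTrainingArguments where a direct
# equivalent exists. TPU/Flax-style fields (num_hosts, bpe_dropout_probability,
# activation_dropout_probability) are omitted because they have no counterpart.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./salmon-whisper-large-smj-lr7e-5-test1",
    learning_rate=7e-5,
    lr_scheduler_type="linear",
    per_device_train_batch_size=6,  # 6 per device x 8 devices = 48 total
    max_steps=1000,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
)
```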

Training results

| step | validation_loss | train_loss | validation_wer | validation_cer | validation_exact_wer | validation_exact_cer |
|-----:|----------------:|-----------:|---------------:|---------------:|---------------------:|---------------------:|
| 0    | 4.2254          | 4.6455     | 112.7660       | 59.8700        | 108.1117             | 62.0594              |
| 100  | 1.4819          | 0.9353     | 59.0426        | 16.0032        | 61.8351              | 16.8293              |
| 200  | 1.2494          | 0.8903     | 43.2181        | 10.9667        | 45.8777              | 11.6311              |
| 300  | 1.1444          | 0.8144     | 32.4468        | 8.4281         | 35.6383              | 8.8429               |
| 400  | 1.0442          | 1.3240     | 30.1862        | 7.7173         | 33.3777              | 8.2454               |
| 500  | 0.9681          | 0.2736     | 25.5319        | 6.3769         | 28.4574              | 6.8711               |
| 600  | 1.0579          | 0.4364     | 25.0           | 6.3363         | 28.3245              | 6.8313               |
| 700  | 0.9322          | 0.6873     | 23.4043        | 5.9708         | 26.3298              | 6.3732               |
| 800  | 0.9255          | 0.3675     | 23.2713        | 6.0114         | 26.5957              | 6.5326               |
| 900  | 0.9581          | 0.6156     | 22.4734        | 5.8692         | 26.0638              | 6.4330               |
| 999  | 0.9447          | 0.3067     | 21.6755        | 5.6661         | 25.0                 | 6.1940               |
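
For reference, WER and CER figures like those above are typically computed with the Hugging Face evaluate library. A minimal sketch, using illustrative strings rather than actual model output:

```python
# Minimal sketch of the WER/CER computation behind the columns above.
# The reference/prediction strings are illustrative, not actual model output.
# Requires: pip install evaluate jiwer
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["this is a test transcription"]
predictions = ["this is test transcription"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```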

Framework versions

  • Transformers 4.34.1
  • Datasets 2.15.0
  • Tokenizers 0.14.0