
# whisper-small-smj

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the NbAiLab/salmon-asr-smj dataset. It achieves the following results on the evaluation set:

- step: 9999
- validation_loss: 0.3690
- train_loss: 0.2159
- validation_wer: 19.6809
- validation_cer: 5.5037
- validation_exact_wer: 22.3404
- validation_exact_cer: 5.8753
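
For quick use, a minimal transcription sketch with the 🤗 Transformers `pipeline` is shown below. The Hub id `NbAiLab/whisper-small-smj` and the audio file name are illustrative assumptions, not confirmed by this card.

```python
# Minimal inference sketch. Assumptions: the Hub id "NbAiLab/whisper-small-smj"
# and the local file "sample.wav" are placeholders, not confirmed by the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/whisper-small-smj",  # assumed Hub id for this model
)

result = asr("sample.wav")  # any audio file ffmpeg/soundfile can decode
print(result["text"])
```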

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 32
- total_train_batch_size_per_node: 256
- total_train_batch_size: 256
- total_optimization_steps: 10,000
- starting_optimization_step: None
- finishing_optimization_step: 10,000
- num_train_dataset_workers: 32
- num_hosts: 1
- total_num_training_examples: 2,560,000
- steps_per_epoch: 70
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
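
The optimizer and schedule settings above map onto `transformers.Seq2SeqTrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the actual training script: fields such as `num_hosts` and `total_train_batch_size_per_node` suggest a multi-host setup the sketch does not reproduce, and the two dropout probabilities are model/tokenizer-level settings rather than trainer arguments.

```python
# Illustrative mapping of the listed hyperparameters onto
# Seq2SeqTrainingArguments; a sketch, not the script used to train this model.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-smj",  # assumed output path
    learning_rate=5e-05,
    lr_scheduler_type="linear",
    per_device_train_batch_size=32,
    max_steps=10_000,                  # total_optimization_steps
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-06,
    dataloader_num_workers=32,         # num_train_dataset_workers
)
```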

### Training results

| step | validation_loss | train_loss | validation_wer | validation_cer | validation_exact_wer | validation_exact_cer |
|-----:|----------------:|-----------:|---------------:|---------------:|---------------------:|---------------------:|
| 0    | 3.4458          | 4.7979     | 205.7181       | 94.0902        | 150.2660             | 95.4591              |
| 1000 | 0.8415          | 0.2440     | 21.9415        | 6.4379         | 25.9309              | 7.0106               |
| 2000 | 1.0741          | 0.2249     | 21.6755        | 5.7474         | 25.0                 | 6.1741               |
| 3000 | 0.8933          | 0.2919     | 20.4787        | 5.3615         | 23.9362              | 5.8156               |
| 4000 | 0.8445          | 0.1339     | 18.8830        | 5.2193         | 21.4096              | 5.6363               |
| 5000 | 0.3739          | 0.2289     | 20.0798        | 5.3818         | 23.2713              | 5.8355               |
| 6000 | 0.3746          | 0.2586     | 19.8138        | 5.2600         | 22.7394              | 5.6562               |
| 7000 | 0.3555          | 0.2273     | 19.2819        | 5.7067         | 22.3404              | 6.0745               |
| 8000 | 0.3671          | 0.1632     | 19.4149        | 5.4224         | 22.3404              | 5.8952               |
| 9000 | 0.3508          | 0.2107     | 18.3511        | 5.3006         | 21.2766              | 5.7160               |
| 9999 | 0.3690          | 0.2159     | 19.6809        | 5.5037         | 22.3404              | 5.8753               |
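
The WER/CER columns can in principle be recomputed with the `evaluate` library, as in the sketch below. The normalization behind the card's `wer` versus `exact_wer` columns is not documented here, so this computes plain, unnormalized scores on hypothetical strings.

```python
# Scoring sketch with the `evaluate` library. The strings are hypothetical;
# the card does not document the normalization behind "exact" vs plain WER.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["a model transcript"]      # hypothetical hypothesis
references = ["a reference transcript"]   # hypothetical ground truth

# Scaled by 100 to match the table's apparent percentage convention.
print("WER:", 100 * wer.compute(predictions=predictions, references=references))
print("CER:", 100 * cer.compute(predictions=predictions, references=references))
```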

### Framework versions

- Transformers 4.34.1
- Datasets 2.14.5
- Tokenizers 0.14.1