---
language:
- 'no'
license: apache-2.0
base_model: NbAiLabBeta/nb-whisper-small
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: nb-whisper-small-v0.7-semantic
  results: []
---
# nb-whisper-small-v0.7-semantic
This model is a fine-tuned version of NbAiLabBeta/nb-whisper-small on the NbAiLab/ncc_speech_styling_v4 dataset. It achieves the following results on the evaluation set:
- step: 249
- validation_nst_loss: 0.6651
- train_loss: 0.7577
- validation_nst_wer: 3.1520
- validation_nst_cer: 0.9723
- validation_nst_exact_wer: 3.9305
- validation_nst_exact_cer: 1.0907
- validation_clean_stortinget_no_loss: 0.6931
- validation_clean_stortinget_no_wer: 10.0616
- validation_clean_stortinget_no_cer: 6.2843
- validation_clean_stortinget_no_exact_wer: 13.4693
- validation_clean_stortinget_no_exact_cer: 6.8299
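A minimal transcription sketch is shown below; it assumes `transformers` and `ffmpeg` are installed. The repo id is inferred from the model name above, the audio file name is a placeholder, and the generation arguments follow the usual Whisper conventions rather than anything documented in this card.

```python
# Minimal usage sketch (assumptions: the checkpoint is published on the Hub
# under "NbAiLabBeta/nb-whisper-small-v0.7-semantic", and "audio.mp3" exists locally).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLabBeta/nb-whisper-small-v0.7-semantic",  # inferred repo id
)

result = asr(
    "audio.mp3",
    chunk_length_s=30,  # long-form audio is chunked into 30 s windows
    generate_kwargs={"task": "transcribe", "language": "no"},
)
print(result["text"])
```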
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 32
- total_train_batch_size_per_node: 128
- total_train_batch_size: 1024
- total_optimization_steps: 250
- starting_optimization_step: None
- finishing_optimization_step: 250
- num_train_dataset_workers: 32
- num_hosts: 8
- total_num_training_examples: 256,000
- steps_per_epoch: To be computed after first epoch
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
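The batch-size and example-count figures above are mutually consistent; the short check below spells out the arithmetic (the number of devices per host is inferred, not stated in this card).

```python
# Consistency check for the batch-size bookkeeping listed above.
per_device_train_batch_size = 32
total_train_batch_size_per_node = 128            # 128 / 32 = 4 devices per host (inferred)
num_hosts = 8
total_train_batch_size = total_train_batch_size_per_node * num_hosts
total_optimization_steps = 250

assert total_train_batch_size == 1024
assert total_train_batch_size * total_optimization_steps == 256_000  # total_num_training_examples
```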
### Training results
| step | validation_nst_loss | train_loss | validation_nst_wer | validation_nst_cer | validation_nst_exact_wer | validation_nst_exact_cer | validation_clean_stortinget_no_loss | validation_clean_stortinget_no_wer | validation_clean_stortinget_no_cer | validation_clean_stortinget_no_exact_wer | validation_clean_stortinget_no_exact_cer |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.4428 | 1.3025 | 3.0160 | 0.9359 | 3.7835 | 1.0513 | 0.7127 | 9.9100 | 6.1361 | 13.3223 | 6.6798 |
| 40 | 0.6682 | 0.8769 | 3.4732 | 1.2370 | 4.4423 | 1.3773 | 0.6724 | 10.9593 | 7.0159 | 14.4443 | 7.5606 |
| 80 | 0.6729 | 0.7636 | 3.2664 | 1.0338 | 4.1102 | 1.1575 | 0.6495 | 10.2037 | 6.3692 | 13.5785 | 6.8974 |
| 120 | 0.6729 | 0.7334 | 3.2500 | 1.0310 | 4.0666 | 1.1530 | 0.6760 | 10.1232 | 6.3145 | 13.4646 | 6.8420 |
| 160 | 0.6740 | 0.7517 | 3.1412 | 0.9788 | 3.9033 | 1.0925 | 0.6922 | 10.0734 | 6.2954 | 13.4243 | 6.8245 |
| 200 | 0.6677 | 0.7282 | 3.1575 | 0.9751 | 3.9196 | 1.0898 | 0.6981 | 10.0924 | 6.3054 | 13.4741 | 6.8416 |
| 240 | 0.6658 | 0.7448 | 3.1520 | 0.9919 | 3.9196 | 1.1081 | 0.6929 | 10.0379 | 6.2800 | 13.4599 | 6.8288 |
| 249 | 0.6651 | 0.7577 | 3.1520 | 0.9723 | 3.9305 | 1.0907 | 0.6931 | 10.0616 | 6.2843 | 13.4693 | 6.8299 |
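The WER/CER columns appear to be reported as percentages. Metrics of this kind can be computed with the 🤗 `evaluate` library, as sketched below; the strings are placeholders, and the normalisation behind the `exact_*` variants is not documented in this card.

```python
# Hedged sketch: computing WER/CER as percentages with the `evaluate` library.
# Placeholder strings; the normalisation used for the "exact_*" metrics is not specified here.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["dette er en test"]
predictions = ["dette er én test"]

print("WER:", 100 * wer.compute(references=references, predictions=predictions))
print("CER:", 100 * cer.compute(references=references, predictions=predictions))
```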
### Framework versions
- Transformers 4.34.1
- Datasets 2.15.0
- Tokenizers 0.14.1