---
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- haseong8012/child-50k
metrics:
- wer
- cer
model-index:
- name: whisper-small_child50K_timeStretch_stepLR
  results: []
---

# whisper-small_child50K_timeStretch_stepLR

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the child-50k dataset. It achieves the following results on the evaluation set:
- Loss: 0.0201
- Wer: 2.1061
- Cer: 0.8189

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.1004        | 0.18  | 500  | 0.0796          | 9.5440 | 4.0921 |
| 0.0846        | 0.36  | 1000 | 0.0453          | 5.3319 | 2.3843 |
| 0.0729        | 0.53  | 1500 | 0.0355          | 4.2849 | 1.7311 |
| 0.0486        | 0.71  | 2000 | 0.0284          | 2.8701 | 1.2241 |
| 0.045         | 0.89  | 2500 | 0.0261          | 3.6220 | 2.6581 |
| 0.0206        | 1.07  | 3000 | 0.0207          | 2.0616 | 0.8263 |
| 0.0264        | 1.24  | 3500 | 0.0219          | 2.0091 | 0.8275 |
| 0.0304        | 1.42  | 4000 | 0.0205          | 1.7827 | 0.7207 |
| 0.0224        | 1.6   | 4500 | 0.0233          | 2.3527 | 0.9527 |
| 0.0215        | 1.78  | 5000 | 0.0201          | 2.1061 | 0.8189 |

### Framework versions

- Transformers 4.34.1
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.14.1
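
As a minimal inference sketch, the checkpoint can be loaded through the standard `transformers` ASR pipeline. The repository id below is an assumption (dataset namespace plus the model name above) and the audio file name is hypothetical; the `language`/`task` generation arguments are optional and pin Whisper to Korean transcription.

```python
# Minimal inference sketch. The repo id is an assumption
# (dataset namespace + model name); adjust it to the actual Hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="haseong8012/whisper-small_child50K_timeStretch_stepLR",  # assumed repo id
    generate_kwargs={"language": "korean", "task": "transcribe"},
)

# Whisper expects 16 kHz mono audio; the pipeline resamples file inputs.
result = asr("child_speech_sample.wav")  # hypothetical audio file
print(result["text"])
```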
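
The hyperparameters listed under "Training hyperparameters" map directly onto `transformers.Seq2SeqTrainingArguments`; a sketch of that mapping follows. Only the values listed in this card are grounded, while the output path and the evaluation/save cadence (every 500 steps, matching the results table) are illustrative.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# Values not listed in the card (output_dir, eval/save cadence) are illustrative.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small_child50K_timeStretch_stepLR",  # illustrative
    learning_rate=1.25e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    evaluation_strategy="steps",  # the card reports eval results every 500 steps
    eval_steps=500,
    save_steps=500,
)
```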
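
The Wer and Cer values appear to be percentages (a WER of 2.1061 as a fraction would mean over 200% error, which is implausible for a converged model). Under that assumption, they can be reproduced with the `evaluate` library, whose `wer`/`cer` metrics return fractions:

```python
# Sketch of how the reported Wer/Cer could be computed with the `evaluate`
# library (the `cer` metric additionally requires jiwer). evaluate returns
# fractions; the card's magnitudes suggest percentages, so results are
# scaled by 100 here (an assumption, not confirmed by the card).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["아이가 공원에서 놀아요"]    # hypothetical model outputs
references = ["아이가 공원에서 놀았어요"]  # hypothetical reference transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}%  CER: {cer:.4f}%")
```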