---
language:
- en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ./whisper-large-cit-synth-do015-wd0-lr5e-06-1000
  results: []
---

# ./whisper-large-cit-synth-do015-wd0-lr5e-06-1000

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 1000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4526
- Wer: 20.3899

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.7187        | 0.8889 | 50   | 0.4062          | 24.2105 |
| 0.4122        | 1.7778 | 100  | 0.3523          | 22.3782 |
| 0.2917        | 2.6667 | 150  | 0.3494          | 23.5867 |
| 0.2242        | 3.5556 | 200  | 0.3618          | 23.0019 |
| 0.1529        | 4.4444 | 250  | 0.3770          | 22.3392 |
| 0.1322        | 5.3333 | 300  | 0.3906          | 21.2476 |
| 0.0987        | 6.2222 | 350  | 0.4133          | 20.9747 |
| 0.0798        | 7.1111 | 400  | 0.4302          | 23.8986 |
| 0.0613        | 8.0    | 450  | 0.4438          | 20.5848 |
| 0.0545        | 8.8889 | 500  | 0.4526          | 20.3899 |

### Framework versions

- Transformers 4.42.3
- Pytorch 1.13.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
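
### Training configuration (illustrative sketch)

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is an illustration, not the original training script: the output directory is taken from the card title, `fp16=True` is assumed from "Native AMP", and the evaluation/save cadence of 50 steps is inferred from the results table.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the "Training hyperparameters" section of this card.
# total_train_batch_size = per_device (4) x accumulation (4) x world size.
# The listed Adam betas/epsilon are the Trainer defaults, so they are
# not set explicitly here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-cit-synth-do015-wd0-lr5e-06-1000",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=500,
    fp16=True,               # assumed from "Native AMP" mixed precision
    eval_strategy="steps",   # inferred: the table reports eval every 50 steps
    eval_steps=50,
    save_steps=50,
)
```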
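
### Computing WER (illustrative sketch)

The Wer values above are word error rates expressed as percentages. They can be computed with the `evaluate` library; a minimal sketch with placeholder transcripts (the actual evaluation split is not part of this card):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder reference/prediction pairs; in practice these come from
# the evaluation split and the model's generated transcriptions.
references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jumps over a lazy dog"]

# evaluate returns a fraction; multiply by 100 to match the card's scale.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```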
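
## Example inference (sketch)

As a minimal usage sketch, the checkpoint can be loaded like any Whisper fine-tune through the `transformers` ASR pipeline. The model path below is assumed from the output directory in the card title (replace it with the published Hub repo id if applicable), and `sample.wav` is a placeholder file name.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; path assumed from the card title.
asr = pipeline(
    "automatic-speech-recognition",
    model="./whisper-large-cit-synth-do015-wd0-lr5e-06-1000",
)

# Transcribe a local audio file; long audio is processed in 30 s chunks.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```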