---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
metrics:
- bleu
- wer
model-index:
- name: Whisper Small GA-EN Speech Translation
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia
      type: ymoslem/IWSLT2023-GA-EN
    metrics:
    - name: Bleu
      type: bleu
      value: 23.1
    - name: Wer
      type: wer
      value: 82.89058982440342
---

# Whisper Small GA-EN Speech Translation

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets.
It achieves the following results on the evaluation set:
- Loss: 1.2172
- Bleu: 23.1
- Chrf: 42.54
- Wer: 82.8906

## Model description

The model translates spoken Irish (ga) into English (en) text. It was obtained by fine-tuning [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Irish-English speech corpora listed below.

## Intended uses & limitations

The model is intended for Irish-to-English speech translation. With an evaluation BLEU of 23.1 and a WER above 80, its output should be treated as a draft translation rather than a final one. A minimal inference sketch is provided at the end of this card.

## Training and evaluation data

Training used the combined ymoslem/IWSLT2023-GA-EN, ymoslem/FLEURS-GA-EN, ymoslem/BitesizeIrish-GA-EN, ymoslem/SpokenWords-GA-EN-MTed, ymoslem/Tatoeba-Speech-Irish, and ymoslem/Wikimedia-Speech-Irish datasets. Details of the evaluation split are not documented here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `Seq2SeqTrainingArguments` is given at the end of this card):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu  | Chrf  | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 2.8459        | 0.07  | 100  | 2.0769          | 3.28  | 18.43 | 149.0770 |
| 2.3328        | 0.13  | 200  | 1.8396          | 4.5   | 22.06 | 207.7443 |
| 2.1669        | 0.2   | 300  | 1.6215          | 14.6  | 30.8  | 89.1941  |
| 1.8606        | 0.26  | 400  | 1.5030          | 14.65 | 33.33 | 92.4358  |
| 1.7255        | 0.33  | 500  | 1.4085          | 14.9  | 35.14 | 103.8271 |
| 1.5855        | 0.39  | 600  | 1.3587          | 15.78 | 35.02 | 103.0617 |
| 1.5875        | 0.46  | 700  | 1.2986          | 25.3  | 41.37 | 69.4732  |
| 1.44          | 0.53  | 800  | 1.2575          | 25.78 | 42.23 | 70.0585  |
| 1.3317        | 0.59  | 900  | 1.2338          | 23.24 | 41.64 | 79.1085  |
| 1.3166        | 0.66  | 1000 | 1.2172          | 23.1  | 42.54 | 82.8906  |

A sketch showing how the BLEU, ChrF, and WER figures can be recomputed is given at the end of this card.

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
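
## Inference example (sketch)

The card does not include a usage snippet, so the following is a minimal sketch of how a Whisper checkpoint fine-tuned for speech translation can be run with the `transformers` pipeline API. The repository ID and audio path are placeholders, and requesting `task="translate"` is an assumption based on the model being a GA-to-EN speech translation system.

```python
from transformers import pipeline

# Placeholder repository ID: replace with the actual Hub ID of this checkpoint.
MODEL_ID = "your-username/whisper-small-ga2en"

pipe = pipeline(
    task="automatic-speech-recognition",
    model=MODEL_ID,
    device=0,  # use device=-1 (or omit) to run on CPU
)

# Whisper emits English text when asked to translate; the input should be
# a 16 kHz mono recording of Irish speech.
result = pipe(
    "sample_irish_audio.wav",  # placeholder path
    generate_kwargs={"task": "translate"},
)
print(result["text"])
```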
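
## Training configuration (reconstruction)

The hyperparameters listed above can be expressed as `Seq2SeqTrainingArguments`. This is a reconstruction for readability, not the original training script; the output directory is a placeholder, and mapping the 0.03 value to `warmup_ratio` and "Native AMP" to `fp16=True` are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list in this card; not the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ga2en",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,        # assumed interpretation of the 0.03 warmup value
    max_steps=1000,
    fp16=True,                # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=100,           # validation metrics were logged every 100 steps
    predict_with_generate=True,
)
```

The optimizer settings in the card (Adam with betas=(0.9,0.999) and epsilon=1e-08) match the Trainer defaults, so they are not set explicitly here.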
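
## Metric computation (sketch)

The BLEU, ChrF, and WER figures in the results table can be recomputed with standard libraries. The sketch below assumes the `sacrebleu` and `jiwer` packages, and that `predictions` and `references` hold the generated and reference English translations for the evaluation set; the strings shown are placeholders.

```python
import sacrebleu
from jiwer import wer

# Placeholders: in practice these come from decoding the evaluation set.
predictions = ["the weather is fine today"]
references = ["the weather is nice today"]

bleu = sacrebleu.corpus_bleu(predictions, [references]).score
chrf = sacrebleu.corpus_chrf(predictions, [references]).score
word_error_rate = 100 * wer(references, predictions)  # WER reported as a percentage

print(f"BLEU: {bleu:.2f}  ChrF: {chrf:.2f}  WER: {word_error_rate:.2f}")
```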