---
license: apache-2.0
base_model: nutella-toast/wav2vec2-large-xls-r-ssw
tags:
- generated_from_trainer
datasets:
- ml-superb-subset
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-ssw
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: ml-superb-subset
      type: ml-superb-subset
      config: ssw
      split: dev
      args: ssw
    metrics:
    - name: Wer
      type: wer
      value: 0.7320872274143302
---

# wav2vec2-large-xls-r-ssw

This model is a fine-tuned version of [nutella-toast/wav2vec2-large-xls-r-ssw](https://huggingface.co/nutella-toast/wav2vec2-large-xls-r-ssw) on the ml-superb-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7327
- Wer: 0.7321

## Model description

A fine-tuned version of the vanilla wav2vec2-large-xls-r model for siSwati, trained for CS224S at Stanford University.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.5779        | 1.0471 | 100  | 0.7902          | 0.8785 |
| 0.5307        | 2.0942 | 200  | 0.8185          | 0.8660 |
| 0.4826        | 3.1414 | 300  | 0.8378          | 0.8692 |
| 0.4529        | 4.1885 | 400  | 0.8048          | 0.9097 |
| 0.5053        | 5.2356 | 500  | 0.9541          | 0.8910 |
| 0.4149        | 6.2827 | 600  | 0.7687          | 0.7913 |
| 0.3179        | 7.3298 | 700  | 0.7678          | 0.7850 |
| 0.2642        | 8.3770 | 800  | 0.7151          | 0.7321 |
| 0.2147        | 9.4241 | 900  | 0.7327          | 0.7321 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
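
For reference, the sketch below shows how the hyperparameters listed above might map onto `transformers.TrainingArguments`. The output directory name and anything not in the list (evaluation cadence, logging, data wiring) are assumptions, since the original training script is not part of this card.

```python
# Sketch only: mirrors the reported hyperparameters; everything else about
# the original training setup is unknown and assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-ssw",  # assumed name, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,          # effective train batch size of 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                              # "Native AMP" mixed precision
)
```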
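
The card does not include a usage example; the following is a minimal inference sketch, assuming the standard Wav2Vec2 CTC interface from `transformers` and 16 kHz mono audio (`audio.wav` is a placeholder file name).

```python
# Minimal inference sketch (assumes a standard Wav2Vec2 CTC checkpoint;
# "audio.wav" is a placeholder for a 16 kHz mono siSwati recording).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "nutella-toast/wav2vec2-large-xls-r-ssw"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample the audio to the 16 kHz rate expected by XLS-R.
speech, _ = librosa.load("audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then let the
# tokenizer collapse repeats and strip blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Greedy argmax decoding is the simplest option; a beam-search decoder with an external language model is a common way to reduce WER further, but none is bundled with this checkpoint.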