---
tags:
- generated_from_trainer
base_model: batoula187/wav2vec2-large-xls-r-300m-arabic-colab
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-arabic-colab
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: common_voice_17_0
      type: common_voice_17_0
      config: ar
      split: test[:10%]
      args: ar
    metrics:
    - type: wer
      value: 0.627304825421734
      name: Wer
---

# wav2vec2-large-xls-r-300m-arabic-colab

This model is a fine-tuned version of [batoula187/wav2vec2-large-xls-r-300m-arabic-colab](https://huggingface.co/batoula187/wav2vec2-large-xls-r-300m-arabic-colab) on the common_voice_17_0 dataset (Arabic `ar` config).
It achieves the following results on the evaluation set (the `test[:10%]` split):
- Loss: 1.5330
- Wer: 0.6273

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0457        | 1.6901  | 200  | 1.5030          | 0.6377 |
| 0.0408        | 3.3803  | 400  | 1.4683          | 0.6503 |
| 0.0693        | 5.0704  | 600  | 1.6023          | 0.6897 |
| 0.0766        | 6.7606  | 800  | 1.3947          | 0.6709 |
| 0.0653        | 8.4507  | 1000 | 1.5052          | 0.6858 |
| 0.0542        | 10.1408 | 1200 | 1.6550          | 0.6999 |
| 0.0535        | 11.8310 | 1400 | 1.4820          | 0.6591 |
| 0.0645        | 13.5211 | 1600 | 1.5134          | 0.6732 |
| 0.0583        | 15.2113 | 1800 | 1.4606          | 0.6561 |
| 0.0551        | 16.9014 | 2000 | 1.4476          | 0.6534 |
| 0.0462        | 18.5915 | 2200 | 1.5556          | 0.6557 |
| 0.0447        | 20.2817 | 2400 | 1.5289          | 0.6503 |
| 0.0395        | 21.9718 | 2600 | 1.5145          | 0.6434 |
| 0.0327        | 23.6620 | 2800 | 1.5916          | 0.6475 |
| 0.0317        | 25.3521 | 3000 | 1.5830          | 0.6526 |
| 0.0276        | 27.0423 | 3200 | 1.5935          | 0.6432 |
| 0.026         | 28.7324 | 3400 | 1.5330          | 0.6273 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
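
## Example usage

A minimal inference sketch for this CTC checkpoint with `transformers` (the audio path `sample.wav` and the 16 kHz mono preprocessing are illustrative assumptions, not part of the original training setup):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "batoula187/wav2vec2-large-xls-r-300m-arabic-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# "sample.wav" is a placeholder; use any Arabic speech clip.
waveform, sample_rate = torchaudio.load("sample.wav")
# XLS-R checkpoints expect 16 kHz mono input.
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
speech = waveform.mean(dim=0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

The Wer column above is a word error rate; a score of the same kind can be computed from predictions and references with the `evaluate` library (a sketch; the exact text normalization behind the reported numbers is not documented here, so results may differ slightly):

```python
import evaluate

wer_metric = evaluate.load("wer")  # uses the jiwer backend
print(wer_metric.compute(
    predictions=["model transcription goes here"],
    references=["reference transcription goes here"],
))
```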