---
language:
- uz
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: xls-r-uzbek-cv8
  results: []
---

# xls-r-uzbek-cv8

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UZ dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2924
- Wer: 0.3780
- Cer: 0.0760

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 3.1444        | 0.4055 | 500   | 3.1200          | 1.0    | 1.0    |
| 2.9488        | 0.8110 | 1000  | 2.9562          | 1.0    | 0.9807 |
| 1.4553        | 1.2165 | 1500  | 0.7868          | 0.7034 | 0.1644 |
| 1.1495        | 1.6221 | 2000  | 0.5598          | 0.6076 | 0.1337 |
| 1.041         | 2.0276 | 2500  | 0.4650          | 0.5537 | 0.1174 |
| 0.9524        | 2.4331 | 3000  | 0.4204          | 0.5098 | 0.1061 |
| 0.902         | 2.8386 | 3500  | 0.3919          | 0.4984 | 0.1026 |
| 0.8505        | 3.2441 | 4000  | 0.3688          | 0.4678 | 0.0965 |
| 0.8353        | 3.6496 | 4500  | 0.3491          | 0.4488 | 0.0915 |
| 0.8015        | 4.0552 | 5000  | 0.3410          | 0.4356 | 0.0896 |
| 0.7771        | 4.4607 | 5500  | 0.3367          | 0.4330 | 0.0883 |
| 0.7894        | 4.8662 | 6000  | 0.3274          | 0.4201 | 0.0858 |
| 0.7624        | 5.2717 | 6500  | 0.3266          | 0.4115 | 0.0835 |
| 0.7522        | 5.6772 | 7000  | 0.3172          | 0.4072 | 0.0825 |
| 0.7545        | 6.0827 | 7500  | 0.3096          | 0.4034 | 0.0817 |
| 0.7412        | 6.4882 | 8000  | 0.3062          | 0.4014 | 0.0810 |
| 0.7405        | 6.8938 | 8500  | 0.3057          | 0.3933 | 0.0796 |
| 0.703         | 7.2993 | 9000  | 0.2966          | 0.3894 | 0.0784 |
| 0.7091        | 7.7048 | 9500  | 0.3000          | 0.3895 | 0.0784 |
| 0.7117        | 8.1103 | 10000 | 0.2988          | 0.3881 | 0.0781 |
| 0.6871        | 8.5158 | 10500 | 0.2939          | 0.3832 | 0.0771 |
| 0.6942        | 8.9213 | 11000 | 0.2950          | 0.3816 | 0.0766 |
| 0.6919        | 9.3268 | 11500 | 0.2910          | 0.3781 | 0.0760 |
| 0.6756        | 9.7324 | 12000 | 0.2927          | 0.3785 | 0.0760 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
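
## Inference example (sketch)

The card itself does not include a usage snippet, so the following is a minimal sketch of how a Wav2Vec2/XLS-R CTC checkpoint like this one is typically loaded for transcription. The repo id `<your-namespace>/xls-r-uzbek-cv8` and the file `example.wav` are placeholders, and greedy CTC decoding is assumed rather than taken from the original training setup.

```python
# Minimal ASR sketch for a Wav2Vec2/XLS-R CTC checkpoint (not from the original card).
# Replace the placeholder repo id with the actual Hub location of this model.
import torch
import librosa
from transformers import AutoModelForCTC, AutoProcessor

repo_id = "<your-namespace>/xls-r-uzbek-cv8"  # placeholder repo id

processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCTC.from_pretrained(repo_id)
model.eval()

# XLS-R models expect 16 kHz mono audio.
speech, _ = librosa.load("example.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```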
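
## Reproducing the training configuration (sketch)

The hyperparameter list above maps directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction from the listed values, not the original training script; the evaluation cadence of 500 steps is an assumption inferred from the results table, and everything else follows the stated hyperparameters (Adam betas and epsilon are the library defaults).

```python
# Reconstruction of the listed hyperparameters as TrainingArguments (not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-uzbek-cv8",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size 4 * 4 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10.0,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # assumption: evaluate every 500 steps, as in the results table
    eval_steps=500,
)
```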
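
## Evaluation (sketch)

The reported WER and CER come from the Common Voice 8.0 Uzbek evaluation split. The sketch below shows one way such numbers could be recomputed with the `evaluate` library; it assumes a hypothetical `transcribe()` helper wrapping the inference code above, requires accepting the Common Voice terms on the Hub to download the gated dataset, and does not reproduce the (undocumented) text normalization used for the reported scores, so results may differ slightly.

```python
# Sketch: recompute WER/CER on Common Voice 8.0 (uz); `transcribe` is a hypothetical helper.
import evaluate
from datasets import load_dataset, Audio

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

test = load_dataset("mozilla-foundation/common_voice_8_0", "uz", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in test:
    predictions.append(transcribe(sample["audio"]["array"]))  # hypothetical helper around the model
    references.append(sample["sentence"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```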