---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-j-phoneme-common-test
  results: []
---

# wav2vec2-large-xls-r-300m-j-phoneme-common-test

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0001

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1488        | 7.14  | 2000  | 0.0788          | 0.0919 |
| 0.0308        | 14.28 | 4000  | 0.0155          | 0.0271 |
| 0.0121        | 21.43 | 6000  | 0.0070          | 0.0103 |
| 0.0067        | 28.57 | 8000  | 0.0059          | 0.0067 |
| 0.0025        | 35.71 | 10000 | 0.0143          | 0.0180 |
| 0.0001        | 42.85 | 12000 | 0.0000          | 0.0001 |
| 0.0           | 50.0  | 14000 | 0.0000          | 0.0001 |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
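
The training script itself is not included in this card. As a rough illustration of how the hyperparameters listed above map onto a `transformers` configuration, the following sketch builds an equivalent `TrainingArguments` object; the `output_dir` value and the choice of the `Trainer` API are assumptions, not a record of the original run.

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameter list above; the original training script is not available.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-j-phoneme-common-test",  # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # gives a total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,                       # Native AMP mixed-precision training
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 match the TrainingArguments defaults.
```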
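
Since the card provides no usage snippet, here is a minimal inference sketch assuming the checkpoint is loaded with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes; the repository id and audio file name are placeholders, and XLS-R models expect 16 kHz mono input.

```python
# Minimal inference sketch (placeholder repo id and file name).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<namespace>/wav2vec2-large-xls-r-300m-j-phoneme-common-test"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono waveform.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # predicted phoneme sequence for the clip
```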