---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b22-wav2vec2-large-xls-r-romansh-colab
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_13_0
      type: common_voice_13_0
      config: rm-vallader
      split: test
      args: rm-vallader
    metrics:
    - name: Wer
      type: wer
      value: 0.4990684676292501
---

# b22-wav2vec2-large-xls-r-romansh-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7362
- Wer: 0.4991

## Model description

The base model is XLS-R (300M parameters), a cross-lingual speech representation model; this checkpoint fine-tunes it for automatic speech recognition of Romansh (Vallader).

## Intended uses & limitations

The model is intended for transcribing Romansh (Vallader) speech sampled at 16 kHz. With a word error rate of roughly 0.50 on the Common Voice 13.0 test split, its transcriptions should be expected to contain frequent errors.

## Training and evaluation data

Training and evaluation used the `rm-vallader` configuration of the Common Voice 13.0 dataset; the results reported above were computed on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8045 | 0.76 | 100 | 2.9604 | 1.0 |
| 2.9578 | 1.52 | 200 | 3.0626 | 1.0 |
| 2.9565 | 2.29 | 300 | 3.0432 | 1.0 |
| 2.9533 | 3.05 | 400 | 2.9304 | 1.0 |
| 2.9263 | 3.81 | 500 | 2.9134 | 1.0 |
| 2.9174 | 4.58 | 600 | 2.9022 | 1.0 |
| 2.9282 | 5.34 | 700 | 2.8967 | 1.0 |
| 2.8973 | 6.11 | 800 | 2.8477 | 1.0 |
| 2.6047 | 6.87 | 900 | 2.0269 | 1.0 |
| 1.6468 | 7.63 | 1000 | 1.0780 | 0.9029 |
| 1.1006 | 8.4 | 1100 | 0.8305 | 0.8319 |
| 0.8708 | 9.16 | 1200 | 0.7704 | 0.8055 |
| 0.7708 | 9.92 | 1300 | 0.6815 | 0.7385 |
| 0.6608 | 10.68 | 1400 | 0.6738 | 0.7212 |
| 0.6014 | 11.45 | 1500 | 0.6535 | 0.6940 |
| 0.5419 | 12.21 | 1600 | 0.6608 | 0.6639 |
| 0.4961 | 12.97 | 1700 | 0.6568 | 0.6372 |
| 0.4462 | 13.74 | 1800 | 0.6557 | 0.6362 |
| 0.4169 | 14.5 | 1900 | 0.6487 | 0.5985 |
| 0.3951 | 15.27 | 2000 | 0.7126 | 0.6376 |
| 0.3643 | 16.03 | 2100 | 0.6539 | 0.5859 |
| 0.3243 | 16.79 | 2200 | 0.6803 | 0.5946 |
| 0.3243 | 17.56 | 2300 | 0.6619 | 0.5745 |
| 0.2869 | 18.32 | 2400 | 0.6826 | 0.5592 |
| 0.2895 | 19.08 | 2500 | 0.6980 | 0.5524 |
| 0.2612 | 19.84 | 2600 | 0.6599 | 0.5445 |
| 0.2492 | 20.61 | 2700 | 0.6533 | 0.5394 |
| 0.2485 | 21.37 | 2800 | 0.7103 | 0.5494 |
| 0.2352 | 22.14 | 2900 | 0.7339 | 0.5501 |
| 0.2136 | 22.9 | 3000 | 0.7154 | 0.5470 |
| 0.2079 | 23.66 | 3100 | 0.7360 | 0.5389 |
| 0.2011 | 24.43 | 3200 | 0.7481 | 0.5263 |
| 0.1925 | 25.19 | 3300 | 0.7409 | 0.5186 |
| 0.193 | 25.95 | 3400 | 0.7334 | 0.5091 |
| 0.1874 | 26.71 | 3500 | 0.7493 | 0.5075 |
| 0.1802 | 27.48 | 3600 | 0.7362 | 0.5102 |
| 0.1736 | 28.24 | 3700 | 0.7427 | 0.5033 |
| 0.1725 | 29.01 | 3800 | 0.7404 | 0.5033 |
| 0.1684 | 29.77 | 3900 | 0.7362 | 0.4991 |

### Framework versions

- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
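
## How to use

A minimal inference sketch, assuming the standard `transformers` CTC decoding path for wav2vec 2.0 checkpoints; the repository id, audio file name, and use of `torchaudio` for loading/resampling below are placeholders and assumptions, not part of the original training run.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repository id; substitute the actual Hub id of this checkpoint.
MODEL_ID = "b22-wav2vec2-large-xls-r-romansh-colab"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load an utterance (placeholder file name), downmix to mono, and resample
# to the 16 kHz rate expected by XLS-R.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = waveform.mean(dim=0)
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```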