---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- en
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-1B - English
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: en
    metrics:
    - name: Test WER
      type: wer
      value: 21.05
    - name: Test CER
      type: cer
      value: 8.44
    - name: Test WER (+LM)
      type: wer
      value: 17.31
    - name: Test CER (+LM)
      type: cer
      value: 7.77
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: en
    metrics:
    - name: Dev WER
      type: wer
      value: 20.53
    - name: Dev CER
      type: cer
      value: 9.31
    - name: Dev WER (+LM)
      type: wer
      value: 17.70
    - name: Dev CER (+LM)
      type: cer
      value: 8.93
---

# XLS-R-1B-ENGLISH

Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on English using the [Common Voice 8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset. When using this model, make sure that your speech input is sampled at 16 kHz.

This model was fine-tuned thanks to the GPU credits generously provided by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint

## Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset mozilla-foundation/common_voice_8_0 --config en --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```