---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-xls-r-300m-swedish
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      type: mozilla-foundation/common_voice_8_0
      name: Common Voice sv-SE
      args: sv-SE
    metrics:
    - type: wer
      value: 38.57
      name: Test WER
    - type: cer
      value: 10.98
      name: Test CER
---

# wav2vec2-large-xls-r-300m-Swedish

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Swedish (sv-SE) subset of the mozilla-foundation/common_voice_8_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.4286
- WER: 0.2729
- CER: 0.0858

On the Common Voice 8.0 sv-SE test split it scores 38.57 WER and 10.98 CER, as reported in the metadata above.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 4.6203        | 5.49  | 500  | 2.8904          | 1.0    | 1.0    |
| 1.147         | 10.98 | 1000 | 0.5255          | 0.4107 | 0.1304 |
| 0.5246        | 16.48 | 1500 | 0.4598          | 0.3342 | 0.1058 |
| 0.378         | 21.97 | 2000 | 0.4316          | 0.2991 | 0.0949 |
| 0.298         | 27.47 | 2500 | 0.4286          | 0.2729 | 0.0858 |

### Framework versions

- Transformers 4.17.0.dev0
- PyTorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
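### Usage

The card does not include an inference snippet, so here is a minimal transcription sketch using the standard `transformers` CTC API for wav2vec 2.0 checkpoints. The repo id and audio path below are placeholders; substitute the actual Hub id of this model and your own recording.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "wav2vec2-xls-r-300m-swedish"  # placeholder: use the actual Hub repo id

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# XLS-R checkpoints expect 16 kHz mono input; resample if needed.
speech, sample_rate = torchaudio.load("sample.wav")  # placeholder path, assumed mono
if sample_rate != 16_000:
    speech = torchaudio.functional.resample(speech, sample_rate, 16_000)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame; decode collapses repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```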
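### Evaluation

The test WER/CER in the metadata correspond to the Common Voice 8.0 sv-SE test split. A sketch of such an evaluation follows, using `datasets` and `jiwer`; the punctuation-stripping regex is an assumption, since the exact text normalization behind the reported numbers is not stated in this card.

```python
import re

import torch
from datasets import Audio, load_dataset
from jiwer import cer, wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "wav2vec2-xls-r-300m-swedish"  # placeholder: use the actual Hub repo id

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Common Voice 8.0 is gated: accept the terms on the Hub and log in first.
test = load_dataset("mozilla-foundation/common_voice_8_0", "sv-SE", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

# Assumed normalization: lowercase and strip common punctuation.
punctuation = re.compile(r'[,?.!;:"“%‘”-]')

predictions, references = [], []
for sample in test:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predictions.append(processor.batch_decode(torch.argmax(logits, dim=-1))[0].lower())
    references.append(punctuation.sub("", sample["sentence"]).lower())

print(f"WER: {100 * wer(references, predictions):.2f}")
print(f"CER: {100 * cer(references, predictions):.2f}")
```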
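### Training configuration (sketch)

For reference, here is roughly how the hyperparameters listed above map onto `transformers.TrainingArguments`. This is a reconstruction, not the original training script; the Adam betas and epsilon shown above are the optimizer defaults, and `output_dir` is illustrative.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-swedish",  # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 64 x 2 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```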