This acoustic model is a fine-tuned version of facebook/wav2vec2-xls-r-300m for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in this paper and first released at this page.
Note: there is a version with a KenLM language model used in the decoding phase that produces better transcriptions: Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained with the wav2vec 2.0 objective on 436k hours of unlabeled speech in 128 languages, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107.
This model is a fine-tuned version of the pretrained model (the 300-million-parameter variant) for Finnish ASR.
You can use this model for the Finnish ASR (speech-to-text) task.
Check the run-finnish-asr-models.ipynb notebook in this repository for a detailed example of how to use this model.
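As a quick start, here is a minimal transcription sketch (assuming 16 kHz mono input; the file path is a placeholder, and the notebook shows the full walkthrough):

```python
# Minimal Finnish speech-to-text sketch with this model.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "aapot/wav2vec2-xlsr-300m-finnish"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder path; the model expects 16 kHz audio.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding (no language model).
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```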
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best on fairly short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in this blog post.
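For long files, the transformers ASR pipeline supports chunked inference; here is a sketch of that approach (the chunk and stride lengths below are illustrative choices, not values taken from the blog post):

```python
# Chunked inference for long audio with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-300m-finnish",
)

# chunk_length_s splits the input into overlapping windows with
# stride_length_s of context on each side, then stitches the
# transcriptions back together.
result = asr("long_audio.wav", chunk_length_s=12, stride_length_s=2)
print(result["text"])
```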
The vast majority of the data used for fine-tuning came from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:--------|:------|:-----------------|
| Common Voice 7.0 Finnish train + evaluation + other splits | 9.70 h | 3.52 % |
| Finnish parliament session 2 | 0.24 h | 0.09 % |
| VoxPopuli Finnish | 21.97 h | 7.97 % |
| CSS10 Finnish | 10.32 h | 3.74 % |
| Aalto Finnish Parliament ASR Corpus | 228.00 h | 82.73 % |
| Finnish Broadcast Corpus | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples of at most 20 seconds in length.
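A sketch of that kind of length filter with the 🤗 Datasets library, using the Common Voice split as an example (the column names and exact filtering logic of the original preprocessing are assumptions):

```python
# Keep only samples whose decoded audio is at most 20 seconds long.
from datasets import load_dataset, Audio

MAX_SECONDS = 20

# Common Voice requires accepting the dataset terms on the Hub first.
ds = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def is_short_enough(example):
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"] <= MAX_SECONDS

ds = ds.filter(is_short_enough)
```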
This model was trained during the Robust Speech Challenge Event organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available here. We only modified its data loading for our custom datasets.
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: 8-bit Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
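As a rough illustration, here is how those hyperparameters could map onto transformers TrainingArguments, with the 8-bit Adam optimizer coming from the bitsandbytes library. This is a hedged sketch, not the exact training setup: the output path and the Trainer wiring are assumptions.

```python
# Sketch: mapping the listed hyperparameters onto TrainingArguments.
import bitsandbytes as bnb
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-300m-finnish",  # hypothetical output path
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)

# 8-bit Adam with the listed betas/epsilon ("model" defined elsewhere);
# one way to use it is to pass it to Trainer via the optimizers argument:
# optimizer = bnb.optim.Adam8bit(model.parameters(), lr=5e-4,
#                                betas=(0.9, 0.999), eps=1e-8)
# trainer = Trainer(..., optimizers=(optimizer, None))
```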
The facebook/wav2vec2-xls-r-300m model was initialized with the following hyperparameters (see the sketch after this list):
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
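For illustration, these settings correspond to a from_pretrained call along the following lines (a sketch; in a real fine-tuning run you would also pass the tokenizer's vocab_size and pad_token_id so the CTC head matches the Finnish vocabulary):

```python
# Sketch: initializing the pretrained checkpoint with the listed settings.
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    attention_dropout=0.094,
    hidden_dropout=0.047,
    feat_proj_dropout=0.04,
    mask_time_prob=0.082,
    layerdrop=0.041,
    activation_dropout=0.055,
    ctc_loss_reduction="mean",
    # In practice, also set vocab_size and pad_token_id from your tokenizer.
)
```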
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:--------------|:------|:-----|:----------------|:----|
The following framework versions were used:
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
Evaluation was done with the Common Voice 7.0 Finnish test split.
To evaluate this model, run the eval.py script in this repository:

```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
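If you prefer to compute the metrics yourself on your own transcriptions, the 🤗 Evaluate library provides WER and CER implementations; a minimal sketch (the example strings below are hypothetical, not from our evaluation):

```python
# Sketch: computing WER/CER with the 🤗 Evaluate library instead of eval.py.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["moi miten menee"]   # model transcriptions (hypothetical)
references = ["moi mitens menee"]   # ground-truth transcripts (hypothetical)

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```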
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|:--------------|:-----------------|:--------------|:-----------------|
- Aapo Tanskanen, Hugging Face profile, LinkedIn profile
- Rasmus Toivanen, Hugging Face profile, LinkedIn profile
Feel free to contact us for more details 🤗