

Continued pretraining of facebook/wav2vec2-xls-r-300m for 120,000 steps on 141,000 hours of speech from Danish radio (DR P1 and Radio24Syv, 2005 to 2021).

The model was pretrained on 16 kHz audio using fairseq and should be fine-tuned before it can perform speech recognition.
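Since the model expects 16 kHz input, audio at other sample rates must be resampled before fine-tuning. A minimal sketch of that preparation step is below; the resampling helper and the commented-out model id are illustrative assumptions, not part of the released checkpoint.

```python
# Sketch: preparing audio for ASR fine-tuning of this checkpoint.
# Assumption: mono float waveforms held as NumPy arrays; polyphase
# resampling via scipy is one reasonable choice, not the only one.
from math import gcd

import numpy as np
from scipy.signal import resample_poly

TARGET_SR = 16_000  # the model was pretrained on 16 kHz audio


def to_16khz(waveform: np.ndarray, orig_sr: int) -> np.ndarray:
    """Resample a mono waveform to the 16 kHz rate the model expects."""
    if orig_sr == TARGET_SR:
        return waveform
    g = gcd(TARGET_SR, orig_sr)
    return resample_poly(waveform, TARGET_SR // g, orig_sr // g)


# Fine-tuning would then load the checkpoint with a CTC head, e.g.:
# from transformers import Wav2Vec2ForCTC
# model = Wav2Vec2ForCTC.from_pretrained("<this-model-id>")
# ("<this-model-id>" is a placeholder for the model's Hub id.)
```

One second of 44.1 kHz audio resamples to 16,000 samples, matching the model's expected input rate.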

A fine-tuned version of this model for ASR can be found here.

The model was trained by Lasse Hansen (CHCAA) and Alvenir on the UCloud platform. Many thanks to the Royal Danish Library for providing access to the data.
