# Wav2vec2-base pretraining for Danish

This wav2vec2-base model has been pretrained on ~1,300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we are allowed to distribute the pretrained model.
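As a minimal sketch of how a wav2vec2-base checkpoint like this one can be used with HuggingFace `transformers` (the actual model identifier on the Hub is not stated here, so the code instantiates the base architecture locally and only notes where `from_pretrained` would go):

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2Model

# Default Wav2Vec2Config corresponds to the wav2vec2-base architecture
# (hidden_size=768, 12 transformer layers). To load the pretrained Danish
# weights instead, you would use:
#   model = Wav2Vec2Model.from_pretrained("<model-id-on-the-hub>")  # placeholder
config = Wav2Vec2Config()
model = Wav2Vec2Model(config)
model.eval()

# One second of dummy 16 kHz mono audio; wav2vec2 expects raw waveforms
# as float tensors of shape (batch, samples).
waveform = torch.zeros(1, 16000)
with torch.no_grad():
    out = model(waveform)

# The encoder downsamples the waveform into ~50 frames per second,
# each represented by a 768-dimensional hidden state.
print(out.last_hidden_state.shape)
```

A pretrained model like this is typically fine-tuned for downstream ASR by adding a CTC head (e.g. `Wav2Vec2ForCTC`) on top of these hidden states.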