ESPnet
102 languages
audio
self-supervised-learning
speech-recognition
wanchichen committed
Commit 466c8ef
1 Parent(s): 9da4213

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -124,7 +124,7 @@ license: cc-by-4.0
 [Paper](https://arxiv.org/abs/2309.15317)
 
 This model was trained by [William Chen](https://wanchichen.github.io/) using ESPNet2's SSL recipe in [espnet](https://github.com/espnet/espnet/).
-WavLabLM is an self-supervised audio encoder pre-trained on 40,000 hours of multilingual data across 136 languages. This specific variant, WavLabLM-MK, uses a K-means model trained on English data for the quantization, making it especially strong for European languages.
+WavLabLM is a self-supervised audio encoder pre-trained on 40,000 hours of multilingual data across 136 languages. This specific variant, WavLabLM-EK, uses a K-means model trained on English data for the quantization, making it especially strong for European languages.
 
 
 ```BibTex
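
The description added in this commit notes that WavLabLM-EK uses a K-means model trained on English data for quantization, i.e. HuBERT-style discrete pseudo-labels derived by clustering frame-level features. The sketch below is purely illustrative and is not the ESPnet recipe: it assumes scikit-learn's `MiniBatchKMeans`, random placeholder arrays in place of real SSL encoder features, and hypothetical names (`fit_kmeans_codebook`, 768-dimensional features, a 500-cluster codebook).

```python
# Minimal sketch, assuming scikit-learn and NumPy; placeholder arrays stand in
# for real SSL encoder features. Cluster count and feature dimension are
# illustrative, not the values used to train WavLabLM-EK.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_kmeans_codebook(features: np.ndarray, n_clusters: int = 500) -> MiniBatchKMeans:
    """Fit a K-means codebook on frame-level features of shape (n_frames, feat_dim)."""
    km = MiniBatchKMeans(n_clusters=n_clusters, batch_size=10_000, random_state=0)
    km.fit(features)
    return km

def quantize(km: MiniBatchKMeans, features: np.ndarray) -> np.ndarray:
    """Assign each frame to its nearest cluster; the indices act as discrete pseudo-labels."""
    return km.predict(features)

# Hypothetical example: English-only features fit the codebook, and
# multilingual features are then quantized into discrete training targets.
english_feats = np.random.randn(50_000, 768).astype(np.float32)
multilingual_feats = np.random.randn(2_000, 768).astype(np.float32)
codebook = fit_kmeans_codebook(english_feats)
targets = quantize(codebook, multilingual_feats)
print(targets.shape)  # (2000,)
```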