Splend1dchan committed on
Commit
287652e
1 Parent(s): a4f73be

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
     value: None
 ---
 # Wav2Vec2-Large-10min-Lv60 + Self-Training
-
+# This is a direct state_dict transfer from fairseq to huggingface, the weights are identical
 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
 
 The large model pretrained and fine-tuned on 10min of Libri-Light and Librispeech on 16kHz sampled speech audio. Model was trained with [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model make sure that your speech input is also sampled at 16Khz.
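
Since the edited README documents a Wav2Vec2 CTC checkpoint that expects 16 kHz input, here is a minimal usage sketch with the standard `transformers` Wav2Vec2 API. The Hub id `Splend1dchan/wav2vec2-large-10min-lv60-self` and the audio file `speech.wav` are assumptions for illustration; the commit does not state the actual repository path.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical Hub id -- replace with the actual repository path of this checkpoint.
MODEL_ID = "Splend1dchan/wav2vec2-large-10min-lv60-self"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load an example recording and resample it to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("speech.wav")  # placeholder audio file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Convert the mono waveform to model inputs, run CTC inference, and decode.
inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

The resampling step is the part the README warns about: audio recorded at any other rate should be converted to 16 kHz before being passed to the processor.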