patrickvonplaten committed on
Commit ca7f36f
1 Parent(s): 8a701fa

Update README.md

Files changed (1):
  1. README.md +4 -8
README.md CHANGED
@@ -43,21 +43,17 @@ model-index:
 
 # Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings
 
-[Facebook's Wav2Vec2 Conformer (TODO-add link)]()
+Wav2Vec2-Conformer with relative position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
 
-Wav2Vec2 Conformer with relative position embeddings, pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
+**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
 
-[Paper (TODO)](https://arxiv.org/abs/2006.11477)
+**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
 
-Authors: ...
+The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
 
-**Abstract**
-
-...
 
 The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
-
 # Usage
 
 To transcribe audio files the model can be used as a standalone acoustic model as follows:
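The updated card stresses that speech input must be sampled at 16 kHz before being fed to the model. As a hedged aside (this snippet is not part of the commit above), bringing arbitrary-rate audio to 16 kHz can be sketched with `scipy.signal.resample_poly`; the `to_16khz` helper name and the 8 kHz example rate are illustrative:

```python
import numpy as np
from scipy.signal import resample_poly

def to_16khz(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Resample a 1-D waveform to 16 kHz using polyphase filtering."""
    if orig_sr == target_sr:
        return audio
    g = np.gcd(orig_sr, target_sr)
    # Upsample by target_sr // g, then downsample by orig_sr // g.
    return resample_poly(audio, target_sr // g, orig_sr // g)

# One second of a 440 Hz tone sampled at 8 kHz becomes 16,000 samples at 16 kHz.
tone = np.sin(2 * np.pi * 440 * np.arange(8_000) / 8_000)
resampled = to_16khz(tone, orig_sr=8_000)
print(resampled.shape)  # (16000,)
```

Polyphase resampling applies the anti-aliasing filter for you, which is why it is preferred here over naive interpolation; the resampled array can then be passed straight to the model's feature extractor with `sampling_rate=16_000`.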