patrickvonplaten committed on
Commit 9c280b4
1 Parent(s): 8b63e4d

Update README.md

Files changed (1)
  1. README.md +4 -8
README.md CHANGED
@@ -14,19 +14,15 @@ license: apache-2.0
 
[Facebook's Wav2Vec2 Conformer (TODO-add link)]()
 
- Wav2Vec2 Conformer with relative position embeddings, pretrained on 960h hours of Librispeech and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
+ Wav2Vec2 Conformer with relative position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
 
- [Paper (TODO)](https://arxiv.org/abs/2006.11477)
+ **Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
 
- Authors: ...
-
- **Abstract**
-
- ...
+ **Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
 
+ The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
-
# Usage
 
To transcribe audio files the model can be used as a standalone acoustic model as follows:
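
For reference, a minimal sketch of the standalone acoustic-model usage that the last line of the diff points to, written against the `Wav2Vec2ConformerForCTC` and `Wav2Vec2Processor` classes in 🤗 Transformers. The checkpoint identifier and the demo dataset below are assumptions (the model link in the card is still marked TODO), so substitute the actual repository id:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

# Placeholder repo id (assumption) -- replace with the actual checkpoint
# once the TODO link in the model card is filled in.
checkpoint = "facebook/wav2vec2-conformer-rel-pos-large-100h-ft"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ConformerForCTC.from_pretrained(checkpoint)

# Any 16kHz mono waveform works; here we grab a short Librispeech demo sample.
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
speech = ds[0]["audio"]["array"]

# Featurize the raw waveform and run the acoustic model.
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

Greedy argmax decoding over the CTC logits is the simplest option; a language model could instead be plugged in through a beam-search CTC decoder if lower word error rates are needed.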