smajumdar94 committed
Commit da899ef
Parent: 036ba9d

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -180,7 +180,7 @@ This model provides transcribed speech as a string for a given audio sample.
 
 ## Model Architecture
 
-FastConformer is an optimized version of the Conformer model [1] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
+FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
 
 ## Training
 
@@ -231,7 +231,7 @@ Although this model isn’t supported yet by Riva, the [list of supported models
 Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
 
 ## References
-[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
+[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
 
 [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
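
The changed line describes a model trained with joint Transducer and CTC decoder losses. As a rough sketch of what that Hybrid Transducer-CTC setup means at inference time in NeMo (not part of this commit; the checkpoint name below is a placeholder, and the exact `transcribe` argument names may differ across NeMo versions):

```python
# Minimal sketch, assuming a NeMo Hybrid Transducer-CTC FastConformer checkpoint.
# MODEL_NAME is a placeholder, not a real checkpoint id from this model card.
import nemo.collections.asr as nemo_asr

MODEL_NAME = "nvidia/<fastconformer-hybrid-checkpoint>"  # hypothetical placeholder

# Hybrid Transducer-CTC checkpoints load as EncDecHybridRNNTCTCBPEModel.
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name=MODEL_NAME)

# By default, transcription uses the Transducer (RNNT) decoder.
rnnt_hyps = asr_model.transcribe(["sample.wav"])

# The same encoder can also decode with the auxiliary CTC head.
asr_model.change_decoding_strategy(decoder_type="ctc")
ctc_hyps = asr_model.transcribe(["sample.wav"])
```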