smajumdar94 committed
Commit 453047c
1 Parent(s): 777c43b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -323,7 +323,7 @@ This model provides transcribed speech as a string for a given audio sample.
 
 ## Model Architecture
 
- FastConformer is an optimized version of the Conformer model [1] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
+ FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
 
 ## Training
 
@@ -380,7 +380,7 @@ Although this model isn’t supported yet by Riva, the [list of supported models
 Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
 
 ## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
+ [1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
 
 [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
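
The paragraph touched by this commit describes a Hybrid Transducer-CTC FastConformer, i.e. one encoder trained jointly with an RNNT (Transducer) decoder and a CTC decoder. As a minimal, hedged sketch of what that looks like at inference time with NVIDIA NeMo (the model name, audio path, and the decoder-switching call below are illustrative assumptions, not details taken from this commit):

```python
# Minimal sketch: load a Hybrid Transducer-CTC FastConformer checkpoint with
# NVIDIA NeMo and transcribe one file. Names marked as placeholders are
# assumptions for illustration, not taken from this commit.
import nemo.collections.asr as nemo_asr

# Restore a pretrained hybrid model (placeholder model name).
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/stt_en_fastconformer_hybrid_large_pc"
)

# A hybrid checkpoint carries both an RNNT (Transducer) and a CTC decoder;
# selecting the CTC head for decoding is assumed to look like this.
asr_model.change_decoding_strategy(decoder_type="ctc")

# Transcribe a 16 kHz mono WAV file (placeholder path).
print(asr_model.transcribe(["sample_audio.wav"])[0])
```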