smajumdar94 committed
Commit ab74426
1 Parent(s): 377e60e

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -179,7 +179,7 @@ This model provides transcribed speech as a string for a given audio sample.
 
  ## Model Architecture
 
- FastConformer is an optimized version of the Conformer model [1] with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
+ FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
 
  ## Training
 
@@ -232,7 +232,7 @@ Although this model isn’t supported yet by Riva, the [list of supported models
  Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
 
  ## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
+ [1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
 
  [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
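
The architecture line changed above describes a hybrid Transducer-CTC FastConformer trained with a joint decoder loss. As a minimal usage sketch (not part of this commit), assuming the checkpoint is a NeMo hybrid model loadable via `EncDecHybridRNNTCTCBPEModel` and substituting a real model name for the placeholder, transcription and decoder switching might look like this:

```python
# Minimal sketch, not part of this commit: loading a hybrid Transducer-CTC
# FastConformer checkpoint with NVIDIA NeMo. The model name is a placeholder;
# substitute the checkpoint this README documents.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="<model-name>"  # placeholder, not given in this diff
)

# The Transducer (RNN-T) decoder is used by default; because the model is
# trained with a joint Transducer and CTC loss, the CTC decoder can be
# selected instead.
asr_model.change_decoding_strategy(decoder_type="ctc")

# Transcribe a 16 kHz mono WAV file (path is illustrative).
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```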