erastorgueva-nv committed
Commit 1d60520
Parent: 80dc66f

Update README.md

Files changed (1): README.md (+30, −24)

README.md (updated):
tags:
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_es_conformer_transducer_large

  - name: Test WER
    type: wer
    value: 3.2
---

# NVIDIA Conformer-Transducer Large (es)

<style>
img {
  display: inline;
}
</style>

| [![Model architecture](https://img.shields.io/badge/Model_Arch-Conformer--Transducer-lightgrey#model-badge)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-120M-lightgrey#model-badge)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-es-lightgrey#model-badge)](#datasets)

This model transcribes speech in the lowercase Spanish alphabet including spaces, and was trained on a composite dataset comprising 1340 hours of Spanish speech. It is a "large" variant of Conformer-Transducer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.

## NVIDIA NeMo: Training

To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
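
If the installation succeeded, the ASR collection should import cleanly; a minimal smoke test (this snippet is an illustration, not part of the original card):

```python
# Sanity check: confirm NeMo and its ASR collection are importable.
import nemo
import nemo.collections.asr as nemo_asr

print(nemo.__version__)  # prints the installed NeMo release
```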

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_es_conformer_transducer_large")
```
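
The same pattern works for other checkpoints; NeMo model classes expose a `list_available_models()` classmethod that enumerates what `from_pretrained()` can fetch (the output depends on your NeMo version):

```python
import nemo.collections.asr as nemo_asr

# List the pretrained checkpoints this model class can load.
for info in nemo_asr.models.EncDecRNNTBPEModel.list_available_models():
    print(info.pretrained_model_name)
```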

### Transcribing using Python
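
With the model instantiated, transcription is a single call; a minimal sketch, assuming a local 16000 Hz mono file named `2086-149220-0033.wav`:

```python
# Transcribe one or more audio files; the decoded text is returned.
transcription = asr_model.transcribe(['2086-149220-0033.wav'])
print(transcription)
```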
 
### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_es_conformer_transducer_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
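
The same directory can also be processed from Python, since `transcribe()` accepts a list of paths; recent NeMo versions also take a `batch_size` argument (treat the exact return type as version-dependent):

```python
import glob

# Gather the wav files and transcribe them in batches.
wav_files = sorted(glob.glob("audio_dir/*.wav"))
# Note: for Transducer models, some NeMo versions return a (best, all) hypotheses pair.
transcripts = asr_model.transcribe(wav_files, batch_size=4)
print(transcripts)
```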

### Input

This model accepts 16000 Hz mono-channel audio (wav files) as input.
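
If your recordings are not already 16 kHz mono, convert them first; a small sketch using librosa and soundfile (assumed helper libraries, not dependencies stated by this card):

```python
import librosa
import soundfile as sf

# Load any audio file, downmixing to mono and resampling to 16 kHz.
audio, sr = librosa.load("input.mp3", sr=16000, mono=True)
sf.write("input_16k.wav", audio, sr)
```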
 
### Output

This model provides transcribed speech as a string for a given audio sample.
 
## Model Architecture

The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses Transducer loss/decoding instead of CTC loss. You can find more details about this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).

## Training

The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).

The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
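
Beyond the training script above, the checkpoint can also be fine-tuned directly from Python. A rough sketch under stated assumptions (the manifest path, batch size, and trainer settings are illustrative, and the exact data-config keys vary across NeMo versions):

```python
from omegaconf import OmegaConf
import pytorch_lightning as pl
import nemo.collections.asr as nemo_asr

# Start from the pretrained checkpoint rather than random weights.
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_es_conformer_transducer_large")

# Point the model at a NeMo-style manifest (one JSON line per utterance).
asr_model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "train_manifest.json",  # hypothetical path
    "sample_rate": 16000,
    "batch_size": 16,
    "shuffle": True,
}))

# NeMo models are PyTorch Lightning modules, so a standard Trainer drives fine-tuning.
trainer = pl.Trainer(max_epochs=5, accelerator="auto")
trainer.fit(asr_model)
```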

## Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.

## NVIDIA Riva: Deployment

[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.

Additionally, Riva provides:

* World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support

Although this model is not yet supported by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).

## References

[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)

[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

## License

License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.