smajumdar94 committed
Commit d1314f7
1 Parent(s): 4b61c85

Update README.md

Files changed (1)
  1. README.md +250 -0
README.md CHANGED
@@ -1,3 +1,253 @@
---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National Singapore Corpus Part 1
- National Singapore Corpus Part 6
- mozilla-foundation/common_voice_7_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_en_citrinet_1024_gamma_0_25
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.2
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 4.3
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Multilingual LibriSpeech
      type: facebook/multilingual_librispeech
      config: english
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 7.2
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 7.0
      type: mozilla-foundation/common_voice_7_0
      config: en
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.0
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 8.0
      type: mozilla-foundation/common_voice_8_0
      config: en
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 9.48
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Wall Street Journal 92
      type: wsj_0
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.0
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Wall Street Journal 93
      type: wsj_1
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.9
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: National Singapore Corpus
      type: nsc_part_1
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 7.0
---

# NVIDIA Streaming Citrinet 1024 (en-US)

<style>
img {
 display: inline;
}
</style>

| [![Model architecture](https://img.shields.io/badge/Model_Arch-Citrinet--CTC-lightgrey#model-badge)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-145M-lightgrey#model-badge)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)
| [![Riva Compatible](https://img.shields.io/badge/NVIDIA%20Riva-compatible-brightgreen#model-badge)](#deployment-with-nvidia-riva) |

This model transcribes speech into the lowercase English alphabet, including spaces and apostrophes, and is trained on several thousand hours of English speech data.
It is a non-autoregressive "large" variant of Streaming Citrinet, with around 140 million parameters.
See the [model architecture](#model-architecture) section and the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).

## Usage

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

To train, fine-tune or play with the model, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you have installed the latest PyTorch version.

```shell
pip install nemo_toolkit['all']
```

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_en_citrinet_1024_gamma_0_25")
```

### Transcribing using Python

First, let's get a sample:

```shell
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```

Then simply do:

```python
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_en_citrinet_1024_gamma_0_25" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

### Input

This model accepts 16 kHz mono-channel audio (WAV files) as input.

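If your recordings use a different sample rate or channel count, convert them before transcription. A minimal sketch using ffmpeg (ffmpeg itself and the file names are assumptions for illustration, not NeMo requirements):

```shell
# Convert a placeholder input file to 16 kHz, mono, 16-bit PCM WAV
ffmpeg -i input.mp3 -ac 1 -ar 16000 -c:a pcm_s16le output.wav
```
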
### Output

This model provides transcribed speech as a string for a given audio sample.

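For instance, `transcribe()` returns one result per input file, so the sample downloaded above can be inspected directly. This is a minimal sketch; depending on the NeMo version, each returned entry is either a plain string or a hypothesis object with a `.text` attribute:

```python
# Transcribe the sample downloaded earlier; one result is returned per input file.
transcripts = asr_model.transcribe(['2086-149220-0033.wav'])
# In some NeMo versions the entries are hypothesis objects; use transcripts[0].text there.
print(transcripts[0])
```
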
## Model Architecture

Streaming Citrinet-1024 is a non-autoregressive, streaming variant of the Citrinet model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of a Transducer. You may find more details on this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet).

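To make the CTC decoding step concrete: greedy CTC decoding takes the most likely token per frame, collapses consecutive repeats, and drops the blank symbol. The sketch below is a framework-agnostic illustration; the token IDs and the blank index of 0 are made up and are not tied to this model's tokenizer:

```python
def ctc_greedy_decode(frame_token_ids, blank_id=0):
    """Collapse repeated frame-level predictions and remove CTC blanks."""
    decoded = []
    prev = None
    for tok in frame_token_ids:
        if tok != prev and tok != blank_id:
            decoded.append(tok)
        prev = tok
    return decoded

# Illustrative per-frame argmax output: repeats are merged and blanks (0) removed.
print(ctc_greedy_decode([0, 7, 7, 0, 0, 3, 3, 3, 0, 9]))  # -> [7, 3, 9]
```
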
## Training

The NeMo toolkit [3] was used to train the models over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml).

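A typical invocation of that script passes the config name and the data paths as Hydra overrides. The sketch below is an assumption about a reasonable setup, with placeholder manifest and tokenizer paths; verify the exact override names (e.g., `trainer.devices`) against the linked script and config for your NeMo version:

```shell
python [NEMO_GIT_FOLDER]/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py \
 --config-path="../conf/citrinet/" \
 --config-name="citrinet_1024" \
 model.train_ds.manifest_filepath="<path to train manifest>" \
 model.validation_ds.manifest_filepath="<path to validation manifest>" \
 model.tokenizer.dir="<path to tokenizer directory>" \
 model.tokenizer.type="bpe" \
 trainer.devices=-1 \
 trainer.max_epochs=100
```
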
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).

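For reference, building a comparable 1024-token SentencePiece unigram tokenizer [2] with that script might look like the following; the manifest path is a placeholder and the flags should be checked against the script's `--help` output for your NeMo version:

```shell
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
 --manifest="<path to train manifest>" \
 --data_root="<output directory for the tokenizer>" \
 --vocab_size=1024 \
 --tokenizer="spe" \
 --spe_type="unigram" \
 --log
```
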
### Datasets

All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:

- Librispeech: 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- Mozilla Common Voice (v7.0)

Note: older versions of the model may have been trained on a smaller set of datasets.

## Performance

The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.

| Version | Tokenizer             | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | Train Dataset   |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|-----------------|
| 1.0.0   | SentencePiece Unigram | 1024            | 7.6           | 3.4           | 2.5        | 4.0       | 6.2        | NeMo ASRSET 1.0 |

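As a concrete illustration of the metric itself, NeMo ships a `word_error_rate` helper that compares hypotheses against references. This is a minimal sketch with made-up transcripts; the import path matches recent NeMo releases and may differ in others:

```python
# Compute WER over a small batch of made-up hypothesis/reference pairs.
from nemo.collections.asr.metrics.wer import word_error_rate

hypotheses = ["the cat sat on the mat", "hello word"]
references = ["the cat sat on a mat", "hello world"]

wer = word_error_rate(hypotheses=hypotheses, references=references)
print(f"WER: {wer * 100:.2f}%")  # word_error_rate returns a fraction
```
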
While deploying with [NVIDIA Riva](https://developer.nvidia.com/riva), you can combine this model with external language models to further improve WER.

## Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.

## Deployment with NVIDIA Riva

For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.

Additionally, Riva provides:

* World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support

Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).

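As a rough sketch of the export step (the `nemo2riva` tool, its `--out` flag, and the file names below are assumptions to illustrate the flow; consult the Riva documentation for the supported build and deploy procedure):

```shell
# Convert a locally downloaded NeMo checkpoint into a Riva-deployable archive (paths are placeholders).
nemo2riva --out stt_en_citrinet_1024_gamma_0_25.riva stt_en_citrinet_1024_gamma_0_25.nemo
# The resulting .riva file is then consumed by riva-build / riva-deploy from Riva ServiceMaker.
```
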
## References

[1] [Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721)

[2] [Google SentencePiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)