---
license: mit
---

## Model Overview

This is the "large" version of Conformer-CTC (around 120M parameters), trained on NeMo ASRSet with around 16,000 hours of English speech. The model transcribes speech in the lower-case English alphabet, along with spaces and apostrophes.

## NVIDIA NeMo

To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically load the model from NGC

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name="stt_en_conformer_ctc_large")
```
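
Once loaded, the model can also transcribe audio directly from Python. A minimal sketch (the wav path below is a placeholder; `transcribe` accepts a list of paths to 16 kHz mono-channel wav files and returns one transcription per file):

```python
# Transcribe a batch of audio files; returns a list of text transcriptions.
# "sample.wav" is a placeholder path to a 16 kHz mono-channel wav file.
transcriptions = asr_model.transcribe(["sample.wav"])
print(transcriptions[0])
```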

### Transcribing speech with this model

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="stt_en_conformer_ctc_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

### Input

This model accepts 16 kHz (16,000 Hz) mono-channel audio (wav files) as input.
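
If your recordings use a different sample rate or have multiple channels, convert them before transcription. A minimal sketch using torchaudio (torchaudio is an assumption here, not part of the original card; file paths are placeholders):

```python
import torchaudio

# Load the recording; torchaudio returns (waveform, sample_rate).
waveform, sr = torchaudio.load("input.wav")  # placeholder path

# Downmix to mono if needed.
if waveform.shape[0] > 1:
    waveform = waveform.mean(dim=0, keepdim=True)

# Resample to the 16 kHz rate the model expects.
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)

torchaudio.save("input_16k.wav", waveform, 16000)
```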
37
+
38
+ ### Output
39
+
40
+ This model provides transcribed speech as a string for a given audio sample.
41
+
42
+ ## Production Deployment
43
+
44
+ This model can be efficiently deployed with [NVIDIA Riva](https://developer.nvidia.com/riva) on prem or with most popular cloud providers.
45
+
46
+

## Model Architecture

Conformer-CTC is a non-autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of the Transducer. You may find more details on this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).

## Training

The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
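
As a rough sketch of how a fine-tuning run could be launched with that script and config (the manifest paths and the specific Hydra overrides below are illustrative assumptions, not values from this card):

```shell
python [NEMO_GIT_FOLDER]/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py \
  --config-path="../conf/conformer" --config-name="conformer_ctc_bpe" \
  model.train_ds.manifest_filepath="<TRAIN MANIFEST>" \
  model.validation_ds.manifest_filepath="<VALIDATION MANIFEST>" \
  model.tokenizer.dir="<TOKENIZER DIRECTORY>" \
  model.tokenizer.type="bpe"
```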

The tokenizers for these models were built using the text transcripts of the training set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
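
A sketch of how such a tokenizer might be built with that script (the flag names below are assumptions based on the script's documented usage; the card's models use a 128-token SentencePiece Unigram vocabulary):

```shell
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
  --manifest="<TRAIN MANIFEST>" \
  --data_root="<OUTPUT TOKENIZER DIRECTORY>" \
  --tokenizer="spe" \
  --spe_type="unigram" \
  --vocab_size=128
```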

The checkpoint of the language model used as the neural rescorer can be found [here](https://ngc.nvidia.com/catalog/models/nvidia:nemo:asrlm_en_transformer_large_ls). You may find more info on how to train and use language models for ASR models here: [ASR Language Modeling](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html).
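
For instance, those docs describe beam-search decoding with an N-gram LM; a rough sketch of such a run (the script path and flag names are assumptions drawn from those docs, and the checkpoint, manifest and KenLM paths are placeholders):

```shell
python [NEMO_GIT_FOLDER]/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py \
  --nemo_model_file="<PATH TO .nemo CHECKPOINT>" \
  --input_manifest="<EVAL MANIFEST>" \
  --kenlm_model_file="<PATH TO KENLM BINARY>" \
  --beam_width=128 \
  --beam_alpha=1.0 \
  --beam_beta=1.0
```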

### Datasets

All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:

- LibriSpeech (960 hours of English speech)
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual LibriSpeech (MLS EN) - 2,000-hour subset
- Mozilla Common Voice (v7.0)

Note: older versions of the model may have been trained on a smaller set of datasets.

## Performance

The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.

| Version | Tokenizer             | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MLS Dev | MCV Test 6.1 | Train Dataset     |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|----------|---------|--------------|-------------------|
| 1.6.0   | SentencePiece Unigram | 128             | 4.3           | 2.2           | 2.0        | 2.9       | 7.0        | 7.2      | 6.5     | 8.0          | NeMo ASRSET 2.0   |
| 1.0.0   | SentencePiece Unigram | 128             | 5.4           | 2.5           | 2.1        | 3.0       | 7.9        | -        | -       | -            | NeMo ASRSET 1.4.1 |
| rc1.0.0 | WordPiece             | 128             | 6.3           | 2.7           | -          | -         | -          | -        | -       | -            | LibriSpeech       |

You may use language models to improve the accuracy of the models. The WER (%) of the latest model with different language modeling techniques is reported in the following table.

| Language Modeling                      | Training Dataset        | LS test-other | LS test-clean | Comment                                                 |
|----------------------------------------|-------------------------|---------------|---------------|---------------------------------------------------------|
| N-gram LM                              | LS Train + LS LM Corpus | 3.5           | 1.8           | N=10, beam_width=128, n_gram_alpha=1.0, n_gram_beta=1.0 |
| Neural Rescorer (Transformer)          | LS Train + LS LM Corpus | 3.4           | 1.7           | N=10, beam_width=128                                    |
| N-gram + Neural Rescorer (Transformer) | LS Train + LS LM Corpus | 3.2           | 1.8           | N=10, beam_width=128, n_gram_alpha=1.0, n_gram_beta=1.0 |

## Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.

## References

[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)

[2] [Google SentencePiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

## License

License to use this model is covered by the NGC [TERMS OF USE](https://ngc.nvidia.com/legal/terms) unless another License/Terms of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC [TERMS OF USE](https://ngc.nvidia.com/legal/terms).