Titouan committed on
Commit
63e5d51
1 Parent(s): 013809d

Pushing models

Files changed (5)
  1. README.md +77 -0
  2. asr.ckpt +3 -0
  3. hyperparams.yaml +133 -0
  4. normalizer.ckpt +3 -0
  5. tokenizer.ckpt +3 -0
README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ language: "it"
+ thumbnail:
+ tags:
+ - ASR
+ - CTC
+ - Attention
+ - pytorch
+ license: "apache-2.0"
+ datasets:
+ - commonvoice
+ metrics:
+ - wer
+ - cer
+ ---
+
+ # CRDNN with CTC/Attention trained on CommonVoice Italian (No LM)
+
+ This repository provides all the necessary tools to perform automatic speech
+ recognition with an end-to-end system pretrained on CommonVoice (IT) within
+ SpeechBrain. For a better experience, we encourage you to learn more about
+ [SpeechBrain](https://speechbrain.github.io). The performance of the ASR model is:
+
+ | Release | Test CER | Test WER | GPUs |
+ |:-------------:|:--------------:|:--------------:|:-----------:|
+ | 07-03-21 | 5.40 | 16.61 | 2xV100 16GB |
+
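The CER and WER above are edit-distance metrics. As a minimal illustrative sketch in plain Python (not the SpeechBrain scoring code used to produce this table), the word error rate is the word-level Levenshtein distance between reference and hypothesis, normalised by the reference length:

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token lists,
    # kept to a single rolling row for O(len(hyp)) memory.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j - 1] + 1,     # insertion
                                   d[j] + 1,         # deletion
                                   prev + (r != h))  # substitution
    return d[-1]

def wer(reference, hypothesis):
    # WER (%) = word-level edit distance / number of reference words.
    ref_words = reference.split()
    return 100.0 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

Applying the same routine to character lists instead of word lists yields the CER.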
+ ## Pipeline description
+
+ This ASR system is composed of two linked blocks:
+ 1. A tokenizer (unigram) that transforms words into subword units, trained on
+ the training transcriptions (train.tsv) of CommonVoice (IT).
+ 2. An acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
+ N blocks of convolutional neural networks with normalisation and pooling on the
+ frequency domain. A bidirectional LSTM is then connected to a final DNN that produces
+ the acoustic representation given to the CTC and attention decoders.
+
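As a rough plain-Python sketch of what the CTC branch contributes at decoding time (the actual system uses SpeechBrain's attention-based beam searcher, configured in hyperparams.yaml), CTC best-path decoding collapses consecutive repeated labels and then removes the blank symbol:

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Standard CTC best-path rule: collapse repeats, then drop blanks."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out
```

With blank index 0 (as in this repository's hyperparams.yaml), a per-frame argmax sequence such as `[0, 3, 3, 0, 3, 5, 5, 0]` reduces to the label sequence `[3, 3, 5]`.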
+ ## Intended uses & limitations
+
+ This model has been primarily developed to be run within SpeechBrain as a pretrained ASR model
+ for the Italian language. Thanks to the flexibility of SpeechBrain, either of the two blocks
+ detailed above can be extracted and connected to your custom pipeline, as long as SpeechBrain is
+ installed.
+
+ ## Install SpeechBrain
+
+ First of all, please install SpeechBrain with the following command:
+
+ ```
+ pip install \\we hide ! SpeechBrain is still private :p
+ ```
+
+ Please note that we encourage you to read our tutorials and learn more about
+ [SpeechBrain](https://speechbrain.github.io).
+
+ ### Transcribing your own audio files
+
+ ```python
+ from speechbrain.pretrained import EncoderDecoderASR
+
+ asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-it")
+ asr_model.transcribe_file("path_to_your_file.wav")
+ ```
+
+ #### Referencing SpeechBrain
+
+ ```
+ @misc{SB2021,
+   author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua},
+   title = {SpeechBrain},
+   year = {2021},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   howpublished = {\url{https://github.com/speechbrain/speechbrain}},
+ }
+ ```
asr.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb07bae801b84794b333917a12de74c06e73aa467ea8591970259040f97ba8c3
+ size 592775157
hyperparams.yaml ADDED
@@ -0,0 +1,133 @@
+ # ################################
+ # Model: VGG2 + LSTM + time pooling
+ # Augmentation: SpecAugment
+ # Authors: Titouan Parcollet, Mirco Ravanelli, Peter Plantinga, Ju-Chieh Chou,
+ # and Abdel Heba 2020
+ # ################################
+
+ # Feature parameters (FBANKs, etc.)
+ sample_rate: 16000
+ n_fft: 400
+ n_mels: 80
+
+ # Model parameters
+ activation: !name:torch.nn.LeakyReLU
+ dropout: 0.15
+ cnn_blocks: 3
+ cnn_channels: (128, 200, 256)
+ inter_layer_pooling_size: (2, 2, 2)
+ cnn_kernelsize: (3, 3)
+ time_pooling_size: 4
+ rnn_class: !name:speechbrain.nnet.RNN.LSTM
+ rnn_layers: 5
+ rnn_neurons: 1024
+ rnn_bidirectional: True
+ dnn_blocks: 2
+ dnn_neurons: 1024
+ emb_size: 128
+ dec_neurons: 1024
+
+ # Outputs
+ output_neurons: 500  # BPE size, index(blank/eos/bos) = 0
+
+ # Decoding parameters
+ # Make sure that the bos and eos indices match the BPE ones
+ blank_index: 0
+ bos_index: 0
+ eos_index: 0
+ min_decode_ratio: 0.0
+ max_decode_ratio: 1.0
+ beam_size: 80
+ eos_threshold: 1.5
+ using_max_attn_shift: True
+ max_attn_shift: 140
+ ctc_weight_decode: 0.0
+ temperature: 1.50
+
+ normalize: !new:speechbrain.processing.features.InputNormalization
+     norm_type: global
+
+ compute_features: !new:speechbrain.lobes.features.Fbank
+     sample_rate: !ref <sample_rate>
+     n_fft: !ref <n_fft>
+     n_mels: !ref <n_mels>
+
+ enc: !new:speechbrain.lobes.models.CRDNN.CRDNN
+     input_shape: [null, null, !ref <n_mels>]
+     activation: !ref <activation>
+     dropout: !ref <dropout>
+     cnn_blocks: !ref <cnn_blocks>
+     cnn_channels: !ref <cnn_channels>
+     cnn_kernelsize: !ref <cnn_kernelsize>
+     inter_layer_pooling_size: !ref <inter_layer_pooling_size>
+     time_pooling: True
+     using_2d_pooling: False
+     time_pooling_size: !ref <time_pooling_size>
+     rnn_class: !ref <rnn_class>
+     rnn_layers: !ref <rnn_layers>
+     rnn_neurons: !ref <rnn_neurons>
+     rnn_bidirectional: !ref <rnn_bidirectional>
+     rnn_re_init: True
+     dnn_blocks: !ref <dnn_blocks>
+     dnn_neurons: !ref <dnn_neurons>
+
+ emb: !new:speechbrain.nnet.embedding.Embedding
+     num_embeddings: !ref <output_neurons>
+     embedding_dim: !ref <emb_size>
+
+ dec: !new:speechbrain.nnet.RNN.AttentionalRNNDecoder
+     enc_dim: !ref <dnn_neurons>
+     input_size: !ref <emb_size>
+     rnn_type: gru
+     attn_type: location
+     hidden_size: 1024
+     attn_dim: 1024
+     num_layers: 1
+     scaling: 1.0
+     channels: 10
+     kernel_size: 100
+     re_init: True
+     dropout: !ref <dropout>
+
+ ctc_lin: !new:speechbrain.nnet.linear.Linear
+     input_size: !ref <dnn_neurons>
+     n_neurons: !ref <output_neurons>
+
+ seq_lin: !new:speechbrain.nnet.linear.Linear
+     input_size: !ref <dec_neurons>
+     n_neurons: !ref <output_neurons>
+
+ log_softmax: !new:speechbrain.nnet.activations.Softmax
+     apply_log: True
+
+ asr_model: !new:torch.nn.ModuleList
+     - [!ref <enc>, !ref <emb>, !ref <dec>, !ref <ctc_lin>, !ref <seq_lin>]
+
+ tokenizer: !new:sentencepiece.SentencePieceProcessor
+
+ beam_searcher: !new:speechbrain.decoders.S2SRNNBeamSearcher
+     embedding: !ref <emb>
+     decoder: !ref <dec>
+     linear: !ref <seq_lin>
+     bos_index: !ref <bos_index>
+     eos_index: !ref <eos_index>
+     min_decode_ratio: !ref <min_decode_ratio>
+     max_decode_ratio: !ref <max_decode_ratio>
+     beam_size: !ref <beam_size>
+     eos_threshold: !ref <eos_threshold>
+     using_max_attn_shift: !ref <using_max_attn_shift>
+     max_attn_shift: !ref <max_attn_shift>
+     temperature: !ref <temperature>
+
+ modules:
+     compute_features: !ref <compute_features>
+     normalize: !ref <normalize>
+     asr_model: !ref <asr_model>
+     asr_encoder: !ref <enc>
+     asr_decoder: !ref <dec>
+     beam_searcher: !ref <beam_searcher>
+
+ pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
+     loadables:
+         asr: !ref <asr_model>
+         tokenizer: !ref <tokenizer>
normalizer.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc525f85eea303d62356ac3909e6c87a9bc90d0977643ce563d63c1aba5990fe
+ size 1785
tokenizer.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4790528a4f8360c84d8e0cf46e97a41e65664d44cbb8e4d7363242650a245a23
+ size 244732