ybbwcwaps committed on
Commit
c578da5
1 Parent(s): d0226b2

some llllll

FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/.gitattributes ADDED
@@ -0,0 +1,39 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *.tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.db* filter=lfs diff=lfs merge=lfs -text
+ *.ark* filter=lfs diff=lfs merge=lfs -text
+ **/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
+ **/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
+ **/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
+ punc.pb filter=lfs diff=lfs merge=lfs -text
+ lm.pb filter=lfs diff=lfs merge=lfs -text
+ model.pb filter=lfs diff=lfs merge=lfs -text
+ vad.pb filter=lfs diff=lfs merge=lfs -text
+ lm/lm.pb filter=lfs diff=lfs merge=lfs -text
+ punc/punc.pb filter=lfs diff=lfs merge=lfs -text
+ vad/vad.pb filter=lfs diff=lfs merge=lfs -text
FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/README.md ADDED
@@ -0,0 +1,411 @@
+ ---
+ tasks:
+ - auto-speech-recognition
+ domain:
+ - audio
+ model-type:
+ - Non-autoregressive
+ frameworks:
+ - pytorch
+ backbone:
+ - transformer/conformer
+ metrics:
+ - CER
+ license: Apache License 2.0
+ language:
+ - cn
+ tags:
+ - FunASR
+ - Paraformer
+ - Alibaba
+ - INTERSPEECH 2022
+ datasets:
+   train:
+   - 60,000 hour industrial Mandarin task
+   test:
+   - AISHELL-1 dev/test
+   - AISHELL-2 dev_android/dev_ios/dev_mic/test_android/test_ios/test_mic
+   - WenetSpeech dev/test_meeting/test_net
+   - SpeechIO TIOBE
+   - 60,000 hour industrial Mandarin task
+ indexing:
+   results:
+   - task:
+       name: Automatic Speech Recognition
+     dataset:
+       name: 60,000 hour industrial Mandarin task
+       type: audio # optional
+       args: 16k sampling rate, 8404 characters # optional
+     metrics:
+     - type: CER
+       value: 8.53% # float
+       description: greedy search, without LM, avg.
+       args: default
+     - type: RTF
+       value: 0.0251 # float
+       description: GPU inference on V100
+       args: batch_size=1
+ widgets:
+ - task: auto-speech-recognition
+   model_revision: v2.0.4
+   inputs:
+   - type: audio
+     name: input
+     title: Audio
+   examples:
+   - name: 1
+     title: Example 1
+     inputs:
+     - name: input
+       data: git://example/asr_example.wav
+   inferencespec:
+     cpu: 8 # number of CPUs
+     memory: 4096
+   finetune-support: True
+ ---
+ 
+ 
+ # Highlights
+ - The Paraformer-large long-audio model integrates VAD, ASR, punctuation, and timestamp prediction; it can directly transcribe audio that is several hours long and outputs punctuated text with timestamps:
+   - ASR model: [Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) is a non-autoregressive speech recognition model that achieves SOTA results on several public Chinese datasets and can be quickly fine-tuned and used for inference on ModelScope.
+   - Hotword version: the [Paraformer-large hotword model](https://www.modelscope.cn/models/damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/summary) supports hotword customization: it boosts the words in a user-provided hotword list during decoding, improving hotword recall and accuracy.
+ 
+ 
+ ## <strong>[About the FunASR open-source project](https://github.com/alibaba-damo-academy/FunASR)</strong>
+ <strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> aims to build a bridge between academic research on speech recognition and its industrial application. By releasing the training and fine-tuning recipes of industrial-grade speech recognition models, it lets researchers and developers study and productionize ASR models more conveniently and helps the speech recognition ecosystem grow. Make speech recognition fun!
+ 
+ [**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
+ | [**What's new**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
+ | [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
+ | [**Service deployment**](https://www.funasr.com)
+ | [**Model zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
+ | [**Contact us**](https://github.com/alibaba-damo-academy/FunASR#contact)
+ 
+ 
+ ## Model description
+ 
+ Paraformer is an efficient non-autoregressive end-to-end speech recognition framework proposed by the DAMO Academy speech team. This repository hosts the general-purpose Chinese Paraformer model, trained on tens of thousands of hours of industrially annotated audio, which gives it strong general-purpose recognition quality. The model can be applied to scenarios such as voice input methods, voice navigation, and intelligent meeting minutes.
+ 
+ <p align="center">
+ <img src="fig/struct.png" alt="Paraformer model architecture" width="500" />
+ 
+ 
+ As shown above, Paraformer consists of five parts: Encoder, Predictor, Sampler, Decoder, and the loss function. The Encoder can use different network structures, e.g. self-attention, Conformer, or SAN-M. The Predictor is a two-layer FFN that predicts the number of target tokens and extracts the acoustic embedding of each target token. The Sampler is a module without learnable parameters that produces semantically informed embeddings from the acoustic embeddings and target embeddings. The Decoder is structured like its autoregressive counterpart but models the sequence bidirectionally (autoregressive decoders are unidirectional). Besides the cross-entropy (CE) loss and the discriminative MWER objective, the loss function also includes the MAE objective for the Predictor.
+ 
+ 
+ Its key components are:
+ - Predictor module: a Continuous Integrate-and-Fire (CIF) based predictor extracts the acoustic embedding of each target token, enabling a more accurate prediction of the number of tokens in the utterance.
+ - Sampler: through sampling, it turns acoustic embeddings and target-token embeddings into semantically informed embeddings, which, together with the bidirectional Decoder, strengthens the model's ability to use context.
+ - An MWER training criterion based on negative-sample sampling.
+ 
+ For more details, see:
+ - Paper: [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317)
+ - Paper walkthrough: [Paraformer: a single-pass non-autoregressive end-to-end ASR model with high accuracy and high efficiency](https://mp.weixin.qq.com/s/xQ87isj5_wxWiQs4qUXtVw)
+ 
+ 
+ 
+ #### Inference with ModelScope
+ 
+ - The following audio inputs are supported:
+   - wav file path, e.g.: data/test/audios/asr_example.wav
+   - pcm file path, e.g.: data/test/audios/asr_example.pcm
+   - wav file URL, e.g.: https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav
+   - wav data as bytes, e.g. bytes read from a file or captured from a microphone.
+   - decoded audio, e.g. audio, rate = soundfile.read("asr_example_zh.wav"); the type is numpy.ndarray or torch.Tensor.
+   - a wav.scp file, which must follow this format:
+ 
+ ```sh
+ cat wav.scp
+ asr_example1 data/test/audios/asr_example1.wav
+ asr_example2 data/test/audios/asr_example2.wav
+ ...
+ ```
+ 
+ - If the input is a wav file URL, the API can be called as follows:
+ 
+ ```python
+ from modelscope.pipelines import pipeline
+ from modelscope.utils.constant import Tasks
+ 
+ inference_pipeline = pipeline(
+     task=Tasks.auto_speech_recognition,
+     model='iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
+     model_revision="v2.0.4")
+ 
+ rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_vad_punc_example.wav')
+ print(rec_result)
+ ```
+ 
+ - If the input audio is in pcm format, pass the sampling rate via the fs argument when calling the API, e.g.:
+ 
+ ```python
+ rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_vad_punc_example.pcm', fs=16000)
+ ```
+ 
+ - If the input audio is a wav file, the API can be called as follows:
+ 
+ ```python
+ rec_result = inference_pipeline('asr_vad_punc_example.wav')
+ ```
+ 
+ - If the input is a wav.scp file (note: the file name must end with .scp), the output_dir argument can be added to write the recognition results to files:
+ 
+ ```python
+ inference_pipeline("wav.scp", output_dir='./output_dir')
+ ```
+ The output directory is structured as follows:
+ 
+ ```sh
+ tree output_dir/
+ output_dir/
+ └── 1best_recog
+     ├── score
+     └── text
+ 
+ 1 directory, 2 files
+ ```
+ score: scores of the recognition paths
+ 
+ text: the speech recognition results
+ 
+ 
+ - If the input is decoded audio, the API can be called as follows:
+ 
+ ```python
+ import soundfile
+ 
+ waveform, sample_rate = soundfile.read("asr_vad_punc_example.wav")
+ rec_result = inference_pipeline(waveform)
+ ```
+ 
+ - Combining the ASR, VAD, and PUNC models
+ 
+ The VAD and punctuation (PUNC) models can be combined freely as needed:
+ ```python
+ inference_pipeline = pipeline(
+     task=Tasks.auto_speech_recognition,
+     model='iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch', model_revision="v2.0.4",
+     vad_model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', vad_model_revision="v2.0.4",
+     punc_model='iic/punc_ct-transformer_zh-cn-common-vocab272727-pytorch', punc_model_revision="v2.0.3",
+     # spk_model="iic/speech_campplus_sv_zh-cn_16k-common",
+     # spk_model_revision="v2.0.2",
+ )
+ ```
+ To run without the PUNC model, set punc_model="" or simply omit the punc_model argument. To add an LM, set lm_model='damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch' and configure the lm_weight and beam_size parameters, e.g. as in the sketch below.
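A minimal sketch of the LM-fused configuration just described; it assumes the pipeline accepts lm_model, lm_weight, and beam_size as keyword arguments in the same way as vad_model above, and reuses the beam size 10 and LM weight 0.15 from the SpeechIO decode config later in this README:

```python
# Hedged sketch: shallow fusion with the Transformer-LM via the ModelScope pipeline.
# Assumes lm_model / lm_weight / beam_size are forwarded like vad_model above.
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
    model_revision="v2.0.4",
    lm_model='damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch',
    lm_weight=0.15,  # values taken from the SpeechIO decode config below
    beam_size=10,
)
```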
+ 
+ ## Inference with FunASR
+ 
+ Below is a quick-start tutorial; test audio: ([Chinese](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_en.wav))
+ 
+ ### Command line
+ Run in a terminal:
+ 
+ ```shell
+ funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=vad_example.wav
+ ```
+ 
+ Note: both a single audio file and a file list are supported; the list is a Kaldi-style wav.scp: `wav_id wav_path` (see the sketch below)
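A hedged sketch of the file-list variant, assuming the CLI accepts a wav.scp path through the same +input flag (the file contents are illustrative):

```shell
# Kaldi-style wav.scp: one "wav_id wav_path" pair per line (illustrative paths)
cat > wav.scp <<EOF
asr_example1 data/test/audios/asr_example1.wav
asr_example2 data/test/audios/asr_example2.wav
EOF
funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=wav.scp
```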
+ 
+ ### Python examples
+ #### Speech recognition (non-streaming)
+ ```python
+ from funasr import AutoModel
+ # paraformer-zh is a multi-functional asr model
+ # use vad, punc, spk or not as you need
+ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+                   vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+                   punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+                   # spk_model="cam++", spk_model_revision="v2.0.2",
+                   )
+ res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+                      batch_size_s=300,
+                      hotword='魔搭')
+ print(res)
+ ```
+ Note: `model_hub` selects the model repository: `ms` downloads from ModelScope, `hf` downloads from Hugging Face.
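For illustration, a hedged sketch of selecting the hub, assuming `model_hub` is passed to AutoModel as a keyword argument as the note above suggests:

```python
# Hedged sketch: choose where the models are downloaded from.
# Assumes AutoModel accepts the model_hub keyword named in the note above.
from funasr import AutoModel

model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  model_hub="ms")  # "ms" = ModelScope, "hf" = Hugging Face
```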
+ 
+ #### Speech recognition (streaming)
+ 
+ ```python
+ from funasr import AutoModel
+ 
+ chunk_size = [0, 10, 5]  # [0, 10, 5] 600ms, [0, 8, 4] 480ms
+ encoder_chunk_look_back = 4  # number of chunks to look back at for encoder self-attention
+ decoder_chunk_look_back = 1  # number of encoder chunks to look back at for decoder cross-attention
+ 
+ model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
+ 
+ import soundfile
+ import os
+ 
+ wav_file = os.path.join(model.model_path, "example/asr_example.wav")
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = chunk_size[1] * 960  # 600ms
+ 
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
+     print(res)
+ ```
+ 
+ Note: `chunk_size` configures the streaming latency. `[0,10,5]` means the real-time display granularity is `10*60=600ms` and the lookahead is `5*60=300ms`. Each inference call consumes `600ms` of audio (`16000*0.6=9600` samples) and outputs the corresponding text; the last chunk must be fed with `is_final=True` to force the final tokens out.
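As a sanity check on the arithmetic in the note above, a small sketch (each chunk_size unit is 60 ms and the audio is 16 kHz, as stated there):

```python
# Each chunk_size unit corresponds to 60 ms of 16 kHz audio.
chunk_size = [0, 10, 5]
granularity_ms = chunk_size[1] * 60                    # 600 ms of new text per step
lookahead_ms = chunk_size[2] * 60                      # 300 ms of future context
samples_per_step = int(16000 * granularity_ms / 1000)  # 9600 samples, i.e. chunk_size[1] * 960
print(granularity_ms, lookahead_ms, samples_per_step)
```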
+ 
+ #### Voice activity detection (non-streaming)
+ ```python
+ from funasr import AutoModel
+ 
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+ 
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ res = model.generate(input=wav_file)
+ print(res)
+ ```
+ 
+ #### Voice activity detection (streaming)
+ ```python
+ from funasr import AutoModel
+ 
+ chunk_size = 200  # ms
+ model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
+ 
+ import soundfile
+ 
+ wav_file = f"{model.model_path}/example/vad_example.wav"
+ speech, sample_rate = soundfile.read(wav_file)
+ chunk_stride = int(chunk_size * sample_rate / 1000)
+ 
+ cache = {}
+ total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
+ for i in range(total_chunk_num):
+     speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
+     is_final = i == total_chunk_num - 1
+     res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+     if len(res[0]["value"]):
+         print(res)
+ ```
+ 
+ #### Punctuation restoration
+ ```python
+ from funasr import AutoModel
+ 
+ model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+ 
+ res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
+ print(res)
+ ```
+ 
+ #### Timestamp prediction
+ ```python
+ from funasr import AutoModel
+ 
+ model = AutoModel(model="fa-zh", model_revision="v2.0.4")
+ 
+ wav_file = f"{model.model_path}/example/asr_example.wav"
+ text_file = f"{model.model_path}/example/text.txt"
+ res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
+ print(res)
+ ```
+ 
+ More detailed usage: ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+ 
+ 
+ ## Fine-tuning
+ 
+ Detailed usage: ([examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
+ 
+ 
+ 
+ ## Benchmark
+ Optimized with big data and a large model, Paraformer achieves the current SOTA on a series of speech recognition benchmarks. Below are its results on the academic datasets AISHELL-1, AISHELL-2, and WenetSpeech, and on the white-box test sets of the public SpeechIO TIOBE evaluation. On the Chinese ASR evaluation tasks commonly used in academia, it performs far better than the results published so far, and far better than models trained only on the individual closed datasets. These are the results of the [Paraformer-large model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-aishell1-vocab8404-pytorch/summary) without the VAD and punctuation models.
+ 
+ ### AISHELL-1
+ 
+ | AISHELL-1 test | w/o LM | w/ LM |
+ |:------------------------------------------------:|:-------------------------------------:|:-------------------------------------:|
+ | <div style="width: 150pt">Espnet</div> | <div style="width: 150pt">4.90</div> | <div style="width: 150pt">4.70</div> |
+ | <div style="width: 150pt">Wenet</div> | <div style="width: 150pt">4.61</div> | <div style="width: 150pt">4.36</div> |
+ | <div style="width: 150pt">K2</div> | <div style="width: 150pt">-</div> | <div style="width: 150pt">4.26</div> |
+ | <div style="width: 150pt">Blockformer</div> | <div style="width: 150pt">4.29</div> | <div style="width: 150pt">4.05</div> |
+ | <div style="width: 150pt">Paraformer-large</div> | <div style="width: 150pt">1.95</div> | <div style="width: 150pt">1.68</div> |
+ 
+ ### AISHELL-2
+ 
+ | | dev_ios | test_android | test_ios | test_mic |
+ |:-------------------------------------------------:|:-------------------------------------:|:-------------------------------------:|:------------------------------------:|:------------------------------------:|
+ | <div style="width: 150pt">Espnet</div> | <div style="width: 70pt">5.40</div> |<div style="width: 70pt">6.10</div> |<div style="width: 70pt">5.70</div> |<div style="width: 70pt">6.10</div> |
+ | <div style="width: 150pt">WeNet</div> | <div style="width: 70pt">-</div> |<div style="width: 70pt">-</div> |<div style="width: 70pt">5.39</div> |<div style="width: 70pt">-</div> |
+ | <div style="width: 150pt">Paraformer-large</div> | <div style="width: 70pt">2.80</div> |<div style="width: 70pt">3.13</div> |<div style="width: 70pt">2.85</div> |<div style="width: 70pt">3.06</div> |
+ 
+ 
+ ### WenetSpeech
+ 
+ | | dev | test_meeting | test_net |
+ |:-------------------------------------------------:|:-------------------------------------:|:-------------------------------------:|:------------------------------------:|
+ | <div style="width: 150pt">Espnet</div> | <div style="width: 100pt">9.70</div> |<div style="width: 100pt">15.90</div> |<div style="width: 100pt">8.80</div> |
+ | <div style="width: 150pt">WeNet</div> | <div style="width: 100pt">8.60</div> |<div style="width: 100pt">17.34</div> |<div style="width: 100pt">9.26</div> |
+ | <div style="width: 150pt">K2</div> | <div style="width: 100pt">7.76</div> |<div style="width: 100pt">13.41</div> |<div style="width: 100pt">8.71</div> |
+ | <div style="width: 150pt">Paraformer-large</div> | <div style="width: 100pt">3.57</div> |<div style="width: 100pt">6.97</div> |<div style="width: 100pt">6.74</div> |
+ 
+ ### [SpeechIO TIOBE](https://github.com/SpeechColab/Leaderboard)
+ 
+ With shallow fusion of a Transformer-LM, the Paraformer-large model achieves the current SOTA on the white-box test sets of the public SpeechIO TIOBE evaluation. The [Transformer-LM model](https://modelscope.cn/models/damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch/summary) is open-sourced on ModelScope. Below are the results without LM and with the Transformer-LM:
+ 
+ - Decode config w/o LM:
+   - Decode without LM
+   - Beam size: 1
+ - Decode config w/ LM:
+   - Decode with [Transformer-LM](https://modelscope.cn/models/damo/speech_transformer_lm_zh-cn-common-vocab8404-pytorch/summary)
+   - Beam size: 10
+   - LM weight: 0.15
+ 
+ | testset | w/o LM | w/ LM |
+ |:------------------:|:----:|:----:|
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00001</div>| <div style="width: 150pt">0.49</div> | <div style="width: 150pt">0.35</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00002</div>| <div style="width: 150pt">3.23</div> | <div style="width: 150pt">2.86</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00003</div>| <div style="width: 150pt">1.13</div> | <div style="width: 150pt">0.80</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00004</div>| <div style="width: 150pt">1.33</div> | <div style="width: 150pt">1.10</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00005</div>| <div style="width: 150pt">1.41</div> | <div style="width: 150pt">1.18</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00006</div>| <div style="width: 150pt">5.25</div> | <div style="width: 150pt">4.85</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00007</div>| <div style="width: 150pt">5.51</div> | <div style="width: 150pt">4.97</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00008</div>| <div style="width: 150pt">3.69</div> | <div style="width: 150pt">3.18</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00009</div>| <div style="width: 150pt">3.02</div> | <div style="width: 150pt">2.78</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00010</div>| <div style="width: 150pt">3.35</div> | <div style="width: 150pt">2.99</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00011</div>| <div style="width: 150pt">1.54</div> | <div style="width: 150pt">1.25</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00012</div>| <div style="width: 150pt">2.06</div> | <div style="width: 150pt">1.68</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00013</div>| <div style="width: 150pt">2.57</div> | <div style="width: 150pt">2.25</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00014</div>| <div style="width: 150pt">3.86</div> | <div style="width: 150pt">3.08</div> |
+ |<div style="width: 200pt">SPEECHIO_ASR_ZH00015</div>| <div style="width: 150pt">3.34</div> | <div style="width: 150pt">2.67</div> |
+ 
+ 
+ ## Usage and scope
+ 
+ Supported platforms
+ - Runs on Linux-x86_64, macOS, and Windows.
+ 
+ How to use
+ - Direct inference: decode input audio directly and output the transcript.
+ - Fine-tuning: load the released model and train it further on private or open data.
+ 
+ Intended scope and target scenarios
+ - Suitable for offline speech recognition scenarios such as transcribing recorded files; GPU inference works even better. The input audio duration is unrestricted and can be several hours long.
+ 
+ 
+ ## Model limitations and possible bias
+ 
+ Differences in the feature-extraction pipeline and tooling, as well as in the training tools, can introduce small differences in CER (<0.1%); differences in the GPU inference environment lead to differences in the reported RTF.
+ 
+ 
+ 
+ ## Related papers and citation
+ 
+ ```BibTeX
+ @inproceedings{gao2022paraformer,
+   title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
+   author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
+   booktitle={INTERSPEECH},
+   year={2022}
+ }
+ ```
FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/am.mvn ADDED
@@ -0,0 +1,8 @@
+ <Nnet>
+ <Splice> 560 560
+ [ 0 ]
+ <AddShift> 560 560
+ <LearnRateCoef> 0 [ -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 
-13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 ]
+ <Rescale> 560 560
+ <LearnRateCoef> 0 [ 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 
0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 ]
+ </Nnet>
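For context: am.mvn stores the global CMVN statistics in Kaldi nnet text format over the 560-dimensional LFR features (lfr_m=7 stacked 80-dim fbank frames, per config.yaml below); <AddShift> holds the negated feature means and <Rescale> the inverse standard deviations. A minimal sketch of how such a file is applied, assuming the usual (x + shift) * scale normalization:

```python
# Hedged sketch: apply Kaldi-style AddShift/Rescale CMVN to LFR features.
# shift = negated means, scale = inverse stddevs, both 560-dim (7 x 80 fbank).
import numpy as np

def apply_cmvn(feats: np.ndarray, shift: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """feats: (num_frames, 560) LFR features -> mean/variance-normalized features."""
    return (feats + shift) * scale

# Illustrative values; the real vectors are the two <LearnRateCoef> rows above.
shift = np.full(560, -13.5, dtype=np.float32)
scale = np.full(560, 0.15, dtype=np.float32)
feats = np.random.randn(10, 560).astype(np.float32)
print(apply_cmvn(feats, shift, scale).shape)  # (10, 560)
```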
FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/config.yaml ADDED
@@ -0,0 +1,134 @@
+ # This is an example that demonstrates how to configure a model file.
+ # You can modify the configuration according to your own requirements.
+ 
+ # to print the register_table:
+ # from funasr.register import tables
+ # tables.print()
+ 
+ # network architecture
+ #model: funasr.models.paraformer.model:Paraformer
+ model: BiCifParaformer
+ model_conf:
+     ctc_weight: 0.0
+     lsm_weight: 0.1
+     length_normalized_loss: true
+     predictor_weight: 1.0
+     predictor_bias: 1
+     sampling_ratio: 0.75
+ 
+ # encoder
+ encoder: SANMEncoder
+ encoder_conf:
+     output_size: 512
+     attention_heads: 4
+     linear_units: 2048
+     num_blocks: 50
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     attention_dropout_rate: 0.1
+     input_layer: pe
+     pos_enc_class: SinusoidalPositionEncoder
+     normalize_before: true
+     kernel_size: 11
+     sanm_shfit: 0
+     selfattention_layer_type: sanm
+ 
+ # decoder
+ decoder: ParaformerSANMDecoder
+ decoder_conf:
+     attention_heads: 4
+     linear_units: 2048
+     num_blocks: 16
+     dropout_rate: 0.1
+     positional_dropout_rate: 0.1
+     self_attention_dropout_rate: 0.1
+     src_attention_dropout_rate: 0.1
+     att_layer_num: 16
+     kernel_size: 11
+     sanm_shfit: 0
+ 
+ predictor: CifPredictorV3
+ predictor_conf:
+     idim: 512
+     threshold: 1.0
+     l_order: 1
+     r_order: 1
+     tail_threshold: 0.45
+     smooth_factor2: 0.25
+     noise_threshold2: 0.01
+     upsample_times: 3
+     use_cif1_cnn: false
+     upsample_type: cnn_blstm
+ 
+ # frontend related
+ frontend: WavFrontend
+ frontend_conf:
+     fs: 16000
+     window: hamming
+     n_mels: 80
+     frame_length: 25
+     frame_shift: 10
+     lfr_m: 7
+     lfr_n: 6
+ 
+ specaug: SpecAugLFR
+ specaug_conf:
+     apply_time_warp: false
+     time_warp_window: 5
+     time_warp_mode: bicubic
+     apply_freq_mask: true
+     freq_mask_width_range:
+     - 0
+     - 30
+     lfr_rate: 6
+     num_freq_mask: 1
+     apply_time_mask: true
+     time_mask_width_range:
+     - 0
+     - 12
+     num_time_mask: 1
+ 
+ train_conf:
+     accum_grad: 1
+     grad_clip: 5
+     max_epoch: 150
+     val_scheduler_criterion:
+     - valid
+     - acc
+     best_model_criterion:
+     -   - valid
+         - acc
+         - max
+     keep_nbest_models: 10
+     log_interval: 50
+ 
+ optim: adam
+ optim_conf:
+     lr: 0.0005
+ scheduler: warmuplr
+ scheduler_conf:
+     warmup_steps: 30000
+ 
+ dataset: AudioDataset
+ dataset_conf:
+     index_ds: IndexDSJsonl
+     batch_sampler: DynamicBatchLocalShuffleSampler
+     batch_type: example # example or length
+     batch_size: 1 # if batch_type is example, batch_size is the number of samples; if length, batch_size is source_token_len+target_token_len
+     max_token_length: 2048 # filter out samples if source_token_len+target_token_len > max_token_length
+     buffer_size: 500
+     shuffle: True
+     num_workers: 0
+ 
+ tokenizer: CharTokenizer
+ tokenizer_conf:
+     unk_symbol: <unk>
+     split_with_space: true
+ 
+ 
+ ctc_conf:
+     dropout_rate: 0.0
+     ctc_type: builtin
+     reduce: true
+     ignore_nan_grad: true
+ normalize: null
FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/configuration.json ADDED
@@ -0,0 +1,17 @@
+ {
+     "framework": "pytorch",
+     "task" : "auto-speech-recognition",
+     "model": {"type" : "funasr"},
+     "pipeline": {"type":"funasr-pipeline"},
+     "vad_model": "iic/speech_fsmn_vad_zh-cn-16k-common-pytorch",
+     "punc_model": "iic/punc_ct-transformer_cn-en-common-vocab471067-large",
+     "lm_model": "iic/speech_transformer_lm_zh-cn-common-vocab8404-pytorch",
+     "model_name_in_hub": {
+         "ms":"iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
+         "hf":""},
+     "file_path_metas": {
+         "init_param":"model.pt",
+         "config":"config.yaml",
+         "tokenizer_conf": {"token_list": "tokens.json", "seg_dict_file": "seg_dict"},
+         "frontend_conf":{"cmvn_file": "am.mvn"}}
+ }
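Since configuration.json maps the repository files (model.pt, config.yaml, tokens.json, am.mvn) to FunASR's loader, a local snapshot of this directory can be loaded directly; a minimal sketch, assuming FunASR's AutoModel accepts a local model directory in place of a hub model id:

```python
# Hedged sketch: load the model from a local checkout of this repository.
# Assumes AutoModel resolves a local path the same way it resolves a hub id.
from funasr import AutoModel

local_dir = "FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = AutoModel(model=local_dir)  # picks up config.yaml, model.pt, tokens.json, am.mvn
res = model.generate(input="asr_example.wav")  # illustrative input path
print(res)
```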
FakeVD/Models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/tokens.json ADDED
The diff for this file is too large to render. See raw diff