Update README.md
README.md
CHANGED
@@ -8,7 +8,7 @@ license_link: https://github.com/modelscope/FunASR/blob/main/MODEL_LICENSE
SenseVoice is a speech foundation model with multiple speech understanding capabilities, including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and audio event detection (AED).
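Because all four capabilities surface in a single output string, a downstream consumer usually splits the inline prediction tags from the transcript. A minimal sketch, assuming a tag format like `<|en|><|HAPPY|><|Speech|>…` (the exact tag inventory and ordering are an assumption here; check the model card for the authoritative list):

```python
import re

# SenseVoice-style rich transcription: LID / SER / AED predictions appear as
# leading inline tags before the transcript text (format assumed for illustration).
TAG = re.compile(r"<\|([^|]+)\|>")

def parse_rich_transcription(raw: str) -> dict:
    """Separate the inline tags from the transcript text."""
    tags = TAG.findall(raw)          # e.g. ["en", "HAPPY", "Speech", "woitn"]
    text = TAG.sub("", raw).strip()  # transcript with all tags removed
    return {"tags": tags, "text": text}

result = parse_rich_transcription("<|en|><|HAPPY|><|Speech|><|woitn|>hello world")
```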

-<div align="center"><img src="
+<div align="center"><img src="fig/sensevoice.png" width="1000"/> </div>

# Highlights

@@ -167,7 +167,7 @@ bash finetune.sh
python webui.py
```

-<div align="center"><img src="
+<div align="center"><img src="fig/webui.png" width="700"/> </div>

<a name="Community"></a>
# Community

@@ -180,7 +180,7 @@ python webui.py
We compared the performance of multilingual speech recognition between SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, Wenetspeech, LibriSpeech, and Common Voice. In terms of Chinese and Cantonese recognition, the SenseVoice-Small model has advantages.
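Benchmarks like these are typically scored by character error rate (CER) for Chinese and word error rate for English. A minimal CER sketch, purely illustrative and not the official scoring script:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (two-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,           # deletion
                           cur[j - 1] + 1,        # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance over reference length."""
    return edit_distance(list(ref), list(hyp)) / len(ref)

# e.g. cer("abcd", "abxd") -> 0.25 (one substitution over four characters)
```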

<div align="center">
-<img src="
+<img src="fig/asr_results.png" width="1000" />
</div>

@@ -190,13 +190,13 @@ We compared the performance of multilingual speech recognition between SenseVoic

Due to the current lack of widely used benchmarks and methods for speech emotion recognition, we conducted evaluations across various metrics on multiple test sets and performed a comprehensive comparison with numerous results from recent benchmarks. The selected test sets encompass data in both Chinese and English and include multiple styles, such as performances, films, and natural conversations. Without finetuning on the target data, SenseVoice matched or exceeded the performance of the current best speech emotion recognition models.
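Speech emotion recognition results are commonly reported as weighted accuracy (WA, overall accuracy) and unweighted accuracy (UA, the mean of per-class recalls). A minimal sketch of these two metrics (an illustration of the standard definitions, not the evaluation code used for the tables here):

```python
from collections import defaultdict

def weighted_unweighted_accuracy(refs, hyps):
    """WA = overall accuracy; UA = mean of per-class recalls."""
    per_class = defaultdict(lambda: [0, 0])  # label -> [correct, total]
    for r, h in zip(refs, hyps):
        per_class[r][1] += 1
        per_class[r][0] += (r == h)
    wa = sum(c for c, _ in per_class.values()) / sum(t for _, t in per_class.values())
    ua = sum(c / t for c, t in per_class.values()) / len(per_class)
    return wa, ua

# Toy labels purely to show the two metrics diverging on imbalanced classes:
wa, ua = weighted_unweighted_accuracy(
    refs=["happy", "happy", "sad", "sad", "sad", "angry"],
    hyps=["happy", "sad", "sad", "sad", "happy", "angry"],
)
```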

<div align="center">
-<img src="
+<img src="fig/ser_table.png" width="1000" />
</div>

Furthermore, we compared multiple open-source speech emotion recognition models on the test sets, and the results indicate that the SenseVoice-Large model achieved the best performance on nearly all datasets, while the SenseVoice-Small model also surpassed other open-source models on the majority of the datasets.

<div align="center">
-<img src="
+<img src="fig/ser_figure.png" width="500" />
</div>

## Audio Event Detection

@@ -204,7 +204,7 @@ Furthermore, we compared multiple open-source speech emotion recognition models
Although trained exclusively on speech data, SenseVoice can still function as a standalone event detection model. We compared its performance on the ESC-50 environmental sound classification dataset against the widely used industry models BEATS and PANN. The SenseVoice model achieved commendable results on these tasks. However, due to limitations in training data and methodology, its event classification performance still lags behind specialized AED models.

<div align="center">
-<img src="
+<img src="fig/aed_figure.png" width="500" />
</div>


@@ -213,5 +213,5 @@ Although trained exclusively on speech data, SenseVoice can still function as a
The SenseVoice-Small model uses a non-autoregressive end-to-end architecture, resulting in extremely low inference latency. With a similar number of parameters to the Whisper-Small model, it infers 7 times faster than Whisper-Small and 17 times faster than Whisper-Large.
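Latency comparisons like this are often expressed as a real-time factor (RTF, processing time over audio duration; below 1 means faster than real time). The sketch below only illustrates how the quoted 17x figure would translate; the timings are made up for the example, not measured numbers from this evaluation:

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF = processing time / audio duration; RTF < 1 is faster than real time."""
    return processing_seconds / audio_seconds

# Hypothetical timings: if Whisper-Large spent 1.7 s on a 10 s clip,
# a 17x speedup would imply roughly 0.1 s for SenseVoice-Small.
large_rtf = real_time_factor(1.7, 10.0)   # 0.17
small_rtf = large_rtf / 17                # per the quoted 17x claim
speedup_vs_large = large_rtf / small_rtf
```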

<div align="center">
-<img src="
+<img src="fig/inference.png" width="1000" />
</div>