---
license: apache-2.0
---

# WenetSpeech-Chuan: A Large-Scale Sichuanese Corpus With Rich Annotation For Dialectal Speech Processing

<p align="center">
Yuhang Dai<sup>1,*</sup>, Ziyu Zhang<sup>1,*</sup>, Shuai Wang<sup>4,5</sup>,
Longhao Li<sup>1</sup>, Zhao Guo<sup>1</sup>, Tianlun Zuo<sup>1</sup>,
Shuiyuan Wang<sup>1</sup>, Hongfei Xue<sup>1</sup>, Chengyou Wang<sup>1</sup>,
Qing Wang<sup>3</sup>, Xin Xu<sup>2</sup>, Hui Bu<sup>2</sup>, Jie Li<sup>3</sup>,
Jian Kang<sup>3</sup>, Binbin Zhang<sup>5</sup>, Lei Xie<sup>1,†</sup>
</p>
<p align="center">
<sup>1</sup> Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University <br>
<sup>2</sup> Beijing AISHELL Technology Co., Ltd. <br>
<sup>3</sup> Institute of Artificial Intelligence (TeleAI), China Telecom <br>
<sup>4</sup> School of Intelligence Science and Technology, Nanjing University <br>
<sup>5</sup> WeNet Open Source Community <br>
</p>

<p align="center">
📑 <a href="https://arxiv.org/abs/2509.18004">Paper</a> &nbsp;&nbsp;|&nbsp;&nbsp;
🐙 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan">GitHub</a> &nbsp;&nbsp;|&nbsp;&nbsp;
🤗 <a href="https://huggingface.co/collections/ASLP-lab/wenetspeech-chuan-68bade9d02bcb1faece65bda">HuggingFace</a>
<br>
🎤 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/demopage/index.html">Demo Page</a> &nbsp;&nbsp;|&nbsp;&nbsp;
💬 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan?tab=readme-ov-file#contact">Contact Us</a>
</p>

<div align="center">
<img width="800px" src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/logo/WenetSpeech-Chuan-Logo.png?raw=true" />
</div>

## Dataset

### WenetSpeech-Chuan Overview

* Contains 10,000 hours of Chuan-Yu dialect speech with rich annotations, making it the largest open-source resource for Chuan-Yu dialect speech research.
* Stores metadata in a single JSON file, including audio path, duration, text confidence, speaker identity, SNR, DNSMOS, age, gender, and character-level timestamps; additional metadata tags may be added in the future.
* Covers ten domains, including short videos, entertainment, live streams, documentary, audiobook, drama, interview, and news.

<div align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/domain.png?raw=true" width="300" style="display:inline-block; margin-right:10px;" />
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/quality_distribution.jpg?raw=true" width="300" style="display:inline-block;" />
</div>

### Metadata Format

We store all audio metadata in a standardized JSON format:

* Core fields: `utt_id` (unique identifier for each audio segment), `rover_result` (ROVER fusion of three ASR transcriptions), `confidence` (confidence score of the text transcription), `jyutping_confidence` (confidence score of the Jyutping (Cantonese pinyin) transcription), and `duration` (audio duration).
* Speaker attributes: `speaker_id`, `gender`, and `age`.
* Audio quality metrics: `sample_rate`, `DNSMOS`, and `SNR`.
* Timestamps: `timestamp`, recording segment boundaries precisely with `start` and `end`.
* Extended metadata under `meta_info`: `program` (program name), `region` (geographical information), `link` (original content link), and `domain` (domain classification).
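As a quick sketch of how these fields might be consumed downstream. The record layout follows the field description above, but the example records, the `high_quality_segments` helper, and the quality thresholds are illustrative assumptions, not part of the corpus distribution:

```python
import io
import json

# Two made-up records shaped like the metadata description above;
# real entries carry many more fields (timestamps, meta_info, ...).
fake_metadata = io.StringIO(
    '{"utt_id": "seg_0001", "confidence": 0.95, "DNSMOS": 3.4, "duration": 4.2}\n'
    '{"utt_id": "seg_0002", "confidence": 0.61, "DNSMOS": 2.2, "duration": 7.8}\n'
)

def high_quality_segments(lines, min_confidence=0.9, min_dnsmos=3.0):
    """Keep segment IDs whose transcription confidence and DNSMOS
    score clear the (illustrative) thresholds."""
    kept = []
    for line in lines:
        record = json.loads(line)
        if record["confidence"] >= min_confidence and record["DNSMOS"] >= min_dnsmos:
            kept.append(record["utt_id"])
    return kept

print(high_quality_segments(fake_metadata))  # -> ['seg_0001']
```

Because the metadata lives in JSON Lines, the same loop streams over the real `metadata.jsonl` one record at a time without loading the whole corpus index into memory.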
#### 📂 Content Tree
```
WenetSpeech-Chuan
├── metadata.jsonl
├── ...
├── .gitattributes
└── README.md
```

#### Data sample

###### metadata.jsonl

{ <br>
"utt_id": original long audio ID, <br>
"wav_utt_id": long audio ID after conversion to WAV format, <br>
... <br>
} <br>

###### audio_labels/wav_utt_id.jsonl

{ <br>
"wav_utt_id_timestamp": short audio segment ID, composed of the converted long audio ID + timestamp information (type: str), <br>
"wav_utt_id_timestamp_path": path to the short audio file (type: str), <br>
... <br>
"gender": speaker gender label (type: str, e.g. male/female), <br>
} <br>
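The `timestamp` field makes it straightforward to recover a labeled segment from its long recording. A minimal sketch, assuming `start`/`end` are given in seconds and `sample_rate` in Hz as described above; the `entry` dict is hypothetical and the slicing convention is our assumption, not a documented API:

```python
# Hypothetical label entry shaped like the sample above.
entry = {
    "wav_utt_id_timestamp": "wav_000123_00001",
    "sample_rate": 16000,
    "timestamp": {"start": 1.25, "end": 3.75},  # seconds
}

def segment_bounds(label):
    """Convert start/end times in seconds into sample indices."""
    sr = label["sample_rate"]
    ts = label["timestamp"]
    return int(ts["start"] * sr), int(ts["end"] * sr)

start, end = segment_bounds(entry)
print(start, end)  # 20000 60000; a decoded waveform array would be cut as wav[start:end]
```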