---
license: apache-2.0
---

WenetSpeech-Chuan: A Large-Scale Sichuanese Corpus With Rich Annotation For Dialectal Speech Processing

Yuhang Dai1,*, Ziyu Zhang1,*, Shuai Wang4,5, Longhao Li1, Zhao Guo1, Tianlun Zuo1, Shuiyuan Wang1, Hongfei Xue1, Chengyou Wang1, Qing Wang3, Xin Xu2, Hui Bu2, Jie Li3, Jian Kang3, Binbin Zhang5, Lei Xie1,†

1 Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University
2 Beijing AISHELL Technology Co., Ltd.
3 Institute of Artificial Intelligence (TeleAI), China Telecom
4 School of Intelligence Science and Technology, Nanjing University
5 WeNet Open Source Community

📑 Paper    |    🐙 GitHub    |    🤗 HuggingFace
🎤 Demo Page    |    💬 Contact Us

Dataset

WenetSpeech-Chuan Overview

  • Contains a 10,000-hour large-scale Chuan-Yu dialect speech corpus with rich annotations, the largest open-source resource for Chuan-Yu dialect speech research.
  • Stores metadata in a single JSONL file, including audio path, duration, text confidence, speaker identity, SNR, DNSMOS, age, gender, and character-level timestamps. Additional metadata tags may be added in the future.
  • Covers ten domains, including Short videos, Entertainment, Live streams, Documentary, Audiobook, Drama, Interview, News, and others.

Metadata Format

We store all audio metadata in a standardized JSON format:

  • Core fields: utt_id (unique identifier for each audio segment), rover_result (ROVER fusion of three ASR transcriptions), confidence (confidence score of the text transcription), jyutping_confidence (confidence score of the Jyutping, i.e., Cantonese romanization, transcription), and duration (audio duration).
  • Speaker attributes: speaker_id, gender, and age.
  • Audio quality metrics: sample_rate, DNSMOS, and SNR.
  • Timestamp information: timestamp, precisely recording segment boundaries with start and end.
  • Extended metadata (under meta_info): program (program name), region (geographical information), link (original content link), and domain (domain classification).
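For illustration, a single record with these fields might look like the following sketch; every value below is a hypothetical placeholder, not drawn from the released data:

```json
{
  "utt_id": "example_utt_0001",
  "rover_result": "example transcription",
  "confidence": 0.95,
  "jyutping_confidence": 0.90,
  "duration": 3.20,
  "speaker_id": "example_spk_0001",
  "gender": "male",
  "age": "middle-aged (36~59)",
  "sample_rate": 16000,
  "DNSMOS": 3.5,
  "SNR": 20.1,
  "timestamp": {"start": 12.34, "end": 15.54},
  "meta_info": {
    "program": "example program",
    "region": "Sichuan",
    "link": "https://example.com/source",
    "domain": "Short videos"
  }
}
```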

📂 Content Tree

WenetSpeech-Chuan
├── metadata.jsonl
│
├── audio_labels/
│   ├── wav_utt_id.jsonl
│   ├── wav_utt_id.jsonl
│   ├── ...
│   └── wav_utt_id.jsonl
│
├── .gitattributes
└── README.md

Data sample:

metadata.jsonl

{
"utt_id": Original long audio ID,
"wav_utt_id": Converted long audio ID after transforming to WAV format,
"source_audio_path": Path to the original long audio file,
"audio_labels": Path to the label file of short audio segments cut from the converted long audio,
"url": Download link for the original long audio
}

audio_labels/wav_utt_id.jsonl:

{
"wav_utt_id_timestamp": Short audio segment ID, composed of the converted long audio ID + timestamp information (type: str),
"wav_utt_id_timestamp_path": Path to the short audio data (type: str),
"audio_clip_id": Sequence number of this short segment within the long audio,
"timestamp": Timestamp information,
"wvmos_score": WVMOS score, measuring the quality of the audio segment (type: float),
"text": Transcript of the audio segment corresponding to the timestamp (type: str),
"text_punc": Transcript with punctuation (type: str),
"spk_num": Number of speakers in the audio segment, single/multi (type: str),
"confidence": Confidence score of the transcript (type: float),
"emotion": Speaker’s emotion label (type: str, e.g., anger),
"age": Speaker’s age label (type: int range, e.g., middle-aged (36–59)),
"gender": Speaker’s gender label (type: str, e.g., male/female)
}
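The two label files above can be consumed together with a few lines of standard-library Python. The sketch below is illustrative only: it assumes the audio_labels paths in metadata.jsonl resolve locally, follows the field names shown in the samples, and uses an arbitrary confidence threshold of 0.9.

```python
import json

def load_high_confidence_segments(metadata_path="metadata.jsonl", min_confidence=0.9):
    """Collect short-segment labels whose transcript confidence passes a threshold."""
    segments = []
    with open(metadata_path, encoding="utf-8") as meta_file:
        for line in meta_file:
            record = json.loads(line)  # one long audio per line
            # Each long audio points to a JSONL file of short-segment labels.
            with open(record["audio_labels"], encoding="utf-8") as label_file:
                for label_line in label_file:
                    segment = json.loads(label_line)
                    if segment.get("confidence", 0.0) >= min_confidence:
                        segments.append(segment)
    return segments
```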

WenetSpeech-Chuan Usage

You can obtain the original audio/video source through the url field in the metadata file (metadata.jsonl). Segment the long audio according to the timestamp field in the corresponding audio_labels file to extract each record; a minimal sketch follows. For pre-processed audio data, please contact us using the contact information provided below.
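This sketch of the segmentation step assumes the timestamp field holds start and end times in seconds, as described in the Metadata Format section, and that the long audio has already been downloaded and converted to WAV; the paths and times are placeholders:

```python
import soundfile as sf  # third-party: pip install soundfile

def cut_segment(long_wav_path, timestamp, out_path):
    """Extract one labeled segment from its source long WAV file."""
    audio, sr = sf.read(long_wav_path)
    start = int(timestamp["start"] * sr)  # seconds -> sample index
    end = int(timestamp["end"] * sr)
    sf.write(out_path, audio[start:end], sr)

cut_segment("example_long_audio.wav", {"start": 12.34, "end": 15.54}, "example_segment.wav")
```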

Contact

If you have any questions or would like to collaborate, feel free to reach out to our research team via email: yhdai@mail.nwpu.edu.cn or ziyu_zhang@mail.nwpu.edu.cn.

You are also welcome to join our WeChat group for technical discussions, updates, and, as mentioned above, access to pre-processed audio data.

WeChat Group QR Code (scan to join our WeChat discussion group)

Official Account QR Code