---
annotations_creators: []
language_creators: []
language:
- zh
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: WenetSpeech4TTS
source_datasets: []
task_categories:
- text-to-speech
extra_gated_prompt: >-
The WenetSpeech4TTS dataset, derived from the open-source WenetSpeech dataset,
is available for download for non-commercial purposes under a Creative Commons Attribution 4.0 International License.
We do not own the copyright of the audios: the copyright remains with the original owners of the video or audio, and
the public URL is provided in WenetSpeech for the original video or audio.
Terms of Access: The researcher has requested permission to use the WenetSpeech4TTS database.
In exchange for such permission, the researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and
educational purposes.
2. The authors make no representations or warranties regarding the Database,
including but not limited to warranties of non-infringement or fitness for a
particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database
and shall defend and indemnify the authors of WenetSpeech4TTS, including their
employees, Trustees, officers and agents, against any and all claims arising
from Researcher's use of the Database, including but not limited to
Researcher's use of any copies of copyrighted audio files that he or she may
create from the Database.
4. Researcher may provide research associates and colleagues with access to the
Database provided that they first agree to be bound by these terms and
conditions.
5. The authors reserve the right to terminate Researcher's access to the
Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's
employer shall also be bound by these terms and conditions, and Researcher
hereby represents that he or she is fully authorized to enter into this
agreement on behalf of such employer.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I accept the terms of access: checkbox
size_categories:
- 10M<n<100M
---
## **News** 🎉
- **2024.07.09**: Available on [modelscope](https://modelscope.cn/datasets/dukguo/WenetSpeech4TTS) (for users in the Chinese mainland).
- **2024.06.20**: Checkpoints and pre-print are available.
- **2024.06.07**: Accepted by INTERSPEECH 2024! Pre-print and pre-trained checkpoints are coming soon!
- **2024.04.26**: The dataset now supports `datasets.load_dataset()` for easier access and integration. Please note that the data viewer is temporarily unavailable due to [current platform restrictions](https://discuss.huggingface.co/t/dataset-repo-requires-arbitrary-python-code-execution/59346/5).
- **2024.04.23**: The dataset upload and verification have been completed! We welcome everyone to download and explore it!

**Enjoy using the data!** 🎊
# Dataset Card for WenetSpeech4TTS
<!-- Provide a quick summary of the dataset. -->
**WenetSpeech4TTS** is a multi-domain **Mandarin** corpus derived from the open-source [WenetSpeech](https://arxiv.org/abs/2110.03370) dataset. To tailor it for text-to-speech tasks, we refined WenetSpeech by adjusting segment boundaries, enhancing audio quality, and eliminating speaker mixing within each segment. After a more accurate transcription pass and quality-based data filtering, the resulting WenetSpeech4TTS corpus contains 12,800 hours of paired audio-text data. Furthermore, we provide subsets of varying sizes, categorized by segment quality score, for TTS model training and fine-tuning.
[VALL-E](https://arxiv.org/abs/2301.02111) and [NaturalSpeech 2](https://arxiv.org/pdf/2304.09116.pdf) systems were trained and fine-tuned on these subsets, establishing benchmarks for the usability of WenetSpeech4TTS and for fair comparison of TTS systems. We make the corpus and the corresponding benchmarks publicly available to advance research in this field, and we provide audio-text pairs and DNSMOS P.808 scores for all audio in WenetSpeech4TTS. The checkpoints for VALL-E and NaturalSpeech 2 will be released soon.
## Subsets Details
|**Training Subsets** |**DNSMOS Threshold**|**Hours** |**Average Segment Duration (s)**|
|:-----------------:|:---------------:|:------:|:----------------------------:|
|<font color=#00FF7F>Premium</font>| 4.0 |945|8.3|
|<font color=#FFD700>Standard</font> | 3.8 |4056|7.5|
|<font color=#FF0000>Basic</font>|3.6 |7226|6.6|
|Rest| <3.6|5574|N/A|
|WenetSpeech (orig)|N/A|12483|N/A|
<center><figure>
<img src="fig/Mos.png" width="50%">
<figcaption> The distribution of DNSMOS P.808 scores</figcaption>
</figure></center>
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Paper :** https://arxiv.org/abs/2406.05763v3
- **TTS Demo :** https://wenetspeech4tts.github.io/wenetspeech4tts
## License
The WenetSpeech4TTS dataset is available for download for non-commercial purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). Consistent with WenetSpeech, WenetSpeech4TTS does not own the copyright of the audio: the copyright remains with the original owners of the video or audio, and the public URL for the original video or audio is given in [WenetSpeech](https://wenet.org.cn/WenetSpeech/).
## Download
For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through our site under certain conditions and terms.
> **Notice:** To support the selective download needs of researchers, the dataset is organized into separate Premium, Standard, and Basic directories whose package files do not overlap. The larger subsets are therefore unions of directories: to use the WenetSpeech4TTS Basic subset, download all package files in the Premium, Standard, and Basic directories; to use the WenetSpeech4TTS Standard subset, download all package files in the Premium and Standard directories.
## Dataset Structure
The dataset is structured into directories corresponding to each quality subset, each containing TAR.GZ archives of audio files and their corresponding transcription text files. Checksum files are provided for verification. The structure facilitates easy downloading and usage in research settings.
```
WenetSpeech4TTS
|
|____ Premium
|     |__ Premium_md5check.txt
|     |__ WenetSpeech4TTS_Premium_0.tar.gz
|     |__ WenetSpeech4TTS_Premium_1.tar.gz
|     |__ ...
|
|____ Standard
|     |__ Standard_md5check.txt
|     |__ WenetSpeech4TTS_Standard_0.tar.gz
|     |__ WenetSpeech4TTS_Standard_1.tar.gz
|     |__ ...
|
|____ Basic
|     |__ Basic_md5check.txt
|     |__ WenetSpeech4TTS_Basic_0.tar.gz
|     |__ WenetSpeech4TTS_Basic_1.tar.gz
|     |__ ...
|
|____ Rest
|     |__ Rest_md5check.txt
|     |__ WenetSpeech4TTS_Rest_0.tar.gz
|     |__ WenetSpeech4TTS_Rest_1.tar.gz
|     |__ ...
|
|____ Filelists
|     |__ Premium.lst
|     |__ Standard.lst
|     |__ Basic.lst
|     |__ Rest.lst
|
|____ DNSMOS
|     |__ Premium_DNSMOS.lst
|     |__ Standard_DNSMOS.lst
|     |__ Basic_DNSMOS.lst
|     |__ Rest_DNSMOS.lst
|
|____ Testset
|     |__ ...
|
|____ README.md
```
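After downloading, the archives can be checked against the `*_md5check.txt` file in each subset directory. Below is a minimal verification sketch in Python; it assumes the common `md5sum`-style layout of `<digest>  <filename>` per line, so adjust the parsing if the actual checksum files use a different format:

```python
import hashlib
from pathlib import Path

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file in streaming fashion."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_subset(subset_dir):
    """Check every archive listed in <Subset>_md5check.txt against its digest.

    Assumes each line reads '<md5>  <filename>' (md5sum-style layout).
    Returns True only if all listed files match.
    """
    subset = Path(subset_dir)
    check_file = subset / f"{subset.name}_md5check.txt"
    ok = True
    for line in check_file.read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split()[:2]
        if md5sum(subset / name) != expected:
            print(f"MISMATCH: {name}")
            ok = False
    return ok
```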
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
## Data format
Audio file (.wav):
```
Channels: 1
Sample Rate: 16000
Sample Encoding: 16-bit Signed Integer PCM
```
Each text file (.txt) includes the transcript corresponding to the speech and the timestamp of each word in the transcript:
```
<Utt_id>\t<Text>\n
\t<Timestamps>
```
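The two-line layout above can be read with a few lines of Python. This sketch keeps the timestamp payload as a raw string, since only its position (second line, tab-prefixed) is specified here:

```python
def read_transcript(txt_path):
    """Parse one transcript .txt file into (utt_id, text, timestamps).

    The timestamp payload is returned as a raw string because its internal
    layout is not spelled out above; split it further as needed.
    """
    with open(txt_path, encoding="utf-8") as f:
        first = f.readline().rstrip("\n")   # "<Utt_id>\t<Text>"
        second = f.readline().rstrip("\n")  # "\t<Timestamps>"
    utt_id, text = first.split("\t", 1)
    return utt_id, text, second.lstrip("\t")
```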
The filelist of each set contains the relative paths of the wav and txt files for all utterances in the set:
```
<Utt_id1>\t<wavpath1>\t<txtpath1>\n
<Utt_id2>\t<wavpath2>\t<txtpath2>\n
<Utt_id3>\t<wavpath3>\t<txtpath3>\n
...
```
The DNSMOS list of each set contains the DNSMOS P.808 score of every utterance in the set. All scores are floored to one decimal place:
```
<Utt_id1>\t<score1>\n
<Utt_id2>\t<score2>\n
<Utt_id3>\t<score3>\n
...
```
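Combining the two list formats, you can re-derive a score-based selection of your own (for example, a Premium-style cut at a DNSMOS threshold of 4.0). The sketch below assumes plain tab-separated files exactly as described above; the file paths passed in are illustrative:

```python
def load_tsv(path):
    """Read a tab-separated .lst file into {utt_id: [remaining columns]}."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if parts[0]:
                table[parts[0]] = parts[1:]
    return table

def select_by_dnsmos(filelist_path, dnsmos_path, threshold=4.0):
    """Return (utt_id, wav_path, txt_path) for utterances scoring >= threshold."""
    files = load_tsv(filelist_path)
    scores = load_tsv(dnsmos_path)
    return [
        (utt, cols[0], cols[1])
        for utt, cols in files.items()
        if utt in scores and float(scores[utt][0]) >= threshold
    ]
```

With the default `threshold=4.0`, this mirrors the Premium cut in the subsets table above.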
## Uses
The audio-text pairs in each .tar.gz package are organized as follows:
```
WenetSpeech4TTS_Premium_0
|
|____ wavs
|     |__ X0000000021_240514196_S00041.wav
|     |__ X0000000028_244085533_S00092-S00094.wav
|     |__ X0000000030_245144288_S00041.wav
|     |__ ...
|
|____ txts
|     |__ X0000000021_240514196_S00041.txt
|     |__ X0000000028_244085533_S00092-S00094.txt
|     |__ X0000000030_245144288_S00041.txt
|     |__ ...
```
You can directly download the .tar.gz files and the filelists, or use `datasets.load_dataset()` to access the dataset.
> In our experiments, we used the `SoX` program to normalize the audio volume and found that it helped improve synthesis quality, so we recommend that researchers perform volume normalization on the audio data before using it to train a model.
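For illustration, here is a stdlib-only Python sketch of simple peak normalization for the 16-bit mono PCM files in this corpus. It is an approximation of what a SoX normalization pass does, not necessarily the exact processing used in the paper's experiments, and it assumes a little-endian host (WAV sample data is little-endian):

```python
import wave
import array

def peak_normalize(in_path, out_path, target_peak=0.95):
    """Scale a 16-bit mono PCM WAV so its peak reaches target_peak of full scale."""
    with wave.open(in_path, "rb") as r:
        params = r.getparams()
        assert params.sampwidth == 2, "expects 16-bit PCM"
        samples = array.array("h", r.readframes(params.nframes))
    # Guard against silent files; avoid dividing by zero.
    peak = max(1, max(abs(s) for s in samples))
    gain = target_peak * 32767 / peak
    scaled = array.array("h", (
        max(-32768, min(32767, round(s * gain))) for s in samples
    ))
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(scaled.tobytes())
```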
### Direct Use
``` python
from datasets import load_dataset

# Load the Premium subset
dataset = load_dataset("Wenetspeech4TTS/WenetSpeech4TTS", split="train.Premium")
```
## Dataset Creation
### Curation Rationale
WenetSpeech4TTS enhances the WenetSpeech dataset by improving audio quality and transcription accuracy to better serve TTS research and applications.
### Source Data
The original WenetSpeech data, sourced from YouTube and podcasts, was carefully processed to create WenetSpeech4TTS.
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
From WenetSpeech to WenetSpeech4TTS, we designed an automatic pipeline with multiple processing steps. Specifically, we refined WenetSpeech for TTS by merging segments based on speaker similarity and pause duration, and by expanding segment boundaries to prevent truncated words. Audio quality was enhanced with a denoising model, followed by quality scoring. Furthermore, a speaker diarization system clustered segments from the same speaker, while a more advanced ASR system provided more accurate transcriptions.
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
#### Annotation process
We use the open-source, industrial-grade [Paraformer-large](https://www.modelscope.cn/models/iic/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) system to obtain text transcriptions for the speech segments; this model achieved a 6.74% character error rate (CER) on the WenetSpeech "Test Net" set.
### Recommendations
<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
## Dataset Card Authors
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
## Terms of Access
The WenetSpeech4TTS dataset, derived from the open-source WenetSpeech dataset, is available for download for non-commercial purposes under a Creative Commons Attribution 4.0 International License. We do not own the copyright of the audios: the copyright remains with the original owners of the video or audio, and the public URL is provided in WenetSpeech for the original video or audio.
Terms of Access: The Researcher has requested permission to use the WenetSpeech4TTS database. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. The authors make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the authors of WenetSpeech4TTS, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. The authors reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.