---
license: cc-by-4.0
task_categories:
- text-to-speech
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: dev
data_files:
- split: dev.clean
path: "data/dev.clean/dev.clean*.parquet"
- config_name: clean
data_files:
- split: dev.clean
path: "data/dev.clean/dev.clean*.parquet"
- split: test.clean
path: "data/test.clean/test.clean*.parquet"
- split: train.clean.100
path: "data/train.clean.100/train.clean.100*.parquet"
- split: train.clean.360
path: "data/train.clean.360/train.clean.360*.parquet"
- config_name: other
data_files:
- split: dev.other
path: "data/dev.other/dev.other*.parquet"
- split: test.other
path: "data/test.other/test.other*.parquet"
- split: train.other.500
path: "data/train.other.500/train.other.500*.parquet"
- config_name: all
data_files:
- split: dev.clean
path: "data/dev.clean/dev.clean*.parquet"
- split: dev.other
path: "data/dev.other/dev.other*.parquet"
- split: test.clean
path: "data/test.clean/test.clean*.parquet"
- split: test.other
path: "data/test.other/test.other*.parquet"
- split: train.clean.100
path: "data/train.clean.100/train.clean.100*.parquet"
- split: train.clean.360
path: "data/train.clean.360/train.clean.360*.parquet"
- split: train.other.500
path: "data/train.other.500/train.other.500*.parquet"
---
# Dataset Card for LibriTTS
LibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at a 24 kHz sampling rate,
prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is
designed for TTS research. It is derived from the original materials (MP3 audio files from LibriVox and text files
from Project Gutenberg) of the LibriSpeech corpus.
## Overview
This is the LibriTTS dataset, adapted for the `datasets` library.
## Usage
### Splits
There are 7 splits (dots replace dashes from the original dataset to comply with Hugging Face naming requirements):
- dev.clean
- dev.other
- test.clean
- test.other
- train.clean.100
- train.clean.360
- train.other.500
### Configurations
There are 4 configurations, each of which limits the splits that `load_dataset()` will download.
The default configuration is "all".
- "dev": only the "dev.clean" split (good for testing the dataset quickly)
- "clean": contains only the "clean" splits
- "other": contains only the "other" splits
- "all": contains all splits
### Example
Loading the `clean` config with only the `train.clean.100` split:
```
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", "clean", split="train.clean.100")
```
Streaming is also supported:
```
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", streaming=True)
```
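A minimal sketch of consuming the stream (here with the `dev` config to keep it small; streaming without `split` returns an `IterableDatasetDict` keyed by split name):
```
from datasets import load_dataset

# Stream examples without downloading the full corpus
ds = load_dataset("blabble-io/libritts", "dev", streaming=True)

# Take the first example from the dev.clean split
first = next(iter(ds["dev.clean"]))
print(first["text_normalized"])
```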
### Columns
```
{
"audio": datasets.Audio(sampling_rate=24_000),
"text_normalized": datasets.Value("string"),
"text_original": datasets.Value("string"),
"speaker_id": datasets.Value("string"),
"path": datasets.Value("string"),
"chapter_id": datasets.Value("string"),
"id": datasets.Value("string"),
}
```
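Each row exposes these columns as plain Python values; a minimal sketch of reading one example (the `dev` config is used here only to keep the download small):
```
from datasets import load_dataset

ds = load_dataset("blabble-io/libritts", "dev", split="dev.clean")

row = ds[0]
samples = row["audio"]["array"]      # 1-D float array of audio samples
sr = row["audio"]["sampling_rate"]   # 24000
print(row["speaker_id"], row["id"], f"{len(samples) / sr:.2f}s")
```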
### Example Row
```
{
'audio': {
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav',
'array': ...,
'sampling_rate': 24000
},
'text_normalized': 'How quickly he disappeared!"',
'text_original': 'How quickly he disappeared!"',
'speaker_id': '3081',
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav',
'chapter_id': '166546',
'id': '3081_166546_000028_000002'
}
```
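The audio is stored at 24 kHz. If a model expects a different rate, `datasets` can resample on access via `cast_column`; a sketch, with 16 kHz as an assumed target rate:
```
from datasets import load_dataset, Audio

ds = load_dataset("blabble-io/libritts", "dev", split="dev.clean")

# Re-decode audio at 16 kHz lazily on access (the target rate is just an example)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["sampling_rate"])  # 16000
```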
## Dataset Details
### Dataset Description
- **License:** CC BY 4.0
### Dataset Sources
- **Homepage:** https://www.openslr.org/60/
- **Paper:** https://arxiv.org/abs/1904.02882
## Citation
```
@ARTICLE{Zen2019-kz,
title = "{LibriTTS}: A corpus derived from {LibriSpeech} for
text-to-speech",
author = "Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and
Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui",
abstract = "This paper introduces a new speech corpus called
``LibriTTS'' designed for text-to-speech use. It is derived
from the original audio and text materials of the
LibriSpeech corpus, which has been used for training and
evaluating automatic speech recognition systems. The new
corpus inherits desired properties of the LibriSpeech corpus
while addressing a number of issues which make LibriSpeech
less than ideal for text-to-speech work. The released corpus
consists of 585 hours of speech data at 24kHz sampling rate
from 2,456 speakers and the corresponding texts.
Experimental results show that neural end-to-end TTS models
trained from the LibriTTS corpus achieved above 4.0 in mean
opinion scores in naturalness in five out of six evaluation
speakers. The corpus is freely available for download from
http://www.openslr.org/60/.",
month = apr,
year = 2019,
copyright = "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
archivePrefix = "arXiv",
primaryClass = "cs.SD",
eprint = "1904.02882"
}
``` |