---
license: cc-by-4.0
task_categories:
- text-to-speech
language:
- en
size_categories:
- 10K<n<100K

configs:
- config_name: dev
  data_files:
  - split: dev.clean
    path: "data/dev.clean/dev.clean*.parquet"

- config_name: clean
  data_files:
  - split: dev.clean
    path: "data/dev.clean/dev.clean*.parquet"
  - split: test.clean
    path: "data/test.clean/test.clean*.parquet"
  - split: train.clean.100
    path: "data/train.clean.100/train.clean.100*.parquet"
  - split: train.clean.360
    path: "data/train.clean.360/train.clean.360*.parquet"

- config_name: other
  data_files:
  - split: dev.other
    path: "data/dev.other/dev.other*.parquet"
  - split: test.other
    path: "data/test.other/test.other*.parquet"
  - split: train.other.500
    path: "data/train.other.500/train.other.500*.parquet"

- config_name: all
  data_files:
  - split: dev.clean
    path: "data/dev.clean/dev.clean*.parquet"
  - split: dev.other
    path: "data/dev.other/dev.other*.parquet"
  - split: test.clean
    path: "data/test.clean/test.clean*.parquet"
  - split: test.other
    path: "data/test.other/test.other*.parquet"
  - split: train.clean.100
    path: "data/train.clean.100/train.clean.100*.parquet"
  - split: train.clean.360
    path: "data/train.clean.360/train.clean.360*.parquet"
  - split: train.other.500
    path: "data/train.other.500/train.other.500*.parquet"
---

# Dataset Card for LibriTTS

<!-- Provide a quick summary of the dataset. -->

LibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at a 24 kHz sampling rate,
prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is
designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files
from Project Gutenberg) of the LibriSpeech corpus.

## Overview

This is the LibriTTS dataset, adapted for the `datasets` library.

## Usage

### Splits

There are 7 splits (dots replace the dashes used in the original dataset's split names, to comply with Hugging Face naming requirements):

- dev.clean
- dev.other
- test.clean
- test.other
- train.clean.100
- train.clean.360
- train.other.500
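
The renaming convention above can be sketched with a small helper; `to_hf_split` is a hypothetical name for illustration, not part of this dataset's tooling:

```python
def to_hf_split(name: str) -> str:
    """Map an original LibriTTS split name (e.g. "train-clean-100")
    to the dot-separated name used by this repository."""
    return name.replace("-", ".")

print(to_hf_split("train-clean-100"))  # train.clean.100
print(to_hf_split("dev-clean"))        # dev.clean
```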

### Configurations

There are 4 configurations, each of which limits which splits the `load_dataset()` function will download.

The default configuration is "all".

- "dev": only the "dev.clean" split (good for testing the dataset quickly)
- "clean": contains only the "clean" splits
- "other": contains only the "other" splits
- "all": contains all splits

### Example

Loading the `clean` config with only the `train.clean.100` split.
```python
from datasets import load_dataset

load_dataset("blabble-io/libritts", "clean", split="train.clean.100")
```

Streaming is also supported.
```python
from datasets import load_dataset

load_dataset("blabble-io/libritts", streaming=True)
```

### Columns

```python
{
    "audio": datasets.Audio(sampling_rate=24_000),
    "text_normalized": datasets.Value("string"),
    "text_original": datasets.Value("string"),
    "speaker_id": datasets.Value("string"),
    "path": datasets.Value("string"),
    "chapter_id": datasets.Value("string"),
    "id": datasets.Value("string"),
}
```
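
Since the `audio` column decodes to a sample array plus its sampling rate, a clip's duration follows directly. The row below is synthetic, mirroring the column layout, for illustration only:

```python
# Duration in seconds = number of samples / sampling rate.
# `row` is a made-up example shaped like a decoded dataset row.
row = {"audio": {"array": [0.0] * 48_000, "sampling_rate": 24_000}}

duration = len(row["audio"]["array"]) / row["audio"]["sampling_rate"]
print(duration)  # 2.0
```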

### Example Row

```python
{
    'audio': {
        'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav',
        'array': ...,
        'sampling_rate': 24000
    },
    'text_normalized': 'How quickly he disappeared!"',
    'text_original': 'How quickly he disappeared!"',
    'speaker_id': '3081',
    'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav',
    'chapter_id': '166546',
    'id': '3081_166546_000028_000002'
}
```
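
The `id` field is underscore-separated: its first two parts match the `speaker_id` and `chapter_id` columns, and the trailing parts appear to be utterance indices (an assumption here). A hypothetical parser, not part of the dataset, might look like:

```python
def parse_utterance_id(utt_id: str):
    """Split a LibriTTS utterance id into speaker id, chapter id,
    and the remaining index parts."""
    speaker_id, chapter_id, *rest = utt_id.split("_")
    return speaker_id, chapter_id, rest

print(parse_utterance_id("3081_166546_000028_000002"))
# ('3081', '166546', ['000028', '000002'])
```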

## Dataset Details

### Dataset Description

- **License:** CC BY 4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** https://www.openslr.org/60/
- **Paper:** https://arxiv.org/abs/1904.02882

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@ARTICLE{Zen2019-kz,
  title         = "{LibriTTS}: A corpus derived from {LibriSpeech} for
                   text-to-speech",
  author        = "Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and
                   Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui",
  abstract      = "This paper introduces a new speech corpus called
                   ``LibriTTS'' designed for text-to-speech use. It is derived
                   from the original audio and text materials of the
                   LibriSpeech corpus, which has been used for training and
                   evaluating automatic speech recognition systems. The new
                   corpus inherits desired properties of the LibriSpeech corpus
                   while addressing a number of issues which make LibriSpeech
                   less than ideal for text-to-speech work. The released corpus
                   consists of 585 hours of speech data at 24kHz sampling rate
                   from 2,456 speakers and the corresponding texts.
                   Experimental results show that neural end-to-end TTS models
                   trained from the LibriTTS corpus achieved above 4.0 in mean
                   opinion scores in naturalness in five out of six evaluation
                   speakers. The corpus is freely available for download from
                   http://www.openslr.org/60/.",
  month         = apr,
  year          = 2019,
  copyright     = "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
  archivePrefix = "arXiv",
  primaryClass  = "cs.SD",
  eprint        = "1904.02882"
}
```