cahya committed on
Commit
1cd4314
1 Parent(s): 068a2db

add source code

README.md CHANGED
@@ -1,3 +1,151 @@
  ---
+ pretty_name: LibriVox Indonesia 1.0
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ language:
+ - ace
+ - bal
+ - bug
+ - id
+ - min
+ - jav
+ - sun
  license: cc
+ multilinguality:
+ - multilingual
+ size_categories:
+   ace:
+   - 1K<n<10K
+   bal:
+   - 1K<n<10K
+   bug:
+   - 1K<n<10K
+   id:
+   - 1K<n<10K
+   min:
+   - 1K<n<10K
+   jav:
+   - 1K<n<10K
+   sun:
+   - 1K<n<10K
+ source_datasets:
+ - librivox
+ task_categories:
+ - speech-processing
+ task_ids:
+ - automatic-speech-recognition
  ---
+ # Dataset Card for LibriVox Indonesia 1.0
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+ - **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
+ - **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
+ - **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
+
+ ### Dataset Summary
+ The LibriVox Indonesia dataset consists of MP3 audio and corresponding text files that we generated from the public
+ domain audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
+ The original LibriVox audiobooks range in duration from a few minutes to a few hours, while each audio
+ file in this speech dataset lasts from a few seconds up to a maximum of 20 seconds.
+
+ We converted the audiobooks into a speech dataset using forced-alignment software that we developed. It supports
+ multiple languages, including low-resource ones such as Acehnese, Balinese, and Minangkabau, and it can be used
+ for other languages without additional work to train the model.
+
+ The dataset currently contains 8 hours of speech in 7 languages from Indonesia. We will add more languages and audio files
+ as we collect them.
+
+ ### Languages
+ ```
+ Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese
+ ```
+
+ ## Dataset Structure
+ ### Data Instances
+ A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
+ `reader` and `language`.
+ ```python
+ {
+     'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
+     'language': 'sun',
+     'reader': '3174',
+     'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
+     'audio': {
+         'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
+         'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
+         'sampling_rate': 44100
+     },
+ }
+ ```
+
+ ### Data Fields
+ `path` (`string`): The path to the audio file.
+
+ `language` (`string`): The language of the audio file.
+
+ `reader` (`string`): The reader ID on LibriVox.
+
+ `sentence` (`string`): The sentence the reader read from the book.
+
+ `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+
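+ The snippet below is a minimal sketch of reading one sample and its decoded audio with the `datasets` library; it assumes this repository id (`indonesian-nlp/librivox-indonesia`) and the default `all` configuration.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the default "all" configuration; a single language code (e.g. "sun") can be used instead.
+ dataset = load_dataset("indonesian-nlp/librivox-indonesia", "all", split="train")
+
+ # Query the sample index before the "audio" column so that only this one file is decoded.
+ sample = dataset[0]
+ print(sample["language"], sample["reader"])
+ print(sample["sentence"])
+ print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
+ ```
+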
+ ### Data Splits
+ The speech material is divided into train and test splits.
+
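+ As a short sketch (again assuming this repository id), each language can be loaded as its own configuration, and both splits are returned when no split is requested:
+
+ ```python
+ from datasets import load_dataset
+
+ # Configuration names follow the language codes listed above, plus "all".
+ dataset = load_dataset("indonesian-nlp/librivox-indonesia", "sun")
+
+ # Without a split argument, a DatasetDict with "train" and "test" is returned.
+ print(dataset["train"].num_rows, dataset["test"].num_rows)
+ ```
+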
+ ## Dataset Creation
+ ### Curation Rationale
+ [Needs More Information]
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ [Needs More Information]
+ #### Who are the source language producers?
+ [Needs More Information]
+ ### Annotations
+ #### Annotation process
+ [Needs More Information]
+ #### Who are the annotators?
+ [Needs More Information]
+ ### Personal and Sensitive Information
+ [More Information Needed]
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [More Information Needed]
+ ### Discussion of Biases
+ [More Information Needed]
+ ### Other Known Limitations
+ [More Information Needed]
+ ## Additional Information
+ ### Dataset Curators
+ [More Information Needed]
+ ### Licensing Information
+ Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
+ ### Citation Information
+ ```
+
+ ```
data/audio_test.tgz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6db6b18ab2183f8da63bec0e1b1093da570bd35d94bf78884a4629b49d09e839
+ size 2800914
data/audio_train.tgz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7074b70fdb63702584170bdd8c50175c70fdd4188e88b3bcbf5ca93ebfe77735
+ size 20763976
data/metadata_test.csv.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae98aa3b9e0cc38953fab8fdfef3357681c3348e0328c80e61e0a2ca48aaf8c5
+ size 1811
data/metadata_train.csv.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f44f0446d69fc159611f9da152d0a9dc2d3a6efd6e1081b5d10bf47813fc1eb3
+ size 9360
languages.py ADDED
@@ -0,0 +1,10 @@
+ LANGUAGES = {
+     'ace': 'Acehnese',
+     'bal': 'Balinese',
+     'bug': 'Buginese',
+     'id': 'Indonesian',
+     'min': 'Minangkabau',
+     'jav': 'Javanese',
+     'sun': 'Sundanese',
+     'all': 'All'
+ }
librivox-indonesia.py ADDED
@@ -0,0 +1,168 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """LibriVox-Indonesia Dataset"""
+
+ import csv
+ import os
+
+ import datasets
+ from datasets.utils.py_utils import size_str
+
+ from .languages import LANGUAGES
+ from .release_stats import STATS
+
+ _CITATION = """\
+ """
+
+ _HOMEPAGE = "https://huggingface.co/indonesian-nlp/librivox-indonesia"
+
+ _LICENSE = "https://creativecommons.org/publicdomain/zero/1.0/"
+
+ _DATA_URL = "https://huggingface.co/datasets/cahya/librivox-indonesia/resolve/main/data"
+
+
+ class LibriVoxIndonesiaConfig(datasets.BuilderConfig):
+     """BuilderConfig for LibriVoxIndonesia."""
+
+     def __init__(self, name, version, **kwargs):
+         self.language = kwargs.pop("language", None)
+         self.release_date = kwargs.pop("release_date", None)
+         self.num_clips = kwargs.pop("num_clips", None)
+         self.num_speakers = kwargs.pop("num_speakers", None)
+         self.validated_hr = kwargs.pop("validated_hr", None)
+         self.total_hr = kwargs.pop("total_hr", None)
+         self.size_bytes = kwargs.pop("size_bytes", None)
+         self.size_human = size_str(self.size_bytes)
+         description = (
+             f"LibriVox-Indonesia speech to text dataset in {self.language} released on {self.release_date}. "
+             f"The dataset comprises {self.validated_hr} hours of transcribed speech data"
+         )
+         super(LibriVoxIndonesiaConfig, self).__init__(
+             name=name,
+             version=datasets.Version(version),
+             description=description,
+             **kwargs,
+         )
+
+
+ class LibriVoxIndonesia(datasets.GeneratorBasedBuilder):
+     DEFAULT_CONFIG_NAME = "all"
+
+     BUILDER_CONFIGS = [
+         LibriVoxIndonesiaConfig(
+             name=lang,
+             version=STATS["version"],
+             language=LANGUAGES[lang],
+             release_date=STATS["date"],
+             num_clips=lang_stats["clips"],
+             num_speakers=lang_stats["users"],
+             total_hr=float(lang_stats["totalHrs"]) if lang_stats["totalHrs"] else None,
+             size_bytes=int(lang_stats["size"]) if lang_stats["size"] else None,
+         )
+         for lang, lang_stats in STATS["locales"].items()
+     ]
+
+     def _info(self):
+         total_languages = len(STATS["locales"])
+         total_hours = self.config.total_hr
+         description = (
+             "LibriVox-Indonesia is a speech dataset generated from LibriVox with only languages from Indonesia. "
+             f"The dataset currently consists of {total_hours} hours of speech "
+             f"in {total_languages} languages, but more voices and languages are always added."
+         )
+         features = datasets.Features(
+             {
+                 "path": datasets.Value("string"),
+                 "language": datasets.Value("string"),
+                 "reader": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "audio": datasets.features.Audio(sampling_rate=44100)
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=description,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+             version=self.config.version,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         dl_manager.download_config.ignore_url_params = True
+         audio_train = dl_manager.download(_DATA_URL + "/audio_train.tgz")
+         local_extracted_archive_train = dl_manager.extract(audio_train) if not dl_manager.is_streaming else None
+         audio_test = dl_manager.download(_DATA_URL + "/audio_test.tgz")
+         local_extracted_archive_test = dl_manager.extract(audio_test) if not dl_manager.is_streaming else None
+         path_to_clips = "librivox-indonesia"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "local_extracted_archive": local_extracted_archive_train,
+                     "audio_files": dl_manager.iter_archive(audio_train),
+                     "metadata_path": dl_manager.download_and_extract(_DATA_URL + "/metadata_train.csv.gz"),
+                     "path_to_clips": path_to_clips,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "local_extracted_archive": local_extracted_archive_test,
+                     "audio_files": dl_manager.iter_archive(audio_test),
+                     "metadata_path": dl_manager.download_and_extract(_DATA_URL + "/metadata_test.csv.gz"),
+                     "path_to_clips": path_to_clips,
+                 },
+             ),
+         ]
+
+     def _generate_examples(
+         self,
+         local_extracted_archive,
+         audio_files,
+         metadata_path,
+         path_to_clips,
+     ):
+         """Yields examples."""
+         print(metadata_path)
+         data_fields = list(self._info().features.keys())
+         metadata = {}
+         with open(metadata_path, "r", encoding="utf-8") as f:
+             reader = csv.DictReader(f)
+             for row in reader:
+                 if self.config.name == "all" or self.config.name == row["language"]:
+                     row["path"] = os.path.join(path_to_clips, row["path"])
+                     # if data is incomplete, fill with empty values
+                     for field in data_fields:
+                         if field not in row:
+                             row[field] = ""
+                     metadata[row["path"]] = row
+         id_ = 0
+         print("example length = %d" % len(metadata))
+         # print(metadata)
+         for path, f in audio_files:
+             print(path)
+             if path in metadata:
+                 result = dict(metadata[path])
+                 # set the audio feature and the path to the extracted file
+                 path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
+                 result["audio"] = {"path": path, "bytes": f.read()}
+                 result["path"] = path
+                 yield id_, result
+                 id_ += 1
release_stats.py ADDED
@@ -0,0 +1,25 @@
+ STATS = {
+     "name": "Librivox-Indonesia",
+     "version": "1.0.0",
+     "date": "2022-09-04",
+     "locales": {
+         "ace": {'reportedSentences': 149, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "bal": {'reportedSentences': 175, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "bug": {'reportedSentences': 142, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "id": {'reportedSentences': 6238, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                'avgDurationSecs': 1, 'totalHrs': 1},
+         "min": {'reportedSentences': 156, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "jav": {'reportedSentences': 801, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "sun": {'reportedSentences': 154, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+         "all": {'reportedSentences': 7815, 'duration': 1, 'clips': 1, 'users': 416, 'size': 1,
+                 'avgDurationSecs': 1, 'totalHrs': 1},
+     },
+     'totalDuration': 1, 'totalHrs': 1
+ }
+