system HF staff committed on
Commit c385d9f (0 parents)

Update files from the datasets library (from 1.12.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +178 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. vivos.py +123 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,178 @@
+ ---
+ pretty_name: VIVOS
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ - expert-generated
+ languages:
+ - vi
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - speech-processing
+ task_ids:
+ - automatic-speech-recognition
+ ---
+ 
+ # Dataset Card for VIVOS
+ 
+ ## Table of Contents
+ - [Dataset Card for VIVOS](#dataset-card-for-vivos)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** https://ailab.hcmus.edu.vn/vivos
+ - **Repository:** [Needs More Information]
+ - **Paper:** [A non-expert Kaldi recipe for Vietnamese Speech Recognition System](https://ailab.hcmus.edu.vn/assets/WLSI3_2016_Luong_non_expert.pdf)
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [AILAB](mailto:ailab@hcmus.edu.vn)
+ 
+ ### Dataset Summary
+ 
+ VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for the Vietnamese Automatic Speech Recognition task.
+ 
+ The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan.
+ 
+ We publish this corpus in the hope of attracting more scientists to work on Vietnamese speech recognition problems.
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ [Needs More Information]
+ 
+ ### Languages
+ 
+ Vietnamese
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ A typical data point comprises the path to the audio file, called `path`, and its transcription, called `sentence`. An identifier for the speaker who made the recording, `speaker_id`, is also provided.
+ 
+ ```
+ {'speaker_id': 'VIVOSSPK01',
+  'path': '/home/admin/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/vivos/train/waves/VIVOSSPK01/VIVOSSPK01_R001.wav',
+  'sentence': 'KHÁCH SẠN'}
+ ```
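+ 
+ Such a data point can be loaded and inspected with the `datasets` library. A minimal sketch (the first call downloads and extracts the ~1.5 GB archive; decoding the waveform assumes the `soundfile` package is installed):
+ 
+ ```python
+ from datasets import load_dataset
+ import soundfile as sf  # assumed available for reading the WAV files
+ 
+ vivos = load_dataset("vivos")
+ sample = vivos["train"][0]
+ print(sample["sentence"])  # e.g. 'KHÁCH SẠN'
+ 
+ # Decode the referenced WAV file into a NumPy array plus its sampling rate.
+ waveform, sampling_rate = sf.read(sample["path"])
+ ```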
+ 
+ ### Data Fields
+ 
+ - `speaker_id`: an ID identifying which speaker (voice) made the recording
+ - `path`: the path to the audio file
+ - `sentence`: the sentence the speaker was prompted to read
+ 
+ ### Data Splits
+ 
+ The speech material has been subdivided into train and test portions.
+ 
+ Speech was recorded in a quiet environment with a high-quality microphone; speakers were asked to read one sentence at a time.
+ 
+ |                  | Train | Test  |
+ | ---------------- | ----- | ----- |
+ | Speakers         | 46    | 19    |
+ | Utterances       | 11660 | 760   |
+ | Duration (hh:mm) | 14:55 | 00:45 |
+ | Unique Syllables | 4617  | 1692  |
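+ 
+ The utterance and speaker counts above can be double-checked from the loaded splits (a small sketch, reusing the `vivos` object from the example above):
+ 
+ ```python
+ num_train_speakers = len({ex["speaker_id"] for ex in vivos["train"]})
+ print(len(vivos["train"]), len(vivos["test"]), num_train_speakers)  # 11660 760 46
+ ```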
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ [Needs More Information]
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [Needs More Information]
+ 
+ #### Who are the source language producers?
+ 
+ [Needs More Information]
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [Needs More Information]
+ 
+ #### Who are the annotators?
+ 
+ [Needs More Information]
+ 
+ ### Personal and Sensitive Information
+ 
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the VIVOS dataset.
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ The dataset was initially prepared by AILAB, a computer science lab of VNUHCM - University of Science.
+ 
+ ### Licensing Information
+ 
+ Creative Commons Attribution NonCommercial ShareAlike v4.0 (CC BY-NC-SA 4.0)
+ 
+ ### Citation Information
+ 
+ ```
+ @InProceedings{vivos:2016,
+ address = {Ho Chi Minh, Vietnam},
+ title = {VIVOS: 15 hours of recording speech prepared for Vietnamese Automatic Speech Recognition},
+ author = {Prof. Vu Hai Quan},
+ year = {2016}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ Thanks to [@binh234](https://github.com/binh234) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for\nthe Vietnamese Automatic Speech Recognition task.\nThe corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan.\nWe publish this corpus in the hope of attracting more scientists to work on Vietnamese speech recognition problems.\n", "citation": "@InProceedings{vivos:2016,\naddress = {Ho Chi Minh, Vietnam},\ntitle = {VIVOS: 15 hours of recording speech prepared for Vietnamese Automatic Speech Recognition},\nauthor = {Prof. Vu Hai Quan},\nyear = {2016}\n}\n", "homepage": "https://ailab.hcmus.edu.vn/vivos", "license": "cc-by-sa-4.0", "features": {"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "vivos_dataset", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3186233, "num_examples": 11660, "dataset_name": "vivos_dataset"}, "test": {"name": "test", "num_bytes": 193258, "num_examples": 760, "dataset_name": "vivos_dataset"}}, "download_checksums": {"https://ailab.hcmus.edu.vn/assets/vivos.tar.gz": {"num_bytes": 1474408300, "checksum": "147477f7a7702cbafc2ee3808d1c142989d0dbc8d9fce8e07d5f329d5119e4ca"}}, "download_size": 1474408300, "post_processing_size": null, "dataset_size": 3379491, "size_in_bytes": 1477787791}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb1e23106618cb63bd75edf5946355b066ad5cbf551937ebce16195a126a4990
+ size 1884
vivos.py ADDED
@@ -0,0 +1,123 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ import os
+ 
+ import datasets
+ 
+ 
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @InProceedings{vivos:2016,
+ address = {Ho Chi Minh, Vietnam},
+ title = {VIVOS: 15 hours of recording speech prepared for Vietnamese Automatic Speech Recognition},
+ author = {Prof. Vu Hai Quan},
+ year = {2016}
+ }
+ """
+ 
+ _DESCRIPTION = """\
+ VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for
+ the Vietnamese Automatic Speech Recognition task.
+ The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan.
+ We publish this corpus in the hope of attracting more scientists to work on Vietnamese speech recognition problems.
+ """
+ 
+ _HOMEPAGE = "https://ailab.hcmus.edu.vn/vivos"
+ 
+ _LICENSE = "cc-by-sa-4.0"
+ 
+ _DATA_URL = "https://ailab.hcmus.edu.vn/assets/vivos.tar.gz"
+ 
+ 
+ class VivosDataset(datasets.GeneratorBasedBuilder):
+     """VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for
+     the Vietnamese Automatic Speech Recognition task."""
+ 
+     VERSION = datasets.Version("1.1.0")
+ 
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+ 
+     # If you need to make complex sub-parts in the datasets with configurable options,
+     # you can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "speaker_id": datasets.Value("string"),
+                     "path": datasets.Value("string"),
+                     "sentence": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
+ 
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
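+         # Expected layout after extraction (inferred from the path handling below):
+         #   vivos/
+         #     train/  prompts.txt + waves/<SPEAKER_ID>/<UTTERANCE_ID>.wav
+         #     test/   prompts.txt + waves/<SPEAKER_ID>/<UTTERANCE_ID>.wav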
+         dl_path = dl_manager.download_and_extract(_DATA_URL)
+         data_dir = os.path.join(dl_path, "vivos")
+         train_dir = os.path.join(data_dir, "train")
+         test_dir = os.path.join(data_dir, "test")
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(train_dir, "prompts.txt"),
+                     "path_to_clips": os.path.join(train_dir, "waves"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(test_dir, "prompts.txt"),
+                     "path_to_clips": os.path.join(test_dir, "waves"),
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(
+         self,
+         filepath,
+         path_to_clips,  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+         # The `key` is here for legacy reasons (tfds) and is not important in itself.
+ 
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
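+                 # Each line of prompts.txt has the form "<UTTERANCE_ID> <TRANSCRIPT>",
+                 # e.g. "VIVOSSPK01_R001 KHÁCH SẠN"; the speaker ID is the portion of
+                 # the utterance ID before the underscore.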
+                 data = row.strip().split(" ", 1)
+                 speaker_id = data[0].split("_")[0]
+                 yield id_, {
+                     "speaker_id": speaker_id,
+                     "path": os.path.join(path_to_clips, speaker_id, data[0] + ".wav"),
+                     "sentence": data[1],
+                 }