Bharat Ramanathan committed on
Commit
35548f7
1 Parent(s): d054971

add loading script and readme

Files changed (2)
  1. README.md +145 -0
  2. malayalam_asr_corpus.py +163 -0
README.md ADDED
@@ -0,0 +1,145 @@
+ ---
+ annotations_creators:
+ - found
+ language:
+ - ml
+ language_creators:
+ - found
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: Malayalam ASR Corpus
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|common_voice
+ - extended|openslr
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ ---
+
+ # Dataset Card for Malayalam ASR Corpus
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The corpus contains roughly 10 hours of transcribed Malayalam speech, collected from the Common Voice 11, FLEURS, OpenSLR 63, and UCLA corpora and filtered to clips between 3 and 30 seconds in duration. The transcripts have been de-duplicated using exact-match deduplication.
+
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`: the dataset is intended for training and evaluating Malayalam speech recognition systems.
+
+ ### Languages
+
+ Malayalam (`ml`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `audio`: the audio clip, decoded at a 16 kHz sampling rate
+ - `path`: the path to the audio file
+ - `sentence`: the transcript of the audio clip
+ - `length`: the duration of the clip in seconds (clips range from 3 to 30 seconds)
+
+ ### Data Splits
+
+ The dataset is split into `train` and `test` sets.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The corpus was assembled from existing public corpora (Common Voice 11, FLEURS, OpenSLR 63, and UCLA) and filtered for clip durations between 3 and 30 seconds.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ CC BY 4.0.
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
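The card's Data Instances section is still open; going by the features the loading script declares (`audio`, `path`, `sentence`, `length`), a single record in the `data/*.jsonl` metadata files can be expected to look roughly like the following. The sample values below are hypothetical, not taken from the dataset:

```python
import json

# One hypothetical line from data/train.jsonl; the field names mirror the
# features declared in malayalam_asr_corpus.py, but the values are invented.
line = '{"path": "train/clip_0001.wav", "sentence": "(Malayalam transcript)", "length": 4.2}'

record = json.loads(line)
print(record["path"])    # relative path of the clip inside train.tar.gz
print(record["length"])  # clip duration in seconds (corpus is filtered to 3-30 s)
```

The `audio` feature itself is not stored in the JSONL; the loading script attaches the raw bytes from the tar archives at generation time.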
malayalam_asr_corpus.py ADDED
@@ -0,0 +1,163 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Filtered Malayalam ASR corpus collected from the Common Voice 11, FLEURS, OpenSLR 63, and UCLA corpora, filtered for durations between 3 and 30 seconds."""
+
+
+ import json
+ import os
+
+ import datasets
+
+ _CITATION = """\
+ @misc{https://doi.org/10.48550/arxiv.2211.09536,
+   doi = {10.48550/ARXIV.2211.09536},
+   url = {https://arxiv.org/abs/2211.09536},
+   author = {Kumar, Gokul Karthik and S, Praveen and Kumar, Pratyush and Khapra, Mitesh M. and Nandakumar, Karthik},
+   keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+   title = {Towards Building Text-To-Speech Systems for the Next Billion Users},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {arXiv.org perpetual, non-exclusive license}
+ }
+
+ @inproceedings{commonvoice:2020,
+   author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
+   title = {Common Voice: A Massively-Multilingual Speech Corpus},
+   booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+   pages = {4211--4215},
+   year = 2020
+ }
+
+ @misc{https://doi.org/10.48550/arxiv.2205.12446,
+   doi = {10.48550/ARXIV.2205.12446},
+   url = {https://arxiv.org/abs/2205.12446},
+   author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
+   keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+   title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution 4.0 International}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The corpus contains roughly 10 hours of audio and transcripts in the Malayalam language. The transcripts have been de-duplicated using exact-match deduplication.
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = "https://creativecommons.org/licenses/by/4.0/"
+
+
+ _METADATA_URLS = {
+     "train": "data/train.jsonl",
+     "test": "data/test.jsonl",
+ }
+ _URLS = {
+     "train": "data/train.tar.gz",
+     "test": "data/test.tar.gz",
+ }
+
+
+ class MalayalamASRCorpus(datasets.GeneratorBasedBuilder):
+     """Transcribed speech corpus for training ASR systems for the Malayalam language."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio": datasets.Audio(sampling_rate=16_000),
+                 "path": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "length": datasets.Value("float"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             # Supervised mapping from the audio input to its transcript;
+             # both keys must name declared features.
+             supervised_keys=("audio", "sentence"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         metadata_paths = dl_manager.download(_METADATA_URLS)
+         train_archive = dl_manager.download(_URLS["train"])
+         test_archive = dl_manager.download(_URLS["test"])
+         local_extracted_train_archive = dl_manager.extract(train_archive) if not dl_manager.is_streaming else None
+         local_extracted_test_archive = dl_manager.extract(test_archive) if not dl_manager.is_streaming else None
+         train_dir = "train"
+         test_dir = "test"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "metadata_path": metadata_paths["train"],
+                     "local_extracted_archive": local_extracted_train_archive,
+                     "path_to_clips": train_dir,
+                     "audio_files": dl_manager.iter_archive(train_archive),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "metadata_path": metadata_paths["test"],
+                     "local_extracted_archive": local_extracted_test_archive,
+                     "path_to_clips": test_dir,
+                     "audio_files": dl_manager.iter_archive(test_archive),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, metadata_path, local_extracted_archive, path_to_clips, audio_files):
+         """Yields examples as (key, example) tuples."""
+         # First pass: index the JSONL metadata by relative audio path.
+         examples = {}
+         with open(metadata_path, encoding="utf-8") as f:
+             for row in f:
+                 data = json.loads(row)
+                 examples[data["path"]] = data
+         # Second pass: stream the tar archive and attach the audio bytes
+         # to each record whose path appears in the metadata.
+         inside_clips_dir = False
+         id_ = 0
+         for path, f in audio_files:
+             if path.startswith(path_to_clips):
+                 inside_clips_dir = True
+                 if path in examples:
+                     result = examples[path]
+                     path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
+                     result["audio"] = {"path": path, "bytes": f.read()}
+                     result["path"] = path
+                     yield id_, result
+                     id_ += 1
+             elif inside_clips_dir:
+                 break
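`_generate_examples` above performs a two-pass join: it first indexes the JSONL metadata by relative audio path, then streams the tar archive and attaches raw bytes to each matching record. A minimal, self-contained sketch of that pattern, where a plain list of `(path, file-like)` pairs stands in for `dl_manager.iter_archive` and the file names and bytes are invented:

```python
import io
import json

# Metadata keyed by relative path, as _generate_examples builds it.
metadata_lines = [
    '{"path": "train/a.wav", "sentence": "first", "length": 3.5}',
    '{"path": "train/b.wav", "sentence": "second", "length": 5.0}',
]
examples = {}
for line in metadata_lines:
    data = json.loads(line)
    examples[data["path"]] = data

# Stand-in for dl_manager.iter_archive: yields (path, file-like) pairs.
archive = [
    ("train/a.wav", io.BytesIO(b"fake-audio-a")),
    ("train/b.wav", io.BytesIO(b"fake-audio-b")),
]

results = []
for idx, (path, f) in enumerate(archive):
    if path in examples:
        result = examples[path]
        # Attach the raw bytes, mirroring the loading script's join step.
        result["audio"] = {"path": path, "bytes": f.read()}
        results.append((idx, result))

print(len(results))  # 2
```

Streaming the archive this way avoids extracting it to disk, which is why the script only calls `dl_manager.extract` when not in streaming mode.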