Bharat Ramanathan committed on
Commit ec1b0fb
1 Parent(s): 36ca303

add data and loading scripts
.gitattributes CHANGED
@@ -52,3 +52,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
+ data/test.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/train.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/test.tar.gz filter=lfs diff=lfs merge=lfs -text
+ data/train.tar.gz filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,145 @@
+ ---
+ annotations_creators:
+ - found
+ language:
+ - ta
+ language_creators:
+ - found
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: Tamil ASR Corpus
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - extended|common_voice
+ - extended|openslr
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ ---
+
+ # Dataset Card for Tamil ASR Corpus
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The corpus contains roughly 1000 hours of transcribed Tamil speech collected from the Common Voice 11, FLEURS, OpenSLR 65, OpenSLR 127 and UCLA corpora, filtered for utterance durations between 5 and 25 seconds. The transcripts have been de-duplicated using exact-match deduplication.
+
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`: the dataset can be used to train and evaluate models for automatic speech recognition in Tamil.
+
+ ### Languages
+
+ The dataset contains audio and transcripts in Tamil (`ta`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `audio`: the audio clip, decoded at a 16 kHz sampling rate
+ - `path`: the path to the audio file
+ - `sentence`: the transcript of the clip
+ - `length`: the duration of the clip in seconds
+
+ ### Data Splits
+
+ The corpus is split into train and test sets.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset is released under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
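
Since most card sections are still placeholders, a minimal usage sketch may help. It assumes the dataset lives at the Hub path `parambharat/tamil_asr_corpus` (inferred from the contributor's handle, so verify it) and uses streaming so the multi-GB train archive is not downloaded up front:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset's actual path on the Hub.
ds = load_dataset("parambharat/tamil_asr_corpus", split="train", streaming=True)

sample = next(iter(ds))
print(sample["sentence"], sample["length"])
# sample["audio"] holds the decoded waveform:
# {"path": ..., "array": <numpy array>, "sampling_rate": 16000}
```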
data/test.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25455961c8109f1a873d2315c3caf727c10a8dc74fcfe83e56f047425a5ae4d4
+ size 3070013
data/test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dcbbe455da01b6cefb01b557294e185fe9bfd7bffddfbef386563f18c6d9a351
+ size 88206908
data/train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fff6fc5c1ba8f9c4ac151f3194b5310f9ec0e10c77a4b6f69ee965baedc9b61
+ size 343343452
data/train.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a27977f23db9ac8faf0203f1e30f8b017ead0105c711d27a80d6e1f91044fcc
+ size 7302647151
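
The four `data/` entries above are Git LFS pointer files: the repository itself stores only these `version`/`oid`/`size` stanzas, while the payloads (roughly 3 MB and 343 MB of JSONL metadata, 88 MB and 7.3 GB of audio archives) live in LFS storage and are resolved on download. A sketch of fetching one payload directly, assuming the `huggingface_hub` client and the same hypothetical repo id as above:

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and returns a cached local copy of the real file.
# repo_id is an assumption; use the dataset's actual Hub path.
local_path = hf_hub_download(
    repo_id="parambharat/tamil_asr_corpus",
    filename="data/test.jsonl",
    repo_type="dataset",
)
print(local_path)
```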
tamil_asr_corpus.py ADDED
@@ -0,0 +1,195 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Tamil ASR corpus collected from the Common Voice 11, FLEURS, OpenSLR 65, OpenSLR 127 and UCLA corpora, filtered for utterance durations between 5 and 25 seconds."""
+
+
+ import json
+ import os
+
+ import datasets
+
+ _CITATION = """\
+ @misc{mile_1,
+     doi = {10.48550/ARXIV.2207.13331},
+     url = {https://arxiv.org/abs/2207.13331},
+     author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
+     title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada},
+     publisher = {arXiv},
+     year = {2022},
+ }
+
+ @misc{mile_2,
+     doi = {10.48550/ARXIV.2207.13333},
+     url = {https://arxiv.org/abs/2207.13333},
+     author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
+     title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada},
+     publisher = {arXiv},
+     year = {2022},
+ }
+
+ @inproceedings{he-etal-2020-open,
+     title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
+     author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
+     booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
+     month = may,
+     year = {2020},
+     address = {Marseille, France},
+     publisher = {European Language Resources Association (ELRA)},
+     pages = {6494--6503},
+     url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
+     isbn = {979-10-95546-34-4},
+ }
+
+ @misc{https://doi.org/10.48550/arxiv.2211.09536,
+     doi = {10.48550/ARXIV.2211.09536},
+     url = {https://arxiv.org/abs/2211.09536},
+     author = {Kumar, Gokul Karthik and S, Praveen and Kumar, Pratyush and Khapra, Mitesh M. and Nandakumar, Karthik},
+     keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+     title = {Towards Building Text-To-Speech Systems for the Next Billion Users},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {arXiv.org perpetual, non-exclusive license}
+ }
+
+ @inproceedings{commonvoice:2020,
+     author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
+     title = {Common Voice: A Massively-Multilingual Speech Corpus},
+     booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+     pages = {4211--4215},
+     year = 2020
+ }
+
+ @misc{https://doi.org/10.48550/arxiv.2205.12446,
+     doi = {10.48550/ARXIV.2205.12446},
+     url = {https://arxiv.org/abs/2205.12446},
+     author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
+     keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
+     title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The corpus contains roughly 1000 hours of audio and transcripts in the Tamil language. The transcripts have been de-duplicated using exact-match deduplication.
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = "https://creativecommons.org/licenses/by/4.0/"
+
+
+ _METADATA_URLS = {
+     "train": "data/train.jsonl",
+     "test": "data/test.jsonl",
+ }
+ _URLS = {
+     "train": "data/train.tar.gz",
+     "test": "data/test.tar.gz",
+ }
+
+
+ class TamilASRCorpus(datasets.GeneratorBasedBuilder):
+     """Tamil ASR Corpus contains a transcribed speech corpus for training ASR systems for the Tamil language."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio": datasets.Audio(sampling_rate=16_000),
+                 "path": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "length": datasets.Value("float"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=("audio", "sentence"),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         metadata_paths = dl_manager.download(_METADATA_URLS)
+         train_archive = dl_manager.download(_URLS["train"])
+         test_archive = dl_manager.download(_URLS["test"])
+         # Extract locally only when not streaming; in streaming mode members
+         # are read directly from the archives via iter_archive.
+         local_extracted_train_archive = dl_manager.extract(train_archive) if not dl_manager.is_streaming else None
+         local_extracted_test_archive = dl_manager.extract(test_archive) if not dl_manager.is_streaming else None
+         train_dir = "train"
+         test_dir = "test"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "metadata_path": metadata_paths["train"],
+                     "local_extracted_archive": local_extracted_train_archive,
+                     "path_to_clips": train_dir,
+                     "audio_files": dl_manager.iter_archive(train_archive),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "metadata_path": metadata_paths["test"],
+                     "local_extracted_archive": local_extracted_test_archive,
+                     "path_to_clips": test_dir,
+                     "audio_files": dl_manager.iter_archive(test_archive),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, metadata_path, local_extracted_archive, path_to_clips, audio_files):
+         """Yields examples as (key, example) tuples."""
+         # Index the JSONL metadata by the relative audio path inside the archive.
+         examples = {}
+         with open(metadata_path, encoding="utf-8") as f:
+             for row in f:
+                 data = json.loads(row)
+                 examples[data["path"]] = data
+         inside_clips_dir = False
+         id_ = 0
+         for path, f in audio_files:
+             if path.startswith(path_to_clips):
+                 inside_clips_dir = True
+                 if path in examples:
+                     result = examples[path]
+                     path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
+                     result["audio"] = {"path": path, "bytes": f.read()}
+                     result["path"] = path
+                     yield id_, result
+                     id_ += 1
+             elif inside_clips_dir:
+                 # Archive members are grouped by directory, so once we have
+                 # left the clips directory there are no further matches.
+                 break
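
`_generate_examples` first indexes the JSONL metadata by audio path and then joins against the member paths yielded by `iter_archive`, so each metadata line must carry at least `path`, `sentence` and `length` keys, with `path` matching a file inside the split's tarball. A sketch of what one such line presumably looks like (the values are illustrative, since the real JSONL files are LFS blobs not shown in this diff):

```python
import json

# Hypothetical metadata record; field names follow the loading script,
# the values are made up for illustration.
line = '{"path": "train/utt_00001.wav", "sentence": "...", "length": 7.4}'

record = json.loads(line)
assert {"path", "sentence", "length"} <= record.keys()
# The script keys its lookup table on record["path"], which must equal the
# tar member name of the corresponding audio clip.
```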