cahya committed on
Commit
230a13a
1 Parent(s): 6e5e939

add fleurs dataset with text normaliser for indonesian

README.md CHANGED
@@ -1,3 +1,310 @@
  ---
- license: mit
2
+ annotations_creators:
3
+ - expert-generated
4
+ - crowdsourced
5
+ - machine-generated
6
+ language_creators:
7
+ - crowdsourced
8
+ - expert-generated
9
+ language:
10
+ - afr
11
+ - amh
12
+ - ara
13
+ - asm
14
+ - ast
15
+ - azj
16
+ - bel
17
+ - ben
18
+ - bos
19
+ - cat
20
+ - ceb
21
+ - cmn
22
+ - ces
23
+ - cym
24
+ - dan
25
+ - deu
26
+ - ell
27
+ - eng
28
+ - spa
29
+ - est
30
+ - fas
31
+ - ful
32
+ - fin
33
+ - tgl
34
+ - fra
35
+ - gle
36
+ - glg
37
+ - guj
38
+ - hau
39
+ - heb
40
+ - hin
41
+ - hrv
42
+ - hun
43
+ - hye
44
+ - ind
45
+ - ibo
46
+ - isl
47
+ - ita
48
+ - jpn
49
+ - jav
50
+ - kat
51
+ - kam
52
+ - kea
53
+ - kaz
54
+ - khm
55
+ - kan
56
+ - kor
57
+ - ckb
58
+ - kir
59
+ - ltz
60
+ - lug
61
+ - lin
62
+ - lao
63
+ - lit
64
+ - luo
65
+ - lav
66
+ - mri
67
+ - mkd
68
+ - mal
69
+ - mon
70
+ - mar
71
+ - msa
72
+ - mlt
73
+ - mya
74
+ - nob
75
+ - npi
76
+ - nld
77
+ - nso
78
+ - nya
79
+ - oci
80
+ - orm
81
+ - ory
82
+ - pan
83
+ - pol
84
+ - pus
85
+ - por
86
+ - ron
87
+ - rus
88
+ - bul
89
+ - snd
90
+ - slk
91
+ - slv
92
+ - sna
93
+ - som
94
+ - srp
95
+ - swe
96
+ - swh
97
+ - tam
98
+ - tel
99
+ - tgk
100
+ - tha
101
+ - tur
102
+ - ukr
103
+ - umb
104
+ - urd
105
+ - uzb
106
+ - vie
107
+ - wol
108
+ - xho
109
+ - yor
110
+ - yue
111
+ - zul
112
+ license:
113
+ - cc-by-4.0
114
+ multilinguality:
115
+ - multilingual
116
+ size_categories:
117
+ - 10K<n<100K
118
+ task_categories:
119
+ - automatic-speech-recognition
120
+ task_ids: []
121
+ pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
+ (XTREME-S) benchmark is designed to evaluate speech representations across languages,
+ tasks, domains and data regimes. It covers 102 languages from 10+ language families,
+ 3 different domains and 4 task families: speech recognition, translation,
+ classification and retrieval.'
126
+ tags:
127
+ - speech-recognition
128
  ---
129
+
130
+ # FLEURS
131
+
132
+ ## Dataset Description
133
+
134
+ - **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
135
+ - **Paper:** [FLEURS: Few-shot Learning Evaluation of
136
+ Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
137
+ - **Total amount of disk used:** ca. 350 GB
138
+
139
+ Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
140
+ We use 2009 n-way parallel sentences from the publicly available FLoRes dev and devtest sets, in 102 languages.
141
+
142
+ Training sets have around 10 hours of supervision. Speakers of the train sets are different from the speakers of the dev/test sets. Multilingual fine-tuning is
+ used and the "unit error rate" (characters, signs) of all languages is averaged (see the short sketch after the list below). Languages and results are also grouped into seven geographical areas:
144
+
145
+ - **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
146
+ - **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
147
+ - **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
148
+ - **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
149
+ - **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
150
+ - **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
151
+ - **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
152
+
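+ As a rough sketch only (not the official evaluation code), the per-language unit error rate can be computed as a character error rate, e.g. with the `evaluate` library (assuming `evaluate` and `jiwer` are installed); averaging this score over all languages gives the multilingual figure:
+ 
+ ```py
+ import evaluate
+ 
+ # hypothetical model outputs and gold transcriptions for one language
+ predictions = ["dit is nog nie bekend nie"]
+ references = ["dit is nog nie huidiglik bekend nie"]
+ 
+ cer = evaluate.load("cer")
+ print(cer.compute(predictions=predictions, references=references))
+ ```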
153
+
154
+ ## Supported Tasks
155
+
156
+ ### 1. Speech Recognition (ASR)
157
+
158
+ ```py
159
+ from datasets import load_dataset
160
+
161
+ fleurs_asr = load_dataset("google/fleurs", "af_za") # for Afrikaans
162
+ # to download all data for multi-lingual fine-tuning uncomment following line
163
+ # fleurs_asr = load_dataset("google/fleurs", "all")
164
+
165
+ # see structure
166
+ print(fleurs_asr)
167
+
168
+ # load audio sample on the fly
169
+ audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
170
+ transcription = fleurs_asr["train"][0]["transcription"] # first transcription
171
+ # use `audio_input` and `transcription` to fine-tune your model for ASR
172
+
173
+ # for analyses see language groups
174
+ all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
175
+ lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
176
+
177
+ all_language_groups[lang_group_id]
178
+ ```
179
+
180
+ ### 2. Language Identification
181
+
182
+ LangID can often reduce to domain classification, but in the case of FLEURS-LangID, recordings are made in a similar setting across languages and the utterances correspond to n-way parallel sentences in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language, and we create a single train/valid/test set for LangID by merging them all.
183
+
184
+ ```py
185
+ from datasets import load_dataset
186
+
187
+ fleurs_langID = load_dataset("google/fleurs", "all") # to download all data
188
+
189
+ # see structure
190
+ print(fleurs_langID)
191
+
192
+ # load audio sample on the fly
193
+ audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
194
+ language_class = fleurs_langID["train"][0]["lang_id"] # first id class
195
+ language = fleurs_langID["train"].features["lang_id"].names[language_class]
196
+
197
+ # use audio_input and language_class to fine-tune your model for audio classification
198
+ ```
199
+
200
+ ### 3. Retrieval
201
+
202
+ Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
203
+
204
+ ```py
205
+ from datasets import load_dataset
206
+
207
+ fleurs_retrieval = load_dataset("google/fleurs", "af_za") # for Afrikaans
208
+ # to download all data for multi-lingual fine-tuning uncomment following line
209
+ # fleurs_retrieval = load_dataset("google/fleurs", "all")
210
+
211
+ # see structure
212
+ print(fleurs_retrieval)
213
+
214
+ # load audio sample on the fly
215
+ audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
216
+ text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
217
+ text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
218
+
219
+ # use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
220
+ ```
221
+
222
+ Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
223
+
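+ As an illustration only (not part of the dataset or the paper's code), a minimal margin-based ranking loss over fixed-size speech and text embeddings could look as follows; the embeddings are assumed to come from hypothetical speech/text encoders:
+ 
+ ```py
+ import numpy as np
+ 
+ def ranking_loss(speech_emb, text_emb, margin=0.2):
+     """speech_emb, text_emb: (batch, dim) embeddings of parallel utterances."""
+     speech_emb = speech_emb / np.linalg.norm(speech_emb, axis=-1, keepdims=True)
+     text_emb = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
+     sim = speech_emb @ text_emb.T               # cosine similarity of every speech/text pair
+     pos = np.diag(sim)[:, None]                 # matching pairs sit on the diagonal
+     loss = np.maximum(0.0, margin + sim - pos)  # hinge on all non-matching pairs
+     np.fill_diagonal(loss, 0.0)                 # do not penalise the positive pair itself
+     return loss.mean()
+ 
+ # toy usage with random embeddings
+ rng = np.random.default_rng(0)
+ print(ranking_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))
+ ```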
224
+ ## Dataset Structure
225
+
226
+ We show detailed information for the example configuration `af_za` of the dataset.
+ All other configurations have the same structure.
228
+
229
+ ### Data Instances
230
+
231
+ **af_za**
232
+ - Size of downloaded dataset files: 1.47 GB
233
+ - Size of the generated dataset: 1 MB
234
+ - Total amount of disk used: 1.47 GB
235
+
236
+ An example of a data instance of the config `af_za` looks as follows:
237
+
238
+ ```
239
+ {'id': 91,
240
+ 'num_samples': 385920,
241
+ 'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
242
+ 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
243
+ 'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,
244
+ -1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
245
+ 'sampling_rate': 16000},
246
+ 'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
247
+ 'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
248
+ 'gender': 0,
249
+ 'lang_id': 0,
250
+ 'language': 'Afrikaans',
251
+ 'lang_group_id': 3}
252
+ ```
253
+
254
+ ### Data Fields
255
+
256
+ The data fields are the same among all splits.
257
+ - **id** (int): ID of audio sample
258
+ - **num_samples** (int): Number of float values
259
+ - **path** (str): Path to the audio file
260
+ - **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file
261
+ - **raw_transcription** (str): The non-normalized transcription of the audio file
262
+ - **transcription** (str): Transcription of the audio file
263
+ - **gender** (int): Class id of gender
264
+ - **lang_id** (int): Class id of language
265
+ - **lang_group_id** (int): Class id of language group
266
+
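+ The integer class ids can be mapped back to their string names through the dataset features, for example (a small sketch using the `af_za` config):
+ 
+ ```py
+ from datasets import load_dataset
+ 
+ fleurs = load_dataset("google/fleurs", "af_za", split="train")
+ 
+ sample = fleurs[0]
+ print(sample["transcription"])
+ print(fleurs.features["gender"].int2str(sample["gender"]))                 # e.g. "male"
+ print(fleurs.features["lang_group_id"].int2str(sample["lang_group_id"]))   # e.g. "sub_saharan_african_ssa"
+ ```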
267
+ ### Data Splits
268
+
269
+ Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.
270
+
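+ As a quick check, the split sizes of a config can be printed directly (sketch):
+ 
+ ```py
+ from datasets import load_dataset
+ 
+ fleurs = load_dataset("google/fleurs", "af_za")
+ for split_name, split in fleurs.items():
+     print(split_name, split.num_rows)
+ ```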
271
+ ## Dataset Creation
272
+
273
+ We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for
+ train, dev and test respectively.
275
+
276
+ ## Considerations for Using the Data
277
+
278
+ ### Social Impact of Dataset
279
+
280
+ This dataset is meant to encourage the development of speech technology in many more of the world's languages. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
281
+
282
+ ### Discussion of Biases
283
+
284
+ Most datasets have a fair distribution of utterances across genders (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are equally important. We believe technology built through FLEURS should generalize to all languages.
285
+
286
+ ### Other Known Limitations
287
+
288
+ The dataset has a particular focus on read speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a noisier setting (in production, for instance). Given how much progress remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made in speech understanding.
289
+
290
+ ## Additional Information
291
+
292
+ All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
293
+
294
+ ### Citation Information
295
+
296
+ You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
297
+ Please cite the paper when referencing the FLEURS corpus as:
298
+
299
+ ```
300
+ @article{fleurs2022arxiv,
301
+ title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
302
+ author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
303
+ journal={arXiv preprint arXiv:2205.12446},
304
+ url = {https://arxiv.org/abs/2205.12446},
305
+ year = {2022},
+ }
+ ```
307
+
308
+ ### Contributions
309
+
310
+ Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
data/metadata.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aca40140670aeb810b5b0963b0a6c573e9bd5206c66e2fbab6ff2571f0f3d1b7
3
+ size 64825504
fleurs.py ADDED
@@ -0,0 +1,246 @@
1
+ # coding=utf-8
2
+ # Copyright 2022 The Google and HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ import os
17
+ from collections import OrderedDict
18
+ from text_processor.text_processor import TextProcessor
19
+ import datasets
20
+
21
+ logger = datasets.logging.get_logger(__name__)
22
+
23
+
24
+ """ FLEURS Dataset"""
25
+
26
+ _FLEURS_LANG_TO_ID = OrderedDict([("Afrikaans", "af"), ("Amharic", "am"), ("Arabic", "ar"), ("Armenian", "hy"), ("Assamese", "as"), ("Asturian", "ast"), ("Azerbaijani", "az"), ("Belarusian", "be"), ("Bengali", "bn"), ("Bosnian", "bs"), ("Bulgarian", "bg"), ("Burmese", "my"), ("Catalan", "ca"), ("Cebuano", "ceb"), ("Mandarin Chinese", "cmn_hans"), ("Cantonese Chinese", "yue_hant"), ("Croatian", "hr"), ("Czech", "cs"), ("Danish", "da"), ("Dutch", "nl"), ("English", "en"), ("Estonian", "et"), ("Filipino", "fil"), ("Finnish", "fi"), ("French", "fr"), ("Fula", "ff"), ("Galician", "gl"), ("Ganda", "lg"), ("Georgian", "ka"), ("German", "de"), ("Greek", "el"), ("Gujarati", "gu"), ("Hausa", "ha"), ("Hebrew", "he"), ("Hindi", "hi"), ("Hungarian", "hu"), ("Icelandic", "is"), ("Igbo", "ig"), ("Indonesian", "id"), ("Irish", "ga"), ("Italian", "it"), ("Japanese", "ja"), ("Javanese", "jv"), ("Kabuverdianu", "kea"), ("Kamba", "kam"), ("Kannada", "kn"), ("Kazakh", "kk"), ("Khmer", "km"), ("Korean", "ko"), ("Kyrgyz", "ky"), ("Lao", "lo"), ("Latvian", "lv"), ("Lingala", "ln"), ("Lithuanian", "lt"), ("Luo", "luo"), ("Luxembourgish", "lb"), ("Macedonian", "mk"), ("Malay", "ms"), ("Malayalam", "ml"), ("Maltese", "mt"), ("Maori", "mi"), ("Marathi", "mr"), ("Mongolian", "mn"), ("Nepali", "ne"), ("Northern-Sotho", "nso"), ("Norwegian", "nb"), ("Nyanja", "ny"), ("Occitan", "oc"), ("Oriya", "or"), ("Oromo", "om"), ("Pashto", "ps"), ("Persian", "fa"), ("Polish", "pl"), ("Portuguese", "pt"), ("Punjabi", "pa"), ("Romanian", "ro"), ("Russian", "ru"), ("Serbian", "sr"), ("Shona", "sn"), ("Sindhi", "sd"), ("Slovak", "sk"), ("Slovenian", "sl"), ("Somali", "so"), ("Sorani-Kurdish", "ckb"), ("Spanish", "es"), ("Swahili", "sw"), ("Swedish", "sv"), ("Tajik", "tg"), ("Tamil", "ta"), ("Telugu", "te"), ("Thai", "th"), ("Turkish", "tr"), ("Ukrainian", "uk"), ("Umbundu", "umb"), ("Urdu", "ur"), ("Uzbek", "uz"), ("Vietnamese", "vi"), ("Welsh", "cy"), ("Wolof", "wo"), ("Xhosa", "xh"), ("Yoruba", "yo"), ("Zulu", "zu")])
27
+ _FLEURS_LANG_SHORT_TO_LONG = {v: k for k, v in _FLEURS_LANG_TO_ID.items()}
28
+
29
+
30
+ _FLEURS_LANG = sorted(["af_za", "am_et", "ar_eg", "as_in", "ast_es", "az_az", "be_by", "bn_in", "bs_ba", "ca_es", "ceb_ph", "cmn_hans_cn", "yue_hant_hk", "cs_cz", "cy_gb", "da_dk", "de_de", "el_gr", "en_us", "es_419", "et_ee", "fa_ir", "ff_sn", "fi_fi", "fil_ph", "fr_fr", "ga_ie", "gl_es", "gu_in", "ha_ng", "he_il", "hi_in", "hr_hr", "hu_hu", "hy_am", "id_id", "ig_ng", "is_is", "it_it", "ja_jp", "jv_id", "ka_ge", "kam_ke", "kea_cv", "kk_kz", "km_kh", "kn_in", "ko_kr", "ckb_iq", "ky_kg", "lb_lu", "lg_ug", "ln_cd", "lo_la", "lt_lt", "luo_ke", "lv_lv", "mi_nz", "mk_mk", "ml_in", "mn_mn", "mr_in", "ms_my", "mt_mt", "my_mm", "nb_no", "ne_np", "nl_nl", "nso_za", "ny_mw", "oc_fr", "om_et", "or_in", "pa_in", "pl_pl", "ps_af", "pt_br", "ro_ro", "ru_ru", "bg_bg", "sd_in", "sk_sk", "sl_si", "sn_zw", "so_so", "sr_rs", "sv_se", "sw_ke", "ta_in", "te_in", "tg_tj", "th_th", "tr_tr", "uk_ua", "umb_ao", "ur_pk", "uz_uz", "vi_vn", "wo_sn", "xh_za", "yo_ng", "zu_za"])
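+ # config names are "<language code>_<region>"; dropping the trailing region part recovers the language code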
31
+ _FLEURS_LONG_TO_LANG = {_FLEURS_LANG_SHORT_TO_LONG["_".join(k.split("_")[:-1]) or k]: k for k in _FLEURS_LANG}
32
+ _FLEURS_LANG_TO_LONG = {v: k for k, v in _FLEURS_LONG_TO_LANG.items()}
33
+
34
+ _FLEURS_GROUP_TO_LONG = OrderedDict({
35
+ "western_european_we": ["Asturian", "Bosnian", "Catalan", "Croatian", "Danish", "Dutch", "English", "Finnish", "French", "Galician", "German", "Greek", "Hungarian", "Icelandic", "Irish", "Italian", "Kabuverdianu", "Luxembourgish", "Maltese", "Norwegian", "Occitan", "Portuguese", "Spanish", "Swedish", "Welsh"],
36
+ "eastern_european_ee": ["Armenian", "Belarusian", "Bulgarian", "Czech", "Estonian", "Georgian", "Latvian", "Lithuanian", "Macedonian", "Polish", "Romanian", "Russian", "Serbian", "Slovak", "Slovenian", "Ukrainian"],
37
+ "central_asia_middle_north_african_cmn": ["Arabic", "Azerbaijani", "Hebrew", "Kazakh", "Kyrgyz", "Mongolian", "Pashto", "Persian", "Sorani-Kurdish", "Tajik", "Turkish", "Uzbek"],
38
+ "sub_saharan_african_ssa": ["Afrikaans", "Amharic", "Fula", "Ganda", "Hausa", "Igbo", "Kamba", "Lingala", "Luo", "Northern-Sotho", "Nyanja", "Oromo", "Shona", "Somali", "Swahili", "Umbundu", "Wolof", "Xhosa", "Yoruba", "Zulu"],
39
+ "south_asian_sa": ["Assamese", "Bengali", "Gujarati", "Hindi", "Kannada", "Malayalam", "Marathi", "Nepali", "Oriya", "Punjabi", "Sindhi", "Tamil", "Telugu", "Urdu"],
40
+ "south_east_asian_sea": ["Burmese", "Cebuano", "Filipino", "Indonesian", "Javanese", "Khmer", "Lao", "Malay", "Maori", "Thai", "Vietnamese"],
41
+ "chinese_japanase_korean_cjk": ["Mandarin Chinese", "Cantonese Chinese", "Japanese", "Korean"],
42
+ })
43
+ _FLEURS_LONG_TO_GROUP = {a: k for k, v in _FLEURS_GROUP_TO_LONG.items() for a in v}
44
+ _FLEURS_LANG_TO_GROUP = {_FLEURS_LONG_TO_LANG[k]: v for k, v in _FLEURS_LONG_TO_GROUP.items()}
45
+
46
+ _ALL_LANG = _FLEURS_LANG
47
+ _ALL_CONFIGS = list(_FLEURS_LANG) + ["all"]
53
+
54
+ # TODO(FLEURS)
55
+ _DESCRIPTION = "FLEURS is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages."
56
+ _CITATION = ""
57
+ _HOMEPAGE_URL = ""
58
+
59
+ _DATA_URL = "https://storage.googleapis.com/xtreme_translations/FLEURS102/{}.tar.gz"
60
+ _METADATA_URL = "data/metadata.zip"
61
+
62
+
63
+ class FleursConfig(datasets.BuilderConfig):
+ """BuilderConfig for FLEURS."""
+ 
+ def __init__(
+ self, name, description, citation, homepage, data_url
+ ):
+ super(FleursConfig, self).__init__(
+ name=name,
+ version=datasets.Version("2.0.0", ""),
+ description=description,
+ )
+ self.name = name
+ self.description = description
+ self.citation = citation
+ self.homepage = homepage
+ self.data_url = data_url
79
+
80
+
81
+ def _build_config(name):
82
+ return FleursConfig(
83
+ name=name,
84
+ description=_DESCRIPTION,
85
+ citation=_CITATION,
86
+ homepage=_HOMEPAGE_URL,
87
+ data_url=_DATA_URL,
88
+ )
89
+
90
+
91
+ class Fleurs(datasets.GeneratorBasedBuilder):
92
+
93
+ DEFAULT_WRITER_BATCH_SIZE = 1000
94
+ BUILDER_CONFIGS = [_build_config(name) for name in _ALL_CONFIGS]
95
+
96
+ def _info(self):
97
+ task_templates = None
98
+ langs = _ALL_CONFIGS
99
+ features = datasets.Features(
100
+ {
101
+ "id": datasets.Value("int32"),
102
+ "num_samples": datasets.Value("int32"),
103
+ "path": datasets.Value("string"),
104
+ "audio": datasets.Audio(sampling_rate=16_000),
105
+ "transcription": datasets.Value("string"),
106
+ "raw_transcription": datasets.Value("string"),
107
+ "gender": datasets.ClassLabel(names=["male", "female", "other"]),
108
+ "lang_id": datasets.ClassLabel(names=langs),
109
+ "language": datasets.Value("string"),
110
+ "lang_group_id": datasets.ClassLabel(
111
+ names=list(_FLEURS_GROUP_TO_LONG.keys())
112
+ ),
113
+ }
114
+ )
115
+
116
+ return datasets.DatasetInfo(
117
+ description=self.config.description + "\n" + _DESCRIPTION,
118
+ features=features,
119
+ supervised_keys=("audio", "transcription"),
120
+ homepage=self.config.homepage,
121
+ citation=self.config.citation + "\n" + _CITATION,
122
+ task_templates=task_templates,
123
+ )
124
+
125
+ # Fleurs
126
+ def _split_generators(self, dl_manager):
127
+ data_url_format = self.config.data_url
128
+
129
+ metadata_path = dl_manager.download_and_extract(_METADATA_URL)
130
+
131
+ if self.config.name == "all":
132
+ data_urls = {l: data_url_format.format(l) for l in _FLEURS_LANG}
133
+ else:
134
+ data_urls = {
135
+ self.config.name: data_url_format.format(self.config.name)
136
+ }
137
+
138
+ archive_path = dl_manager.download(data_urls)
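+ # in streaming mode the tar archives are iterated directly, so nothing is extracted to disk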
139
+ local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
140
+
141
+ archive_iters = {l: dl_manager.iter_archive(v) for l,v in archive_path.items()}
142
+
143
+ audio_path = {l: os.path.join(l, "audio") for l in archive_path.keys()}
144
+
145
+ return [
146
+ datasets.SplitGenerator(
147
+ name=datasets.Split.TRAIN,
148
+ gen_kwargs={
149
+ "local_extracted_archive": local_extracted_archive,
150
+ "archive_iters": archive_iters,
151
+ "audio_path": {
152
+ l: os.path.join(v, "train") for l, v in audio_path.items()
153
+ },
154
+ "text_path": {
155
+ l: os.path.join(metadata_path, "metadata", l, "train.tsv") for l in archive_path.keys()
156
+ },
157
+ },
158
+ ),
159
+ datasets.SplitGenerator(
160
+ name=datasets.Split.VALIDATION,
161
+ gen_kwargs={
162
+ "local_extracted_archive": local_extracted_archive,
163
+ "archive_iters": archive_iters,
164
+ "audio_path": {
165
+ l: os.path.join(v, "dev") for l, v in audio_path.items()
166
+ },
167
+ "text_path": {
168
+ l: os.path.join(metadata_path, "metadata", l, "dev.tsv") for l in archive_path.keys()
169
+ },
170
+ },
171
+ ),
172
+ datasets.SplitGenerator(
173
+ name=datasets.Split.TEST,
174
+ gen_kwargs={
175
+ "local_extracted_archive": local_extracted_archive,
176
+ "archive_iters": archive_iters,
177
+ "audio_path": {
178
+ l: os.path.join(v, "test") for l, v in audio_path.items()
179
+ },
180
+ "text_path": {
181
+ l: os.path.join(metadata_path, "metadata", l, "test.tsv") for l in archive_path.keys()
182
+ },
183
+ },
184
+ ),
185
+ ]
186
+
187
+ def _get_data(self, lines, lang_id):
188
+ tp = TextProcessor()
189
+ data = {}
190
+ gender_to_id = {"MALE": 0, "FEMALE": 1, "OTHER": 2}
191
+ for line in lines:
192
+ if isinstance(line, bytes):
193
+ line = line.decode("utf-8")
194
+ (
195
+ _id,
196
+ file_name,
197
+ raw_transcription,
198
+ transcription,
199
+ _,
200
+ num_samples,
201
+ gender,
202
+ ) = line.strip().split("\t")
203
+
204
+ lang_group = _FLEURS_LANG_TO_GROUP[lang_id]
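+ # run both transcription fields through the Indonesian-oriented TextProcessor
+ # (numbers, currency, measurements, dates and times are expanded to words)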
205
+ raw_transcription = tp.normalize(raw_transcription)
206
+ transcription = tp.normalize(transcription)
207
+
208
+ data[file_name] = {
209
+ "id": int(_id),
210
+ "raw_transcription": raw_transcription,
211
+ "transcription": transcription,
212
+ "num_samples": int(num_samples),
213
+ "gender": gender_to_id[gender],
214
+ "lang_id": _FLEURS_LANG.index(lang_id),
215
+ "language": _FLEURS_LANG_TO_LONG[lang_id],
216
+ "lang_group_id": list(_FLEURS_GROUP_TO_LONG.keys()).index(
217
+ lang_group
218
+ ),
219
+ }
220
+
221
+ return data
222
+
223
+ def _generate_examples(self, local_extracted_archive, archive_iters, audio_path, text_path):
224
+ key = 0
225
+
226
+ for lang_id, archive_iter in archive_iters.items():
227
+ with open(text_path[lang_id], encoding="utf-8") as f:
228
+ lines = f.readlines()
229
+ data = self._get_data(lines, lang_id)
230
+
231
+ for path, f in archive_iter:
232
+ path = path.split("/")[-1]
233
+ if path not in data.keys():
234
+ continue
235
+
236
+ result = data[path]
237
+ extracted_audio_path = (
238
+ os.path.join(local_extracted_archive[lang_id], audio_path[lang_id])
239
+ if local_extracted_archive is not None
240
+ else None
241
+ )
242
+ extracted_audio_path = os.path.join(extracted_audio_path, path) if extracted_audio_path else path
243
+ result["path"] = extracted_audio_path if extracted_audio_path is not None else None
244
+ result["audio"] = {"path": path, "bytes": f.read()}
245
+ yield key, result
246
+ key += 1
text_processor/currency.tsv ADDED
@@ -0,0 +1,35 @@
1
+ US$ dollar amerika serikat
2
+ nzd dollar new zealand
3
+ rs rupee
4
+ chf franc swiss
5
+ dkk kroner denmark
6
+ fim markka finland
7
+ aed dirham arab
8
+ czk koruna ceko
9
+ mro ouguiya mauritania
10
+ pkr rupee pakistan
11
+ crc colon costa rica
12
+ hk$ dollar hong kong
13
+ npr rupee nepal
14
+ awg florin aruban
15
+ nok kroner norwegia
16
+ tzs shilling tanzania
17
+ sek kronor swedish
18
+ cyp pounds cypriot
19
+ sar riyal saudi
20
+ cve escudo cape verde
21
+ rsd dinar serbia
22
+ dm mark jerman
23
+ shp pounds saint helena
24
+ php peso philipina
25
+ cad dollar canada
26
+ ssp pounds sudan selatan
27
+ scr rupee seychell
28
+ mvr rufiyaa maldivia
29
+ Rp rupiah
30
+ r real
31
+ $ dollar
32
+ € euro
33
+ £ pounds
34
+ ₩ won
35
+ ¥ yen
text_processor/measurements.tsv ADDED
@@ -0,0 +1,116 @@
1
+ sq mi mil kuadrat
2
+ sq ft kaki kuadrat
3
+ kbps kilobit per detik
4
+ mbps megabit per detik
5
+ kcal kilo kalori
6
+ ghz gigahertz
7
+ khz kilohertz
8
+ mhz megahertz
9
+ lbs pound
10
+ rpm revolution per menit
11
+ kwh kilo watt jam
12
+ min menit
13
+ mph mil per jam
14
+ mol mol
15
+ gpa giga pascal
16
+ km² kilometer kuadrat
17
+ km2 kilometer kuadrat
18
+ rad radian
19
+ kgf kilogram force
20
+ mm² millimeter kuadrat
21
+ mm2 millimeter kuadrat
22
+ cm² centimeter kuadrat
23
+ cm2 centimeter kuadrat
24
+ dm³ desimeter kubik
25
+ dm3 desimeter kubik
26
+ amu atomic mass unit
27
+ gwh giga watt jam
28
+ kpa kilopascal
29
+ cwt hundredweight
30
+ atm atmosphere
31
+ bar bar
32
+ km kilometer
33
+ cm centimeter
34
+ mm millimeter
35
+ ha hectare
36
+ mi mil
37
+ m² meter kuadrat
38
+ m2 meter kuadrat
39
+ ft kaki
40
+ hz hertz
41
+ kw kilowatt
42
+ hp tenaga kuda
43
+ mg milligram
44
+ kg kilogram
45
+ lb pound
46
+ mc mega coulomb
47
+ nm nanometer
48
+ mA milli ampere
49
+ m³ meter kubik
50
+ m3 meter kubik
51
+ tw tera watt
52
+ mv milli volt
53
+ mw megawatt
54
+ μm mikrometer
55
+ " inch
56
+ TB terabyte
57
+ cc c c
58
+ da dalton
59
+ db desibel
60
+ ps peta detik
61
+ oz ounce
62
+ hl hecto liter
63
+ μg mikrogram
64
+ pg petagram
65
+ GB gigabyte
66
+ kb kilobit
67
+ ev electron volt
68
+ MB megabyte
69
+ KB kilobyte
70
+ kl kilo liter
71
+ tj tera joule
72
+ kv kilo volt
73
+ mv mega volt
74
+ kn kilonewton
75
+ mm megameter
76
+ au astronomical unit
77
+ yd yard
78
+ lm lumen
79
+ hs hecto detik
80
+ ml milliliter
81
+ gw gigawatt
82
+ ma mega ampere
83
+ kt knot
84
+ ng nano gram
85
+ ns nano detik
86
+ ms mega siemens
87
+ gl giga liter
88
+ μs mikro detik
89
+ da desi ampere
90
+ pa pascal
91
+ ds desi detik
92
+ ms milli detik
93
+ dm desimeter
94
+ mb megabit
95
+ mf mega farad
96
+ bq becquerel
97
+ pb petabit
98
+ cd candela
99
+ tl tera liter
100
+ ms mega detik
101
+ mpa megapascal
102
+ pb peta byte
103
+ gy gray
104
+ sv sievert
105
+ cc c c
106
+ °F derajat fahrenheit
107
+ °f derajat fahrenheit
108
+ °C derajat celsius
109
+ °c derajat celsius
110
+ m meter
111
+ % percent
112
+ v volt
113
+ h jam
114
+ g gram
115
+ s detik
116
+ ω ohm
text_processor/text_processor.py ADDED
@@ -0,0 +1,182 @@
1
+ import re
2
+ from num2words import num2words
3
+ import os
4
+
5
+
6
+ def get_abs_path(rel_path):
7
+ """
8
+ Get absolute path
9
+ Args:
10
+ rel_path: relative path to this file
11
+
12
+ Returns absolute path
13
+ """
14
+ return os.path.dirname(os.path.abspath(__file__)) + '/' + rel_path
15
+
16
+
17
+ class TextProcessor:
18
+ thousands = ["ratus", "ribu", "juta", "miliar", "milyar", "triliun"]
19
+ months = ["Januari", "Februari", "Maret", "April",
20
+ "Mei", "Juni", "Juli", "Agustus",
21
+ "September", "Oktober", "November", "Desember"]
22
+ measurements_path = get_abs_path("measurements.tsv")
23
+ currencies_path = get_abs_path("currency.tsv")
24
+ timezones_path = get_abs_path("timezones.tsv")
25
+
26
+ def __init__(self):
27
+ self.measurements = {}
28
+ with open(TextProcessor.measurements_path, "r") as file:
29
+ for line in file:
30
+ line = line.strip().split("\t")
31
+ self.measurements[line[0]] = line[1]
32
+
33
+ self.currencies = {}
34
+ with open(TextProcessor.currencies_path, "r") as file:
35
+ for line in file:
36
+ line = line.strip().split("\t")
37
+ self.currencies[line[0]] = line[1]
38
+
39
+ self.timezones = {}
40
+ with open(TextProcessor.timezones_path, "r") as file:
41
+ for line in file:
42
+ line = line.strip().split("\t")
43
+ self.timezones[line[0]] = line[1]
44
+
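+ # build the match patterns: Indonesian thousands words, currency symbols/codes,
+ # money amounts, measurement units, clock times followed by a timezone code, and URLs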
45
+ self.re_thousands = '|'.join([t for t in TextProcessor.thousands])
46
+ self.re_currencies = r'\b' + re.sub(r'\|([^|$£€¥₩]+)', r'|\\b\1', '|'.join([c for c in self.currencies]))
47
+ self.re_currencies = re.sub(r'([$£€¥₩])', r'\\\1', self.re_currencies)
48
+ self.re_moneys = r'(({}) ?([\d\.\,]+)( ({})?(an)?)?)'.format(self.re_currencies, self.re_thousands)
49
+ self.re_measurements = '|'.join([t for t in self.measurements])
50
+ self.re_measurements = r'(\b([\d\.\,]+) ?({})\b)'.format(self.re_measurements)
51
+ self.re_timezones = '|'.join([c for c in self.timezones])
52
+ self.re_timezones = r'((\d{1,2})[\.:](\d{1,2}) ' + r'\b({})\b)'.format(self.re_timezones)
53
+ self.re_http = r'(https?://(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b[-a-zA-Z0-9()@:%_\+.~#?&//=]*)'
54
+
55
+ @staticmethod
56
+ def is_integer(number):
57
+ try:
58
+ int(number)
59
+ return True
60
+ except ValueError:
61
+ return False
62
+
63
+ @staticmethod
64
+ def is_float(number):
65
+ try:
66
+ float(number)
67
+ return True
68
+ except ValueError:
69
+ return False
70
+
71
+ def normalize(self, text):
72
+ found_errors = False
73
+ # Remove URL
74
+ urls = re.findall(self.re_http, text)
75
+ for url in urls:
76
+ text = text.replace(url[0], "")
77
+
78
+ # Currency
79
+ moneys = re.findall(self.re_moneys, text)
80
+ for money in moneys:
81
+ number = re.sub(',', '.', re.sub(r'\.', '', money[2].strip(" ,.")))
82
+ try:
83
+ if number == "":
84
+ continue
85
+ if self.is_integer(number):
86
+ number = int(number)
87
+ elif self.is_float(number):
88
+ number = float(number)
89
+ else:
90
+ number = re.sub(r'[.,]', "", number)
91
+ number = int(number)
92
+ number = num2words(number, to='cardinal', lang='id')
93
+ text = text.replace(money[0].strip(" ,."), f'{number} {money[3]} {self.currencies[money[1]]}')
94
+ except Exception as error:
95
+ found_errors = True
96
+ print(error)
97
+ print(f'Problem with money: <{text}>: {number}')
98
+
99
+ # Measurements
100
+ units = re.findall(self.re_measurements, text)
101
+ for unit in units:
102
+ number = re.sub(',', '.', re.sub(r'\.', '', unit[1].strip(" ,.")))
103
+ try:
104
+ if number == "":
105
+ continue
106
+ if re.search(r'\.', number):
107
+ number = float(number)
108
+ else:
109
+ number = int(number)
110
+ number = num2words(number, to='cardinal', lang='id')
111
+ text = text.replace(unit[0].strip(" ,."), f'{number} {self.measurements[unit[2]]}')
112
+ except Exception as error:
113
+ found_errors = True
114
+ print(error)
115
+ print(f'Problem with measurements: <{text}>: {number}')
116
+
117
+ # Date
118
+ dates = re.findall(r'(\((\d{1,2})/(\d{1,2})(/(\d+))?\))', text)
119
+ for date in dates:
120
+ try:
121
+ day = num2words(int(date[1]), to='cardinal', lang='id')
122
+ month = int(date[2]) - 1
123
+ if month >= 12:
124
+ month = 0
125
+ month = self.months[month]
126
+ if date[4] != "":
127
+ year = num2words(int(date[4]), to='cardinal', lang='id')
128
+ date_string = f'{day} {month} {year}'
129
+ else:
130
+ date_string = f'{day} {month}'
131
+ text = text.replace(date[0], f' {date_string} ')
132
+ except Exception as error:
133
+ found_errors = True
134
+ print(error)
135
+ print(f'Problem with dates: <{text}>: {date}')
136
+
137
+ # Timezones
138
+ timezones = re.findall(self.re_timezones, text)
139
+ for timezone in timezones:
140
+ try:
141
+ hour = num2words(int(timezone[1]), to='cardinal', lang='id')
142
+ minute = num2words(int(timezone[2]), to='cardinal', lang='id')
143
+ zone = self.timezones[timezone[3]]
144
+ if minute == "nol":
145
+ time_string = f'{hour} {zone}'
146
+ else:
147
+ time_string = f'{hour} lewat {minute} menit {zone}'
148
+ text = text.replace(timezone[0], f'{time_string}')
149
+ except Exception as error:
150
+ found_errors = True
151
+ print(error)
152
+ print(f'Problem with timezones: <{text}>: {timezone}')
153
+
154
+ # Any number
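+ # remaining digit groups are spelled out as Indonesian cardinals; `number_len`
+ # tracks how much the text has grown so the original match offsets stay aligned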
155
+ re_numbers = [r'([\d.,]+)', r'\d+']
156
+ for re_number in re_numbers:
157
+ number_len = 0
158
+ for i in re.finditer(re_number, text):
159
+ start = i.start() + number_len
160
+ end = i.end() + number_len
161
+ number = text[start:end]
162
+ number = re.sub(',', '.', re.sub(r'\.', '', number.strip(" ,.")))
163
+ if number == "":
164
+ continue
165
+ if self.is_integer(number) or self.is_float(number):
166
+ try:
167
+ if self.is_integer(number):
168
+ number = int(number)
169
+ else:
170
+ number = float(number)
171
+ number = num2words(number, to='cardinal', lang="id")
172
+ text = text[:start] + number + text[end:]
173
+ number_len += len(number) - (end - start)
174
+ except Exception as error:
175
+ found_errors = True
176
+ print(error)
177
+ print(f'Problem with number: <{text}>: {number}')
178
+
179
+ text = re.sub(r"\s+", " ", text)
180
+ if found_errors:
181
+ print(f'>>> {text}')
182
+ return text
text_processor/timezones.tsv ADDED
@@ -0,0 +1,4 @@
1
+ WITA Waktu Indonesia Tengah
2
+ WIB Waktu Indonesia Barat
3
+ WIT Waktu Indonesia Timur
4
+ GMT Greenwich Mean Time