parquet-converter committed
Commit 191109d
1 Parent(s): cb8b66a

Update parquet files

README.md DELETED
@@ -1,430 +0,0 @@
1
- ---
2
- annotations_creators:
3
- - expert-generated
4
- language_creators:
5
- - crowdsourced
6
- language:
7
- - bn
8
- - en
9
- - fil
10
- - hi
11
- - id
12
- - ja
13
- - km
14
- - lo
15
- - ms
16
- - my
17
- - th
18
- - vi
19
- - zh
20
- license:
21
- - cc-by-4.0
22
- multilinguality:
23
- - multilingual
24
- - translation
25
- size_categories:
26
- - 100K<n<1M
27
- - 10K<n<100K
28
- source_datasets:
29
- - original
30
- task_categories:
31
- - translation
32
- - token-classification
33
- task_ids:
34
- - parsing
35
- paperswithcode_id: alt
36
- pretty_name: Asian Language Treebank
37
- configs:
38
- - alt-en
39
- - alt-jp
40
- - alt-km
41
- - alt-my
42
- - alt-my-transliteration
43
- - alt-my-west-transliteration
44
- - alt-parallel
45
- dataset_info:
46
- - config_name: alt-parallel
47
- features:
48
- - name: SNT.URLID
49
- dtype: string
50
- - name: SNT.URLID.SNTID
51
- dtype: string
52
- - name: url
53
- dtype: string
54
- - name: translation
55
- dtype:
56
- translation:
57
- languages:
58
- - bg
59
- - en
60
- - en_tok
61
- - fil
62
- - hi
63
- - id
64
- - ja
65
- - khm
66
- - lo
67
- - ms
68
- - my
69
- - th
70
- - vi
71
- - zh
72
- splits:
73
- - name: train
74
- num_bytes: 68462384
75
- num_examples: 18094
76
- - name: validation
77
- num_bytes: 3712980
78
- num_examples: 1004
79
- - name: test
80
- num_bytes: 3815633
81
- num_examples: 1019
82
- download_size: 21285784
83
- dataset_size: 75990997
84
- - config_name: alt-en
85
- features:
86
- - name: SNT.URLID
87
- dtype: string
88
- - name: SNT.URLID.SNTID
89
- dtype: string
90
- - name: url
91
- dtype: string
92
- - name: status
93
- dtype: string
94
- - name: value
95
- dtype: string
96
- splits:
97
- - name: train
98
- num_bytes: 10075609
99
- num_examples: 17889
100
- - name: validation
101
- num_bytes: 544739
102
- num_examples: 988
103
- - name: test
104
- num_bytes: 567292
105
- num_examples: 1017
106
- download_size: 2739055
107
- dataset_size: 11187640
108
- - config_name: alt-jp
109
- features:
110
- - name: SNT.URLID
111
- dtype: string
112
- - name: SNT.URLID.SNTID
113
- dtype: string
114
- - name: url
115
- dtype: string
116
- - name: status
117
- dtype: string
118
- - name: value
119
- dtype: string
120
- - name: word_alignment
121
- dtype: string
122
- - name: jp_tokenized
123
- dtype: string
124
- - name: en_tokenized
125
- dtype: string
126
- splits:
127
- - name: train
128
- num_bytes: 21891867
129
- num_examples: 17202
130
- - name: validation
131
- num_bytes: 1181587
132
- num_examples: 953
133
- - name: test
134
- num_bytes: 1175624
135
- num_examples: 931
136
- download_size: 12007999
137
- dataset_size: 24249078
138
- - config_name: alt-my
139
- features:
140
- - name: SNT.URLID
141
- dtype: string
142
- - name: SNT.URLID.SNTID
143
- dtype: string
144
- - name: url
145
- dtype: string
146
- - name: value
147
- dtype: string
148
- splits:
149
- - name: train
150
- num_bytes: 20433275
151
- num_examples: 18088
152
- - name: validation
153
- num_bytes: 1111410
154
- num_examples: 1000
155
- - name: test
156
- num_bytes: 1135209
157
- num_examples: 1018
158
- download_size: 3028302
159
- dataset_size: 22679894
160
- - config_name: alt-km
161
- features:
162
- - name: SNT.URLID
163
- dtype: string
164
- - name: SNT.URLID.SNTID
165
- dtype: string
166
- - name: url
167
- dtype: string
168
- - name: km_pos_tag
169
- dtype: string
170
- - name: km_tokenized
171
- dtype: string
172
- splits:
173
- - name: train
174
- num_bytes: 12015411
175
- num_examples: 18088
176
- - name: validation
177
- num_bytes: 655232
178
- num_examples: 1000
179
- - name: test
180
- num_bytes: 673753
181
- num_examples: 1018
182
- download_size: 2410832
183
- dataset_size: 13344396
184
- - config_name: alt-my-transliteration
185
- features:
186
- - name: en
187
- dtype: string
188
- - name: my
189
- sequence: string
190
- splits:
191
- - name: train
192
- num_bytes: 4249424
193
- num_examples: 84022
194
- download_size: 1232127
195
- dataset_size: 4249424
196
- - config_name: alt-my-west-transliteration
197
- features:
198
- - name: en
199
- dtype: string
200
- - name: my
201
- sequence: string
202
- splits:
203
- - name: train
204
- num_bytes: 7412043
205
- num_examples: 107121
206
- download_size: 2830071
207
- dataset_size: 7412043
208
- ---
209
-
210
- # Dataset Card for Asian Language Treebank (ALT)
211
-
212
- ## Table of Contents
213
- - [Dataset Description](#dataset-description)
214
- - [Dataset Summary](#dataset-summary)
215
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
216
- - [Languages](#languages)
217
- - [Dataset Structure](#dataset-structure)
218
- - [Data Instances](#data-instances)
219
- - [Data Fields](#data-fields)
220
- - [Data Splits](#data-splits)
221
- - [Dataset Creation](#dataset-creation)
222
- - [Curation Rationale](#curation-rationale)
223
- - [Source Data](#source-data)
224
- - [Annotations](#annotations)
225
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
226
- - [Considerations for Using the Data](#considerations-for-using-the-data)
227
- - [Social Impact of Dataset](#social-impact-of-dataset)
228
- - [Discussion of Biases](#discussion-of-biases)
229
- - [Other Known Limitations](#other-known-limitations)
230
- - [Additional Information](#additional-information)
231
- - [Dataset Curators](#dataset-curators)
232
- - [Licensing Information](#licensing-information)
233
- - [Citation Information](#citation-information)
234
- - [Contributions](#contributions)
235
-
236
- ## Dataset Description
237
-
238
- - **Homepage:** https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
239
- - **Leaderboard:**
240
- - **Paper:** [Introduction of the Asian Language Treebank](https://ieeexplore.ieee.org/abstract/document/7918974)
241
- - **Point of Contact:** [ALT info](mailto:alt-info@khn.nict.go.jp)
242
-
243
- ### Dataset Summary
244
- The ALT project aims to advance state-of-the-art Asian natural language processing (NLP) techniques through open collaboration in developing and using ALT. It was first conducted by NICT and UCSY, as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016), and was later developed under [ASEAN IVO](https://www.nict.go.jp/en/asean_ivo/index.html) as described on the project's Web page.
245
-
246
- The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages.
247
-
248
- ### Supported Tasks and Leaderboards
249
-
250
- Machine Translation, Dependency Parsing
251
-
252
-
253
- ### Languages
254
-
255
- It supports 13 languages:
256
- * Bengali
257
- * English
258
- * Filipino
259
- * Hindi
260
- * Bahasa Indonesia
261
- * Japanese
262
- * Khmer
263
- * Lao
264
- * Malay
265
- * Myanmar (Burmese)
266
- * Thai
267
- * Vietnamese
268
- * Chinese (Simplified Chinese).
269
-
270
- ## Dataset Structure
271
-
272
- ### Data Instances
273
-
274
- #### ALT Parallel Corpus
275
- ```
276
- {
277
- "SNT.URLID": "80188",
278
- "SNT.URLID.SNTID": "1",
279
- "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
280
- "bg": "[translated sentence]",
281
- "en": "[translated sentence]",
282
- "en_tok": "[translated sentence]",
283
- "fil": "[translated sentence]",
284
- "hi": "[translated sentence]",
285
- "id": "[translated sentence]",
286
- "ja": "[translated sentence]",
287
- "khm": "[translated sentence]",
288
- "lo": "[translated sentence]",
289
- "ms": "[translated sentence]",
290
- "my": "[translated sentence]",
291
- "th": "[translated sentence]",
292
- "vi": "[translated sentence]",
293
- "zh": "[translated sentence]"
294
- }
295
- ```
296
-
297
- #### ALT Treebank
298
- ```
299
- {
300
- "SNT.URLID": "80188",
301
- "SNT.URLID.SNTID": "1",
302
- "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
303
- "status": "draft/reviewed",
304
- "value": "(S (S (BASENP (NNP Italy)) (VP (VBP have) (VP (VP (VP (VBN defeated) (BASENP (NNP Portugal))) (ADVP (RB 31-5))) (PP (IN in) (NP (BASENP (NNP Pool) (NNP C)) (PP (IN of) (NP (BASENP (DT the) (NN 2007) (NNP Rugby) (NNP World) (NNP Cup)) (PP (IN at) (NP (BASENP (NNP Parc) (FW des) (NNP Princes)) (COMMA ,) (BASENP (NNP Paris) (COMMA ,) (NNP France))))))))))) (PERIOD .))"
305
- }
306
- ```
307
-
308
- #### ALT Myanmar transliteration
309
- ```
310
- {
311
- "en": "CASINO",
312
- "my": [
313
- "ကက်စီနို",
314
- "ကစီနို",
315
- "ကာစီနို",
316
- "ကာဆီနို"
317
- ]
318
- }
319
- ```
320
-
321
- ### Data Fields
322
-
323
-
324
- #### ALT Parallel Corpus
325
- - SNT.URLID: URL link to the source article listed in [URL.txt](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt)
326
- - SNT.URLID.SNTID: index number from 1 to 20000; it identifies the selected sentence within the article `SNT.URLID`
327
-
328
- and the fields bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh contain the corresponding sentence in each target language
329
-
330
- #### ALT Treebank
331
- - status: indicates how a sentence was annotated; `draft` sentences were annotated by one annotator and `reviewed` sentences by two annotators
332
-
333
- The annotation differs from language to language; please see [the guidelines](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/) for more details.
334
-
335
- ### Data Splits
336
-
337
- | | train | valid | test |
338
- |-----------|-------|-------|-------|
339
- | # articles | 1698 | 98 | 97 |
340
- | # sentences | 18088 | 1000 | 1018 |
341
-
342
-
343
- ## Dataset Creation
344
-
345
- ### Curation Rationale
346
-
347
- The ALT project was initiated by the [National Institute of Information and Communications Technology, Japan](https://www.nict.go.jp/en/) (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.
348
-
349
-
350
- ### Source Data
351
-
352
- #### Initial Data Collection and Normalization
353
-
354
- [More Information Needed]
355
-
356
- #### Who are the source language producers?
357
-
358
- The dataset was sampled from English Wikinews in 2014. The sentences were annotated with word segmentation, POS tags, and syntax information, in addition to word alignment information, by linguistic experts from
359
- * National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
360
- * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
361
- * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
362
- * the Institute for Infocomm Research, Singapore (I2R) for Malay
363
- * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
364
- * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer
365
-
366
- ### Annotations
367
-
368
- #### Annotation process
369
-
370
- [More Information Needed]
371
-
372
- #### Who are the annotators?
373
-
374
- [More Information Needed]
375
-
376
- ### Personal and Sensitive Information
377
-
378
- [More Information Needed]
379
-
380
- ## Considerations for Using the Data
381
-
382
- ### Social Impact of Dataset
383
-
384
- [More Information Needed]
385
-
386
- ### Discussion of Biases
387
-
388
- [More Information Needed]
389
-
390
- ### Other Known Limitations
391
-
392
- [More Information Needed]
393
-
394
-
395
- ## Additional Information
396
-
397
- ### Dataset Curators
398
-
399
- * National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
400
- * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
401
- * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
402
- * the Institute for Infocomm Research, Singapore (I2R) for Malay
403
- * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
404
- * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer
405
-
406
- ### Licensing Information
407
-
408
- [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
409
-
410
- ### Citation Information
411
-
412
- Please cite the following if you make use of the dataset:
413
-
414
- Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) "Introduction of the Asian Language Treebank" Oriental COCOSDA.
415
-
416
- BibTeX:
417
- ```
418
- @inproceedings{riza2016introduction,
419
- title={Introduction of the asian language treebank},
420
- author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
421
- booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
422
- pages={1--6},
423
- year={2016},
424
- organization={IEEE}
425
- }
426
- ```
427
-
428
- ### Contributions
429
-
430
- Thanks to [@chameleonTK](https://github.com/chameleonTK) for adding this dataset.
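The Data Instances and Data Fields sections of the card removed above describe a nested `translation` feature for the `alt-parallel` config. For reference, a minimal sketch of accessing it with the `datasets` library; loading by the Hub id `alt` and the language keys (`en`, `khm`) are taken from the card's metadata, and the snippet is illustrative rather than part of this commit:

```python
from datasets import load_dataset

# Load the parallel config described in the (now removed) dataset card.
alt_parallel = load_dataset("alt", "alt-parallel", split="test")

row = alt_parallel[0]
print(row["SNT.URLID"], row["SNT.URLID.SNTID"], row["url"])
print(row["translation"]["en"])   # English sentence
print(row["translation"]["khm"])  # Khmer sentence
```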
alt-en/alt-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:065792cd5add4270bdac3b15b2cde86e267efb32b6cb37c097e31be212e1db56
+ size 198030
alt-en/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efa4ea5f74aff44e4f3b36efcb0914dd33a0d5cdb51cde8aefa19623f5b08527
+ size 3397282
alt-en/alt-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc785890affd8af42f510c8f3fc6e56f376059a7b72fda160200ca7dc2e5d28e
+ size 186499
alt-jp/alt-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e135245cc9a4b20d288b827cfcca28c14ecc8b3e9c548c595be4c8e41ea5f7e0
+ size 503983
alt-jp/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab3ff01968024b331d7b196debeae4e53b20d93e2f2843886e565b8dba6789d1
+ size 9343481
alt-jp/alt-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2e4a8e24f7e9a936c990ae5be1866568d9d24d2655f78749ca98f64e11f08f0
+ size 507899
alt-km/alt-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc85146b127843a605a720422a70074332c7827fd7e280316816715d59770061
+ size 226453
alt-km/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d50294b1d7b033345adf8dbf03cda113ba21c6c9c0ad060b6e3879faaf667777
+ size 3901918
alt-km/alt-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f55469d031649f04510628cc77b60a8fd39d43e5e33bbe04f602d017eba51279
+ size 215722
alt-my-transliteration/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6330e3775d8588ff0671a91411c58856a1f1a393ad6ab0fe76eb01f055e5c17e
+ size 2163950
alt-my-west-transliteration/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3995b0bc5de97b93c4c7abab9e53885ab05b4c995cbdce6e13238990e01dfd86
+ size 2857510
alt-my/alt-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6db20da47aaf54040eda5a97ac22e3f7f675b9faf2335be11f4982d95f3b4d8
+ size 340776
alt-my/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e837cc17e743d53ae4f99f73f8104bb009fc4f05499a1e6b9a43267e070e2379
+ size 5903588
alt-my/alt-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b04988cb5c4db15926d326c41ceb45772858b4c2c712d2b6355a0d645d304cb
+ size 324658
alt-parallel/alt-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4aeb0ff63aaaee828c94cbbbe683082ebe771945b411f04080d5e83c47acd80d
+ size 1786536
alt-parallel/alt-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9703ef31b96107a8751f91859939bb9a098883cae8bfb3a0b3e3e6c4bbf18fb
+ size 31211166
alt-parallel/alt-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d1179c8eb7061151f5ec8d9f3d6cdb6302c4609a34f558bf30e7286fe7664bb
+ size 1710202
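With the loading script removed (see below), the Parquet files added above can also be read directly through the generic `parquet` builder. A minimal sketch, assuming the files have been fetched into a local checkout of this repository and using `alt-en` purely as an example config:

```python
from datasets import load_dataset

# Paths are relative to a local checkout of the dataset repository;
# each config directory holds one Parquet file per split.
data_files = {
    "train": "alt-en/alt-train.parquet",
    "validation": "alt-en/alt-validation.parquet",
    "test": "alt-en/alt-test.parquet",
}
alt_en = load_dataset("parquet", data_files=data_files)
print(alt_en["train"][0])  # SNT.URLID, SNT.URLID.SNTID, url, status, value
```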
alt.py DELETED
@@ -1,423 +0,0 @@
1
- #!/usr/bin/env python
2
- # -*- coding: utf-8 -*-
3
- """Asian Language Treebank (ALT) Project"""
4
-
5
-
6
- import os
7
-
8
- import datasets
9
-
10
-
11
- _CITATION = """\
12
- @inproceedings{riza2016introduction,
13
- title={Introduction of the asian language treebank},
14
- author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
15
- booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
16
- pages={1--6},
17
- year={2016},
18
- organization={IEEE}
19
- }
20
- """
21
-
22
- _HOMEPAGE = "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/"
23
-
24
- _DESCRIPTION = """\
25
- The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).
26
- """
27
-
28
- _URLs = {
29
- "alt": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206.zip",
30
- "alt-en": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/English-ALT-20170107.zip",
31
- "alt-jp": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/Japanese-ALT-20170330.zip",
32
- "alt-my": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-alt-190530.zip",
33
- "alt-my-transliteration": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-en-transliteration.zip",
34
- "alt-my-west-transliteration": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/western-myanmar-transliteration.zip",
35
- "alt-km": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/km-nova-181101.zip",
36
- }
37
-
38
- _SPLIT = {
39
- "train": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt",
40
- "dev": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt",
41
- "test": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt",
42
- }
43
-
44
- _WIKI_URL = "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt"
45
-
46
-
47
- class AltParallelConfig(datasets.BuilderConfig):
48
- """BuilderConfig for ALT."""
49
-
50
- def __init__(self, languages, **kwargs):
51
- """BuilderConfig for ALT.
52
-
53
- Args:
54
- languages: the languages to include in the parallel corpus; each must be one of the supported language codes.
57
- **kwargs: keyword arguments forwarded to super.
58
- """
59
-
60
- name = "alt-parallel"
61
-
62
- description = "ALT Parallel Corpus"
63
- super(AltParallelConfig, self).__init__(
64
- name=name,
65
- description=description,
66
- version=datasets.Version("1.0.0", ""),
67
- **kwargs,
68
- )
69
-
70
- available_langs = set(
71
- ["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"]
72
- )
73
- for language in languages:
74
- assert language in available_langs
75
-
76
- self.languages = languages
77
-
78
-
79
- class Alt(datasets.GeneratorBasedBuilder):
80
- """Asian Language Treebank (ALT) Project"""
81
-
82
- BUILDER_CONFIGS = [
83
- AltParallelConfig(
84
- languages=["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"]
85
- ),
86
- datasets.BuilderConfig(name="alt-en", version=datasets.Version("1.0.0"), description="English ALT"),
87
- datasets.BuilderConfig(name="alt-jp", version=datasets.Version("1.0.0"), description="Japanese ALT"),
88
- datasets.BuilderConfig(name="alt-my", version=datasets.Version("1.0.0"), description="Myanmar ALT"),
89
- datasets.BuilderConfig(name="alt-km", version=datasets.Version("1.0.0"), description="Khmer ALT"),
90
- datasets.BuilderConfig(
91
- name="alt-my-transliteration",
92
- version=datasets.Version("1.0.0"),
93
- description="Myanmar-English Transliteration Dataset",
94
- ),
95
- datasets.BuilderConfig(
96
- name="alt-my-west-transliteration",
97
- version=datasets.Version("1.0.0"),
98
- description="Latin-Myanmar Transliteration Dataset",
99
- ),
100
- ]
101
-
102
- DEFAULT_CONFIG_NAME = "alt-parallel"
103
-
104
- def _info(self):
105
- if self.config.name.startswith("alt-parallel"):
106
- features = datasets.Features(
107
- {
108
- "SNT.URLID": datasets.Value("string"),
109
- "SNT.URLID.SNTID": datasets.Value("string"),
110
- "url": datasets.Value("string"),
111
- "translation": datasets.features.Translation(languages=self.config.languages),
112
- }
113
- )
114
- elif self.config.name == "alt-en":
115
- features = datasets.Features(
116
- {
117
- "SNT.URLID": datasets.Value("string"),
118
- "SNT.URLID.SNTID": datasets.Value("string"),
119
- "url": datasets.Value("string"),
120
- "status": datasets.Value("string"),
121
- "value": datasets.Value("string"),
122
- }
123
- )
124
- elif self.config.name == "alt-jp":
125
- features = datasets.Features(
126
- {
127
- "SNT.URLID": datasets.Value("string"),
128
- "SNT.URLID.SNTID": datasets.Value("string"),
129
- "url": datasets.Value("string"),
130
- "status": datasets.Value("string"),
131
- "value": datasets.Value("string"),
132
- "word_alignment": datasets.Value("string"),
133
- "jp_tokenized": datasets.Value("string"),
134
- "en_tokenized": datasets.Value("string"),
135
- }
136
- )
137
- elif self.config.name == "alt-my":
138
- features = datasets.Features(
139
- {
140
- "SNT.URLID": datasets.Value("string"),
141
- "SNT.URLID.SNTID": datasets.Value("string"),
142
- "url": datasets.Value("string"),
143
- "value": datasets.Value("string"),
144
- }
145
- )
146
- elif self.config.name == "alt-my-transliteration":
147
- features = datasets.Features(
148
- {
149
- "en": datasets.Value("string"),
150
- "my": datasets.Sequence(datasets.Value("string")),
151
- }
152
- )
153
- elif self.config.name == "alt-my-west-transliteration":
154
- features = datasets.Features(
155
- {
156
- "en": datasets.Value("string"),
157
- "my": datasets.Sequence(datasets.Value("string")),
158
- }
159
- )
160
- elif self.config.name == "alt-km":
161
- features = datasets.Features(
162
- {
163
- "SNT.URLID": datasets.Value("string"),
164
- "SNT.URLID.SNTID": datasets.Value("string"),
165
- "url": datasets.Value("string"),
166
- "km_pos_tag": datasets.Value("string"),
167
- "km_tokenized": datasets.Value("string"),
168
- }
169
- )
170
- else:
171
- raise ValueError(f"Unknown configuration name: {self.config.name}")
172
-
173
- return datasets.DatasetInfo(
174
- description=_DESCRIPTION,
175
- features=features,
176
- supervised_keys=None,
177
- homepage=_HOMEPAGE,
178
- citation=_CITATION,
179
- )
180
-
181
- def _split_generators(self, dl_manager):
182
- if self.config.name.startswith("alt-parallel"):
183
- data_path = dl_manager.download_and_extract(_URLs["alt"])
184
- else:
185
- data_path = dl_manager.download_and_extract(_URLs[self.config.name])
186
-
187
- if self.config.name == "alt-my-transliteration" or self.config.name == "alt-my-west-transliteration":
188
- return [
189
- datasets.SplitGenerator(
190
- name=datasets.Split.TRAIN,
191
- gen_kwargs={"basepath": data_path, "split": None},
192
- )
193
- ]
194
- else:
195
- data_split = {}
196
- for k in _SPLIT:
197
- data_split[k] = dl_manager.download_and_extract(_SPLIT[k])
198
-
199
- return [
200
- datasets.SplitGenerator(
201
- name=datasets.Split.TRAIN,
202
- gen_kwargs={"basepath": data_path, "split": data_split["train"]},
203
- ),
204
- datasets.SplitGenerator(
205
- name=datasets.Split.VALIDATION,
206
- gen_kwargs={"basepath": data_path, "split": data_split["dev"]},
207
- ),
208
- datasets.SplitGenerator(
209
- name=datasets.Split.TEST,
210
- gen_kwargs={"basepath": data_path, "split": data_split["test"]},
211
- ),
212
- ]
213
-
214
- def _generate_examples(self, basepath, split=None):
215
- allow_urls = {}
216
- if split is not None:
217
- with open(split, encoding="utf-8") as fin:
218
- for line in fin:
219
- sp = line.strip().split("\t")
220
- urlid = sp[0].replace("URL.", "")
221
- allow_urls[urlid] = {"SNT.URLID": urlid, "url": sp[1]}
222
-
223
- data = {}
224
- if self.config.name.startswith("alt-parallel"):
225
- files = self.config.languages
226
-
227
- data = {}
228
- for lang in files:
229
- file_path = os.path.join(basepath, "ALT-Parallel-Corpus-20191206", f"data_{lang}.txt")
230
- fin = open(file_path, encoding="utf-8")
231
- for line in fin:
232
- line = line.strip()
233
- sp = line.split("\t")
234
-
235
- _, urlid, sntid = sp[0].split(".")
236
- if urlid not in allow_urls:
237
- continue
238
-
239
- if sntid not in data:
240
- data[sntid] = {}
241
- data[sntid]["SNT.URLID"] = urlid
242
- data[sntid]["SNT.URLID.SNTID"] = sntid
243
- data[sntid]["url"] = allow_urls[urlid]["url"]
244
- data[sntid]["translation"] = {}
245
-
246
- # Note that Japanese and Myanmar texts have empty sentence fields in this release.
247
- if len(sp) >= 2:
248
- data[sntid]["translation"][lang] = sp[1]
249
-
250
- fin.close()
251
-
252
- elif self.config.name == "alt-en":
253
- data = {}
254
- for fname in ["English-ALT-Draft.txt", "English-ALT-Reviewed.txt"]:
255
- file_path = os.path.join(basepath, "English-ALT-20170107", fname)
256
- fin = open(file_path, encoding="utf-8")
257
- for line in fin:
258
- line = line.strip()
259
- sp = line.split("\t")
260
-
261
- _, urlid, sntid = sp[0].split(".")
262
- if urlid not in allow_urls:
263
- continue
264
-
265
- d = {
266
- "SNT.URLID": urlid,
267
- "SNT.URLID.SNTID": sntid,
268
- "url": allow_urls[urlid]["url"],
269
- "status": None,
270
- "value": None,
271
- }
272
-
273
- d["value"] = sp[1]
274
- if fname == "English-ALT-Draft.txt":
275
- d["status"] = "draft"
276
- else:
277
- d["status"] = "reviewed"
278
-
279
- data[sntid] = d
280
- fin.close()
281
- elif self.config.name == "alt-jp":
282
- data = {}
283
- for fname in ["Japanese-ALT-Draft.txt", "Japanese-ALT-Reviewed.txt"]:
284
- file_path = os.path.join(basepath, "Japanese-ALT-20170330", fname)
285
- fin = open(file_path, encoding="utf-8")
286
- for line in fin:
287
- line = line.strip()
288
- sp = line.split("\t")
289
- _, urlid, sntid = sp[0].split(".")
290
- if urlid not in allow_urls:
291
- continue
292
-
293
- d = {
294
- "SNT.URLID": urlid,
295
- "SNT.URLID.SNTID": sntid,
296
- "url": allow_urls[urlid]["url"],
297
- "value": None,
298
- "status": None,
299
- "word_alignment": None,
300
- "en_tokenized": None,
301
- "jp_tokenized": None,
302
- }
303
-
304
- d["value"] = sp[1]
305
- if fname == "Japanese-ALT-Draft.txt":
306
- d["status"] = "draft"
307
- else:
308
- d["status"] = "reviewed"
309
- data[sntid] = d
310
- fin.close()
311
-
312
- keys = {
313
- "word_alignment": "word-alignment/data_ja.en-ja",
314
- "en_tokenized": "word-alignment/data_ja.en-tok",
315
- "jp_tokenized": "word-alignment/data_ja.ja-tok",
316
- }
317
- for k in keys:
318
- file_path = os.path.join(basepath, "Japanese-ALT-20170330", keys[k])
319
- fin = open(file_path, encoding="utf-8")
320
- for line in fin:
321
- line = line.strip()
322
- sp = line.split("\t")
323
-
324
- # Note that Japanese and Myanmar texts have empty sentence fields in this release.
325
- if len(sp) < 2:
326
- continue
327
-
328
- _, urlid, sntid = sp[0].split(".")
329
- if urlid not in allow_urls:
330
- continue
331
-
332
- if sntid in data:
333
-
334
- data[sntid][k] = sp[1]
335
- fin.close()
336
-
337
- elif self.config.name == "alt-my":
338
- data = {}
339
- for fname in ["data"]:
340
- file_path = os.path.join(basepath, "my-alt-190530", fname)
341
- fin = open(file_path, encoding="utf-8")
342
- for line in fin:
343
- line = line.strip()
344
- sp = line.split("\t")
345
- _, urlid, sntid = sp[0].split(".")
346
- if urlid not in allow_urls:
347
- continue
348
-
349
- data[sntid] = {
350
- "SNT.URLID": urlid,
351
- "SNT.URLID.SNTID": sntid,
352
- "url": allow_urls[urlid]["url"],
353
- "value": sp[1],
354
- }
355
- fin.close()
356
-
357
- elif self.config.name == "alt-km":
358
- data = {}
359
- for fname in ["data_km.km-tag.nova", "data_km.km-tok.nova"]:
360
- file_path = os.path.join(basepath, "km-nova-181101", fname)
361
- fin = open(file_path, encoding="utf-8")
362
- for line in fin:
363
- line = line.strip()
364
- sp = line.split("\t")
365
- _, urlid, sntid = sp[0].split(".")
366
- if urlid not in allow_urls:
367
- continue
368
-
369
- k = "km_pos_tag" if fname == "data_km.km-tag.nova" else "km_tokenized"
370
- if sntid in data:
371
- data[sntid][k] = sp[1]
372
- else:
373
- data[sntid] = {
374
- "SNT.URLID": urlid,
375
- "SNT.URLID.SNTID": sntid,
376
- "url": allow_urls[urlid]["url"],
377
- "km_pos_tag": None,
378
- "km_tokenized": None,
379
- }
380
- data[sntid][k] = sp[1]
381
- fin.close()
382
-
383
- elif self.config.name == "alt-my-transliteration":
384
- file_path = os.path.join(basepath, "my-en-transliteration", "data.txt")
385
- # Need to set errors='ignore' because of the unknown error
386
- # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
387
- # It might be due to encoding issues with the Myanmar script.
388
- fin = open(file_path, encoding="utf-8", errors="ignore")
389
- _id = 0
390
- for line in fin:
391
- line = line.strip()
392
-
393
- # Strip stray \x00 bytes that appear around the ||| separators; they are not visible in a text editor.
394
- line = line.replace("\x00", "")
395
- sp = line.split("|||")
396
-
397
- # Skip empty lines that appear between the actual sentences.
398
- if len(sp) < 2:
399
- continue
400
-
401
- data[_id] = {"en": sp[0].strip(), "my": [sp[1].strip()]}
402
- _id += 1
403
- fin.close()
404
- elif self.config.name == "alt-my-west-transliteration":
405
- file_path = os.path.join(basepath, "western-myanmar-transliteration", "321.txt")
406
- # Need to set errors='ignore' because of the unknown error
407
- # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
408
- # It might be due to encoding issues with the Myanmar script.
409
- fin = open(file_path, encoding="utf-8", errors="ignore")
410
- _id = 0
411
- for line in fin:
412
- line = line.strip()
413
- line = line.replace("\x00", "")
414
- sp = line.split("|||")
415
-
416
- data[_id] = {"en": sp[0].strip(), "my": [k.strip() for k in sp[1].split("|")]}
417
- _id += 1
418
- fin.close()
419
-
420
- _id = 1
421
- for k in data:
422
- yield _id, data[k]
423
- _id += 1
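For reference, the deleted script above keyed every example by the identifier in the first tab-separated column of each data file, of the form `SNT.<URLID>.<SNTID>` (e.g. `SNT.80188.1`, matching the examples in the dataset card). A hypothetical helper that mirrors that parsing, shown only to document the convention:

```python
def parse_snt_id(raw_id: str):
    """Split an ALT sentence id such as 'SNT.80188.1' into (URLID, SNTID)."""
    _, urlid, sntid = raw_id.split(".")
    return urlid, sntid

assert parse_snt_id("SNT.80188.1") == ("80188", "1")
```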
dataset_infos.json DELETED
@@ -1 +0,0 @@
1
- {"alt-parallel": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-parallel", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 68462384, "num_examples": 18094, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 3712980, "num_examples": 1004, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 3815633, "num_examples": 1019, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206.zip": {"num_bytes": 21105607, "checksum": "05f7b31b517d4c4e074bb7fb57277758c0e3e15d1ad9cfc5727e9bce79b07bbd"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 21285784, "post_processing_size": null, "dataset_size": 75990997, "size_in_bytes": 97276781}, "alt-en": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. 
The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "status": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10075609, "num_examples": 17889, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 544739, "num_examples": 988, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 567292, "num_examples": 1017, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/English-ALT-20170107.zip": {"num_bytes": 2558878, "checksum": "c1d7dcbbf5548cfad9232c07464ff4bb0cf5fb2cd0c00af53cf5fa02a02594f0"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 2739055, "post_processing_size": null, "dataset_size": 11187640, "size_in_bytes": 13926695}, "alt-jp": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "status": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}, "word_alignment": {"dtype": "string", "id": null, "_type": "Value"}, "jp_tokenized": {"dtype": "string", "id": null, "_type": "Value"}, "en_tokenized": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-jp", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 21891867, "num_examples": 17202, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 1181587, "num_examples": 953, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 1175624, "num_examples": 931, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/Japanese-ALT-20170330.zip": {"num_bytes": 11827822, "checksum": "7749af9f337fcbf09dffffc2d5314ea5757a91ffb199aaa4f027467a3ecd805e"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 12007999, "post_processing_size": null, "dataset_size": 24249078, "size_in_bytes": 36257077}, "alt-my": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20433275, "num_examples": 18088, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 1111410, "num_examples": 1000, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 1135209, "num_examples": 1018, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-alt-190530.zip": {"num_bytes": 2848125, "checksum": "d77ef18364bcb2b149503a5ed77734b07b103bd277f8ed92716555f3deedaf95"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 3028302, "post_processing_size": null, "dataset_size": 22679894, "size_in_bytes": 25708196}, "alt-km": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "km_pos_tag": {"dtype": "string", "id": null, "_type": "Value"}, "km_tokenized": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-km", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12015411, "num_examples": 18088, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 655232, "num_examples": 1000, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 673753, "num_examples": 1018, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/km-nova-181101.zip": {"num_bytes": 2230655, "checksum": "0c6457d4a3327f3dc0b381704cbad71af120e963bfa1cdb06765fa0ed0c9098a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 2410832, "post_processing_size": null, "dataset_size": 13344396, "size_in_bytes": 15755228}, "alt-my-transliteration": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"en": {"dtype": "string", "id": null, "_type": "Value"}, "my": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my-transliteration", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4249424, "num_examples": 84022, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-en-transliteration.zip": {"num_bytes": 1232127, "checksum": "5b348c0f9e92d4699fddb4c64fd7d929eb6f6de6f7ce4d879bf91e8d4a82f063"}}, "download_size": 1232127, "post_processing_size": null, "dataset_size": 4249424, "size_in_bytes": 5481551}, "alt-my-west-transliteration": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"en": {"dtype": "string", "id": null, "_type": "Value"}, "my": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my-west-transliteration", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7412043, "num_examples": 107121, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/western-myanmar-transliteration.zip": {"num_bytes": 2830071, "checksum": "c3f1419022d823791b6d85b259a18ab11d8f8800367d7ec4319e49fc016ec396"}}, "download_size": 2830071, "post_processing_size": null, "dataset_size": 7412043, "size_in_bytes": 10242114}}