
Multilinguality: multilingual
Size Categories: 10K<n<100K, 1K<n<10K
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
Committed by albertvillanova (HF staff)
Commit: ea3db35
Parent: 700647c

Convert dataset to Parquet (#4)


- Convert dataset to Parquet (cd8cb2dbdbd724c954e04b03748586b54b571581)
- Add arabic data files (eea6a4a1606f4777bd972e6c466ca455e3ba2a26)
- Add chinese data files (c32dce6ec09356c3f54ae2ddf6ccde4a7ecde030)
- Add czech data files (0f6940724607ecee3958925f1cb23c027d1c0aae)
- Add dutch data files (0b65f1ed2d747a576240cb4092c626161c86ea63)
- Add french data files (ddc5b3a971a3ec0d84c6a75458e8d0dcac969fa6)
- Add german data files (2355ac8e32b8c1cbbbdad7a1bc9ad44ec6831932)
- Add hindi data files (9bc9978fefa0624927b2731d1cf9deb337f92228)
- Add indonesian data files (388dd02f242de0e6c63c01abcfa2b85e95b16408)
- Add italian data files (919e7d2a04c444b777d4bf12eb8f5e583019434c)
- Add japanese data files (f150845b9734ef38544a16c030d4a677abac4649)
- Add korean data files (c550a744298751c737743bcf7e8f55f0294f33ee)
- Add portuguese data files (d7d9808c24279d09b1dd16c345dd74f9e2592d52)
- Add russian data files (a77ee278d57072bc52ee7339531cc531c43991f3)
- Add spanish data files (2dba8f024cfd5c6e0176d39eed47ecdf33094395)
- Add thai data files (40dce069d2ba9d3ad1f8c059791087dccf8ac772)
- Add turkish data files (2aa4f7d186a54ef12238c9cc6a8e2d2d3efac573)
- Add vietnamese data files (851acfd6d91040e57b0bdce37de57764b6cc324d)
- Delete loading script (07b31084349cd5ea4e47709288764131db7d6b14)
- Delete data folder (f214d709f9d1ae47e3051fa71c0c0c14e62b01a6)
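
With the data in Parquet and the configs mapping in the updated card below, each language configuration loads without the old loading script. A minimal sketch, assuming the dataset id wiki_lingua and the datasets library (the nested article field comes back as a dict of parallel lists):

from datasets import load_dataset

# Load the English configuration from the Parquet files added in this commit.
ds = load_dataset("wiki_lingua", "english", split="train")

example = ds[0]
print(example["url"])
print(example["article"]["section_name"][:3])  # section titles of the first article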

README.md CHANGED
@@ -36,6 +36,25 @@ task_categories:
 task_ids: []
 paperswithcode_id: wikilingua
 pretty_name: WikiLingua
+config_names:
+- arabic
+- chinese
+- czech
+- dutch
+- english
+- french
+- german
+- hindi
+- indonesian
+- italian
+- japanese
+- korean
+- portuguese
+- russian
+- spanish
+- thai
+- turkish
+- vietnamese
 dataset_info:
 - config_name: arabic
   features:
@@ -55,10 +74,10 @@ dataset_info:
       dtype: string
   splits:
   - name: train
-    num_bytes: 119116119
+    num_bytes: 119116075
     num_examples: 9995
-  download_size: 119358890
-  dataset_size: 119116119
+  download_size: 55808460
+  dataset_size: 119116075
 - config_name: chinese
   features:
   - name: url
@@ -77,10 +96,10 @@ dataset_info:
       dtype: string
   splits:
   - name: train
-    num_bytes: 41170689
+    num_bytes: 41170645
     num_examples: 6541
-  download_size: 41345464
-  dataset_size: 41170689
+  download_size: 25187026
+  dataset_size: 41170645
 - config_name: czech
   features:
   - name: url
@@ -99,10 +118,10 @@ dataset_info:
       dtype: string
   splits:
   - name: train
-    num_bytes: 20816390
+    num_bytes: 20816346
     num_examples: 2520
-  download_size: 20894511
-  dataset_size: 20816390
+  download_size: 12480761
+  dataset_size: 20816346
 - config_name: dutch
   features:
   - name: url
@@ -121,10 +140,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 87258040
+    num_bytes: 87257952
     num_examples: 10862
-  download_size: 87533442
-  dataset_size: 87258040
+  download_size: 47651076
+  dataset_size: 87257952
 - config_name: english
   features:
   - name: url
@@ -139,10 +158,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 333700114
+    num_bytes: 333699946
     num_examples: 57945
-  download_size: 338036185
-  dataset_size: 333700114
+  download_size: 187189233
+  dataset_size: 333699946
 - config_name: french
   features:
   - name: url
@@ -161,10 +180,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 197550376
+    num_bytes: 197550244
     num_examples: 21690
-  download_size: 198114157
-  dataset_size: 197550376
+  download_size: 105158840
+  dataset_size: 197550244
 - config_name: german
   features:
   - name: url
@@ -183,10 +202,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 168674340
+    num_bytes: 168674208
     num_examples: 20103
-  download_size: 169195050
-  dataset_size: 168674340
+  download_size: 93078076
+  dataset_size: 168674208
 - config_name: hindi
   features:
   - name: url
@@ -205,10 +224,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 63785051
+    num_bytes: 63785007
     num_examples: 3402
-  download_size: 63874759
-  dataset_size: 63785051
+  download_size: 22774620
+  dataset_size: 63785007
 - config_name: indonesian
   features:
   - name: url
@@ -227,10 +246,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 136408861
+    num_bytes: 136408773
     num_examples: 16308
-  download_size: 136833587
-  dataset_size: 136408861
+  download_size: 67658970
+  dataset_size: 136408773
 - config_name: italian
   features:
   - name: url
@@ -249,10 +268,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 138119527
+    num_bytes: 138119439
     num_examples: 17673
-  download_size: 138578956
-  dataset_size: 138119527
+  download_size: 78108134
+  dataset_size: 138119439
 - config_name: japanese
   features:
   - name: url
@@ -271,10 +290,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 40145031
+    num_bytes: 40144987
     num_examples: 4372
-  download_size: 40259570
-  dataset_size: 40145031
+  download_size: 19794488
+  dataset_size: 40144987
 - config_name: korean
   features:
   - name: url
@@ -293,10 +312,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 38647614
+    num_bytes: 38647570
     num_examples: 4111
-  download_size: 38748961
-  dataset_size: 38647614
+  download_size: 20029486
+  dataset_size: 38647570
 - config_name: portuguese
   features:
   - name: url
@@ -315,10 +334,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 204270845
+    num_bytes: 204270713
     num_examples: 28143
-  download_size: 204997686
-  dataset_size: 204270845
+  download_size: 114735912
+  dataset_size: 204270713
 - config_name: russian
   features:
   - name: url
@@ -337,10 +356,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 241924032
+    num_bytes: 241923944
     num_examples: 18143
-  download_size: 242377242
-  dataset_size: 241924032
+  download_size: 111025228
+  dataset_size: 241923944
 - config_name: spanish
   features:
   - name: url
@@ -359,10 +378,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 314618618
+    num_bytes: 314618442
     num_examples: 38795
-  download_size: 315609530
-  dataset_size: 314618618
+  download_size: 170995186
+  dataset_size: 314618442
 - config_name: thai
   features:
   - name: url
@@ -381,10 +400,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 86982851
+    num_bytes: 86982807
     num_examples: 5093
-  download_size: 87104200
-  dataset_size: 86982851
+  download_size: 31944979
+  dataset_size: 86982807
 - config_name: turkish
   features:
   - name: url
@@ -403,10 +422,10 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 11371821
+    num_bytes: 11371777
     num_examples: 1512
-  download_size: 11405793
-  dataset_size: 11371821
+  download_size: 5964904
+  dataset_size: 11371777
 - config_name: vietnamese
   features:
   - name: url
@@ -425,29 +444,84 @@ dataset_info:
      dtype: string
   splits:
   - name: train
-    num_bytes: 69868788
+    num_bytes: 69868744
     num_examples: 6616
-  download_size: 70024093
-  dataset_size: 69868788
-config_names:
-- arabic
-- chinese
-- czech
-- dutch
-- english
-- french
-- german
-- hindi
-- indonesian
-- italian
-- japanese
-- korean
-- portuguese
-- russian
-- spanish
-- thai
-- turkish
-- vietnamese
+  download_size: 33194150
+  dataset_size: 69868744
+configs:
+- config_name: arabic
+  data_files:
+  - split: train
+    path: arabic/train-*
+- config_name: chinese
+  data_files:
+  - split: train
+    path: chinese/train-*
+- config_name: czech
+  data_files:
+  - split: train
+    path: czech/train-*
+- config_name: dutch
+  data_files:
+  - split: train
+    path: dutch/train-*
+- config_name: english
+  data_files:
+  - split: train
+    path: english/train-*
+  default: true
+- config_name: french
+  data_files:
+  - split: train
+    path: french/train-*
+- config_name: german
+  data_files:
+  - split: train
+    path: german/train-*
+- config_name: hindi
+  data_files:
+  - split: train
+    path: hindi/train-*
+- config_name: indonesian
+  data_files:
+  - split: train
+    path: indonesian/train-*
+- config_name: italian
+  data_files:
+  - split: train
+    path: italian/train-*
+- config_name: japanese
+  data_files:
+  - split: train
+    path: japanese/train-*
+- config_name: korean
+  data_files:
+  - split: train
+    path: korean/train-*
+- config_name: portuguese
+  data_files:
+  - split: train
+    path: portuguese/train-*
+- config_name: russian
+  data_files:
+  - split: train
+    path: russian/train-*
+- config_name: spanish
+  data_files:
+  - split: train
+    path: spanish/train-*
+- config_name: thai
+  data_files:
+  - split: train
+    path: thai/train-*
+- config_name: turkish
+  data_files:
+  - split: train
+    path: turkish/train-*
+- config_name: vietnamese
+  data_files:
+  - split: train
+    path: vietnamese/train-*
 ---
 # Dataset Card for "wiki_lingua"
 
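
Since each configuration now points at plain Parquet shards (see the configs section above), a shard can also be read without the datasets library. A hedged sketch, assuming huggingface_hub is installed so fsspec can resolve hf:// paths and that the dataset id is wiki_lingua:

import pandas as pd

# Read the English train shard added in this commit straight from the Hub.
df = pd.read_parquet("hf://datasets/wiki_lingua/english/train-00000-of-00001.parquet")
print(len(df))  # 57945 rows per the card metadata above
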
data/arabic.jsonl.gz → arabic/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7c3d0c024d5e024392c7d51c70c2c7a2e3241ed52159c4d5591f1736d15b520d
-size 41403753
+oid sha256:f2745b66592238430976fcff744e6889348e0a083fb8135991143a5d809fcfc9
+size 55808460

data/chinese.jsonl.gz → chinese/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:89a721b220f58ef77b7d570cab4b093b84aa24c77524f6d293f3112687c161a9
-size 19099290
+oid sha256:1dc2e185c67537ae823ac8ccb19844ed56292c0d6f4abb8736001ab95332191b
+size 25187026

data/czech.jsonl.gz → czech/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:719327825ffc23bd6d7392e8a06db168e18069e35d0f150ba719eb438e1d6e9b
-size 8293848
+oid sha256:dc35187b90d6db416b0c53ec9e15d8cbbe97331c84d23a91dc10655447b6f931
+size 12480761
data/english.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:1cd5a804b1a64763e13c97fa1ad1d22d0263e55bcd71a42d31b340b0c8cb4d29
-size 115537674

data/french.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a687aad0b3fe602ae47bb071767a12c39280a8491781987c6f7333507f4ed14e
-size 65059668

data/german.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b8a9bdc538934041eac8a31cb331a4b35a2e5151b27f28959507105afbfda2a3
-size 58091919

data/hindi.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d70b966b130158943c201f0c7f572ed084514a76685521484c2960712377bf9c
-size 14567375

data/indonesian.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6584fe08ee0d2b996cee4b2476191b4eb8c2d58f374478364b1e1edccd85806d
-size 41852908

data/italian.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:2ea94c8c2a1be3b4c67c1eabc3fe03d607570bd754d983cfa9434d5f3b53424e
-size 46917823

data/japanese.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:169a5a7fb0c4e166a2e125158bb9c6972c5a76c6d3abfe3f267068c5ef0debcd
-size 14416407

data/korean.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:28441564238c7f0830cee9486c4cbc028f3a646e91436556c28a56e6c34aae88
-size 14417542

data/portuguese.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:52ac0deeae96eef1a3f89805a3a944f61b1f80ad9b6c0f8511e9c3f54ec4e010
-size 71411075

data/russian.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5956df21ca5eaeb2f8cdb549ca51873a659aaecada100d94ae2b711a7d867b01
-size 79624829

data/spanish.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5431e36af351461d6acd9da6248943ab054d03f2c8277a151cbd9ae076781c30
-size 104218559

data/thai.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a8b3c99077a95df6caf10d72016c4570a1bd6914c76ae3d3c41fd3eedf87e75d
-size 19741058

data/vietnamese.jsonl.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6735c342ebc3cf347bbd8052c58a26fbf6d2f5154e8ab2acf4726276405f046c
-size 22117520
data/dutch.jsonl.gz → dutch/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:241595c7cc4031f8fed3e76502e0bb1a9a9871d097b85ab8a83075ed4f5b4407
-size 29400461
+oid sha256:f1b1e9c8a6fd372c283c773feb850be6fc1dcce01e9d868e7267508bb02df85d
+size 47651076
english/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6ad770ae41bbc5e608115d1edc031054cb9e2db435477ba9b2f0e2e57c5bc52
+size 187189233

french/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e991eb694d808ec7db0f5d2c3ae43c0606479056c35ae4b4943af3a674f15f6
+size 105158840

german/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb1701668ef59472b8880b5985c70eb03b01f256c2d0fba101e175f0bbe1cb2b
+size 93078076

hindi/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41121eaf887df1daae40b500d31435ed20480b9ace43a31040a659d2c8f32f28
+size 22774620

indonesian/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27739987cf8644b77bf782856b82be2c909f3c7b06b75708475d9d87e4e67202
+size 67658970

italian/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bef4d0fdc15f18741603081d7b0cd3274c9f75b974f0e97bbeb96c49c1821f5d
+size 78108134

japanese/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2832440410262a221d349578d5eb70dcb405b907cb99743578234ee9f8783d4
+size 19794488

korean/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2178d57b4546ac1ddce8777be5326f98a1bbbf45bd97716bb74e1a0a5b794983
+size 20029486

portuguese/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8576ee866decc2ee9b58242927730d10792c91ae6c6f56dbea24d1fef9d73314
+size 114735912

russian/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c6d18e02707ecfbc1306e0773e276ca556e797be1e6c874988799ae5784ccfc
+size 111025228

spanish/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abfdda4ff04884b84a0bb499385e83f0465b1d20582e631a8ca81bd17da693d0
+size 170995186

thai/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7bc76020f72867db7d2b0f72c8bc553b6af3c128012b2b0ea2a8c96c07f1b9c
+size 31944979

data/turkish.jsonl.gz → turkish/train-00000-of-00001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3a3d5eb7d9bb943fc5d6c1765d7740d88c2aae78aa5ad32b6b5f49e55020a350
-size 3877836
+oid sha256:823e383173e29f12923e91165d6141d9e003c186d5fa1d89a1119b180ccd5c51
+size 5964904
vietnamese/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0679307729dd3678ff80e0478bcc0132c4d5285bb92b7d5d3ac95659bb93505
+size 33194150
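
The per-language Parquet layout above can be verified by listing the repository files. A small sketch, assuming the huggingface_hub client and the wiki_lingua dataset id:

from huggingface_hub import list_repo_files

# List everything in the dataset repo and keep only the Parquet shards.
files = list_repo_files("wiki_lingua", repo_type="dataset")
print(sorted(f for f in files if f.endswith(".parquet")))
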
wiki_lingua.py DELETED
@@ -1,168 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""WikiLingua."""
-
-
-import json
-
-import datasets
-
-
-# Find for instance the citation on arxiv or on the dataset repo/website
-_CITATION = """\
-@inproceedings{ladhak-etal-2020-wikilingua,
-    title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
-    author = "Ladhak, Faisal and
-      Durmus, Esin and
-      Cardie, Claire and
-      McKeown, Kathleen",
-    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
-    month = nov,
-    year = "2020",
-    address = "Online",
-    publisher = "Association for Computational Linguistics",
-    url = "https://aclanthology.org/2020.findings-emnlp.360",
-    doi = "10.18653/v1/2020.findings-emnlp.360",
-    pages = "4034--4048",
-}
-"""
-
-_DESCRIPTION = """\
-WikiLingua is a large-scale multilingual dataset for the evaluation of
-cross-lingual abstractive summarization systems. The dataset includes ~770k
-article and summary pairs in 18 languages from WikiHow. The gold-standard
-article-summary alignments across languages was done by aligning the images
-that are used to describe each how-to step in an article.
-"""
-
-_HOMEPAGE = "https://github.com/esdurmus/Wikilingua"
-
-_LICENSE = "CC BY-NC-SA 3.0"
-
-# Download link
-_URL = "data/{language}.jsonl.gz"
-_LANGUAGES = [
-    "arabic",
-    "chinese",
-    "czech",
-    "dutch",
-    "english",
-    "french",
-    "german",
-    "hindi",
-    "indonesian",
-    "italian",
-    "japanese",
-    "korean",
-    "portuguese",
-    "russian",
-    "spanish",
-    "thai",
-    "turkish",
-    "vietnamese",
-]
-
-
-class WikiLingua(datasets.GeneratorBasedBuilder):
-    """WikiLingua dataset."""
-
-    VERSION = datasets.Version("1.1.1")
-
-    BUILDER_CONFIGS = [
-        datasets.BuilderConfig(
-            name=lang,
-            version=datasets.Version("1.1.1"),
-            description=f"A subset of article-summary in {lang.capitalize()}",
-        )
-        for lang in _LANGUAGES
-    ]
-
-    DEFAULT_CONFIG_NAME = "english"
-
-    def _info(self):
-        if self.config.name == "english":
-            features = datasets.Features(
-                {
-                    "url": datasets.Value("string"),
-                    "article": datasets.Sequence(
-                        {
-                            "section_name": datasets.Value("string"),
-                            "document": datasets.Value("string"),
-                            "summary": datasets.Value("string"),
-                        }
-                    ),
-                }
-            )
-        else:
-            features = datasets.Features(
-                {
-                    "url": datasets.Value("string"),
-                    "article": datasets.Sequence(
-                        {
-                            "section_name": datasets.Value("string"),
-                            "document": datasets.Value("string"),
-                            "summary": datasets.Value("string"),
-                            "english_url": datasets.Value("string"),
-                            "english_section_name": datasets.Value("string"),
-                        }
-                    ),
-                }
-            )
-
-        return datasets.DatasetInfo(
-            # This is the description that will appear on the datasets page.
-            description=_DESCRIPTION,
-            # This defines the different columns of the dataset and their types
-            features=features,  # Here we define them above because they are different between the two configurations
-            # Homepage of the dataset for documentation
-            homepage=_HOMEPAGE,
-            # License for the dataset if available
-            license=_LICENSE,
-            # Citation for the dataset
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-        filepath = dl_manager.download_and_extract(_URL.format(language=self.config.name))
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={
-                    "filepath": filepath,
-                },
-            ),
-        ]
-
-    def _process_article(self, article):
-        """Parse the article and convert into list of dict"""
-        processed_article = []
-        for key, value in article.items():
-            row = {"section_name": key, "document": value["document"], "summary": value["summary"]}
-
-            if self.config.name != "english":
-                row["english_url"] = value["english_url"]
-                row["english_section_name"] = value["english_section_name"]
-            processed_article.append(row)
-
-        return processed_article
-
-    def _generate_examples(self, filepath):
-        """Yields examples."""
-        with open(filepath, "rb") as f:
-            for id_, line in enumerate(f):
-                row = json.loads(line)
-                yield id_, {"url": row["url"], "article": self._process_article(row["article"])}
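
The deleted script is what defined the per-config schema: every config has url plus an article sequence with section_name, document and summary, and non-English configs additionally carry english_url and english_section_name for the cross-lingual alignment. The same structure comes back from the Parquet files; a minimal sketch of unpacking one non-English example, assuming the french config loads as shown earlier:

from datasets import load_dataset

ds = load_dataset("wiki_lingua", "french", split="train")
article = ds[0]["article"]  # a Sequence of dicts is exposed as a dict of parallel lists

for section, doc, summary, en_url in zip(
    article["section_name"], article["document"], article["summary"], article["english_url"]
):
    # Each how-to section pairs a document with its summary and the aligned English URL.
    print(section, len(doc), len(summary), en_url)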