Files changed (1)
  1. README.md +101 -62
README.md CHANGED
@@ -9,14 +9,21 @@ language:
  - bjn
  - bug
  - gor
  - id
  - jv
- - mis
  - min
  - ms
  - nia
  - su
  - tet
  license:
  - cc-by-sa-3.0
  - gfdl
@@ -35,6 +42,7 @@ tags:
  - Wikipedia
  - Southeast Asia (SEA)
  - Dialect
  - SEA-related Languages
  - SEA Local Languages
  dataset_info:
@@ -366,6 +374,33 @@ license: cc-by-sa-3.0
  ---
  Welcome to SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

  # **FAQS**
  ### What are the available languages provided in this dataset, and from which countries?
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
@@ -385,79 +420,83 @@ You may check the following tables to understand the current coverage of this dataset
  | vnm | Vietnam | [Wiki Link](https://en.wikipedia.org/wiki/Vietnam) |

  #### 2. Table of Languages and the Countries of Their Speakers
- | Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Data | Total Size (bytes) |
- | :---: | :---: | :---: | :--- | ---: | ---: |
- | ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4867838 |
- | ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 17366080 |
- | bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6655378 |
- | bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 2072609 |
- | gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5989252 |
- | km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11994 | 103146669 |
- | id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1100932403 |
- | jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 69774853 |
- | lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 5014 | 15240262 |
- | mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1612542 |
- | map_bms | Banyumasan <br />(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 5060989 |
- | mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3296 | 47321734 |
- | min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 116376870 |
- | ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 410443550 |
- | my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 109310 | 313370839 |
- | nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1938121 |
- | shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13945 | 33754296 |
- | su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 47410439 |
- | tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1452716 |
- | th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159719 | 1012930269 |
- | vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1288680 | 1603057632 |

- Some other languages in SEA that are already exists its Wiki Index at Wikimedia might be missing from this list. Any lang update PR is greatly appreciated!

- ### How do I extract new Wikipedia Dataset of SEA languages?
- You may check to the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementations, or you can adjust the bash provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to extract it on your own. Please note that this dataset is extensible to any languages of your choice.

- ### How do I extract new Wikipedia Dataset of SEA languages?
- You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check any latest available data and this link [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) to map into any languages that you're wanting to extract.

  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here is processed with the following flow:
- 1. Raw data is being deduplicated on ```title``` and ```text``` (text-content from a given article), to remove articles containing boilerplate text (template text that are used usually for no-available informations or asking for contributions of content in that article), which usually deemed noisy for NLP data.
- 2. Furthermore, the ```title``` and ```text``` data are being checked for string-matching duplication (duplication of text that are being pre-processed, i.e symbols removed, HTML tags striped, or ASCII-chars validated). You may check this [ ```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation.

- # Getting Started #
- ### To read the datasets directly ###
- Use one of the following code chunks to load it from HuggingFace Hub:
- You can refer to the 2nd args of ```config name``` using the following script
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- "seawiki_dedup_all" # a config name, can be "seawiki_dedup_all" or "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all" , defaults to "seawiki_dedup_all"
- )
- ```
- Or you can provide both ```lang``` and ```date_stamp``` (or just lang only by assuming the ```date_stamp``` will take the newest one)
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- lang = "id", # see README for complete lang choices
- date_stamp="20230901"
- )
- ```
- Or you can provide a ```country``` params with similar fashion to ```lang``` args(providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg)
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- lang = "id", # see the splits for complete lang choices
- date_stamp="20230901"
- )
- ```

  ### To replicate the whole dataset generation process ###
  1. Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements of this codebase via ```pip install -r requirements.txt```.
  2. Activate the chosen Python/Conda environment in which the requirements are installed.
- 3. Force install ```multiprocess==0.70.15``` by using ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now, esp for Python 3.10.x)
- 4. Run this ```sh``` script for extractions from Wikimedia Dump:<\b>
- ```sh extract_raw_wiki_data_sea.sh```.
- 5. Run this ```sh``` script of deduplication:<\b>
- ```sh dedup_raw_wiki_data_sea.sh```.

  ## Citation Info:
  ```
 
  - bjn
  - bug
  - gor
+ - km
  - id
  - jv
+ - lo
+ - mad
+ - mnw
  - min
  - ms
+ - my
  - nia
+ - shn
  - su
  - tet
+ - th
+ - vi
  license:
  - cc-by-sa-3.0
  - gfdl

  - Wikipedia
  - Southeast Asia (SEA)
  - Dialect
+ - Banyumasan Dialect of Javanese (Ngapak)
  - SEA-related Languages
  - SEA Local Languages
  dataset_info:

  ---
  Welcome to SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

+ # Getting Started #
+ ### To read the datasets directly ###
+ Use one of the following code chunks to load the dataset from the HuggingFace Hub.
+ You can pass the ```config name``` as the 2nd argument:
+ ```
+ from datasets import load_dataset
+
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     "seawiki_dedup_all" # config name: "seawiki_dedup_all", "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all"; defaults to "seawiki_dedup_all"
+ )
+ ```
+ Or you can provide both ```lang``` and ```date_stamp``` (or ```lang``` only, in which case ```date_stamp``` defaults to the newest available dump):
+ ```
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     lang="id", # see README for complete lang choices
+     date_stamp="20230901"
+ )
+ ```
+ Or you can provide a ```country``` param in similar fashion to the ```lang``` arg (providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg):
+ ```
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     country="idn", # see the country table in the FAQ for complete country choices
+     date_stamp="20230901"
+ )
+ ```
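+ As a quick sanity check, the snippet below (a minimal sketch; splits are laid out per language subset, so the exact split names depend on the chosen config) prints what a loaded config contains:
+ ```
+ from datasets import load_dataset
+
+ dataset = load_dataset("sabilmakbar/sea_wiki", "seawiki_dedup_all")
+ # Each record carries at least the article title and text (see the dedup notes below).
+ for split_name, split in dataset.items():
+     print(split_name, split.num_rows, split.column_names)
+ ```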
+
  # **FAQS**
  ### What are the available languages provided in this dataset, and from which countries?
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
 
  | vnm | Vietnam | [Wiki Link](https://en.wikipedia.org/wiki/Vietnam) |

  #### 2. Table of Languages and the Countries of Their Speakers
+ | Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Articles | Total Size (MiB, rounded) |
+ | :---: | :---: | :---: | :--- | ---: | ---: |
+ | ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4.64 |
+ | ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 16.56 |
+ | bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6.35 |
+ | bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 1.98 |
+ | gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5.71 |
+ | km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11994 | 98.37 |
+ | id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1049.93 |
+ | jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 66.54 |
+ | lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 5014 | 14.53 |
+ | mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1.54 |
+ | map_bms | Banyumasan <br>(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 4.83 |
+ | mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3296 | 45.13 |
+ | min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 110.99 |
+ | ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 391.43 |
+ | my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 109310 | 298.85 |
+ | nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1.85 |
+ | shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13945 | 32.19 |
+ | su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 45.21 |
+ | tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1.39 |
+ | th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159719 | 966.00 |
+ | vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Vietnamese_language) | 1288680 | 1528.79 |

+ #### 3. Table of Token Statistics for Covered Languages
+ The token statistics are generated with ```tiktoken``` using the GPT-4 encoder (a reproduction sketch follows the table below).

+ | Lang Code | Total Tokens | Avg Tokens per Article | Min Tokens | Max Tokens | Token Deciles |
+ | :---: | ---: | ---: | ---: | ---: | :--- |
+ | ace | 1,370,829 | 105.62 | 3 | 9,659 | [38.0, 52.0, 54.0, 69.0, 76.0, 84.0, 90.0, 123.0, 126.0] |
+ | ban | 5,924,610 | 287.45 | 5 | 24,364 | [97.0, 144.0, 165.0, 187.0, 209.0, 245.0, 276.0, 315.0, 421.0] |
+ | bjn | 1,935,505 | 184.28 | 2 | 30,170 | [36.0, 38.0, 39.0, 40.0, 42.0, 51.0, 82.0, 151.0, 367.0] |
+ | bug | 553,693 | 55.54 | 1 | 13,951 | [31.0, 42.0, 43.0, 46.0, 48.0, 50.0, 52.0, 55.0, 57.0] |
+ | gor | 1,575,766 | 103.06 | 2 | 5,525 | [55.0, 58.0, 60.0, 62.0, 64.0, 66.0, 69.0, 75.0, 96.0] |
+ | id | 325,411,713 | 491.23 | 1 | 198,597 | [54.0, 93.0, 123.0, 145.0, 180.0, 226.0, 332.0, 543.0, 1068.0] |
+ | jv | 23,528,314 | 321.95 | 2 | 342,156 | [48.0, 60.0, 75.0, 88.0, 117.0, 175.0, 270.0, 420.0, 772.0] |
+ | km | 54,559,721 | 4,758.39 | 1 | 1,110,771 | [160.0, 293.0, 452.0, 693.0, 1032.0, 1609.0, 2644.0, 4745.0, 9607.0] |
+ | lo | 9,395,636 | 1,918.65 | 3 | 107,154 | [134.0, 184.2, 285.0, 494.0, 658.0, 894.6, 1258.0, 1971.2, 4153.8] |
+ | mad | 611,736 | 513.20 | 14 | 17,093 | [80.1, 110.2, 135.0, 161.0, 194.0, 242.0, 302.7, 531.4, 1167.1] |
+ | map_bms | 1,307,244 | 110.42 | 1 | 20,629 | [20.0, 21.0, 22.0, 24.0, 30.0, 35.0, 36.0, 38.0, 111.0] |
+ | min | 33,114,184 | 146.54 | 3 | 58,387 | [81.0, 91.0, 96.0, 108.0, 119.0, 135.0, 156.0, 168.0, 170.0] |
+ | mnw | 31,595,647 | 9,659.32 | 6 | 1,450,765 | [425.0, 601.0, 629.0, 682.0, 763.0, 2103.0, 4255.0, 7724.0, 14517.0] |
+ | ms | 121,343,673 | 348.64 | 1 | 68,545 | [32.0, 40.0, 49.0, 63.0, 105.0, 138.0, 216.0, 362.0, 788.0] |
+ | my | 189,439,447 | 1,740.87 | 10 | 1,376,658 | [164.0, 269.0, 350.0, 508.0, 559.0, 578.0, 605.0, 892.4, 3369.0] |
+ | nia | 795,527 | 464.13 | 8 | 18,650 | [59.0, 61.0, 63.0, 65.0, 67.0, 86.0, 239.1, 623.4, 1249.7] |
+ | shn | 23,125,637 | 1,692.70 | 2 | 204,094 | [460.0, 480.0, 585.0, 679.0, 715.0, 740.0, 756.0, 780.0, 1580.9] |
+ | su | 14,710,124 | 239.08 | 1 | 99,456 | [41.0, 43.0, 45.0, 49.0, 70.0, 146.0, 216.0, 219.0, 419.0] |
+ | tet | 487,016 | 332.66 | 4 | 24,287 | [30.3, 47.0, 66.9, 101.0, 164.0, 177.0, 187.0, 248.6, 604.4] |
+ | th | 330,964,733 | 2,072.86 | 1 | 289,150 | [231.0, 390.0, 546.0, 727.0, 969.0, 1276.0, 1741.0, 2533.0, 4361.0] |
+ | vi | 546,481,258 | 424.32 | 3 | 246,463 | [46.0, 64.0, 71.0, 80.0, 86.0, 92.0, 120.0, 240.0, 824.0] |
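+
+ A minimal sketch of how these statistics can be reproduced, assuming the default ```seawiki_dedup_all``` config and the ```text``` column (deciles computed with ```numpy```):
+ ```
+ import numpy as np
+ import tiktoken
+ from datasets import load_dataset
+
+ enc = tiktoken.encoding_for_model("gpt-4")  # resolves to the cl100k_base encoder
+ dataset = load_dataset("sabilmakbar/sea_wiki", "seawiki_dedup_all")
+
+ for split_name, split in dataset.items():
+     counts = [len(enc.encode(text)) for text in split["text"]]
+     deciles = np.percentile(counts, range(10, 100, 10)).round(1).tolist()
+     print(split_name, sum(counts), sum(counts) / len(counts),
+           min(counts), max(counts), deciles)
+ ```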

+ Some other languages in SEA that already have their Wiki Index at Wikimedia might be missing from this list. Any lang update PR is greatly appreciated!

  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here is processed with the following flow:
+ 1. Raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used for unavailable information or for asking for content contributions), which is usually deemed noisy for NLP data.
+ 2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication after light pre-processing (symbols removed, HTML tags stripped, ASCII/UTF-8 chars validated). You may check this [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation. A simplified sketch of both passes follows this list.
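+ The sketch below is illustrative only; the exact normalization logic lives in ```dedup_raw_wiki_data.py```:
+ ```
+ import re
+ import pandas as pd
+
+ df = pd.DataFrame({
+     "title": ["Aceh", "Aceh", "Banda Aceh"],
+     "text": ["Aceh is a province.", "Aceh is a province.", "<p>Banda Aceh is a city.</p>"],
+ })
+
+ # Pass 1: drop exact duplicates on title and text.
+ df = df.drop_duplicates(subset=["title", "text"])
+
+ # Pass 2: drop rows whose text still matches after light normalization
+ # (HTML tags stripped, symbols removed, whitespace collapsed).
+ def normalize(s):
+     s = re.sub(r"<[^>]+>", " ", s)
+     s = re.sub(r"[^\w\s]", " ", s)
+     return re.sub(r"\s+", " ", s).strip().lower()
+
+ df = df[~df["text"].map(normalize).duplicated(keep="first")]
+ ```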

+ ### How do I extract a new Wikipedia dataset of SEA languages?
+ You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or you can adjust the bash script provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to extract it on your own.
+
+ ### Where do I check the latest dumps and the languages available for extraction?
+ You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data, and this [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map it to any language you want to extract. Please note that this dataset is extensible to any languages of your choice.
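+ For reference, a single language dump can also be pulled straight from Wikipedia HF, which is what the extraction script automates per language. A hypothetical example (the language and date stamp below are illustrative, and older dump dates may no longer be hosted):
+ ```
+ from datasets import load_dataset
+
+ wiki_tl = load_dataset("wikipedia", language="tl", date="20230901", beam_runner="DirectRunner")
+ ```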

  ### To replicate the whole dataset generation process ###
  1. Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements of this codebase via ```pip install -r requirements.txt```.
+
  2. Activate the chosen Python/Conda environment in which the requirements are installed.
+
+ 3. Force-install ```multiprocess==0.70.15``` via ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now).
+
+ 4. Run this ```sh``` script for extraction from Wikipedia HF using ```sh extract_raw_wiki_data_sea.sh```<br>
+ This script runs [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to construct the Wiki dataset.
+
+ 5. Run this ```sh``` script for deduplication of the data extracted in Step 4 using ```sh dedup_raw_wiki_data_sea.sh```<br>
+ This script runs [_```dedup_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) to do the Wiki dataset cleansing. Please note that the cleansing process can be language/dialect specific.

  ## Citation Info:
  ```