sabilmakbar committed on
Commit
a5d8f0d
1 Parent(s): e63d35a

Update README.md and Add Script to Count Token Stats (#3)

- Update README and add count_data_stats.py (bebdc61e3d45c7399458179feaee25cf1f8abe84)

Files changed (2)
  1. README.md +92 -61
  2. count_data_stats.py +40 -0
README.md CHANGED
@@ -374,6 +374,33 @@ license: cc-by-sa-3.0
  ---
  Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

  # **FAQS**
  ### What are the available languages provided in the dataset, and from which countries?
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
@@ -393,79 +420,83 @@ You may check the following tables to understand the current coverage of this dataset
  | vnm | Vietnam | [Wiki Link](https://en.wikipedia.org/wiki/Vietnam) |

  #### 2. Table of Languages and Countries of their speakers
- | Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Data | Total Size (bytes) |
- :---: | :---: | :---: | :--- | ---: | ---: |
- | ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4867838 |
- | ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 17366080 |
- | bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6655378 |
- | bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 2072609 |
- | gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5989252 |
- | km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11994 | 103146669 |
- | id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1100932403 |
- | jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 69774853 |
- | lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 5014 | 15240262 |
- | mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1612542 |
- | map_bms | Banyumasan <br />(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 5060989 |
- | mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3296 | 47321734 |
- | min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 116376870 |
- | ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 410443550 |
- | my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 109310 | 313370839 |
- | nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1938121 |
- | shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13945 | 33754296 |
- | su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 47410439 |
- | tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1452716 |
- | th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159719 | 1012930269 |
- | vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1288680 | 1603057632 |

- Some other languages in SEA that are already exists its Wiki Index at Wikimedia might be missing from this list. Any lang update PR is greatly appreciated!

- ### How do I extract new Wikipedia Dataset of SEA languages?
- You may check to the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementations, or you can adjust the bash provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to extract it on your own. Please note that this dataset is extensible to any languages of your choice.

- ### How do I extract new Wikipedia Dataset of SEA languages?
- You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check any latest available data and this link [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) to map into any languages that you're wanting to extract.

  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here are processed with the following flow:
- 1. Raw data is being deduplicated on ```title``` and ```text``` (text-content from a given article), to remove articles containing boilerplate text (template text that are used usually for no-available informations or asking for contributions of content in that article), which usually deemed noisy for NLP data.
- 2. Furthermore, the ```title``` and ```text``` data are being checked for string-matching duplication (duplication of text that are being pre-processed, i.e symbols removed, HTML tags striped, or ASCII-chars validated). You may check this [ ```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation.

- # Getting Started #
- ### To read the datasets directly ###
- Use one of the following code chunks to load it from HuggingFace Hub:
- You can refer to the 2nd args of ```config name``` using the following script
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- "seawiki_dedup_all" # a config name, can be "seawiki_dedup_all" or "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all" , defaults to "seawiki_dedup_all"
- )
- ```
- Or you can provide both ```lang``` and ```date_stamp``` (or just lang only by assuming the ```date_stamp``` will take the newest one)
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- lang = "id", # see README for complete lang choices
- date_stamp="20230901"
- )
- ```
- Or you can provide a ```country``` params with similar fashion to ```lang``` args(providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg)
- ```
- dataset = load_dataset(
- "sabilmakbar/sea_wiki",
- lang = "id", # see the splits for complete lang choices
- date_stamp="20230901"
- )
- ```

  ### To replicate the whole dataset generation process ###
  1. Set-up a new Python/Conda Environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements listed in ```requirements.txt``` via ```pip install -r requirements.txt```.
  2. Activate the chosen Python/Conda environment in which the requirements were installed.
- 3. Force install ```multiprocess==0.70.15``` by using ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now, esp for Python 3.10.x)
- 4. Run this ```sh``` script for extractions from Wikimedia Dump:<\b>
- ```sh extract_raw_wiki_data_sea.sh```.
- 5. Run this ```sh``` script of deduplication:<\b>
- ```sh dedup_raw_wiki_data_sea.sh```.

  ## Citation Info:
  ```
 
  ---
  Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

+ # Getting Started #
+ ### To read the datasets directly ###
+ Use one of the following code chunks to load the dataset from the HuggingFace Hub.
+ You can pass a ```config name``` as the second argument, as in the following script:
+ ```
+ from datasets import load_dataset
+
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     "seawiki_dedup_all" # config name; can be "seawiki_dedup_all" (the default), "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all"
+ )
+ ```
+ Or you can provide both ```lang``` and ```date_stamp``` (or just ```lang```; ```date_stamp``` then defaults to the newest available dump):
+ ```
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     lang = "id", # see README for complete lang choices
+     date_stamp="20230901"
+ )
+ ```
+ Or you can provide a ```country``` param in a similar fashion to the ```lang``` arg (providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg):
+ ```
+ dataset = load_dataset(
+     "sabilmakbar/sea_wiki",
+     country = "idn", # see the country table below for complete country choices
+     date_stamp="20230901"
+ )
+ ```
+
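Once loaded, the result behaves like a regular ```DatasetDict```. As a quick sanity check, here is a minimal sketch (assuming the default ```seawiki_dedup_all``` config, whose splits are keyed by language code as in the tables below):
```
from datasets import load_dataset

# Load the deduplicated config and inspect the per-language splits.
dataset = load_dataset("sabilmakbar/sea_wiki", "seawiki_dedup_all")

for lang, split in dataset.items():
    print(lang, split.num_rows)   # e.g. "ace 12904", matching the table below

print(dataset["id"][0]["title"])  # peek at the first Indonesian article title
```
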
  # **FAQS**
  ### What are the available languages provided in the dataset, and from which countries?
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
 
  | vnm | Vietnam | [Wiki Link](https://en.wikipedia.org/wiki/Vietnam) |

  #### 2. Table of Languages and Countries of their speakers
+ | Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Articles | Total Size (MiB, rounded) |
+ | :---: | :---: | :---: | :--- | ---: | ---: |
+ | ace | Acehnese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4.64 |
+ | ban | Balinese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 16.56 |
+ | bjn | Banjarese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6.35 |
+ | bug | Buginese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 1.98 |
+ | gor | Gorontalo | idn | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5.71 |
+ | km | Khmer | khm | [Wiki Link](https://en.wikipedia.org/wiki/Khmer_language) | 11994 | 98.37 |
+ | id | Indonesian | idn | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1049.93 |
+ | jv | Javanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 66.54 |
+ | lo | Lao | lao | [Wiki Link](https://en.wikipedia.org/wiki/Lao_language) | 5014 | 14.53 |
+ | mad | Madurese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Madurese_language) | 1192 | 1.54 |
+ | map_bms | Banyumasan <br>(Dialect of Javanese) | idn | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 4.83 |
+ | mnw | Mon | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Mon_language) | 3296 | 45.13 |
+ | min | Minangkabau | idn | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 110.99 |
+ | ms | Malay | mys, sgp, brn, idn | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 391.43 |
+ | my | Burmese | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Burmese_language) | 109310 | 298.85 |
+ | nia | Nias | idn | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1.85 |
+ | shn | Shan | mmr | [Wiki Link](https://en.wikipedia.org/wiki/Shan_language) | 13945 | 32.19 |
+ | su | Sundanese | idn | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 45.21 |
+ | tet | Tetum | tls, idn | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1.39 |
+ | th | Thai | tha | [Wiki Link](https://en.wikipedia.org/wiki/Thai_language) | 159719 | 966.00 |
+ | vi | Vietnamese | vnm | [Wiki Link](https://en.wikipedia.org/wiki/Vietnamese_language) | 1288680 | 1528.79 |

+ #### 3. Table of Token Statistics for Covered Languages
+ The token statistics are generated with ```tiktoken```, using the GPT-4 encoder (see ```count_data_stats.py``` in this repo).
+
+ | Lang Code | Total Token | Avg Token per Article | Min Token | Max Token | Token Deciles List |
+ | :---: | ---: | ---: | ---: | ---: | :--- |
+ | ace | 1,370,829 | 105.61899992295247 | 3 | 9,659 | [38.0, 52.0, 54.0, 69.0, 76.0, 84.0, 90.0, 123.0, 126.0] |
+ | ban | 5,924,610 | 287.44893503469024 | 5 | 24,364 | [97.0, 144.0, 165.0, 187.0, 209.0, 245.0, 276.0, 315.0, 421.0] |
+ | bjn | 1,935,505 | 184.28115776444827 | 2 | 30,170 | [36.0, 38.0, 39.0, 40.0, 42.0, 51.0, 82.0, 151.0, 367.0] |
+ | bug | 553,693 | 55.54147858360919 | 1 | 13,951 | [31.0, 42.0, 43.0, 46.0, 48.0, 50.0, 52.0, 55.0, 57.0] |
+ | gor | 1,575,766 | 103.05860039241334 | 2 | 5,525 | [55.0, 58.0, 60.0, 62.0, 64.0, 66.0, 69.0, 75.0, 96.0] |
+ | id | 325,411,713 | 491.22975561670967 | 1 | 198,597 | [54.0, 93.0, 123.0, 145.0, 180.0, 226.0, 332.0, 543.0, 1068.0] |
+ | jv | 23,528,314 | 321.95284619594963 | 2 | 342,156 | [48.0, 60.0, 75.0, 88.0, 117.0, 175.0, 270.0, 420.0, 772.0] |
+ | km | 54,559,721 | 4,758.391854177568 | 1 | 1,110,771 | [160.0, 293.0, 452.0, 693.0, 1032.0, 1609.0, 2644.0, 4745.0, 9607.0] |
+ | lo | 9,395,636 | 1,918.6514192362672 | 3 | 107,154 | [134.0, 184.2, 285.0, 494.0, 658.0, 894.6, 1258.0, 1971.2, 4153.8] |
+ | mad | 611,736 | 513.2013422818792 | 14 | 17,093 | [80.1, 110.2, 135.0, 161.0, 194.0, 242.0, 302.7, 531.4, 1167.1] |
+ | map_bms | 1,307,244 | 110.41844750401216 | 1 | 20,629 | [20.0, 21.0, 22.0, 24.0, 30.0, 35.0, 36.0, 38.0, 111.0] |
+ | min | 33,114,184 | 146.54109358681606 | 3 | 58,387 | [81.0, 91.0, 96.0, 108.0, 119.0, 135.0, 156.0, 168.0, 170.0] |
+ | mnw | 31,595,647 | 9,659.3234484867 | 6 | 1,450,765 | [425.0, 601.0, 629.0, 682.0, 763.0, 2103.0, 4255.0, 7724.0, 14517.0] |
+ | ms | 121,343,673 | 348.64363228892813 | 1 | 68,545 | [32.0, 40.0, 49.0, 63.0, 105.0, 138.0, 216.0, 362.0, 788.0] |
+ | my | 189,439,447 | 1,740.8673761015998 | 10 | 1,376,658 | [164.0, 269.0, 350.0, 508.0, 559.0, 578.0, 605.0, 892.4, 3369.0] |
+ | nia | 795,527 | 464.134772462077 | 8 | 18,650 | [59.0, 61.0, 63.0, 65.0, 67.0, 86.0, 239.1, 623.4, 1249.7] |
+ | shn | 23,125,637 | 1,692.6977748499487 | 2 | 204,094 | [460.0, 480.0, 585.0, 679.0, 715.0, 740.0, 756.0, 780.0, 1580.9] |
+ | su | 14,710,124 | 239.07627297697022 | 1 | 99,456 | [41.0, 43.0, 45.0, 49.0, 70.0, 146.0, 216.0, 219.0, 419.0] |
+ | tet | 487,016 | 332.6612021857924 | 4 | 24,287 | [30.3, 47.0, 66.9, 101.0, 164.0, 177.0, 187.0, 248.6, 604.4] |
+ | th | 330,964,733 | 2,072.8566695476807 | 1 | 289,150 | [231.0, 390.0, 546.0, 727.0, 969.0, 1276.0, 1741.0, 2533.0, 4361.0] |
+ | vi | 546,481,258 | 424.3163404275143 | 3 | 246,463 | [46.0, 64.0, 71.0, 80.0, 86.0, 92.0, 120.0, 240.0, 824.0] |
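
For reference, this is how a single article's token count is obtained, as a minimal sketch mirroring ```count_data_stats.py``` below (the sample string here is just a placeholder):
```
import tiktoken

# GPT-4 uses the cl100k_base encoding.
encoding = tiktoken.encoding_for_model("gpt-4")

sample_text = "placeholder article text"  # hypothetical; real inputs are the "text" column
print(len(encoding.encode(sample_text)))  # number of tokens in the string
```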
 
+ Some other SEA languages that already have a Wiki index at Wikimedia might be missing from this list. Any PR adding a language is greatly appreciated!

  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
  The data available here are processed with the following flow:
+ 1. Raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used for unavailable information or to ask for content contributions), which is usually considered noisy for NLP data.
+ 2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication on a lightly pre-processed version of the text (i.e. symbols removed, HTML tags stripped, ASCII/UTF-8 chars validated). You may check the [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation; a minimal sketch of the idea follows below.
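
The following is a minimal, illustrative sketch of that second step (not the actual implementation in ```dedup_raw_wiki_data.py```; the normalization rules here are simplified assumptions):
```
import re
import pandas as pd

def normalize(text: str) -> str:
    # Simplified normalization: strip HTML tags, drop symbols, collapse whitespace, lowercase.
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

df = pd.DataFrame({"title": ["A", "A", "B"], "text": ["Foo bar.", "Foo  bar!", "Baz."]})
df = df.drop_duplicates(subset=["title", "text"])   # exact duplicates on title/text
df = df[~df["text"].map(normalize).duplicated()]    # near-duplicates after normalization
print(df)
```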

+ ### How do I extract a new Wikipedia dataset for SEA languages?
+ You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or you can adjust the bash script provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to extract it on your own.
+
+ ### Where can I check the latest available dumps and language coverage?
+ You may visit the [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data and the [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map it to any language you want to extract. Please note that this dataset is extensible to any language of your choice.

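For orientation, pulling a raw dump snapshot for one language from the upstream [Wikipedia HF](https://huggingface.co/datasets/wikipedia) dataset typically looks like the call below (a sketch based on the upstream dataset card; the exact arguments used inside ```extract_raw_wiki_data.py``` may differ, and ```apache_beam``` plus ```mwparserfromhell``` are assumed to be installed):
```
from datasets import load_dataset

# Illustrative only: the language code and dump date here are example values.
raw_wiki = load_dataset(
    "wikipedia",
    language="ace",
    date="20230901",
    beam_runner="DirectRunner",
)
print(raw_wiki["train"][0]["title"])
```
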
  ### To replicate the whole dataset generation process ###
  1. Set-up a new Python/Conda Environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements listed in ```requirements.txt``` via ```pip install -r requirements.txt```.
+
  2. Activate the chosen Python/Conda environment in which the requirements were installed.
+
+ 3. Force install ```multiprocess==0.70.15``` via ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now).
+
+ 4. Run this ```sh``` script to extract the data from Wikipedia HF using ```sh extract_raw_wiki_data_sea.sh```<br>
+ This script will run [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to construct the Wiki dataset.
+
+ 5. Run this ```sh``` script to deduplicate the data extracted in Step 4 using ```sh dedup_raw_wiki_data_sea.sh```<br>
+ This script will run [_```dedup_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) to do the Wiki dataset cleansing. Please note that the cleansing process can be language/dialect-specific. A recap of the full command sequence is sketched below.
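
For convenience, steps 1-5 roughly amount to the following shell commands (a sketch assuming a Unix-like shell with the chosen environment already activated):
```
pip install -r requirements.txt
pip install multiprocess==0.70.15   # workaround for the datasets/multiprocess issue noted in step 3

# Step 4: extract the raw Wiki data for the SEA languages
sh extract_raw_wiki_data_sea.sh

# Step 5: deduplicate the extracted data
sh dedup_raw_wiki_data_sea.sh
```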

  ## Citation Info:
  ```
count_data_stats.py ADDED
@@ -0,0 +1,40 @@
+ import multiprocessing as mp
+
+ import numpy as np
+ from datasets import load_dataset
+
+ import tiktoken
+
+ def num_tokens_from_string(string: str):
+     """Returns the number of tokens in a text string."""
+     num_tokens = len(encoding.encode(string))
+     return num_tokens
+
+ def cnt_token_in_hf_wiki_dset(data):
+     data["token_cnt"] = num_tokens_from_string(data["text"])
+     return data
+
+ if __name__ == "__main__":
+
+     dataset = load_dataset("sabilmakbar/sea_wiki")
+
+     encoding = tiktoken.encoding_for_model('gpt-4')
+
+     stat_dict = {}
+     for split, dset in dataset.items():
+         dset_text = dset.select_columns(['text'])
+         print(f"Counting total token in split lang: {split}")
+         dset_text = dset_text.map(cnt_token_in_hf_wiki_dset, num_proc=max(mp.cpu_count()-2,1))
+         token_data = list(dset_text["token_cnt"])
+         total_token = sum(token_data)
+         avg_token = sum(token_data)/len(token_data)
+         min_token = min(token_data)
+         max_token = max(token_data)
+         deciles = np.percentile(token_data, np.arange(10, 100, 10)).tolist()
+         stat_dict[split] = {"total": total_token, "avg": avg_token, "min": min_token, "max": max_token, "deciles": deciles}
+
+     # for markdown table format
+     print("| Lang Code | Total Token | Avg Token per Article | Min Token | Max Token | Token Deciles List |")
+     print("| :---: | ---: | ---: | ---: | ---: | :--- |")
+     for key, data in stat_dict.items():
+         print(f"| {key} | {data['total']:,} | {data['avg']:,} | {data['min']:,} | {data['max']:,} | {[round(num,2) for num in data['deciles']]} |")
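
One way to regenerate the token statistics table in the README (assuming the repo requirements plus ```tiktoken``` are installed) is to run the script and redirect its markdown output:
```
python count_data_stats.py > token_stats.md
```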