
SEA Wikipedia Data Repository


Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from Wikipedia HF and processed using the scripts available in this repository for reproducibility purposes. Since Wikipedia itself is licensed under cc-by-sa 4.0, we decided to follow that license rather than the cc-by-sa 3.0 of the Wikipedia HF data, since it gives more rights to the initial authors/contributors.

Getting Started

To read the datasets directly

Use one of the following code snippets to load it from the HuggingFace Hub. The second positional argument is the config name; a snippet for listing the available config names follows the first example.

from datasets import load_dataset

dataset = load_dataset(
  "sabilmakbar/sea_wiki",
  "seawiki_dedup_all" # config name; one of "seawiki_dedup_all", "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all"; defaults to "seawiki_dedup_all"
)
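
If you are unsure which configs this repo exposes, you can list them programmatically. A minimal sketch using the datasets helper get_dataset_config_names (newer datasets versions may additionally require trust_remote_code=True for script-based repos like this one):

from datasets import get_dataset_config_names

# List the config names exposed by this dataset repo; any of them can be
# passed as the second argument of load_dataset above.
print(get_dataset_config_names("sabilmakbar/sea_wiki"))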

Alternatively, you can provide both lang and date_stamp (or just lang, in which case date_stamp defaults to the newest available dump):

dataset = load_dataset(
  "sabilmakbar/sea_wiki",
  lang = "id", # see README for complete lang choices
  date_stamp="20230901"
)

Or you can provide a country kwarg in a similar fashion to the lang kwarg (providing both country and lang will prioritize the lang kwarg):

dataset = load_dataset(
  "sabilmakbar/sea_wiki",
  country = "idn", # see the country table below for complete country choices
  date_stamp="20230901"
)
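
Whichever variant you use, load_dataset returns a standard DatasetDict, so the usual inspection patterns apply. A minimal sketch (the split names depend on the config and kwargs you chose; title and text are the article fields referenced in the preprocessing notes below):

# Inspect what was loaded: splits, columns, and one sample article
print(dataset)                              # splits and row counts
first_split = list(dataset.keys())[0]
print(dataset[first_split].column_names)    # article fields, e.g. "title" and "text"
print(dataset[first_split][0]["title"])     # peek at one record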

FAQS

What languages are available in this dataset, and from which countries?

You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume). All tables are sorted by the leftmost column.

1. Table of Countries and Their Country Codes

| Country Code | Country Name | Wiki Info |
| --- | --- | --- |
| brn | Brunei | Wiki Link |
| idn | Indonesia | Wiki Link |
| khm | Cambodia | Wiki Link |
| lao | Laos | Wiki Link |
| mmr | Myanmar | Wiki Link |
| mys | Malaysia | Wiki Link |
| phl | Philippines | Wiki Link |
| sgp | Singapore | Wiki Link |
| tha | Thailand | Wiki Link |
| tls | East Timor | Wiki Link |
| vnm | Vietnam | Wiki Link |

2. Table of Languages and the Countries of Their Speakers

| ISO 639-3 Lang Code | Dataset Lang Code | Lang Name | Country Codes Spoken | Wiki Info | Total Data | Total Size (MiB, rounded) |
| --- | --- | --- | --- | --- | --- | --- |
| ace | ace | Acehnese | idn | Wiki Link | 12979 | 4.72 |
| ban | ban | Balinese | idn | Wiki Link | 20611 | 17.19 |
| bcl | bcl | Central Bicolano | phl | Wiki Link | 14079 | 19.05 |
| bjn | bjn | Banjarese | idn | Wiki Link | 10503 | 6.47 |
| bug | bug | Buginese | idn | Wiki Link | 9969 | 2.08 |
| bur | my | Burmese | mmr | Wiki Link | 108819 | 298.49 |
| cbk | cbk_zam | Zamboanga Chavacano/Chavacano | phl | Wiki Link | 2242 | 1.51 |
| ceb | ceb | Cebuano | phl | Wiki Link | 5815254 | 4,145.16 |
| gor | gor | Gorontalo | idn | Wiki Link | 15290 | 5.93 |
| ilo | ilo | Ilokano | phl | Wiki Link | 15369 | 15.94 |
| ind | id | Indonesian | idn | Wiki Link | 662443 | 1,066.10 |
| jav | jv | Javanese | idn | Wiki Link | 73080 | 68.66 |
| khm | km | Khmer | khm | Wiki Link | 11466 | 97.94 |
| lao | lo | Lao | lao | Wiki Link | 4897 | 14.22 |
| mad | mad | Madurese | idn | Wiki Link | 1192 | 1.54 |
| may | ms | Malay | mys, sgp, brn, idn | Wiki Link | 348045 | 395.57 |
| min | min | Minangkabau | idn | Wiki Link | 225972 | 111.31 |
| mnw | mnw | Mon | mmr | Wiki Link | 3271 | 45.05 |
| nia | nia | Nias | idn | Wiki Link | 1714 | 2.05 |
| pag | pag | Pangasinan | phl | Wiki Link | 1108 | 0.73 |
| pam | pam | Kapampangan | phl | Wiki Link | 8932 | 7.83 |
| shn | shn | Shan | mmr | Wiki Link | 13662 | 32.06 |
| sun | su | Sundanese | idn | Wiki Link | 61529 | 45.31 |
| tam | ta | Tamil | mys, sgp | Wiki Link | 160580 | 771.58 |
| tgl | tl | Tagalog | phl | Wiki Link | 45121 | 81.34 |
| tha | th | Thai | tha | Wiki Link | 159666 | 965.95 |
| tet | tet | Tetum | tls, idn | Wiki Link | 1464 | 1.38 |
| vie | vi | Vietnamese | vnm | Wiki Link | 1287912 | 1,528.58 |
| war | war | Waray | phl | Wiki Link | 1266204 | 433.22 |
| (dialect) | map_bms | Banyumasan (Dialect of Javanese) | idn | Wiki Link | 11839 | 4.83 |

3. Table of Token Statistics for Covered Languages

The token statistics are generated using tiktoken with the GPT-4 encoder; a minimal sketch of how such statistics can be computed is shown right after the table.

| Dataset Lang Code | Total Token | Avg Token per Article | Min Token | Max Token | Token Deciles List |
| --- | --- | --- | --- | --- | --- |
| ace | 1,370,829 | 105.61899992295247 | 3 | 9,659 | [38.0, 52.0, 54.0, 69.0, 76.0, 84.0, 90.0, 123.0, 126.0] |
| ban | 5,924,610 | 287.44893503469024 | 5 | 24,364 | [97.0, 144.0, 165.0, 187.0, 209.0, 245.0, 276.0, 315.0, 421.0] |
| bcl | 6,234,838 | 442.8466510405569 | 2 | 54,049 | [55.0, 95.0, 143.0, 179.0, 226.0, 304.0, 419.0, 587.0, 917.2] |
| bjn | 1,935,505 | 184.28115776444827 | 2 | 30,170 | [36.0, 38.0, 39.0, 40.0, 42.0, 51.0, 82.0, 151.0, 367.0] |
| bug | 553,693 | 55.54147858360919 | 1 | 13,951 | [31.0, 42.0, 43.0, 46.0, 48.0, 50.0, 52.0, 55.0, 57.0] |
| cbk_zam | 402,703 | 179.6177520071365 | 2 | 6,494 | [35.0, 41.2, 56.0, 69.0, 90.0, 120.0, 138.0, 155.0, 294.9] |
| ceb | 1,319,601,771 | 226.92074516435568 | 4 | 221,802 | [93.0, 108.0, 123.0, 136.0, 163.0, 207.0, 278.0, 377.0, 426.0] |
| gor | 1,575,766 | 103.05860039241334 | 2 | 5,525 | [55.0, 58.0, 60.0, 62.0, 64.0, 66.0, 69.0, 75.0, 96.0] |
| id | 325,411,713 | 491.22975561670967 | 1 | 198,597 | [54.0, 93.0, 123.0, 145.0, 180.0, 226.0, 332.0, 543.0, 1068.0] |
| ilo | 5,593,491 | 363.94632051532307 | 17 | 18,202 | [59.0, 80.0, 91.0, 111.0, 152.0, 213.0, 303.0, 461.0, 856.0] |
| jv | 23,528,314 | 321.95284619594963 | 2 | 342,156 | [48.0, 60.0, 75.0, 88.0, 117.0, 175.0, 270.0, 420.0, 772.0] |
| km | 54,559,721 | 4,758.391854177568 | 1 | 1,110,771 | [160.0, 293.0, 452.0, 693.0, 1032.0, 1609.0, 2644.0, 4745.0, 9607.0] |
| lo | 9,395,636 | 1,918.6514192362672 | 3 | 107,154 | [134.0, 184.2, 285.0, 494.0, 658.0, 894.6, 1258.0, 1971.2, 4153.8] |
| mad | 611,736 | 513.2013422818792 | 14 | 17,093 | [80.1, 110.2, 135.0, 161.0, 194.0, 242.0, 302.7, 531.4, 1167.1] |
| map_bms | 1,307,244 | 110.41844750401216 | 1 | 20,629 | [20.0, 21.0, 22.0, 24.0, 30.0, 35.0, 36.0, 38.0, 111.0] |
| min | 33,114,184 | 146.54109358681606 | 3 | 58,387 | [81.0, 91.0, 96.0, 108.0, 119.0, 135.0, 156.0, 168.0, 170.0] |
| mnw | 31,595,647 | 9,659.3234484867 | 6 | 1,450,765 | [425.0, 601.0, 629.0, 682.0, 763.0, 2103.0, 4255.0, 7724.0, 14517.0] |
| ms | 121,343,673 | 348.64363228892813 | 1 | 68,545 | [32.0, 40.0, 49.0, 63.0, 105.0, 138.0, 216.0, 362.0, 788.0] |
| my | 189,439,447 | 1,740.8673761015998 | 10 | 1,376,658 | [164.0, 269.0, 350.0, 508.0, 559.0, 578.0, 605.0, 892.4, 3369.0] |
| nia | 795,527 | 464.134772462077 | 8 | 18,650 | [59.0, 61.0, 63.0, 65.0, 67.0, 86.0, 239.1, 623.4, 1249.7] |
| pag | 222,366 | 200.6913357400722 | 5 | 10,143 | [31.0, 51.0, 73.0, 110.0, 118.0, 120.0, 127.0, 181.0, 355.8] |
| pam | 2,269,091 | 254.04064039408868 | 1 | 14,912 | [38.0, 56.0, 78.0, 108.0, 121.0, 150.0, 193.0, 289.0, 525.8] |
| shn | 23,125,637 | 1,692.6977748499487 | 2 | 204,094 | [460.0, 480.0, 585.0, 679.0, 715.0, 740.0, 756.0, 780.0, 1580.9] |
| su | 14,710,124 | 239.07627297697022 | 1 | 99,456 | [41.0, 43.0, 45.0, 49.0, 70.0, 146.0, 216.0, 219.0, 419.0] |
| ta | 376,043,508 | 2,341.782961763607 | 15 | 177,054 | [543.0, 700.0, 824.0, 1001.0, 1153.0, 1465.0, 1992.0, 2911.0, 4652.0] |
| tet | 487,016 | 332.6612021857924 | 4 | 24,287 | [30.3, 47.0, 66.9, 101.0, 164.0, 177.0, 187.0, 248.6, 604.4] |
| th | 330,964,733 | 2,072.8566695476807 | 1 | 289,150 | [231.0, 390.0, 546.0, 727.0, 969.0, 1276.0, 1741.0, 2533.0, 4361.0] |
| tl | 27,789,730 | 615.8934864032269 | 7 | 60,728 | [73.0, 116.0, 161.0, 214.0, 281.0, 360.0, 465.0, 666.0, 1136.0] |
| vi | 546,481,913 | 424.3161900813099 | 3 | 246,463 | [46.0, 64.0, 71.0, 80.0, 86.0, 92.0, 120.0, 240.0, 824.0] |
| war | 117,438,315 | 92.74833676090108 | 1 | 25,689 | [60.0, 77.0, 81.0, 84.0, 87.0, 90.0, 94.0, 99.0, 110.0] |
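
As noted above, these statistics come from tiktoken with the GPT-4 encoder. The snippet below is not the repository's actual script, only a minimal sketch of how comparable numbers (total, average, min, max, deciles) could be recomputed for one loaded language; the lang kwarg and the text column are carried over from the loading examples above:

import numpy as np
import tiktoken
from datasets import load_dataset

enc = tiktoken.encoding_for_model("gpt-4")  # resolves to the cl100k_base encoding

# Count tokens per article for one language (Indonesian shown as an example)
ds = load_dataset("sabilmakbar/sea_wiki", lang="id")
split = list(ds.keys())[0]
token_counts = [len(enc.encode(text)) for text in ds[split]["text"]]

print("Total tokens   :", sum(token_counts))
print("Avg per article:", sum(token_counts) / len(token_counts))
print("Min / Max      :", min(token_counts), max(token_counts))
print("Deciles        :", np.percentile(token_counts, range(10, 100, 10)).round(1).tolist())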

Some other SEA languages that already have a Wiki index at Wikimedia might be missing from this list. Any PR adding a language is greatly appreciated!

How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?

The data available here is processed with the following flow:

  1. The raw data is deduplicated on title and text (the text content of a given article) to remove articles containing boilerplate text (template text that is typically used for missing information or to ask for content contributions), which is usually deemed noisy for NLP data.
  2. The title and text data are then also checked for string-matching duplication after light pre-processing (i.e. symbols removed, HTML tags stripped, and ASCII/UTF-8 characters validated); a minimal sketch of this kind of pass is shown right after this list. The source code can be found in this GitHub repo: SEA Wiki Github Source Code
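
For illustration only, here is a minimal sketch of that two-step deduplication (exact match first, then string matching on normalized text) using pandas; the normalization rules below are assumptions, see the linked GitHub repo for the actual implementation:

import re
import pandas as pd

def normalize(s: str) -> str:
    # Illustrative cleanup: strip HTML tags, drop symbols, collapse whitespace
    s = re.sub(r"<[^>]+>", " ", s)
    s = re.sub(r"[^\w\s]", " ", s)
    return re.sub(r"\s+", " ", s).strip().lower()

# df stands in for the raw dump with the article "title" and "text" columns
df = pd.DataFrame({
    "title": ["Aceh", "Aceh", "Bali"],
    "text": ["Same body.", "Same body.", "Other body."],
})

# Step 1: drop exact duplicates on raw title and text
df = df.drop_duplicates(subset=["title", "text"])

# Step 2: drop duplicates again after normalizing title and text
df = (
    df.assign(norm_title=df["title"].map(normalize),
              norm_text=df["text"].map(normalize))
      .drop_duplicates(subset=["norm_title", "norm_text"])
      .drop(columns=["norm_title", "norm_text"])
)
print(df)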

How do I extract new Wikipedia Dataset of SEA languages?

Please refer to the corresponding GitHub repo for more detailed info: SEA Wiki Github Source Code

Citation Info:

@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"}
@ONLINE{wikipedia-hf,
    title  = "Huggingface Wikipedia Dataset",
    url    = "https://huggingface.co/datasets/wikipedia"}