sabilmakbar committed on
Commit
a056553
1 Parent(s): fd13b02

Fix wrong html char

Files changed (1)
  1. README.md +35 -35
README.md CHANGED
@@ -374,6 +374,33 @@ license: cc-by-sa-3.0
374
  ---
375
  Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.
376
377
  # **FAQs**
378
  ### What languages are available in this dataset, and from which countries?
379
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
@@ -420,52 +447,25 @@ You may check the following tables to understand the current coverage of this da
420
 
421
  Some other SEA languages that already have their Wiki index at Wikimedia might be missing from this list. Any language-update PR is greatly appreciated!
422
 
423
- ### How do I extract a new Wikipedia dataset of SEA languages?
424
- You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or adjust the bash script provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to run the extraction on your own. Please note that this dataset is extensible to any languages of your choice.
425
-
426
- ### Where can I check the latest available Wikipedia dumps and language coverage?
427
- You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data, and this [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map the languages that you want to extract.
428
-
429
  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
430
  The data available here is processed with the following flow:
431
- 1. Raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or to ask for content contributions), which is usually deemed noisy for NLP data.
432
  2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication after pre-processing (i.e. symbols removed, HTML tags stripped, and ASCII chars validated). You may check this [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation.
433
 
434
- # Getting Started #
435
- ### To read the datasets directly ###
436
- Use one of the following code chunks to load the dataset from the HuggingFace Hub.
437
- You can pass the ```config name``` as the second argument, as in the following script:
438
- ```
439
- dataset = load_dataset(
440
- "sabilmakbar/sea_wiki",
441
- "seawiki_dedup_all" # a config name, can be "seawiki_dedup_all" or "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all" , defaults to "seawiki_dedup_all"
442
- )
443
- ```
444
- Or you can provide both ```lang``` and ```date_stamp``` (or ```lang``` only, in which case ```date_stamp``` defaults to the newest available):
445
- ```
446
- dataset = load_dataset(
447
- "sabilmakbar/sea_wiki",
448
- lang = "id", # see README for complete lang choices
449
- date_stamp="20230901"
450
- )
451
- ```
452
- Or you can provide a ```country``` param in a similar fashion to the ```lang``` arg (providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg):
453
- ```
454
- dataset = load_dataset(
455
- "sabilmakbar/sea_wiki",
456
- country = "<country_name>", # see the splits for complete country choices
457
- date_stamp="20230901"
458
- )
459
- ```
460
 
461
  ### To replicate the whole dataset generation process ###
462
  1. Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements of this codebase from ```requirements.txt``` via ```pip install -r requirements.txt```.
463
  2. Activate the chosen Python/Conda environment in which the requirements were installed.
464
  3. Force-install ```multiprocess==0.70.15``` via ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now, especially for Python 3.10.x).
465
  4. Run this ```sh``` script to extract data from the Wikimedia dump:
466
- ```sh extract_raw_wiki_data_sea.sh```
467
  5. Run this ```sh``` script to deduplicate the extracted data:
468
- ```sh dedup_raw_wiki_data_sea.sh```
469
 
470
  ## Citation Info:
471
  ```
 
374
  ---
375
  Welcome to the SEA Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.
376
 
377
+ # Getting Started #
378
+ ### To read the datasets directly ###
379
+ Use one of the following code chunks to load the dataset from the HuggingFace Hub.
380
+ You can pass the ```config name``` as the second argument, as in the following script:
381
+ ```
382
+ from datasets import load_dataset
+
+ dataset = load_dataset(
383
+ "sabilmakbar/sea_wiki",
384
+ "seawiki_dedup_all" # a config name, can be "seawiki_dedup_all" or "seawiki_with_countries_all", or "seawiki_with_countries_dedup_all" , defaults to "seawiki_dedup_all"
385
+ )
386
+ ```
387
+ Or you can provide both ```lang``` and ```date_stamp``` (or ```lang``` only, in which case ```date_stamp``` defaults to the newest available):
388
+ ```
389
+ dataset = load_dataset(
390
+ "sabilmakbar/sea_wiki",
391
+ lang = "id", # see README for complete lang choices
392
+ date_stamp="20230901"
393
+ )
394
+ ```
395
+ Or you can provide a ```country``` param in a similar fashion to the ```lang``` arg (providing both ```country``` and ```lang``` will prioritize the ```lang``` kwarg):
396
+ ```
397
+ dataset = load_dataset(
398
+ "sabilmakbar/sea_wiki",
399
+ country = "<country_name>", # see the splits for complete country choices
400
+ date_stamp="20230901"
401
+ )
402
+ ```
403
+
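As a quick sanity check after loading (a minimal, illustrative snippet; the split names and row counts depend on the config, ```lang```, or ```country``` you chose), you can list what came back:
```
from datasets import load_dataset

dataset = load_dataset("sabilmakbar/sea_wiki", "seawiki_dedup_all")

# each split corresponds to one language subset; print its name and row count
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```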
404
  # **FAQs**
405
  ### What languages are available in this dataset, and from which countries?
406
  You may check the following tables to understand the current coverage of this dataset (languages, countries, data size & volume).
 
447
 
448
  Some other SEA languages that already have their Wiki index at Wikimedia might be missing from this list. Any language-update PR is greatly appreciated!
449
450
  ### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
451
  The data available here is processed with the following flow:
452
+ 1. Raw data is deduplicated on ```title``` and ```text``` (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or to ask for content contributions), which is usually deemed noisy for NLP data.
453
  2. Furthermore, the ```title``` and ```text``` data are checked for string-matching duplication after pre-processing (i.e. symbols removed, HTML tags stripped, and ASCII chars validated), as sketched below. You may check this [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) script to understand its implementation.
454
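To make the two-step flow above concrete, here is a minimal pandas sketch (illustrative only, not the actual logic of [```dedup_raw_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py); the ```title```/```text``` column names follow this README, while the function names and normalization details are assumptions):
```
import re

import pandas as pd


def normalize(text: str) -> str:
    # pre-process text as described in step 2: strip HTML tags,
    # keep ASCII chars only, remove symbols, and collapse whitespace
    text = re.sub(r"<[^>]+>", " ", text)
    text = text.encode("ascii", "ignore").decode()
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()


def dedup_wiki(df: pd.DataFrame) -> pd.DataFrame:
    # step 1: exact deduplication on the raw title/text pair
    df = df.drop_duplicates(subset=["title", "text"])
    # step 2: deduplication again on the normalized (pre-processed) title/text
    df = df.assign(
        title_norm=df["title"].map(normalize),
        text_norm=df["text"].map(normalize),
    ).drop_duplicates(subset=["title_norm", "text_norm"])
    return df.drop(columns=["title_norm", "text_norm"])
```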
 
455
+ ### How do I extract a new Wikipedia dataset of SEA languages?
456
+ You may check the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementation, or adjust the bash script provided in [_```extract_raw_wiki_data_sea.sh```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data_sea.sh) to run the extraction on your own.
457
+
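For illustration only, one possible way to pull a fresh dump for a single SEA language is the legacy HF ```wikipedia``` loader; this is a hedged sketch rather than this repo's method (the ```language```/```date```/```beam_runner``` arguments belong to that legacy loader, and the ```ban```/```20231001``` values are placeholders to replace after checking the Dump Index linked in the next question):
```
from datasets import load_dataset

# hypothetical example: Balinese ("ban") Wikipedia from the 20231001 dump;
# the legacy "wikipedia" builder processes raw dumps with Apache Beam,
# so a beam runner must be given (all values here are assumptions, not repo defaults)
raw_wiki = load_dataset(
    "wikipedia",
    language="ban",
    date="20231001",
    beam_runner="DirectRunner",
)
```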
458
+ ### Where can I check the latest available Wikipedia dumps and language coverage?
459
+ You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check the latest available data, and this [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) list to map the languages that you want to extract. Please note that this dataset is extensible to any languages of your choice.
460
 
461
  ### To replicate the whole dataset generation process ###
462
  1. Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements of this codebase from ```requirements.txt``` via ```pip install -r requirements.txt```.
463
  2. Activate the chosen Python/Conda environment in which the requirements were installed.
464
  3. Force-install ```multiprocess==0.70.15``` via ```pip install multiprocess==0.70.15``` to avoid [this issue](https://github.com/huggingface/datasets/issues/5613#issuecomment-1703169594) (there's no other workaround for now, especially for Python 3.10.x).
465
  4. Run this ```sh``` script to extract data from the Wikimedia dump:
466
+ ```sh extract_raw_wiki_data_sea.sh```. This script will run [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/extract_raw_wiki_data.py) to construct the Wiki dataset.
467
  5. Run this ```sh``` script to deduplicate the extracted data:
468
+ ```sh dedup_raw_wiki_data_sea.sh```. This script will run [_```dedup_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/sea_wiki/blob/main/dedup_raw_wiki_data.py) to perform Wiki dataset cleansing. Please note that the cleansing process can be language/dialect-specific.
469
 
470
  ## Citation Info:
471
  ```