Quentin Lhoest committed
Commit 05042a5
1 Parent(s): ed847b4

Release: 2.0.0


Commit from https://github.com/huggingface/datasets/commit/983f46ddae2f5b253db2b3c5691d38c75241cadb

Files changed (1)
  1. README.md +20 -5
README.md CHANGED
 
@@ -670,15 +670,31 @@ The datasets are built from the Wikipedia dump
  contains the content of one full Wikipedia article with cleaning to strip
  markdown and unwanted sections (references, etc.).
 
- The articles have been parsed using the ``mwparserfromhell`` tool.
+ The articles are parsed using the ``mwparserfromhell`` tool.
+
+ To load this dataset you need to install Apache Beam and ``mwparserfromhell`` first:
+
+ ```
+ pip install apache_beam mwparserfromhell
+ ```
+
+ Then you can load any subset of Wikipedia per language and per date this way:
+
+ ```python
+ from datasets import load_dataset
+
+ load_dataset("wikipedia", language="sw", date="20220120")
+ ```
+
+ You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
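For a dump that is not among the preprocessed configurations, `load_dataset` has to run the Apache Beam pipeline itself, which is why `apache_beam` must be installed. A minimal sketch of a local run, assuming Beam's `DirectRunner` and the `beam_runner` argument of `load_dataset`:

```python
from datasets import load_dataset

# Build the Swahili snapshot of 2022-01-20 on this machine using Apache Beam's
# local DirectRunner (assumes load_dataset accepts the beam_runner argument).
wiki_sw = load_dataset(
    "wikipedia",
    language="sw",
    date="20220120",
    beam_runner="DirectRunner",
)
print(wiki_sw)
```

A small dump like this one is fine on the `DirectRunner`; very large dumps are usually handed to a distributed Beam runner instead.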
 
  ### Supported Tasks and Leaderboards
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ The dataset is generally used for Language Modeling.
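As an illustration, a minimal language-modeling preprocessing sketch; the preprocessed `20200501.en` configuration and the GPT-2 tokenizer from `transformers` are arbitrary choices made here, not requirements of the dataset:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenize the article text so it can feed a causal language model.
wiki = load_dataset("wikipedia", "20200501.en", split="train")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

tokenized = wiki.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=wiki.column_names,  # keep only the tokenizer outputs
)
print(tokenized)
```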
 
  ### Languages
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ You can find the list of languages [here](https://en.wikipedia.org/wiki/List_of_Wikipedias).
 
  ## Dataset Structure
 
@@ -686,6 +702,7 @@ We show detailed information for up to 5 configurations of the dataset.
 
  ### Data Instances
 
+ Some subsets of Wikipedia have already been processed by Hugging Face, as you can see below:
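For example, such a preprocessed subset can be loaded directly by its configuration name, without running the Beam pipeline locally (a sketch, assuming the `20200501.en` configuration documented below):

```python
from datasets import load_dataset

# The already-processed snapshot is downloaded as-is; no local Beam job is run.
wiki_en = load_dataset("wikipedia", "20200501.en", split="train")
print(wiki_en)              # split size
print(wiki_en[0]["title"])  # each example has a "title" and a "text" field
```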
 
  #### 20200501.en
 
@@ -809,10 +826,8 @@ Here are the sizes for several configurations:
  title = "Wikimedia Downloads",
  url = "https://dumps.wikimedia.org"
  }
-
  ```
 
-
  ### Contributions
 
  Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.