Quentin Lhoest committed
Commit 858f724
1 Parent(s): 04eb92e

Release: 2.0.0


Commit from https://github.com/huggingface/datasets/commit/983f46ddae2f5b253db2b3c5691d38c75241cadb

Files changed (1)
  1. README.md +20 -5
README.md CHANGED
@@ -686,15 +686,31 @@ The datasets are built from the Wikipedia dump
  contains the content of one full Wikipedia article with cleaning to strip
  markdown and unwanted sections (references, etc.).

- The articles have been parsed using the ``mwparserfromhell`` tool.
+ The articles are parsed using the ``mwparserfromhell`` tool.
+
+ To load this dataset you need to install Apache Beam and ``mwparserfromhell`` first:
+
+ ```
+ pip install apache_beam mwparserfromhell
+ ```
+
+ Then you can load any subset of Wikipedia per language and per date this way:
+
+ ```python
+ from datasets import load_dataset
+
+ load_dataset("wikipedia", language="sw", date="20220120")
+ ```
+
+ You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).

  ### Supported Tasks and Leaderboards

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ The dataset is generally used for Language Modeling.

  ### Languages

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ You can find the list of languages [here](https://en.wikipedia.org/wiki/List_of_Wikipedias).

  ## Dataset Structure

@@ -702,6 +718,7 @@ We show detailed information for up to 5 configurations of the dataset.

  ### Data Instances

+ Some subsets of Wikipedia have already been processed by Hugging Face, as you can see below:

  #### 20200501.en

@@ -825,10 +842,8 @@ Here are the sizes for several configurations:
  title = "Wikimedia Downloads",
  url = "https://dumps.wikimedia.org"
  }
-
  ```

-
  ### Contributions

  Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
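
As a companion to the loading instructions added in this commit, here is a minimal sketch of the two ways the card describes to get the data: loading a pre-processed configuration such as `20200501.en`, or building an arbitrary language/date dump with Apache Beam. The `beam_runner` and `split` arguments and the `title`/`text` column names are assumptions about the `datasets` library, not part of this commit.

```python
from datasets import load_dataset

# Pre-processed subset already built by Hugging Face (no Beam pipeline runs locally).
wiki_en = load_dataset("wikipedia", "20200501.en", split="train")

# Build another language/date from the raw dump yourself; this assumes Apache Beam
# and mwparserfromhell are installed and that a local Beam runner can be selected.
wiki_sw = load_dataset(
    "wikipedia",
    language="sw",
    date="20220120",
    beam_runner="DirectRunner",  # assumed option; any supported Beam runner should work
)

# Each article is assumed to expose "title" and "text" fields.
print(wiki_en[0]["title"])
```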
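
Since the card now notes that the dataset is generally used for language modeling, a hedged sketch of that use follows; it assumes the `transformers` library and a `text` column, neither of which is specified by this commit.

```python
from datasets import load_dataset
from transformers import AutoTokenizer  # assumed dependency, not required by the dataset itself

tokenizer = AutoTokenizer.from_pretrained("gpt2")
wiki = load_dataset("wikipedia", "20200501.en", split="train")

# Tokenize article bodies in batches for causal language modeling;
# "text" is assumed to be the column holding the article content.
tokenized = wiki.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=wiki.column_names,
)
print(tokenized)
```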