Commit da2e620 by mariosasko
Parent: 2ee917b

Remove the BeamRunner note from README

Files changed (1): README.md (+4, -10)
README.md CHANGED
@@ -766,12 +766,9 @@ The datasets are built from the Wikipedia dump
 contains the content of one full Wikipedia article with cleaning to strip
 markdown and unwanted sections (references, etc.).
 
-The articles are parsed using the ``mwparserfromhell`` tool.
-
-To load this dataset you need to install Apache Beam and ``mwparserfromhell`` first:
-
+The articles are parsed using the ``mwparserfromhell`` tool, which can be installed with:
 ```
-pip install apache_beam mwparserfromhell
+pip install mwparserfromhell
 ```
 
 Then, you can load any subset of Wikipedia per language and per date this way:
@@ -779,11 +776,8 @@ Then, you can load any subset of Wikipedia per language and per date this way:
 ```python
 from datasets import load_dataset
 
-load_dataset("wikipedia", language="sw", date="20220120", beam_runner=...)
-```
-where you can pass as `beam_runner` any Apache Beam supported runner for (distributed) data processing
-(see [here](https://beam.apache.org/documentation/runners/capability-matrix/)).
-Pass "DirectRunner" to run it on your machine.
+load_dataset("wikipedia", language="sw", date="20220120")
+```
 
 You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
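For reference, a minimal sketch of the loading flow the updated README describes; the `split` argument and the `title`/`text` field names are assumptions about the dataset layout and are not part of this commit.

```python
# Sketch of the workflow from the updated README (assumes
# `pip install datasets mwparserfromhell` has been run).
from datasets import load_dataset

# Swahili Wikipedia, dump of 2022-01-20; no Apache Beam runner is required anymore.
wiki = load_dataset("wikipedia", language="sw", date="20220120", split="train")

# Assumed record layout: each example carries at least "title" and "text".
print(wiki[0]["title"])
print(wiki[0]["text"][:200])
```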