
Dataset Card for Wikipedia

This repo is a fork of the original Hugging Face Wikipedia repo here. The difference is that this fork removes the need for apache-beam and is very fast on machines with many CPUs: it uses all available CPUs to create a clean Wikipedia pretraining dataset, processing all of English Wikipedia in under an hour on a GCP n1-standard-96. This fork is also used in the OLM Project to pull and process up-to-date Wikipedia snapshots.

Dataset Summary

Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps, with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).

The articles are parsed using the mwparserfromhell tool, and we use multiprocess for parallelization.
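To give a rough sense of the kind of cleaning involved, here is a minimal, illustrative markup stripper. The actual pipeline relies on mwparserfromhell's strip_code, which parses the full wikitext grammar; this sketch only handles a few common constructs:

```python
import re

def strip_wiki_markup(text: str) -> str:
    """Crude wikitext cleaner for illustration only; the dataset itself
    uses mwparserfromhell, which handles far more of the grammar."""
    # Remove {{template}} calls entirely
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Replace [[target|label]] links with their label, [[target]] with target
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
    # Drop ''italic'' and '''bold''' quote markers
    text = re.sub(r"'{2,}", "", text)
    return text.strip()

print(strip_wiki_markup("'''April''' is the [[month|fourth month]] of the year."))
```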

To load this dataset you need to install these first:

pip install mwparserfromhell==0.6.4 multiprocess==0.70.13

Then, you can load any subset of Wikipedia per language and per date this way:

from datasets import load_dataset

load_dataset("olm/wikipedia", language="en", date="20220920")

You can find the full list of languages and dates here.

Supported Tasks and Leaderboards

The dataset is generally used for Language Modeling.
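For language modeling, the article texts are typically concatenated and cut into fixed-length sequences before pretraining. A simplified whitespace-token sketch (real pipelines would use a subword tokenizer instead):

```python
def chunk_for_lm(texts, seq_len=128):
    """Concatenate article texts and split the resulting word stream into
    fixed-length chunks, mimicking LM pretraining preprocessing.
    Illustrative only: real pipelines tokenize with a subword tokenizer."""
    words = " ".join(texts).split()
    return [" ".join(words[i:i + seq_len]) for i in range(0, len(words), seq_len)]
```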


Languages

You can find the list of languages here.

Dataset Structure

Data Instances

An example looks as follows:

{'id': '1',
 'url': '',
 'title': 'April',
 'text': 'April is the fourth month...'}

Data Fields

The data fields are the same among all configurations:

  • id (str): ID of the article.
  • url (str): URL of the article.
  • title (str): Title of the article.
  • text (str): Text content of the article.
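As a quick sanity check against the schema above (the field names come from this card; the helper function itself is hypothetical):

```python
# Documented schema: every field is a string
EXPECTED_FIELDS = {"id": str, "url": str, "title": str, "text": str}

def validate_example(example: dict) -> bool:
    """Return True if a record carries exactly the documented fields,
    each with a string value."""
    return (set(example) == set(EXPECTED_FIELDS)
            and all(isinstance(example[k], t) for k, t in EXPECTED_FIELDS.items()))
```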

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed


Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

Most of Wikipedia's text and many of its images are co-licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA) and the GNU Free Documentation License (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Some text has been imported only under a CC BY-SA or CC BY-SA-compatible license and cannot be reused under the GFDL; such text is identified on the page footer, in the page history, or on the discussion page of the article that uses it.

Citation Information

@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = ""
}