Dataset: wikipedia

How to load this dataset directly with the 🤗/datasets library:

from datasets import load_dataset
dataset = load_dataset("wikipedia")
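In practice the Wikipedia builder expects a config naming a dump date and language. A minimal sketch, assuming the "20200501.en" config (English articles from the 2020-05-01 dump) is among the available configs:

from datasets import load_dataset

# "20200501.en" selects the pre-processed English dump from 2020-05-01;
# the available "<date>.<language>" configs depend on the datasets version.
dataset = load_dataset("wikipedia", "20200501.en")

# One "train" split is exposed per language config.
print(dataset["train"])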

Description

Wikipedia dataset containing cleaned articles in all languages. The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/), with one split per language. Each example contains the full text of one Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).
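As a sketch of what one record looks like (the field names "title" and "text" are assumed here, matching the cleaned-article schema described above):

from datasets import load_dataset

dataset = load_dataset("wikipedia", "20200501.en")
article = dataset["train"][0]

# "title" and "text" are assumed field names for the article title and
# its cleaned plain-text body.
print(article["title"])
print(article["text"][:500])  # first 500 characters of the article body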

Citation

@ONLINE {wikidump,
    author = {Wikimedia Foundation},
    title  = {Wikimedia Downloads},
    url    = {https://dumps.wikimedia.org}
}
