Dataset: wikipedia

How to load this dataset directly with the 🤗/datasets library:

from datasets import load_dataset
dataset = load_dataset("wikipedia", "20220301.en")  # a "<date>.<language>" config is required


Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps, with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).
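Since each language lives in its own configuration, selecting a language means building a configuration name from a dump date and a language code. A minimal sketch of that naming scheme, assuming example names such as "20220301.en" (dump date followed by language code):

```python
def config_name(dump_date: str, lang: str) -> str:
    """Build a '<date>.<lang>' configuration name for one language split.

    The concrete names used here ("20220301", "en") are assumed examples;
    check the dataset page for the dump dates actually available.
    """
    return f"{dump_date}.{lang}"

cfg = config_name("20220301", "en")
print(cfg)  # prints "20220301.en"
# Then: load_dataset("wikipedia", cfg) downloads that language's articles.
```

Each resulting configuration yields a single "train" split whose examples are full, cleaned articles.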


@ONLINE{wikidump,
    author = {Wikimedia Foundation},
    title  = {Wikimedia Downloads},
    url    = {}
}
