How to load this dataset directly with the 🤗/datasets library:

```python
from datasets import load_dataset

# wiki40b is organized into per-language configurations, e.g. "en"
dataset = load_dataset("wiki40b", "en")
```
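Once loaded, the splits and records can be inspected directly. A minimal sketch, assuming the "en" configuration from above (field values shown in the comments are only indicative):

```python
# Inspect the available splits and a single training example.
print(dataset)                  # DatasetDict with train/validation/test splits
example = dataset["train"][0]
print(example["wikidata_id"])   # Wikidata identifier of the entity
print(example["text"][:200])    # beginning of the processed article text
```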
Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It was cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing, which removes non-content sections and structured objects.
We show detailed information for up to 5 configurations of the dataset.
An example of 'train' looks as follows.
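No concrete record is reproduced in this card; based on the fields listed below, the shape of one would look roughly like this (the values are illustrative placeholders, not taken from the dataset):

```python
# Illustrative shape of a single 'train' record (placeholder values).
{
    "wikidata_id": "Q123456",                        # Wikidata identifier of the entity
    "text": "Full processed article text ...",       # article after content processing
    "version_id": "123456789",                       # Wikipedia revision identifier
}
```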
The data fields are the same among all splits.
- `wikidata_id`: a string feature.
- `text`: a string feature.
- `version_id`: a string feature.

| name | train | validation | test |
|---|---|---|---|
| en | 2926536 | 163597 | 162274 |
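Since each language edition is its own configuration, other editions load the same way. A short sketch, using "de" as an assumed example of another available language code (check the configuration list on the dataset page for the full set):

```python
from datasets import load_dataset

# Each language edition is a separate configuration; "de" is used here as an example.
dataset_de = load_dataset("wiki40b", "de")

# Split sizes per language correspond to rows of the table above.
for split_name, split in dataset_de.items():
    print(split_name, len(split))
```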