Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
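For orientation, below is a minimal sketch of loading one language configuration with the Hugging Face `datasets` library. The dataset identifier `wiki40b` and the config name `"en"` are assumptions for illustration; substitute the identifier and language code used by this card.

```python
# Minimal loading sketch. The identifier "wiki40b" and the config "en" are
# assumptions; replace them with the identifier and language code for this card.
from datasets import load_dataset

# Each configuration is one Wikipedia language edition with its own splits.
dataset = load_dataset("wiki40b", "en")

print(dataset)                     # DatasetDict listing the available splits
print(dataset["train"][0].keys())  # per-example fields
```

Loading the larger language configurations may take some time depending on how the data is hosted.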
We show detailed information for up to 5 configurations of the dataset.
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 9988.05 MB
- Total amount of disk used: 9988.05 MB
An example from the 'train' split looks as follows.
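No concrete record is reproduced here; the sketch below is illustrative only, with field names and values assumed from the description above (a Wikidata ID plus the processed article text) rather than copied from the data.

```python
# Illustrative record only -- field names and values are assumptions,
# not taken from the actual dataset.
example = {
    "wikidata_id": "Q17",  # Wikidata ID of the entity the page corresponds to
    "text": "Japan is an island country in East Asia ...",  # processed article text
}
```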
The data fields are the same across all splits.