Cleaned-up text for 40+ Wikipedia language editions of pages corresponding to entities. The dataset has train/dev/test splits per language. It is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 9988.05 MB
- Total amount of disk used: 9988.05 MB
An example of 'train' looks as follows.
The data fields are the same among all splits.
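As a rough illustration of working with an example, here is a minimal sketch. It assumes a record with `wikidata_id` and `text` fields, where `text` uses the Wiki-40B marker tokens (`_START_ARTICLE_`, `_START_SECTION_`, `_START_PARAGRAPH_`, `_NEWLINE_`); the sample content itself is invented for illustration, not taken from the dataset.

```python
# Hypothetical wiki40b-style example record (content invented for illustration).
SAMPLE = {
    "wikidata_id": "Q00000000",
    "text": (
        "_START_ARTICLE_\nExample Article"
        "\n_START_SECTION_\nHistory"
        "\n_START_PARAGRAPH_\nFirst sentence._NEWLINE_Second sentence."
    ),
}

def paragraphs(text: str) -> list[str]:
    """Split the processed article text into plain paragraphs,
    expanding _NEWLINE_ markers into real newlines."""
    parts = text.split("_START_PARAGRAPH_\n")
    # parts[0] holds the article title and section headers;
    # the remaining parts each start with a paragraph body.
    return [p.split("\n_START_")[0].replace("_NEWLINE_", "\n") for p in parts[1:]]

print(paragraphs(SAMPLE["text"]))
```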