Dataset: wiki40b

How to load this dataset directly with the 🤗/datasets library:

from datasets import load_dataset
dataset = load_dataset("wiki40b")

Description

Cleaned-up text from 40+ Wikipedia language editions, restricted to pages that correspond to entities. The dataset provides train/dev/test splits for each language. It is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
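
A minimal sketch of loading one language edition and inspecting the fields described above. The "en" configuration name, the "validation" split name, and the wikidata_id/text field names are assumptions based on the description and may differ from the actual schema:

from datasets import load_dataset

# Load the English edition (assumed configuration name: "en"); each
# language edition is a separate configuration with its own splits.
dataset = load_dataset("wiki40b", "en")

# The dev split is assumed to be exposed as "validation".
print(dataset["train"].num_rows, dataset["validation"].num_rows, dataset["test"].num_rows)

# Each example is assumed to carry the entity's Wikidata ID and the
# cleaned article text.
example = dataset["train"][0]
print(example["wikidata_id"])
print(example["text"][:300])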

Citation

Guo, Mandy, Zihang Dai, Denny Vrandečić, and Rami Al-Rfou. "Wiki-40B: Multilingual Language Model Dataset." In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020).

Models trained or fine-tuned on wiki40b

None yet. Start fine-tuning now =)