Dataset: wiki_snippets

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset
dataset = load_dataset("wiki_snippets")
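
Once loaded, each split is a collection of snippet records. Below is a minimal sketch of inspecting one record; the config name (wiki40b_en_100_0) and the field names (article_title, passage_text) are assumptions about the snippet schema, not confirmed by this page:

from nlp import load_dataset

# Assumed config name; the dataset may expose several snippet configurations.
dataset = load_dataset("wiki_snippets", "wiki40b_en_100_0", split="train")

# Assumed field names for a single snippet record.
snippet = dataset[0]
print(snippet["article_title"])
print(snippet["passage_text"])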

Description

A version of Wikipedia split into plain-text snippets for dense semantic indexing.
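
Since the snippets are intended for dense semantic indexing, the sketch below shows one way such an index could be built. The encoder model, the passage texts, and the sentence-transformers/FAISS pairing are illustrative assumptions, not part of this dataset card:

import faiss
from sentence_transformers import SentenceTransformer

# Illustrative encoder choice; any dense sentence encoder works similarly.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-ins for the snippets' text field (assumed name: passage_text).
passages = [
    "Anarchism is a political philosophy and movement.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
]
embeddings = encoder.encode(passages, convert_to_numpy=True)
faiss.normalize_L2(embeddings)

# Exact inner-product index; with L2-normalized vectors, scores are cosine similarities.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

# Retrieve the top-2 snippets for a query.
query = encoder.encode(["What is anarchism?"], convert_to_numpy=True)
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)
print(passages[ids[0][0]], scores[0][0])

Normalizing the embeddings before adding them to an inner-product index turns the returned scores into cosine similarities, a common choice for semantic snippet retrieval.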

Citation

@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}

Models trained or fine-tuned on wiki_snippets

None yet.