Dataset: wiki_split

How to load this dataset directly with the 🤗/datasets library:

from datasets import load_dataset

dataset = load_dataset("wiki_split")
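To see what the load returns, here is a quick sketch (the "train" split name and the complex_sentence / simple_sentence_1 / simple_sentence_2 column names are assumptions; verify them with dataset.column_names):

from datasets import load_dataset

dataset = load_dataset("wiki_split")

# Print the splits and their sizes, then look at one record.
print(dataset)
example = dataset["train"][0]           # assumed "train" split
print(example["complex_sentence"])      # the original single sentence
print(example["simple_sentence_1"])     # first sentence of the split
print(example["simple_sentence_2"])     # second sentence of the split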


One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences; a minimal preprocessing sketch follows below.
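Since each record pairs one complex sentence with its two-sentence rewrite, a minimal sketch of turning the records into (source, target) pairs for a sentence-splitting seq2seq model might look like this (column names are assumptions, as above):

def to_pair(record):
    # Input: the original sentence; target: the two split sentences joined.
    # Column names are assumptions; adjust to the actual schema.
    return {
        "source": record["complex_sentence"],
        "target": record["simple_sentence_1"] + " " + record["simple_sentence_2"],
    }

pairs = dataset["train"].map(to_pair)  # adds "source" and "target" columns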


Citation

@inproceedings{botha2018learning,
  title = {{Learning To Split and Rephrase From Wikipedia Edit History}},
  author = {Botha, Jan A and Faruqui, Manaal and Alex, John and Baldridge, Jason and Das, Dipanjan},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  pages = {to appear},
  note = {arXiv preprint arXiv:1808.09468},
  year = {2018}
}

Models trained or fine-tuned on wiki_split

None yet. Start fine-tuning now =)