Dataset: wikitext

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset

# wikitext has several configurations (listed below); one must be specified
dataset = load_dataset("wikitext", "wikitext-103-v1")
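
The dataset ships in four configurations: wikitext-103-v1, wikitext-2-v1, and the raw-text variants wikitext-103-raw-v1 and wikitext-2-raw-v1. A minimal sketch of inspecting the loaded splits (the config name here is just an example):

from nlp import load_dataset

# Load the small WikiText-2 configuration for a quick look
dataset = load_dataset("wikitext", "wikitext-2-v1")

# The result is dict-like, with "train", "validation" and "test" splits;
# each row is a dict holding a single "text" field
print(len(dataset["train"]))
print(dataset["train"][0]["text"])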

Description

The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
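
Since each example exposes its text directly, the headline token count can be approximated from the training split. A minimal sketch, using naive whitespace splitting as a stand-in for the dataset's own tokenization (the count will differ slightly from the official figure):

from nlp import load_dataset

# WikiText-103 is the "over 100 million tokens" configuration;
# WikiText-2 is a much smaller (~2 million token) counterpart
train = load_dataset("wikitext", "wikitext-103-v1", split="train")

# Approximate the token count by whitespace-splitting each line
n_tokens = sum(len(row["text"].split()) for row in train)
print(f"~{n_tokens:,} training tokens")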

Citation

@article{wikitext,
    author={Merity, Stephen and Xiong, Caiming and Bradbury, James and Socher, Richard},
    title={Pointer Sentinel Mixture Models},
    journal={arXiv preprint arXiv:1609.07843},
    year={2016}
}

Models trained or fine-tuned on wikitext

None yet. Start fine-tuning now =)