Dataset: wikitext

How to load this dataset directly with the 🤗 datasets library:

from datasets import load_dataset
dataset = load_dataset("wikitext", "wikitext-103-raw-v1")


The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.


Citation

@misc{merity2016pointer,
    title={Pointer Sentinel Mixture Models},
    author={Merity, Stephen and Xiong, Caiming and Bradbury, James and Socher, Richard},
    year={2016},
    eprint={1609.07843},
    archivePrefix={arXiv}
}

Models trained or fine-tuned on wikitext

None yet. Start fine-tuning now =)