Dataset: lm1b

How to load this dataset directly with the 🤗/nlp library:

			
from nlp import load_dataset

dataset = load_dataset("lm1b")
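
Once loaded, the returned object exposes the dataset's splits and can be indexed like a list of examples. A minimal sketch, assuming the single feature column is named "text" (not confirmed on this page):

from nlp import load_dataset

# Load the benchmark and peek at one training sentence.
# Assumption: the only feature column is named "text".
dataset = load_dataset("lm1b")
print(dataset)                      # available splits and feature schema
print(dataset["train"][0]["text"])  # first training example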

Description

A benchmark corpus for measuring progress in statistical language modeling, with almost one billion words of training data.
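
As a rough check on that figure, the sketch below counts whitespace-separated tokens over the training split. The "text" column name and whitespace tokenization are assumptions, and a full pass over the training split takes a while:

from nlp import load_dataset

# Approximate word count for the training split.
# Assumptions: the feature column is named "text"; words are split on whitespace.
train = load_dataset("lm1b", split="train")
total_words = sum(len(example["text"].split()) for example in train)
print(f"{len(train):,} sentences, ~{total_words:,} words in the training split")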

Citation

@article{DBLP:journals/corr/ChelbaMSGBK13,
  author    = {Ciprian Chelba and
               Tomas Mikolov and
               Mike Schuster and
               Qi Ge and
               Thorsten Brants and
               Phillipp Koehn},
  title     = {One Billion Word Benchmark for Measuring Progress in Statistical Language
               Modeling},
  journal   = {CoRR},
  volume    = {abs/1312.3005},
  year      = {2013},
  url       = {http://arxiv.org/abs/1312.3005},
  archivePrefix = {arXiv},
  eprint    = {1312.3005},
  timestamp = {Mon, 13 Aug 2018 16:46:16 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/ChelbaMSGBK13},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
