
Dataset Card for One Billion Word Language Model Benchmark

Dataset Summary

A benchmark corpus for measuring progress in statistical language modeling, containing almost one billion words of training data.
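
As a minimal sketch, the corpus can be loaded with the Hugging Face datasets library using the repo id from this card; the streaming flag and trust_remote_code=True (likely required on recent datasets versions, since the repo ships a loading script) are assumptions here, not part of the original card.

from datasets import load_dataset

# Minimal sketch: stream the training split so the ~1.79 GB download
# is not materialized all at once. trust_remote_code=True is assumed
# to be required on recent `datasets` versions, because this repo
# uses a loading script.
ds = load_dataset(
    "billion-word-benchmark/lm1b",
    split="train",
    streaming=True,
    trust_remote_code=True,
)
print(next(iter(ds))["text"])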

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

plain_text

  • Size of downloaded dataset files: 1.79 GB
  • Size of the generated dataset: 4.28 GB
  • Total amount of disk used: 6.07 GB

An example from the 'train' split looks as follows; it was too long and has been cropped:

{
    "text": "While athletes in different professions dealt with doping scandals and other controversies , Woods continued to do what he did best : dominate the field of professional golf and rake in endorsements ."
}

Data Fields

The data fields are the same among all splits.

plain_text

  • text: a string feature.
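
A sketch of per-record access, under the same assumptions as the loading example above (non-streaming here, so records can be indexed directly):

from datasets import load_dataset

# Sketch: each record is a plain dict with a single string field.
# trust_remote_code=True is an assumption for recent `datasets`
# versions.
ds = load_dataset("billion-word-benchmark/lm1b", split="test",
                  trust_remote_code=True)
example = ds[0]
assert set(example) == {"text"}           # "text" is the only field
assert isinstance(example["text"], str)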

Data Splits

name         train       test
plain_text   30301028    306688
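
The counts above can be checked programmatically; a sketch under the same assumptions as the loading example:

from datasets import load_dataset

# Sketch: load both splits and verify the row counts in the table.
splits = load_dataset("billion-word-benchmark/lm1b",
                      trust_remote_code=True)
print(splits["train"].num_rows)  # expected: 30301028
print(splits["test"].num_rows)   # expected: 306688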

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

The dataset doesn't contain annotations.

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@misc{chelba2014billion,
      title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling}, 
      author={Ciprian Chelba and Tomas Mikolov and Mike Schuster and Qi Ge and Thorsten Brants and Phillipp Koehn and Tony Robinson},
      year={2014},
      eprint={1312.3005},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contributions

Thanks to @patrickvonplaten, @lewtun, @jplu, @thomwolf for adding this dataset.
