Datasets:
Update enwik8.py
Enwik8 is commonly split into 90M, 5M, and 5M consecutive bytes for training, validation, and testing respectively. This is the split used in the Transformer-XL codebase, and it is also mentioned in the Sparse Transformers and Compressive Transformers papers; it is nearly universal among language-modeling papers.
Currently, one may obtain the splits only by manually wrangling the data yielded by the enwik8-raw BuilderConfig.
This undermines the seamless functionality of the library: one must slice the single raw example into three pieces
and wrap each in its own dataset, as sketched below.
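For illustration, the current workaround looks roughly like this (the `"text"` field name is an assumption about the raw config's schema):

```python
from datasets import Dataset, load_dataset

# The raw config yields a single example holding the entire ~100M-byte text.
raw = load_dataset("enwik8", "enwik8-raw", split="train")
text = raw[0]["text"]

# Re-create the standard 90M/5M/5M split by hand. Note that the canonical
# split is defined over raw bytes; slicing the decoded string is only an
# approximation wherever multi-byte UTF-8 characters are involved.
train_ds = Dataset.from_dict({"text": [text[:90_000_000]]})
valid_ds = Dataset.from_dict({"text": [text[90_000_000:95_000_000]]})
test_ds = Dataset.from_dict({"text": [text[95_000_000:]]})
```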
This becomes even more of a nuisance when using the current Enwik8 HuggingFace dataset as a TfdsDataSource with SeqIO,
where a pipeline of preprocessors is typically included in the SeqIO Task definition and applied immediately after loading with TFDS.
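As a rough sketch of that setting (the task name, the `tfds_name` string, and the vocabulary choice are illustrative assumptions, not real registry entries), with proper splits the Task can point straight at the TFDS splits instead of carrying an extra slicing step:

```python
import functools

import seqio

seqio.TaskRegistry.add(
    "enwik8_lm",  # hypothetical task name
    source=seqio.TfdsDataSource(
        tfds_name="huggingface:enwik8"  # assumed TFDS name for the HF dataset
    ),
    preprocessors=[
        # Map the raw "text" field onto the "targets" field expected for LM.
        functools.partial(
            seqio.preprocessors.rekey,
            key_map={"inputs": None, "targets": "text"},
        ),
        seqio.preprocessors.tokenize,
        seqio.preprocessors.append_eos,
    ],
    output_features={
        "targets": seqio.Feature(vocabulary=seqio.ByteVocabulary()),
    },
)
```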
Since separate train/validation/test splits are near-universal practice in machine learning, and since the community
settled on a standard split for Enwik8 years ago, I see no reason why HuggingFace Datasets should not adopt that split as well.
I have removed the other BuilderConfigs and SplitGenerators because they lead to unnecessary complications when combined
with the _generate_examples code, and because there is no good reason for anyone to train on the data reserved for validation or testing.
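A minimal sketch of what the simplified builder reduces to (the actual enwik8.py may differ in details; the feature schema and version string here are illustrative):

```python
import os

import datasets

_URL = "http://mattmahoney.net/dc/enwik8.zip"
_N_TRAIN, _N_VALID = 90_000_000, 5_000_000


class Enwik8(datasets.GeneratorBasedBuilder):
    """Single config exposing the standard 90M/5M/5M byte split."""

    VERSION = datasets.Version("1.0.0")  # illustrative; major bump per this PR

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # The zip archive contains a single file named "enwik8".
        data_dir = dl_manager.download_and_extract(_URL)
        path = os.path.join(data_dir, "enwik8")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"path": path, "start": 0, "end": _N_TRAIN},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"path": path, "start": _N_TRAIN, "end": _N_TRAIN + _N_VALID},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"path": path, "start": _N_TRAIN + _N_VALID, "end": None},
            ),
        ]

    def _generate_examples(self, path, start, end):
        with open(path, "rb") as f:
            data = f.read()[start:end]
        # Byte boundaries may fall inside multi-byte UTF-8 characters,
        # hence the lenient decode in this sketch.
        yield 0, {"text": data.decode("utf-8", errors="ignore")}
```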
Anyone who wishes to train on the validation or test data can merge the splits back together after the fact, as shown below.
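Recombining is straightforward with the library's own API; the split names here assume the new train/validation/test layout:

```python
from datasets import concatenate_datasets, load_dataset

splits = load_dataset("enwik8")  # DatasetDict with train/validation/test splits
full = concatenate_datasets(
    [splits["train"], splits["validation"], splits["test"]]
)
```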
Since this is a breaking change, I have bumped the major version of this dataset.
What's the status on this?