---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 17380043798
  dataset_size: 26076989556
---
# Dataset Card for "wikipedia20220301en-bookcorpusopen-chunked-shuffled"
- num_examples: 33.5 million
- download_size: 17.4 GB
- dataset_size: 26.1 GB
This dataset combines wikipedia20220301.en and bookcorpusopen, and splits the data into smaller chunks of ~820 characters
(so that each item is at least ~128 tokens for the average tokenizer).
The items have been shuffled ahead of time, so the dataset can be iterated over sequentially;
this is faster than shuffling at load time with `dataset.shuffle`.
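As a usage sketch (assuming the standard `datasets` API; the repo id below is a placeholder for wherever this dataset is hosted):

```python
from datasets import load_dataset

# Placeholder repo id: replace with the actual namespace/name hosting this dataset.
dataset = load_dataset(
    "user/wikipedia20220301en-bookcorpusopen-chunked-shuffled",
    split="train",
    streaming=True,  # items are pre-shuffled, so sequential streaming is fine
)

for example in dataset.take(3):
    print(example["text"][:80])
```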
The chunking logic splits only on spaces, so chunks are typically slightly longer than 820 characters.
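The original chunking script is not reproduced here, but a minimal sketch of a greedy, space-only split (a hypothetical helper, not the actual code) looks like this:

```python
def chunk_on_spaces(text: str, target_chars: int = 820) -> list[str]:
    """Greedily pack space-separated words until ~target_chars is reached."""
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1  # +1 for the joining space
        if length >= target_chars:  # break only after crossing the target
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Because a chunk is closed only after the running length crosses the target, each chunk ends on a word boundary and usually overshoots 820 characters slightly.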
The text has been normalized to lower case, with accents and non-English characters removed.
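A rough equivalent of that normalization (assuming NFKD decomposition followed by an ASCII filter, which may differ from the exact pipeline used) is:

```python
import unicodedata

def normalize(text: str) -> str:
    """Lower-case the text, strip accents, and drop non-ASCII characters."""
    decomposed = unicodedata.normalize("NFKD", text.lower())
    return decomposed.encode("ascii", "ignore").decode("ascii")
```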
Items with fewer than 200 or more than 1,000 characters have been removed.
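That filter corresponds to keeping only chunks in the 200 to 1,000 character range, e.g. as a predicate for `dataset.filter` (hypothetical helper name):

```python
def within_length_bounds(example: dict) -> bool:
    """Keep chunks of 200 to 1,000 characters inclusive."""
    return 200 <= len(example["text"]) <= 1000

# e.g. filtered = raw_dataset.filter(within_length_bounds)
```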