---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 26076989556
    num_examples: 33536113
  download_size: 15221565467
  dataset_size: 26076989556
---
|
# Dataset Card for "chunked-wikipedia20220301en-bookcorpusopen"
|
|
|
```
num_examples: 33.5 million
download_size: 15.3 GB
dataset_size: 26.1 GB
```
|
|
|
This dataset combines [wikipedia20220301.en](https://huggingface.co/datasets/wikipedia) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen), and splits the data into chunks of ~820 characters, so that each item is at least ~128 tokens for a typical tokenizer. The chunking logic only splits on spaces, so chunks tend to be slightly longer than 820 characters. The text has been normalized to lower case, with accents and non-English characters removed. Items shorter than 200 characters or longer than 1000 characters have been removed.
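For illustration, here is a minimal sketch of this style of preprocessing. It is not the exact script used to build the dataset; `TARGET_CHARS`, `normalize`, and `chunk` are hypothetical names chosen for this example.

```python
# Sketch of the preprocessing described above: lowercase, strip accents /
# non-English (non-ASCII) characters, chunk on spaces at ~820 chars, filter by length.
import unicodedata

TARGET_CHARS = 820          # chunks grow until they reach at least this length
MIN_CHARS, MAX_CHARS = 200, 1000


def normalize(text: str) -> str:
    # Lowercase, then drop accents and any remaining non-ASCII characters.
    text = unicodedata.normalize("NFKD", text.lower())
    return text.encode("ascii", "ignore").decode("ascii")


def chunk(text: str) -> list[str]:
    # Split on spaces only, so each chunk ends up slightly above ~820 chars.
    chunks, current, length = [], [], 0
    for word in text.split(" "):
        current.append(word)
        length += len(word) + 1
        if length >= TARGET_CHARS:
            chunks.append(" ".join(current))
            current, length = [], 0
    if current:
        chunks.append(" ".join(current))
    # Drop chunks outside the 200-1000 character range.
    return [c for c in chunks if MIN_CHARS <= len(c) <= MAX_CHARS]
```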
|
The data has not been shuffled. You can either call `dataset.shuffle(...)`, or download the pre-shuffled version [here](https://huggingface.co/datasets/sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen), which will be faster to iterate over.
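For example, to stream this (unshuffled) dataset and shuffle on the fly with the `datasets` library (the `seed` and `buffer_size` values below are just illustrative):

```python
from datasets import load_dataset

# Stream the dataset and shuffle with a fixed-size buffer.
dataset = load_dataset(
    "sradc/chunked-wikipedia20220301en-bookcorpusopen",
    split="train",
    streaming=True,
)
dataset = dataset.shuffle(seed=42, buffer_size=10_000)

for example in dataset.take(3):
    print(example["text"][:80])
```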
|
|
|
This dataset is processed for convenience, at the expense of losing some percentage of tokens to truncation (assuming the training minibatches are truncated to 128 tokens).
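As a rough illustration of that loss (the `gpt2` tokenizer here is only an example stand-in for whatever tokenizer you train with), you can compare the full and truncated token counts of a chunk:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

chunk = "some ~820-character chunk of text from the dataset ..."  # placeholder
full = tokenizer(chunk)["input_ids"]
truncated = tokenizer(chunk, truncation=True, max_length=128)["input_ids"]
print(f"{len(full) - len(truncated)} tokens lost to truncation")
```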