---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 36961083473
    num_examples: 136338653
  download_size: 13895887135
  dataset_size: 36961083473
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
Dataset prepared with the bert-cased tokenizer: sentences are truncated to 512 tokens (the limit applies to individual sentences, not sentence pairs), and all sentence pairs are extracted. A loading sketch is shown below.
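A minimal loading sketch with the `datasets` library; `user/this-dataset` is a placeholder for this repository's id, and streaming is optional but avoids downloading the full ~14 GB of shards up front:

```python
from itertools import islice

from datasets import load_dataset

# "user/this-dataset" is a placeholder for this repository's id.
ds = load_dataset("user/this-dataset", split="train", streaming=True)

# Peek at a few examples; the only column is "text".
for example in islice(ds, 3):
    print(example["text"])
```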
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
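A rough sketch of how preprocessing along these lines could be reproduced, assuming the `bert-base-cased` tokenizer and simple per-example truncation at 512 tokens; this is illustrative only, not the script used to build the dataset, and the sentence-pair extraction step is not shown:

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoTokenizer

MAX_LEN = 512  # assumed truncation length, per the description above
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Load the two source corpora and keep only the text column.
bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")
raw = concatenate_datasets(
    [bookcorpus.select_columns(["text"]), wikipedia.select_columns(["text"])]
)

def truncate(example):
    # Tokenize, truncate to MAX_LEN tokens, and decode back to text.
    # Decoding WordPiece is lossy (whitespace/punctuation), so this is a rough approximation.
    ids = tokenizer(example["text"], truncation=True, max_length=MAX_LEN)["input_ids"]
    return {"text": tokenizer.decode(ids, skip_special_tokens=True)}

processed = raw.map(truncate)
```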