---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 6928291896
    num_examples: 845326
  - name: validation
    num_bytes: 3212832
    num_examples: 392
  - name: test
    num_bytes: 67985820
    num_examples: 8295
  download_size: 3171829950
  dataset_size: 6999490548
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
MiniPile tokenized with LlamaTokenizer at a context length of 2048; each example is a single fixed-length sequence of 2048 `input_ids`. Roughly 1.7 billion tokens in the train split (845,326 examples × 2048 tokens).
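
A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub (the repo id below is a placeholder) and that the tokenization used a standard Llama tokenizer checkpoint (`huggyllama/llama-7b` here is an assumption, since the card does not name one):

```python
from datasets import load_dataset
from transformers import LlamaTokenizer

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-username/minipile-llama-tokenized", split="train")

# Each example is a fixed-length sequence of 2048 int32 token ids.
example = ds[0]
assert len(example["input_ids"]) == 2048

# Decode a short prefix back to text. The exact Llama checkpoint behind the
# card's LlamaTokenizer is not stated; huggyllama/llama-7b is an assumption.
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
print(tokenizer.decode(example["input_ids"][:64]))
```

Because the sequences are pre-tokenized and fixed-length, batches can be fed to a language-model training loop directly, with no tokenizer in the data path.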