---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 5007987636.0
num_examples: 152813
download_size: 2195659342
dataset_size: 5007987636.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MiniPile Tokenized for Llama 3 (~1.2b tokens)
This is a pre-tokenized version of the [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile) dataset, tokenized with the Llama 3 tokenizer. Each example is a single `input_ids` sequence of `int32` token ids.
## Licensing
Licensing is identical to that of the original MiniPile dataset.
## Tokenization details
Tokenization was done with [SAELens](https://github.com/jbloomAus/SAELens) (version 3.11.0) using the following settings:
- Context size: 8192
- Shuffled: yes
- Begin batch token: "bos"
- Begin sequence token: none
- Sequence separator token: "eos"