---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 416597133
    num_examples: 311731
  download_size: 249034256
  dataset_size: 416597133
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
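The dataset has a single default config with one `train` split and one string field, `text`. A minimal loading sketch with the `datasets` library; the repository ID below is a placeholder, since the card does not name the Hub path:

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute this dataset's actual Hub path.
ds = load_dataset("your-username/your-dataset-name", split="train")

print(len(ds))        # 311,731 training examples
print(ds[0]["text"])  # each example is a single string field named "text"
```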
This dataset contains the estimated top 10% of C4 samples by n-token overlap (mean over n = 3, 4, 5) with each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), scored against 1k samples per benchmark and drawn from the first 3M samples of C4. The top-scoring samples for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores are removed, because such samples likely contain large exact n-token matches that occur by chance (such as exact dates or times) and are not actually relevant to the benchmarks.
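A minimal Python sketch of that pipeline, to make the steps concrete. The card does not specify the tokenizer or the exact overlap metric, so `tokenize`, the set-membership scoring in `overlap_score`, and all function names below are illustrative assumptions rather than the code used to build this dataset:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def benchmark_ngram_sets(benchmark_samples, tokenize, ns=(3, 4, 5)):
    """Union of n-grams over a benchmark's sampled examples (1k in the card)."""
    sets = {n: set() for n in ns}
    for sample in benchmark_samples:
        tokens = tokenize(sample)
        for n in ns:
            sets[n].update(ngrams(tokens, n))
    return sets


def overlap_score(tokens, bench_sets, ns=(3, 4, 5)):
    """Fraction of the sample's n-grams found in the benchmark's n-gram
    sets, averaged over n = 3, 4, 5 (assumed form of the card's metric)."""
    fractions = []
    for n in ns:
        grams = ngrams(tokens, n)
        if not grams:
            fractions.append(0.0)
            continue
        hits = sum(g in bench_sets[n] for g in grams)
        fractions.append(hits / len(grams))
    return sum(fractions) / len(fractions)


def select(c4_texts, benchmarks, tokenize):
    """The card's pipeline: per-benchmark top 10%, re-filter to top 30%,
    combine, exact-match de-duplicate, then drop the top 3% of scores."""
    selected = {}  # text -> best score seen (exact-match de-duplication)
    for benchmark_samples in benchmarks:
        bench_sets = benchmark_ngram_sets(benchmark_samples, tokenize)
        scored = sorted(
            ((overlap_score(tokenize(t), bench_sets), t) for t in c4_texts),
            key=lambda pair: pair[0],
            reverse=True,
        )
        top10 = scored[: max(1, len(scored) // 10)]     # top 10% per benchmark
        top30 = top10[: max(1, int(len(top10) * 0.3))]  # then top 30% of those
        for score, text in top30:
            if score > selected.get(text, -1.0):
                selected[text] = score
    # Drop the top 3% of combined scores: large exact matches such as
    # dates or times that overlap by chance rather than by relevance.
    ranked = sorted(selected.items(), key=lambda item: item[1], reverse=True)
    cutoff = int(len(ranked) * 0.03)
    return [text for text, _ in ranked[cutoff:]]
```

In practice a scan over 3M C4 samples would be batched and parallelized; this sketch favors readability over throughput.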
This is meant to facilitate a short, high-quality continuation of pretraining for language models.