---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 406518157.4620394
num_examples: 302379
download_size: 245358543
dataset_size: 406518157.4620394
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
The estimated top 10% of samples with the highest n-token overlap (mean over
n = 3, 4, 5) against each of the selected benchmark datasets (arc, truthful_qa,
hellaswag, mmlu, humaneval), based on 1k samples, within the first 3M samples
of C4.
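
A minimal sketch of how such an overlap score could be computed, assuming
whitespace tokenization and a simple containment fraction averaged over
n = 3, 4, 5; the function names and the exact scoring formula here are
illustrative, not the documented pipeline:

```python
def ngrams(tokens, n):
    # All contiguous n-token spans of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_benchmark_ngrams(benchmark_texts, ns=(3, 4, 5)):
    # Collect the n-gram sets of the 1k benchmark samples, one set per n.
    sets = {n: set() for n in ns}
    for text in benchmark_texts:
        tokens = text.split()  # whitespace tokens; the real pipeline may tokenize differently
        for n in ns:
            sets[n].update(ngrams(tokens, n))
    return sets

def overlap_score(sample_text, benchmark_ngrams, ns=(3, 4, 5)):
    # Mean fraction of a C4 sample's n-grams that also appear in the
    # benchmark's n-gram sets, averaged over n = 3, 4, 5.
    tokens = sample_text.split()
    fractions = []
    for n in ns:
        sample_set = set(ngrams(tokens, n))
        if sample_set:
            fractions.append(len(sample_set & benchmark_ngrams[n]) / len(sample_set))
    return sum(fractions) / len(fractions) if fractions else 0.0
```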
The top-scoring samples for each benchmark are then filtered again to the top
30% of scores, combined, and exact-match de-duplicated. ~~Then the top 3% of
scores are removed, because they likely contain exact large n-token matches by
chance, such as exact dates or times that aren't actually relevant to the
data.~~ (todo)
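
A minimal sketch of this second stage, assuming each benchmark yields a list
of `(text, score)` pairs; the function name and the choice to keep the highest
score for duplicate texts are assumptions, not documented behavior:

```python
def select_and_dedup(per_benchmark_scored, top_frac=0.30):
    # Keep the top 30% of scores per benchmark, combine across benchmarks,
    # and exact-match de-duplicate on text (keeping the highest score).
    combined = {}
    for scored in per_benchmark_scored.values():
        ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
        keep = ranked[: max(1, int(len(ranked) * top_frac))]
        for text, score in keep:
            if text not in combined or score > combined[text]:
                combined[text] = score
    return [{"text": t, "score": s} for t, s in combined.items()]
```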
This is meant to facilitate a high-quality short continuation of pretraining
for language models.
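
For example, the train split can be loaded with the `datasets` library (the
repo id below is a placeholder for this dataset's actual Hub id):

```python
from datasets import load_dataset

# Replace the placeholder with this dataset's actual Hub repo id.
ds = load_dataset("user/c4-benchmark-overlap", split="train")
print(ds[0]["score"], ds[0]["text"][:100])
```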