---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
  features:
  - name: text
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 406518157.4620394
    num_examples: 302379
  download_size: 245358543
  dataset_size: 406518157.4620394
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
The estimated top 10% of the first 3M samples of C4, ranked by n-token overlap
(mean over n = 3, 4, 5) with each of the selected benchmark datasets (arc,
truthful_qa, hellaswag, mmlu, humaneval), based on 1k samples from each
benchmark. The top-scoring samples for each benchmark are then filtered again
to the top 30% of scores, combined, and exact-match de-duplicated. ~~Then the
top 3% of scores are removed, because they likely contain exact large n-token
matches only by chance, such as exact dates or times that aren't actually
relevant to the data.~~ (todo)
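
As an illustration, here is a minimal sketch of this selection pipeline.
Details not stated on this card are assumptions: whitespace tokenization, the
score being the mean (over n = 3, 4, 5) of the fraction of a sample's n-grams
found in the pooled benchmark n-grams, and the helper names (`ngram_set`,
`overlap_score`, `select`) are hypothetical.

```python
def ngram_set(tokens, n):
    """All distinct n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_bench_ngrams(bench_texts, ns=(3, 4, 5)):
    """Pool the n-grams of ~1k benchmark samples into one set per n."""
    pooled = {n: set() for n in ns}
    for text in bench_texts:
        tokens = text.split()  # assumption: simple whitespace tokenization
        for n in ns:
            pooled[n] |= ngram_set(tokens, n)
    return pooled

def overlap_score(text, bench_ngrams, ns=(3, 4, 5)):
    """Mean over n of the fraction of the sample's n-grams that also
    appear in the benchmark's pooled n-gram sets (assumed metric)."""
    tokens = text.split()
    fracs = []
    for n in ns:
        grams = ngram_set(tokens, n)
        fracs.append(len(grams & bench_ngrams[n]) / len(grams) if grams else 0.0)
    return sum(fracs) / len(fracs)

def top_fraction(scored, frac):
    """Keep the highest-scoring fraction of (text, score) pairs."""
    scored = sorted(scored, key=lambda ts: ts[1], reverse=True)
    return scored[: max(1, int(len(scored) * frac))]

def select(c4_texts, benchmarks):
    """benchmarks: dict mapping benchmark name -> list of ~1k sample texts."""
    combined = []
    for name, bench_texts in benchmarks.items():
        pooled = build_bench_ngrams(bench_texts)
        scored = [(t, overlap_score(t, pooled)) for t in c4_texts]
        top10 = top_fraction(scored, 0.10)          # top 10% per benchmark
        combined.extend(top_fraction(top10, 0.30))  # then top 30% of those
    # combine and exact-match de-duplicate on the raw text
    seen, deduped = set(), []
    for text, score in combined:
        if text not in seen:
            seen.add(text)
            deduped.append((text, score))
    return deduped
```

Pooling each benchmark's n-grams into a single set per n keeps scoring down to
one set intersection per sample per n, which matters when scanning millions of
C4 documents.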
This dataset is meant to facilitate a short, high-quality continuation of
pretraining for language models.
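
A minimal loading example with the Hugging Face `datasets` library; the
repository id below is a placeholder, since the card doesn't state this
dataset's Hub path.

```python
from datasets import load_dataset

# "user/c4-benchmark-overlap" is a placeholder; substitute this dataset's Hub id.
ds = load_dataset("user/c4-benchmark-overlap", split="train")

# Each example has a "text" field and its overlap "score".
print(ds[0]["score"], ds[0]["text"][:200])
```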