---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
  features:
  - name: text
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 373897649.51453334
    num_examples: 278115
  download_size: 242478448
  dataset_size: 373897649.51453334
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
size_categories:
- 100K<n<1M
---
|
|
|
# crumb/c4-benchfilter-nano |
|
|
|
A 278k-sample derivation of the first 3M samples of the C4 dataset, intended for cheap, short continued pretraining that optimizes language models for benchmark scores without sacrificing generalization or generative modelling ability, and without relying on chat or 'instruct' data.
|
|
|
The dataset was built as follows: for each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), every sample in the first 3M samples of C4 is given an estimated length-normalized n-gram overlap score (the mean of its tri-, quad-, and penta-gram overlaps) against 1k samples from that benchmark, and the estimated top 10% of scores are kept. The top-scoring samples for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores and samples shorter than 200 characters are removed, because those likely contain large exact n-token matches that occur by chance, such as exact dates or times that aren't actually relevant to the data.\*
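A minimal sketch of this kind of overlap scoring, assuming naive whitespace tokenization; the exact tokenization and scoring details of the original pipeline aren't published here, and `overlap_score` and its signature are illustrative:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_score(text, benchmark_texts, ns=(3, 4, 5)):
    """Mean length-normalized n-gram overlap between one C4 sample
    and a pool of benchmark samples, for n = 3, 4, 5."""
    tokens = text.lower().split()  # naive whitespace tokenization (assumption)
    per_n = []
    for n in ns:
        sample_grams = ngrams(tokens, n)
        total = sum(sample_grams.values())
        if total == 0:
            per_n.append(0.0)
            continue
        bench_grams = set()
        for bench_text in benchmark_texts:
            bench_grams.update(ngrams(bench_text.lower().split(), n))
        # Fraction of the sample's n-grams that also appear in the
        # benchmark pool ("length normalized": divided by n-gram count).
        hits = sum(c for g, c in sample_grams.items() if g in bench_grams)
        per_n.append(hits / total)
    return sum(per_n) / len(per_n)
```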
|
|
|
\*Upon further examination, some of these samples are still present throughout the data, albeit at a much lower frequency than before. You might benefit from using `dataset.filter(lambda x: x["score"] < thresh)` for some threshold to drop the remaining high-score outliers, but you risk losing high-quality samples as well; this tradeoff should be well examined before training.
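For example, a sketch of loading the dataset and applying such a filter; the threshold choice here (re-cutting the top 3% of scores) is purely illustrative and should be tuned against the actual score distribution:

```python
from datasets import load_dataset

ds = load_dataset("crumb/c4-benchfilter-nano", split="train")

# Inspect the score distribution before picking a cutoff.
scores = sorted(ds["score"])
print(scores[len(scores) // 2], scores[-1])  # median and max

# Keep samples below the threshold to drop remaining high-score
# chance matches, at the cost of possibly discarding genuinely
# high-quality samples too.
thresh = scores[int(len(scores) * 0.97)]  # hypothetical: cut the top 3% again
filtered = ds.filter(lambda x: x["score"] < thresh)
```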