---
language_creators:
- found
language:
- en
license: odc-by
source_datasets:
- c4
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
  features:
  - name: text
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 406516813.06260204
    num_examples: 302378
  download_size: 245347331
  dataset_size: 406516813.06260204
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
# crumb/c4-benchfilter-nano |
|
|
|
A filtered subset of the first 3M samples of the C4 dataset (302,378 examples).
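
The data can be loaded with the `datasets` library; each example carries the raw `text` along with the `score` assigned during filtering (described below).

```python
from datasets import load_dataset

# Single default config with one "train" split of 302,378 examples.
dataset = load_dataset("crumb/c4-benchfilter-nano", split="train")
print(dataset)               # features: text (string), score (float64)
print(dataset[0]["score"])
```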
|
|
|
For each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), the first 3M samples of C4 were scored by n-token overlap with the benchmark (the mean of their 3-, 4-, and 5-token overlaps), and the estimated top 10% of scores, based on 1k samples, was kept. The top-scoring sample sets for each benchmark were then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores and all samples shorter than 20 characters were removed, because those samples likely contain large exact n-token matches that occur by chance, such as exact dates or times that aren't actually relevant to the data.\*
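
The scoring procedure is only loosely specified above; the following is a minimal sketch of one plausible version of it, not the script actually used to build this dataset. Whitespace tokenization, lowercasing, and the set-based overlap fraction are all assumptions here.

```python
def ngrams(tokens, n):
    """Set of all contiguous n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(sample_text, benchmark_texts, sizes=(3, 4, 5)):
    """Mean fraction of the sample's 3-, 4-, and 5-grams that also
    appear anywhere in the benchmark texts (hypothetical scoring)."""
    sample_tokens = sample_text.lower().split()
    bench = {
        n: set().union(*(ngrams(t.lower().split(), n) for t in benchmark_texts))
        for n in sizes
    }
    fractions = []
    for n in sizes:
        grams = ngrams(sample_tokens, n)
        fractions.append(len(grams & bench[n]) / len(grams) if grams else 0.0)
    return sum(fractions) / len(fractions)

# Toy usage: score one C4-style sample against a tiny reference set.
print(overlap_score(
    "The quick brown fox jumps over the lazy dog.",
    ["a quick brown fox jumps over a sleeping dog"],
))
```

In the pipeline described above, a score like this would be computed for every C4 sample against each benchmark's reference samples, with the percentile cutoffs then applied per benchmark.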
|
|
|
\*Upon further examination, some of these samples are still present throughout the data. You might benefit from filtering on the score, e.g. `dataset.filter(lambda x: x['score'] < thresh)` for some threshold `thresh` to drop the highest-scoring samples, but you risk losing high-quality samples as well; this tradeoff should be well-examined before training. Another option is filtering out the shorter samples, because they seem more likely to contain the exact string matches and don't contribute as much to the data mixture anyway.
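
Both suggestions combine naturally into a single `datasets` filter. The cutoffs below are hypothetical placeholders chosen only for illustration; inspect the score and length distributions before picking real values.

```python
from datasets import load_dataset

dataset = load_dataset("crumb/c4-benchfilter-nano", split="train")

score_thresh = 0.5  # hypothetical cutoff on the "score" column
min_chars = 200     # hypothetical minimum sample length in characters

# Keep samples below the score cutoff (dropping likely chance
# exact-matches) and above the length floor (dropping short samples).
filtered = dataset.filter(
    lambda x: x["score"] < score_thresh and len(x["text"]) >= min_chars
)
print(len(dataset), "->", len(filtered))
```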