crumb committed on
Commit 1adc936
1 Parent(s): 6d705cf

Update README.md

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -30,13 +30,17 @@ configs:
   - split: train
     path: data/train-*
 ---
+
+# crumb/c4-benchfilter-nano
+
+A filtered derivation of the first 3M samples of the C4 dataset.
+
 The dataset contains the estimated top 10% of samples by n-token overlap (the
 mean of 3-, 4-, and 5-token overlaps) with each of the selected benchmark
 datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), based on 1k samples,
 within the first 3M samples of C4. The top-scoring samples for each benchmark
 are then filtered again to the top 30% of scores, combined, and exact-match
 de-duplicated. Then the top 3% of scores are removed because they likely have
 exact large n-token matches by chance, such as exact
-dates or times that aren't actually relevant to the data.
+dates or times that aren't actually relevant to the data.\*
 
-This is meant to facilitate a high-quality short continuation of pretraining
-for language models.
+\*Upon further examination, some of these samples are still present throughout the data. You might benefit from using `dataset.filter(lambda x: x['score'] > thresh)` for some threshold, but you risk losing high-quality samples as well, so this tradeoff should be examined carefully before training.
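
The card doesn't include the scoring code itself. Purely as an illustration of the kind of statistic the description refers to (the mean of 3-, 4-, and 5-token overlap fractions between a C4 sample and benchmark text), a rough sketch might look like the following; the whitespace tokenization, the plain overlap ratio, and the function names are assumptions, not the author's actual pipeline:

```python
# Illustrative sketch only -- not the scoring code used to build this dataset.
# Assumes whitespace tokenization and a plain n-gram overlap ratio; the real
# pipeline may differ (tokenizer, weighting, benchmark sampling, etc.).

def ngrams(tokens, n):
    """All contiguous n-token tuples in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_score(sample_text, benchmark_texts, ns=(3, 4, 5)):
    """Mean fraction of the sample's n-grams that also occur in the benchmark texts."""
    sample_tokens = sample_text.split()
    bench = {n: set() for n in ns}
    for text in benchmark_texts:
        toks = text.split()
        for n in ns:
            bench[n].update(ngrams(toks, n))
    fractions = []
    for n in ns:
        grams = ngrams(sample_tokens, n)
        if not grams:
            fractions.append(0.0)
            continue
        fractions.append(sum(g in bench[n] for g in grams) / len(grams))
    return sum(fractions) / len(fractions)
```

Scores of this kind, computed against each benchmark, could then be ranked to take the top slices described above.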
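
For the footnote's suggestion, a minimal usage sketch with the Hugging Face `datasets` library follows. The `score` column name is taken from the footnote; the quantile-based cutoff and the direction of the comparison (dropping the highest-scoring samples, in line with the stated rationale of removing likely chance matches) are assumptions to be tuned per use case:

```python
# Sketch of the score-threshold filtering suggested in the footnote.
# Assumptions: the dataset exposes a numeric "score" column, and a high
# quantile of that column is a reasonable starting point for `thresh`.
import numpy as np
from datasets import load_dataset

ds = load_dataset("crumb/c4-benchfilter-nano", split="train")

scores = np.asarray(ds["score"], dtype=float)
thresh = float(np.quantile(scores, 0.97))  # placeholder cutoff; tune for your use case

# Drop the highest-scoring samples (more likely to be chance exact matches such
# as dates or times), accepting that some genuinely relevant text is lost too.
filtered = ds.filter(lambda x: x["score"] <= thresh)
print(f"{len(ds)} -> {len(filtered)} samples")
```

Flip the comparison if you instead want to keep only the most benchmark-similar samples; either way, inspect the score distribution before committing to a threshold.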