crumb committed
Commit d8cd8bf
1 Parent(s): 458c2d6

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -39,8 +39,8 @@ The estimated top 10% of highest n-token (mean 3,4,5) overlaps for each of the
 selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based
 on 1k samples, within the first 3M samples of C4. The top scoring sample
 datasets for each benchmark are then filtered again for top 30% scores and
-combined and exact-match de-duplicated. Then the top 3% scores and samples less than 20 characters long are removed
+combined and exact-match de-duplicated. Then the top 3% scores and samples less than 200 characters long are removed
 because they likely have exact large n-token matches by chance such as exact
 dates or times that aren't actually relevant to the data.\*
 
-\*Upon further examination, some of these samples are still present throughout the data, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training. Another option is filtering out the shorter samples because they seem to be more likely to contain the exact string-matches and don't contribute to the data mixture as much anyway.
+\*Upon further examination, some of these samples are still present throughout the data, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training.
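
For anyone wanting to try the score filter mentioned in the note above, here is a minimal sketch using the `datasets` library. The repo id, the threshold value, and the `text` column name are placeholders/assumptions; only the `score` column is named in the README. Dropping the highest-scoring rows follows the surrounding explanation that very high scores tend to be spurious exact matches, but it is one possible reading of the snippet.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset path.
ds = load_dataset("username/dataset-name", split="train")

# Hypothetical threshold. The highest overlap scores are the ones the README
# flags as likely spurious exact matches (e.g. dates or times), so this
# keeps only rows at or below the threshold.
thresh = 0.5
ds = ds.filter(lambda x: x["score"] <= thresh)

# Optionally also drop very short samples, mirroring the 200-character
# cutoff described in the README. The `text` column name is an assumption.
ds = ds.filter(lambda x: len(x["text"]) >= 200)
```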