Update README.md
README.md CHANGED

```diff
@@ -39,8 +39,8 @@ The estimated top 10% of highest n-token (mean 3,4,5) overlaps for each of the
 selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based
 on 1k samples, within the first 3M samples of C4. The top scoring sample
 datasets for each benchmark are then filtered again for top 30% scores and
-combined and exact-match de-duplicated. Then the top 3% scores and samples less than
+combined and exact-match de-duplicated. Then the top 3% scores and samples less than 200 characters long are removed
 because they likely have exact large n-token matches by chance such as exact
 dates or times that aren't actually relevant to the data.\*
 
-\*Upon further examination, some of these samples are still present throughout the data, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training.
+\*Upon further examination, some of these samples are still present throughout the data, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training.
```
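Note that the footnote's `dataset.filter(x['score'] > thresh)` passes an expression where Hugging Face `datasets` expects a callable, so in practice it would be written with a lambda. A minimal sketch of the score-threshold tradeoff the footnote describes, using plain Python dicts — the `score` field name comes from the README, but the records and the threshold value are made-up illustrations:

```python
# Hypothetical records mimicking the dataset's fields: each sample carries an
# n-token overlap `score` and its `text`. Values here are invented.
records = [
    {"text": "sample with a chance match on an exact date 01/02/2020", "score": 0.95},
    {"text": "a genuinely benchmark-relevant passage", "score": 0.60},
    {"text": "a lower-overlap sample", "score": 0.40},
]

thresh = 0.5  # assumed threshold; must be tuned, per the README's caveat

# Keep samples scoring above the threshold, as in the footnote's filter.
kept = [r for r in records if r["score"] > thresh]
dropped = [r for r in records if r["score"] <= thresh]

# With a Hugging Face `datasets.Dataset`, the equivalent is:
#   dataset = dataset.filter(lambda x: x["score"] > thresh)

# The tradeoff: any fixed threshold discards some samples wholesale, so
# dropping low scorers can also drop high-quality data.
print(len(kept), len(dropped))  # 2 kept, 1 dropped at thresh=0.5
```

How to set `thresh` is exactly the tradeoff the footnote warns about: it should be chosen by inspecting score distributions and spot-checking samples near the cutoff before training, not picked blindly.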