Update README.md
README.md CHANGED
@@ -32,9 +32,9 @@ The estimated top 10% of highest n-token (mean 3,4,5) overlaps for each of the
 selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based
 on 1k samples, within the first 3M samples of C4. The top-scoring sample
 datasets for each benchmark are then filtered again for top 30% scores and
-combined and exact-match de-duplicated. Then the top 3% scores are removed
-because they likely have exact large n-token matches by chance such as exact
-dates or times that aren't actually relevant to the data
+combined and exact-match de-duplicated. ~~Then the top 3% scores are removed
+because they likely have exact large n-token matches by chance, such as exact
+dates or times that aren't actually relevant to the data.~~ (todo)
 
 This is meant to facilitate a high-quality short continuation of pretraining
 for language models.
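The pipeline the README describes (score samples by mean n-gram overlap with each benchmark for n = 3, 4, 5; keep the top 10% per benchmark, re-filter to the top 30%, combine, and exact-match de-duplicate) could be sketched roughly as below. This is a minimal illustration, not the repository's actual code: the tokenization (whitespace split), the set-overlap scoring formula, and all function names (`ngrams`, `overlap_score`, `top_fraction`, `build_subset`) are assumptions for the sake of the example.

```python
# Hypothetical sketch of the described filtering pipeline. Tokenization and
# the exact overlap metric are assumptions; the README does not specify them.

def ngrams(tokens, n):
    """Set of n-grams (as tuples) in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(sample_tokens, bench_ngrams, ns=(3, 4, 5)):
    """Mean fraction of the sample's n-grams found in the benchmark,
    averaged over n = 3, 4, 5."""
    fracs = []
    for n in ns:
        grams = ngrams(sample_tokens, n)
        fracs.append(len(grams & bench_ngrams[n]) / len(grams) if grams else 0.0)
    return sum(fracs) / len(fracs)

def top_fraction(scored, frac):
    """Keep the highest-scoring `frac` of (score, text) pairs."""
    scored = sorted(scored, key=lambda p: p[0], reverse=True)
    return scored[:max(1, int(len(scored) * frac))]

def build_subset(c4_samples, benchmark_token_lists, first=0.10, second=0.30):
    """Per benchmark: score, keep top 10%, then top 30% of those;
    finally combine and exact-match de-duplicate."""
    selected = []
    for bench_tokens in benchmark_token_lists:
        bench_ngrams = {n: ngrams(bench_tokens, n) for n in (3, 4, 5)}
        scored = [(overlap_score(s.split(), bench_ngrams), s) for s in c4_samples]
        selected += top_fraction(top_fraction(scored, first), second)
    seen, out = set(), []
    for _, text in selected:
        if text not in seen:
            seen.add(text)
            out.append(text)
    return out
```

At C4 scale one would stream samples and use hashed n-grams rather than materializing Python sets per benchmark, but the filtering logic would follow the same shape.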