Update README.md
README.md
CHANGED
@@ -34,9 +34,9 @@ The estimated top 10% of highest n-token (mean 3,4,5) overlaps for each of the
 selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based
 on 1k samples, within the first 3M samples of C4. The top-scoring sample
 datasets for each benchmark are then filtered again for top 30% scores and
-combined and exact-match de-duplicated.
+combined and exact-match de-duplicated. Then the top 3% scores are removed
 because they likely have exact large n-token matches by chance, such as exact
-dates or times that aren't actually relevant to the data
+dates or times that aren't actually relevant to the data.

 This is meant to facilitate a high-quality short continuation of pretraining
 for language models.
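The selection steps described in the updated text (keep the top 30% of scored samples per benchmark, combine, exact-match de-duplicate, then drop the top 3% of scores as likely chance matches) can be sketched roughly as below. The function name, the input layout, and the use of SHA-256 hashing for exact-match dedup are illustrative assumptions, not code from this repo:

```python
import hashlib

def filter_and_dedup(scored_per_benchmark, keep_frac=0.30, drop_top_frac=0.03):
    """Hypothetical sketch of the README's filtering steps.

    scored_per_benchmark: {benchmark_name: [(score, text), ...]}, where score
    is assumed to be the mean 3/4/5-gram overlap with that benchmark.
    """
    # Keep the top `keep_frac` of samples for each benchmark, then combine.
    combined = []
    for samples in scored_per_benchmark.values():
        ranked = sorted(samples, key=lambda s: s[0], reverse=True)
        combined.extend(ranked[: max(1, int(len(ranked) * keep_frac))])

    # Exact-match de-duplication on the raw text.
    seen, unique = set(), []
    for score, text in combined:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((score, text))

    # Drop the very top scores: likely exact large n-token matches by chance
    # (e.g. dates or times) rather than genuinely relevant overlap.
    unique.sort(key=lambda s: s[0], reverse=True)
    cut = int(len(unique) * drop_top_frac)
    return unique[cut:]
```

Sorting descending and slicing off the head keeps the logic explicit; a real pipeline over millions of C4 samples would likely stream scores and use quantile thresholds instead of full sorts.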