Update README.md
README.md CHANGED
@@ -29,6 +29,8 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+size_categories:
+- 100K<n<1M
 ---
 
 # crumb/c4-benchfilter-nano
@@ -43,4 +45,4 @@ combined and exact-match de-duplicated. Then the top 3% scores and samples less
 because they likely have exact large n-token matches by chance such as exact
 dates or times that aren't actually relevant to the data.\*
 
-\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training.
+\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before, you might benefit from using `dataset.filter(x['score'] > thresh)` for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training.
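The footnote's `dataset.filter(x['score'] > thresh)` is shorthand; with the `datasets` library the predicate is passed as a callable. A minimal sketch of that thresholding, assuming the dataset exposes a numeric `score` column and using an arbitrary placeholder cutoff:

```python
from datasets import load_dataset

# Load the train split of the dataset this card describes.
dataset = load_dataset("crumb/c4-benchfilter-nano", split="train")

# Arbitrary placeholder threshold; inspect the score distribution before picking one,
# since an aggressive cutoff also discards high-quality samples (the tradeoff noted above).
thresh = 0.5
filtered = dataset.filter(lambda x: x["score"] > thresh)

print(f"kept {len(filtered)} of {len(dataset)} samples")
```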