yury-zyphra committed
Update README.md

README.md CHANGED
@@ -87,8 +87,8 @@ norm_weights = [0.4038, 0.0316, 0.0585, 0.5061]
 ds = datasets.interleave_datasets([ds_dclm, ds_zyda, ds_dolma, ds_fwe], probabilities=norm_weights, stopping_strategy="all_exhausted")
 ```
 
-### (Smaller) sample
-Along with the configs above
+### (Smaller) sample version
+Along with the configs above, you can also download a smaller version of the dataset with the following config:
 - `sample-100BT`: a subset randomly sampled from the whole dataset of around 100B gpt-neox tokens (252GB, 91.2M documents).
 
 This sample only has common columns `nemo-id` and `text`. In addition, it was sampled according to optimal weights, so you can start using it directly.