yury-zyphra committed: Update README.md

README.md (CHANGED)

We preserved the schemas of original component datasets, meaning that every component ...

To download the whole dataset we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading individual components separately.

Only `nemo_id` and `text` are columns common to all components. Select those columns for every component first, and only then interleave the datasets with the optimal weights (see the example at the bottom of this section).

Example command to clone the repository using `huggingface-cli`: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
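
If you prefer to do this from Python, a minimal sketch using `huggingface_hub.snapshot_download` is shown below (the `local_dir` path is only an example):

```
from huggingface_hub import snapshot_download

# Download the whole dataset repository locally.
# local_dir is an example path; omit it to keep files in the default Hugging Face cache.
snapshot_download(repo_id="Zyphra/Zyda-2", repo_type="dataset", local_dir="Zyda-2")
```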

Commands to download individual components:

- DCLM: `ds_dclm = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
- Zyda: `ds_zyda = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")`
- Dolma-CC: `ds_dolma = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")`
- Fineweb-Edu: `ds_fwe = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")`
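
These commands assume the `datasets` library is installed and imported; a minimal preamble would be:

```
# pip install datasets
import datasets
```

Note that each command downloads and prepares the corresponding component in full unless you pass `streaming=True` (see below).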

In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, one will need to use appropriate weights during training.

We found the following optimal weights by number of tokens (in the sense of each component's weight in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.

Below you will find an example of how to get a proper dataset object.
It demonstrates how to select only the `nemo_id` and `text` columns and then interleave the datasets with probabilities computed from the weights above.
One needs to be careful with weight normalization, as `interleave_datasets()` samples documents, while our weights are token-wise. We provide precomputed document-wise weights in the example below.
To stream the dataset, add `streaming=True` to the `load_dataset()` commands.
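
For example, in streaming mode the DCLM command becomes: `ds_dclm = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train", streaming=True)`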

```
import datasets

# Keep only the columns shared by all components.
common_columns = ["nemo_id", "text"]
ds_dclm = ds_dclm.select_columns(common_columns)
ds_zyda = ds_zyda.select_columns(common_columns)
ds_dolma = ds_dolma.select_columns(common_columns)
ds_fwe = ds_fwe.select_columns(common_columns)

# Document-wise probabilities for [DCLM, Zyda, Dolma-CC, Fineweb-Edu], derived from the token-wise weights above.
norm_weights = [0.4038, 0.0316, 0.0585, 0.5061]
ds = datasets.interleave_datasets([ds_dclm, ds_zyda, ds_dolma, ds_fwe], probabilities=norm_weights, stopping_strategy="all_exhausted")
```
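
For reference, here is a sketch of how document-wise probabilities such as `norm_weights` can be derived from token-wise weights. This is purely an illustration of the normalization issue: it assumes the token-wise weights are target token proportions, and the average document lengths below are made-up placeholders, not real Zyda-2 statistics.

```
# Illustration only: turning token-wise mixture weights into document-wise
# sampling probabilities for interleave_datasets().
token_weights = {"dclm": 4.0, "zyda": 0.16, "dolma": 0.24, "fwe": 4.0}
# Placeholder average document lengths in tokens (NOT the real Zyda-2 numbers).
avg_doc_tokens = {"dclm": 1200, "zyda": 900, "dolma": 800, "fwe": 1000}

# Sampling by document over-represents components with long documents,
# so divide each token weight by the component's average document length.
raw = {name: w / avg_doc_tokens[name] for name, w in token_weights.items()}
total = sum(raw.values())
doc_probs = {name: value / total for name, value in raw.items()}
print(doc_probs)  # document-wise probabilities that sum to 1
```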

### (Smaller) sample versions

Along with the full-dataset configs above, you can also download a smaller version of the dataset with the following config:

- `sample-100BT`: a subset randomly sampled from the whole dataset of around 100B gpt-neox tokens (252GB, 91.2M documents).

This sample only has the common columns `nemo_id` and `text`. In addition, it was sampled according to the optimal weights, so you can start using it directly.

`ds_sample = datasets.load_dataset("Zyphra/Zyda-2", name="sample-100BT", split="train")`

## Breakdown by component