qanthony-z committed • Commit 6bb6917 • Parent(s): b129baa
unify naming

README.md CHANGED

@@ -1,6 +1,6 @@
---
license: odc-by
pretty_name: Zyda2
task_categories:
- text-generation
language:

@@ -34,17 +34,17 @@ configs:
<!-- Provide a quick summary of the dataset. -->

Zyda2 is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.

To construct Zyda2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda2 outperforms all its constituent datasets in resulting model quality.

An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.

According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/YfOOh2JqRgkeHP1gHSSt9.png" width="600" alt="Zyda2 evaluation scores">
</center>

@@ -56,13 +56,13 @@ Since we preserved the schemas of original component datasets, attempting to download
To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.

Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda2 --repo-type dataset`
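
If you prefer to do the same from Python, `huggingface_hub.snapshot_download` offers an equivalent route; this is a minimal sketch, and the `local_dir` value is just an example target directory:

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (same effect as the CLI command above).
# "zyda2-local" is an arbitrary example target directory.
snapshot_download(
    repo_id="Zyphra/Zyda2",
    repo_type="dataset",
    local_dir="zyda2-local",
)
```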

Commands to download individual components:
- DCLM: `ds = datasets.load_dataset("Zyphra/Zyda2", name="dclm_crossdeduped", split="train")`
- Zyda: `ds = datasets.load_dataset("Zyphra/Zyda2", name="zyda_crossdeduped-filtered", split="train")`
- Dolma-CC: `ds = datasets.load_dataset("Zyphra/Zyda2", name="dolma-cc_crossdeduped-filtered", split="train")`
- Fineweb-Edu: `ds = datasets.load_dataset("Zyphra/Zyda2", name="fwe3", split="train")`

In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, one will need to apply appropriate weights to the components during training.
We found the following optimal weights (in the sense of weights in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
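
As a rough illustration of how such weights can be applied with the `datasets` library, the sketch below streams the four components and interleaves them with `datasets.interleave_datasets`. This is a minimal example rather than our training pipeline: it assumes each component exposes a `text` column, and the naive normalization of the weights into sampling probabilities ignores the relative sizes of the components, which a faithful mixture would need to account for.

```python
import datasets

# Reported mixture weights for each Zyda2 component (see above).
weights = {
    "dclm_crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda_crossdeduped-filtered": 0.16,
    "dolma-cc_crossdeduped-filtered": 0.24,
}

# Stream each component and keep only the shared `text` column so the
# schemas are compatible for interleaving (assumes every component has `text`).
streams = [
    datasets.load_dataset("Zyphra/Zyda2", name=name, split="train", streaming=True)
    .select_columns(["text"])
    for name in weights
]

# Naive conversion of the weights into sampling probabilities (illustrative only).
total = sum(weights.values())
probabilities = [w / total for w in weights.values()]

mixed = datasets.interleave_datasets(
    streams,
    probabilities=probabilities,
    seed=42,
    stopping_strategy="all_exhausted",
)

# Peek at a few mixed examples.
for example in mixed.take(3):
    print(example["text"][:80])
```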

@@ -110,7 +110,7 @@ DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="Zyda2 dataset composition">
</center>

#### Personal and Sensitive Information