qanthony-z committed on
Commit c408126
1 Parent(s): 6bb6917

Zyda2 --> Zyda-2

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  license: odc-by
- pretty_name: Zyda2
+ pretty_name: Zyda-2
  task_categories:
  - text-generation
  language:
@@ -30,21 +30,21 @@ configs:
        path: data/fwe3/*/*
  ---
 
- # Zyda2-5T
+ # Zyda-2
 
  <!-- Provide a quick summary of the dataset. -->
 
- Zyda2 is a 5 trillion token language modeling dataset created by collecting open and high quality datasets and combining them and cross-deduplication and model-based quality filtering. Zyda2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
+ Zyda-2 is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
 
- To construct Zyda2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, Dolma. Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda2 outperforms all its constituent datasets in resulting model quality.
+ To construct Zyda-2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all of its constituent datasets in resulting model quality.
 
- An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B) which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.
+ An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models ([Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B), [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B)), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
 
- According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural language reasoning content. For code performance, we reccomend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
+ According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
 
 
  <center>
- <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/YfOOh2JqRgkeHP1gHSSt9.png" width="600" alt="Zyda2 evaluation scores">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/YfOOh2JqRgkeHP1gHSSt9.png" width="600" alt="Zyda-2 evaluation scores">
  </center>
 
 
 
@@ -56,13 +56,13 @@ Since we preserved the schemas of original component datasets, attempting to dow
 
  To download the whole dataset we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading individual components separately.
 
- Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda2--repo-type dataset`
+ Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
 
  Commands to download individual components:
- - DCLM: `ds = datasets.load_dataset("Zyphra/Zyda2", name="dclm_crossdeduped", split="train")`
- - Zyda: `ds = datasets.load_dataset("Zyphra/Zyda2", name="zyda_crossdeduped-filtered", split="train")`
- - Dolma-CC: `ds = datasets.load_dataset("Zyphra/Zyda2", name="dolma-cc_crossdeduped-filtered", split="train")`
- - Fineweb-Edu: `ds = datasets.load_dataset("Zyphra/Zyda2", name="fwe3", split="train")`
+ - DCLM: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
+ - Zyda: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")`
+ - Dolma-CC: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")`
+ - Fineweb-Edu: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")`
 
  In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, one will need to apply appropriate weights during training.
  We found the following optimal weights (expressed as relative sampling weights of each component in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
@@ -99,7 +99,7 @@ Our Zyda1 and Dolma-CC versions also have two additional columns corresponding t
 
  ### Source Data
 
- Zyda2 is comprised of four high quality open-source datasets:
+ Zyda-2 comprises four high-quality open-source datasets:
 
  Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
 
@@ -110,7 +110,7 @@ DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
  FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2
 
  <center>
- <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="Zyda2 dataset composition">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="Zyda-2 dataset composition">
  </center>
 
  #### Personal and Sensitive Information
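
For reference, the full-repository download shown in the card's `huggingface-cli` command can also be done from Python. A minimal sketch, assuming the `huggingface_hub` package is installed:

```python
from huggingface_hub import snapshot_download

# Fetch every shard of the dataset repo into the local HF cache
# and return the path of the local snapshot directory.
local_dir = snapshot_download(repo_id="Zyphra/Zyda-2", repo_type="dataset")
print(local_dir)
```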
 
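The per-component `load_dataset` calls in the card also work in streaming mode, which avoids materializing a multi-terabyte download before inspecting the data. A minimal sketch, assuming the DCLM component preserves its original schema with a `text` column:

```python
import datasets

# Stream the DCLM component: shards are read on the fly
# rather than downloaded up front.
ds = datasets.load_dataset(
    "Zyphra/Zyda-2", name="dclm_crossdeduped", split="train", streaming=True
)

# Peek at the first few documents ("text" field assumed per the DCLM schema).
for i, doc in enumerate(ds):
    print(doc["text"][:200])
    if i >= 2:
        break
```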
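To apply the card's component weights (DCLM 4.0, FWE3 4.0, Zyda 0.16, Dolma-CC 0.24) at training time, one option is to normalize them into sampling probabilities and interleave the streamed components. A sketch using `datasets.interleave_datasets`, assuming each component exposes a `text` column; any other weighted sampler would work equally well:

```python
import datasets

# Relative sampling weights from the card.
weights = {
    "dclm_crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda_crossdeduped-filtered": 0.16,
    "dolma-cc_crossdeduped-filtered": 0.24,
}
total = sum(weights.values())
probabilities = [w / total for w in weights.values()]  # ~[0.476, 0.476, 0.019, 0.029]

# Keep only the shared "text" column so the differing component schemas line up
# (the components otherwise preserve their original schemas).
components = [
    datasets.load_dataset(
        "Zyphra/Zyda-2", name=name, split="train", streaming=True
    ).select_columns(["text"])
    for name in weights
]

# Sample documents from the components in proportion to the weights above.
mixed = datasets.interleave_datasets(
    components,
    probabilities=probabilities,
    seed=42,
    stopping_strategy="all_exhausted",
)
```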