Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -40,7 +40,7 @@ To construct Zyda-2, we took the best open-source datasets available: [Zyda](htt
40
 
41
- An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B) which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
+ An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](https://huggingface.co/Zyphra/Zamba2-2.7B) [models](https://huggingface.co/Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
42
 
43
- According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we reccomend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
+ According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
44
 
45
 
46
  <center>
@@ -51,11 +51,11 @@ According to our evaluations, Zyda-2 is the most performant per-token open datas
51
  For more information, please see our [technical blog](https://www.zyphra.com/post/building-zyda-2).
52
 
53
  ## How to download
54
- Since we preserved the schemas of original component datasets, attempting to dowlnoad the whole dataset using `datasets.load_dataset()` might fail during the stage of generating a split.
+ Since we preserved the schemas of the original component datasets, attempting to download the whole dataset using `datasets.load_dataset()` might fail while generating the split.
55
 
56
- To download the whole dataset we recommend to either clone the repository, or, if you must use the `datasets.load_dataset()`, download individual components separately.
+ To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading individual components separately.
57
 
58
- Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2--repo-type dataset`
+ Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
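+
+ As a minimal sketch (assuming the `huggingface_hub` Python package is installed), the same snapshot can also be fetched programmatically:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download every file of the dataset repository into the local HF cache
+ # and return the path of the downloaded snapshot.
+ local_path = snapshot_download(repo_id="Zyphra/Zyda-2", repo_type="dataset")
+ print(local_path)
+ ```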
59
 
60
  Commands to download individual components:
61
  - DCLM: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
@@ -71,11 +71,11 @@ We found the following optimal weights (in the sense of weights in the resultant
71
 
72
  | Component | Download size (parquet, GBs) | Documents (millions) | gpt-neox tokens (billions) |
73
  | --- | --- | --- | --- |
74
- | dclm-crossdeduped | 8469.4 | 2,590.5 | 3,348.942 |
+ | dclm-crossdeduped | 8,469.4 | 2,590.5 | 3,348.942 |
75
  | zyda-crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
76
  | dolma_cc-crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
77
- | fwe3 | 3490.5 | 1,279.1 | 1,319.2 |
+ | fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
78
- | Total | 13080.5 | 4,562.8 | 5,070.2 |
+ | Total | 13,080.5 | 4,562.8 | 5,070.2 |
79
 
80
  ### Dataset Description
81
 
@@ -90,9 +90,9 @@ We found the following optimal weights (in the sense of weights in the resultant
90
 
91
  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
92
 
93
- Each component has they're own individual schema. Please, consult with their respective sources for exact information.
+ Each component has its own schema. Please consult the respective source datasets for the exact fields.
94
 
95
- However, in all components the document text is in `text` column, and unique document document id is in `nemo_id` column.
+ However, in all components the document text is in the `text` column, and the unique document id is in the `nemo_id` column.
96
 
97
- Our Zyda-1 and Dolma-CC versions also have two additional columns corresponding to prediction of Nvidia's quality model (https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.
+ Our Zyda-1 and Dolma-CC versions also have two additional columns with the predictions of Nvidia's quality model (https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.
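+
+ For illustration only (a rough sketch, assuming the `datasets` library is installed and using the DCLM config name from the download commands above), the shared columns can be inspected by streaming a component:
+
+ ```python
+ import datasets
+
+ # Stream one component so nothing has to be downloaded up front.
+ ds = datasets.load_dataset(
+     "Zyphra/Zyda-2",
+     name="dclm_crossdeduped",
+     split="train",
+     streaming=True,
+ )
+
+ doc = next(iter(ds))
+ print(doc["nemo_id"])     # unique document id
+ print(doc["text"][:200])  # document text
+ # The Zyda-1 and Dolma-CC components additionally carry the
+ # `quality_prob` and `quality_pred` columns described above.
+ ```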
98
 
@@ -114,7 +114,7 @@ FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-sc
114
 
115
  #### Personal and Sensitive Information
116
 
117
- As a language modelling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
+ As a language modeling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
118
 
119
  ## Bias, Risks, and Limitations
120
 
@@ -139,3 +139,4 @@ If you use our dataset to train a model, please cite us at:
139
  day = {15}
140
  }
141
  ```
+