---
license: odc-by
pretty_name: Zyda-2
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*/*
  - config_name: dclm_crossdeduped
    data_files:
      - split: train
        path: data/dclm_crossdeduped/*/*
  - config_name: zyda_crossdeduped-filtered
    data_files:
      - split: train
        path: data/zyda_crossdeduped-filtered/*/*
  - config_name: dolma-cc_crossdeduped-filtered
    data_files:
      - split: train
        path: data/dolma-cc_crossdeduped-filtered/*
  - config_name: fwe3
    data_files:
      - split: train
        path: data/fwe3/*/*
---

# Zyda-2

<!-- Provide a quick summary of the dataset. -->

Zyda-2 is a 5 trillion token language modeling dataset created by collecting high-quality open datasets and refining them through cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.

To construct Zyda-2, we took the best open-source datasets available: [Zyda](https://huggingface.co/datasets/Zyphra/Zyda), [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0), and [Dolma](https://huggingface.co/datasets/allenai/dolma). Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.

An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](https://huggingface.co/Zyphra/Zamba2-2.7B) [models](https://huggingface.co/Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.

According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural-language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata).


<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65455aca468722e935103b17/-nxHBcU38QJ-MNdKXPiYS.png" width="600" alt="Zyda-2 evaluation scores">
</center>


For more information, please see our [technical blog](https://www.zyphra.com/post/building-zyda-2).

## How to download
Since we preserved the schemas of the original component datasets, attempting to download the whole dataset with `datasets.load_dataset()` may fail while generating the split.

To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading each component separately.

Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
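
Equivalently, the repository can be cloned from Python via `huggingface_hub`. This is a minimal sketch; the `local_dir` value is an arbitrary choice, and note that the full download is roughly 13 TB per the component table below:

```python
from huggingface_hub import snapshot_download

# Fetch the full dataset repository; pass allow_patterns (e.g. "data/fwe3/*")
# to restrict the download to a single component.
snapshot_download(
    repo_id="Zyphra/Zyda-2",
    repo_type="dataset",
    local_dir="Zyda-2",
)
```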

Commands to download individual components:
 - DCLM:  `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
 - Zyda:  `ds = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")`
 - Dolma-CC:  `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")`
 - Fineweb-Edu:  `ds = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")`
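
To fetch all four components through `datasets` in one go, a simple loop over the config names works. A sketch (streaming here is our choice, to avoid materializing several terabytes locally):

```python
import datasets

COMPONENTS = [
    "dclm_crossdeduped",
    "zyda_crossdeduped-filtered",
    "dolma-cc_crossdeduped-filtered",
    "fwe3",
]

# Schemas differ between components, so keep them as separate
# datasets rather than concatenating them into one.
zyda2 = {
    name: datasets.load_dataset(
        "Zyphra/Zyda-2", name=name, split="train", streaming=True
    )
    for name in COMPONENTS
}
```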

This repository contains the raw results of cross-deduplication and filtering. To achieve the best performance, the components need to be appropriately weighted during training.
We found the following weights optimal (in the sense of relative weights of each component in the resulting dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
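
One way to apply these weights is `datasets.interleave_datasets`, which samples from each component using the normalized weights as probabilities. This is a sketch under the assumption that the weights above are relative mixture proportions; the `seed` and the restriction to the shared columns are our choices:

```python
import datasets

WEIGHTS = {
    "dclm_crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda_crossdeduped-filtered": 0.16,
    "dolma-cc_crossdeduped-filtered": 0.24,
}

streams = [
    # Keep only the columns shared by all components so the
    # interleaved dataset has a single consistent schema.
    datasets.load_dataset(
        "Zyphra/Zyda-2", name=name, split="train", streaming=True
    ).select_columns(["nemo_id", "text"])
    for name in WEIGHTS
]

# Normalize the relative weights into sampling probabilities.
total = sum(WEIGHTS.values())
probabilities = [w / total for w in WEIGHTS.values()]

mixed = datasets.interleave_datasets(streams, probabilities=probabilities, seed=42)
```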


## Breakdown by component

| Component | Download size (parquet, GBs) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| dclm_crossdeduped | 8,469.4 | 2,590.5 | 3,348.9 |
| zyda_crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
| dolma-cc_crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
| fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
| Total | 13,080.5 | 4,562.8 | 5,070.2 |

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons License


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each component has its own schema; please consult the respective source for exact details.

However, in all components the document text is in the `text` column, and the unique document id is in the `nemo_id` column.

Our Zyda-1 and Dolma-CC versions also have two additional columns holding the predictions of NVIDIA's [quality classifier](https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.
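
For example, to peek at one Dolma-CC document and its quality columns without downloading the whole component (a sketch; streaming is our choice, and exact value types may vary):

```python
from datasets import load_dataset

ds = load_dataset(
    "Zyphra/Zyda-2",
    name="dolma-cc_crossdeduped-filtered",
    split="train",
    streaming=True,
)

doc = next(iter(ds))
print(doc["nemo_id"])        # unique document id
print(doc["quality_pred"])   # label predicted by the quality classifier
print(doc["quality_prob"])   # probability associated with the prediction
print(doc["text"][:200])     # first 200 characters of the document text
```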

### Source Data

Zyda-2 comprises four high-quality open-source datasets:

 - Zyda-1: https://huggingface.co/datasets/Zyphra/Zyda
 - Dolma-CC v1.7: https://huggingface.co/datasets/allenai/dolma
 - DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
 - FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="Zyda-2 dataset composition">
</center>

#### Personal and Sensitive Information

As a language modeling dataset built from web data, Zyda-2 likely contains PII that has not been filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

As a dataset built largely from open web scrapes, Zyda-2 likely contains biased and toxic content.

## Licensing Information

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.

## Citation

If you use our dataset to train a model, please cite us as:

```
@misc{zyphra_nvidia_2024,
    author = {Yury Tokpanov and Paolo Glorioso and Ayush Dattagupta and Vibhu Jawa and Ryan Wolf and Vikranth Jeyakumar and Arham Mehta and Quentin Anthony and Beren Millidge},
    title = {Building {Zyda-2}, a 5 {Trillion} {Token} {High-Quality} {Dataset}, with {NVIDIA} {NeMo} {Curator}},
    url = {https://www.zyphra.com/post/building-zyda-2},
    publisher = {Zyphra},
    year = {2024},
    month = {October},
    day = {15}
}
```