---
license: odc-by
pretty_name: Zyda-2
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*/*
- config_name: dclm_crossdeduped
  data_files:
  - split: train
    path: data/dclm_crossdeduped/*/*
- config_name: zyda_crossdeduped-filtered
  data_files:
  - split: train
    path: data/zyda_crossdeduped-filtered/*/*
- config_name: dolma-cc_crossdeduped-filtered
  data_files:
  - split: train
    path: data/dolma-cc_crossdeduped-filtered/*
- config_name: fwe3
  data_files:
  - split: train
    path: data/fwe3/*/*
---
# Zyda-2
Zyda-2 is a 5 trillion token language modeling dataset created by collecting open, high-quality datasets and combining them through cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
To construct Zyda-2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.
An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models, which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as Starcoder.
For more information, please see our technical blog.
## How to download
Since we preserved the schemas of the original component datasets, attempting to download the whole dataset with `datasets.load_dataset()` may fail while generating a split.

To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.
Example command to clone the repository using `huggingface-cli`:

```
huggingface-cli download Zyphra/Zyda-2 --repo-type dataset
```
Commands to download individual components:

- DCLM: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
- Zyda: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")`
- Dolma-CC: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")`
- Fineweb-Edu: `ds = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")`
In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, appropriate per-component weights must be applied during training. We found the following weights optimal (expressed as sampling weights in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
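One (hypothetical) way to read these values is as multipliers on each component's native token count, with sampling proportions obtained by normalizing. A minimal sketch under that assumption, using the per-component gpt-neox token counts (in billions) from the breakdown below:

```python
# Hypothetical sketch: turn the per-component weights into sampling
# proportions, assuming each weight multiplies that component's native
# token count (billions of gpt-neox tokens, from the breakdown table).
weights = {"dclm": 4.0, "fwe3": 4.0, "zyda": 0.16, "dolma-cc": 0.24}
tokens_b = {"dclm": 3348.9, "fwe3": 1319.2, "zyda": 163.6, "dolma-cc": 238.4}

weighted = {name: weights[name] * tokens_b[name] for name in weights}
total = sum(weighted.values())
proportions = {name: w / total for name, w in weighted.items()}

for name, p in sorted(proportions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.1%}")
```

Under this reading, DCLM and FWE3 dominate the resulting mixture, with Zyda and Dolma-CC contributing small fractions.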
## Breakdown by component
| Component | Download size (parquet, GB) | Documents (millions) | gpt-neox tokens (billions) |
|---|---|---|---|
| dclm-crossdeduped | 8,469.4 | 2,590.5 | 3,348.942 |
| zyda-crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
| dolma_cc-crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
| fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
| **Total** | 13,080.5 | 4,562.8 | 5,070.2 |
## Dataset Description
- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons Attribution License (ODC-By)
## Dataset Structure
Each component has its own schema; please consult the respective sources for exact details. However, in all components the document text is in the `text` column, and the unique document id is in the `nemo_id` column.

Our Zyda-1 and Dolma-CC versions also have two additional columns containing the predictions of Nvidia's quality classifier model (https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.
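As an illustration, these classifier columns can be used to filter documents after download. A minimal sketch, modeling records as plain dicts with the columns described above; the `"High"` label and the 0.5 probability threshold are assumptions for illustration, not values prescribed by Zyda-2:

```python
# Hypothetical sketch: keep only documents whose quality-classifier
# columns mark them as high quality. Records are modeled as plain dicts
# with the columns described above; the "High" label and the 0.5
# threshold are illustrative assumptions.
records = [
    {"nemo_id": "doc-0", "text": "An educational article ...",
     "quality_pred": "High", "quality_prob": 0.97},
    {"nemo_id": "doc-1", "text": "Low-effort page ...",
     "quality_pred": "Low", "quality_prob": 0.88},
]

def is_high_quality(rec, min_prob=0.5):
    """True if the classifier labeled the doc high quality with enough confidence."""
    return rec["quality_pred"] == "High" and rec["quality_prob"] >= min_prob

kept = [rec["nemo_id"] for rec in records if is_high_quality(rec)]
print(kept)  # only doc-0 survives the filter
```

The same predicate could be passed to `datasets.Dataset.filter()` to subset a downloaded component.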
## Source Data
Zyda-2 comprises four high-quality open-source datasets:

- Zyda-1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-CC v1.7: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2
## Personal and Sensitive Information
As a language modeling dataset, it likely contains PII which has not been filtered out of the component datasets and which may have been missed by our own filters.
## Bias, Risks, and Limitations
As a dataset comprised of open web scrapes, it is likely that it contains biased and toxic content.
## Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation

If you use our dataset to train a model, please cite us:
```
@misc{zyphra_nvidia_2024,
	author = {Yury Tokpanov and Paolo Glorioso and Ayush Dattagupta and Vibhu Jawa and Ryan Wolf and Vikranth Jeyakumar and Arham Mehta and Quentin Anthony and Beren Millidge},
	title = {Building {Zyda-2}, a 5 {Trillion} {Token} {High-Quality} {Dataset}, with {NVIDIA} {NeMo} {Curator}},
	url = {https://www.zyphra.com/post/building-zyda-2},
	publisher = {Zyphra},
	year = {2024},
	month = {October},
	day = {15}
}
```