---
license: odc-by
---

# Zynemo-5T

Zynemo is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zynemo comprises diverse sources of web data, highly educational content, math, code, and scientific papers.

To construct Zynemo, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zynemo significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Thanks to our post-processing pipeline of deduplication, filtering, and weighting, Zynemo outperforms all of its constituent datasets in resulting model quality.

An early version of Zynemo was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models, which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zynemo as a pretraining dataset.

According to our evaluations, Zynemo is the most performant per-token open dataset available. Zynemo excels at educational and natural-language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as Starcoder, for example as sketched below.
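
As a rough illustration of such a mix, the sketch below interleaves this dataset with StarCoder data using the `datasets` library. The repository ids (`Zyphra/Zyda-2`, `bigcode/starcoderdata`), the Python-only subset, and the 90/10 ratio are illustrative assumptions, not an official recipe; note that `bigcode/starcoderdata` is gated and requires accepting its terms on the Hub.

```python
from datasets import load_dataset, interleave_datasets

# NOTE: repository ids and the 90/10 ratio are assumptions for illustration.
text = load_dataset("Zyphra/Zyda-2", split="train", streaming=True)
code = load_dataset("bigcode/starcoderdata", data_dir="python",
                    split="train", streaming=True)

# Align the schemas: keep a single `text` column in both streams.
text = text.select_columns(["text"])
code = code.rename_column("content", "text").select_columns(["text"])

# Sample roughly 90% natural language and 10% code.
mixed = interleave_datasets([text, code], probabilities=[0.9, 0.1], seed=42)
```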

// TODO Ablation scores key plots

For more information, please see our technical blog (-/TODO LINK).

## How to download

// TODO YURY
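
Until the official instructions are filled in, here is a minimal sketch using the Hugging Face `datasets` library. The repository id `Zyphra/Zyda-2` is an assumption based on this repo's name; check the Hub page for the canonical id:

```python
from datasets import load_dataset

# Streaming avoids downloading the full ~5T-token dataset up front.
ds = load_dataset("Zyphra/Zyda-2", split="train", streaming=True)

for sample in ds:
    print(sample["text"][:200])  # first 200 characters of one document
    break
```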

## Breakdown by component

// TODO YURY

## Dataset Description

- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons Attribution License (ODC-By)

## Dataset Structure

// TODO IS THIS CORRECT YURY?

Dataset fields:

- `text`: the actual text used for training
- `source`: the component dataset the text comes from
- `filtering_features`: precomputed values of the features used for filtering (serialized as a JSON string; see the decoding sketch below)
- `source_other`: metadata from the source dataset (serialized as a JSON string)
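
Since `filtering_features` and `source_other` are stored as JSON strings, decode them before use. A minimal sketch (the repository id is an assumption, as above):

```python
import json
from datasets import load_dataset

ds = load_dataset("Zyphra/Zyda-2", split="train", streaming=True)
sample = next(iter(ds))

# Both fields are JSON-encoded strings; parse them into dicts.
filtering_features = json.loads(sample["filtering_features"])
source_metadata = json.loads(sample["source_other"])

print(sample["source"], sorted(filtering_features))
```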

## Source Data

Zynemo is composed of four high-quality open-source datasets:

- Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-1.7-cc: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu

// Pie chart of composition -- YURY!

## Personal and Sensitive Information

As a language modeling dataset, Zynemo likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

As a dataset built from open web scrapes, Zynemo likely contains biased and toxic content.

## Licensing Information

We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.

## Citation

If you use our dataset to train a model, please cite us:

@misc{tokpanov2024zyda,
      title={Zyda: A 1.3T Dataset for Open Language Modeling}, 
      author={Yury Tokpanov and Beren Millidge and Paolo Glorioso and Jonathan Pilault and Adam Ibrahim and James Whittington and Quentin Anthony},
      year={2024},
      eprint={2406.01981},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}