qanthony-z committed
Commit 58d2207
1 Parent(s): aae9101

Zyda2 --> Zynemo

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -2,17 +2,17 @@
 license: odc-by
 ---
 
-# Zyda2-5T
+# Zynemo-5T
 
 <!-- Provide a quick summary of the dataset. -->
 
-Zyda2 is a 5 trillion token language modeling dataset created by collecting open, high quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
+Zynemo is a 5 trillion token language modeling dataset created by collecting open, high quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zynemo comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
 
-To construct Zyda2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda2 outperforms all of its constituent datasets in resulting model quality.
+To construct Zynemo, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zynemo significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zynemo outperforms all of its constituent datasets in resulting model quality.
 
-An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.
+An early version of Zynemo was used as the primary dataset for phase 1 pretraining of our Zamba2 series [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zynemo as a pretraining dataset.
 
-According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
+According to our evaluations, Zynemo is the most performant per-token open dataset available. Zynemo excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
 
 
 // TODO Ablation scores key plots
@@ -51,7 +51,7 @@ Dataset fields:
 
 ### Source Data
 
-Zyda2 comprises four high quality open-source datasets:
+Zynemo comprises four high quality open-source datasets:
 
 Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
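The README above describes Zynemo's construction as combining its source datasets with cross-deduplication and model-based quality filtering. As a rough illustration of the cross-deduplication step, here is a minimal sketch using MinHash LSH via the `datasketch` library; the library choice, word-level shingling, and the 0.8 Jaccard threshold are assumptions for illustration, not the actual Zynemo pipeline configuration.

```python
# Minimal cross-deduplication sketch with MinHash LSH. The `datasketch`
# library, word-level shingles, and the 0.8 Jaccard threshold are
# illustrative assumptions, not the actual Zynemo pipeline settings.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # hash permutations per signature


def signature(text: str) -> MinHash:
    """Build a MinHash signature over a document's unique words."""
    m = MinHash(num_perm=NUM_PERM)
    for word in set(text.lower().split()):
        m.update(word.encode("utf8"))
    return m


def cross_deduplicate(corpora: dict, threshold: float = 0.8) -> list:
    """Keep a document only if no document already indexed from any
    source corpus exceeds the Jaccard similarity threshold."""
    lsh = MinHashLSH(threshold=threshold, num_perm=NUM_PERM)
    kept = []
    for source, docs in corpora.items():
        for i, doc in enumerate(docs):
            sig = signature(doc)
            if lsh.query(sig):  # a near-duplicate is already indexed
                continue
            lsh.insert(f"{source}/{i}", sig)
            kept.append((source, doc))
    return kept


corpora = {
    "fineweb": ["the quick brown fox jumps over the lazy dog"],
    "dclm": ["the quick brown fox jumps over the lazy dog again"],  # near-dup, dropped
    "dolma": ["an entirely different document about mathematics"],
}
for source, doc in cross_deduplicate(corpora):
    print(source, "->", doc)
```

Querying the LSH index before inserting makes the near-duplicate check roughly constant time per document, which is what keeps fuzzy deduplication tractable at the multi-trillion-token scale the README describes.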
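The README also recommends mixing the dataset with a pure code dataset such as Starcoder for code performance. A minimal sketch with the Hugging Face `datasets` library follows; the repo ids, column names, and the 80/20 sampling ratio are illustrative assumptions, not official guidance.

```python
# Minimal text/code mixing sketch with the Hugging Face `datasets` library.
# Repo ids, column names, and the 80/20 ratio are assumptions for illustration.
from datasets import load_dataset, interleave_datasets

# Stream both datasets so nothing is downloaded up front.
text = load_dataset("Zyphra/Zynemo", split="train", streaming=True)          # assumed repo id
code = load_dataset("bigcode/starcoderdata", split="train", streaming=True)  # assumed repo id

# Align both streams on a single "text" column (column names are assumptions).
text = text.select_columns(["text"])
code = code.rename_column("content", "text").select_columns(["text"])

# Sample roughly 80% natural language and 20% code.
mixed = interleave_datasets([text, code], probabilities=[0.8, 0.2], seed=42)

for example in mixed.take(3):
    print(example["text"][:80])
```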