Modalities: Text
Languages: English
Commit 28294c7 (1 parent: b76dad7), committed by qanthony-z

Update README.md

Files changed (1):
  1. README.md (+6 -6)
README.md CHANGED
@@ -54,15 +54,15 @@ configs:
 
 <!-- Provide a quick summary of the dataset. -->
 
- Zyda is a 1.3T language modelling dataset created by collecting open and high quality datasets and combining them and performing a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable and potentially better to the best openly available datasets available, due to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T scale, or in combination with Fineweb or Dolma for multi-trillion token training.
+ Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
 
- An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
+ An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
 
- Models trained on Zyda significantly outperform models of the Pythia suite trained on the pile on parameter-matched models across 300B tokens.
+ Models trained on Zyda significantly outperform identical models of the Pythia suite trained on the [Pile](https://arxiv.org/abs/2101.00027) for 300B tokens.
 
 Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
 
- According to our evaluations, Zyda is the most performant per-token open dataset available in its non-starcoder variant on language tasks and tying with fineweb otherwise.
+ According to our evaluations, the non-StarCoder variant of Zyda is the most performant per-token open dataset available on language tasks, and the StarCoder variant ties with Fineweb.
 
 <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/fXaQAOBDJpoaAr1clfTel.png) -->
 
@@ -70,7 +70,7 @@ According to our evaluations, Zyda is the most performant per-token open dataset
 <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/bQHcmodK-R8Ikb0UiI_QT.png" width="800" alt="Zyda performance across steps.">
 </center>
 
- These results are an aggregate scores of classic language modelling evaluations (piqa, winogrande, openbookqa, arc-easy, arc-challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
+ These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across training steps for a 1.4B model trained on 50B tokens of each dataset.
 
 
 ## How to download
@@ -148,7 +148,7 @@ For the filtering stage, we utilized a set of hand-crafted and tuned filters der
 
 For the deduplication stage, we used minhash approximate deduplication. We deduplicated on 13-grams and used a minhash signature size of 128 and filtered out documents above a Jaccard similarity of 0.4.
 
- For full details on our data processing see the technical report.
+ For full details on our data processing, see the [Zyda technical report] (TODO LINK).
 
 
 #### Personal and Sensitive Information
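The aggregate-score evaluation described in the diff (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge on a 1.4B model) could be reproduced roughly along the lines below with the lm-evaluation-harness. This is a minimal sketch, not the authors' evaluation code: the checkpoint name is a stand-in and the metric key names depend on the harness version.

```python
# Hypothetical sketch: averaging zero-shot accuracies with lm-evaluation-harness.
# The checkpoint name and metric keys are assumptions, not taken from the card.
import lm_eval

TASKS = ["piqa", "winogrande", "openbookqa", "arc_easy", "arc_challenge"]

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",  # stand-in 1.4B checkpoint
    tasks=TASKS,
    batch_size=8,
)

# Each task reports accuracy; take a plain mean as the aggregate score.
scores = []
for task in TASKS:
    metrics = results["results"][task]
    # Metric key names vary by harness version ("acc,none" / "acc_norm,none" in v0.4.x).
    scores.append(metrics.get("acc_norm,none", metrics.get("acc,none")))
print("aggregate score:", sum(scores) / len(scores))
```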
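The deduplication parameters quoted in the diff (13-gram shingles, minhash signature size 128, Jaccard threshold 0.4) map onto standard MinHash-LSH tooling. The sketch below uses the datasketch library purely as an illustration of those parameters; it is not the pipeline the Zyda authors ran, and the word-level shingling and in-memory index are assumptions.

```python
# Illustrative only: approximate deduplication with the stated Zyda parameters
# (word 13-grams, 128 minhash permutations, Jaccard threshold 0.4) using datasketch.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128      # minhash signature size
NGRAM = 13          # shingle length in words (word-level shingling is an assumption)
THRESHOLD = 0.4     # documents above this estimated Jaccard similarity are dropped


def minhash_of(text: str) -> MinHash:
    """Build a minhash signature over the document's word 13-grams."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - NGRAM + 1, 1)):
        shingle = " ".join(words[i:i + NGRAM])
        m.update(shingle.encode("utf-8"))
    return m


def deduplicate(docs: dict[str, str]) -> list[str]:
    """Return the ids of documents kept after near-duplicate filtering."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for doc_id, text in docs.items():
        m = minhash_of(text)
        if lsh.query(m):      # some already-kept doc exceeds the similarity threshold
            continue          # treat as a near-duplicate and drop it
        lsh.insert(doc_id, m)
        kept.append(doc_id)
    return kept
```

At the 1.3T-token scale this step would be run in a distributed fashion; the snippet only shows how the stated parameter choices fit together.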