BerenMillidge committed
Commit 2648f81
1 Parent(s): 7183e06

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -62,7 +62,7 @@ Models trained on Zyda significantly outperform models of the Pythia suite train
 
 Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
 
-According to our evaluations, Zyda is the most performant per-token open dataset available.
+According to our evaluations, Zyda is the most performant per-token open dataset available in its non-starcoder variant on language tasks, and ties with Fineweb otherwise.
 
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/fXaQAOBDJpoaAr1clfTel.png)
@@ -106,8 +106,10 @@ Dataset fields:
 - `filtering_features`: precomputed values of different features that were used for filtering (converted to json string)
 - `source_other`: metadata from the source dataset (converted to json string)
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/dQV8zNTNCx1xMMT-iupY6.png)
+<center>
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/dQV8zNTNCx1xMMT-iupY6.png)
+</center>
 
 
 ### Source Data
 
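The diff's field descriptions note that `filtering_features` and `source_other` are stored as JSON strings rather than nested objects, so they need to be decoded before use. A minimal sketch of that decoding step, using a hypothetical sample record (the field names come from the README above; the specific keys and values inside the JSON strings are illustrative assumptions, not the dataset's actual schema):

```python
import json

# Hypothetical record following the schema described in the README:
# metadata fields are JSON-encoded strings, not nested dicts.
record = {
    "text": "Example document text.",
    "filtering_features": '{"word_count": 4, "mean_word_length": 5.5}',
    "source_other": '{"source": "example", "url": "https://example.com"}',
}

# Decode the JSON-string metadata fields back into Python dicts.
filtering_features = json.loads(record["filtering_features"])
source_other = json.loads(record["source_other"])

print(filtering_features["word_count"])  # -> 4
print(source_other["source"])            # -> example
```

Storing metadata as JSON strings keeps the top-level schema flat and uniform across source datasets whose metadata fields differ, at the cost of a decode step on the consumer side.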