BerenMillidge committed
Commit
a5a3ff8
1 Parent(s): 28294c7

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -66,8 +66,10 @@ According to our evaluations, Zyda is the most performant per-token open dataset
 
  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/fXaQAOBDJpoaAr1clfTel.png) -->
 
+ ![image/png]()
+
  <center>
- <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/bQHcmodK-R8Ikb0UiI_QT.png" width="800" alt="Zyda performance across steps.">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="800" alt="Zyda performance across steps.">
  </center>
 
  These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
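
For context on the sentence above the plot: the sketch below shows one way an "aggregate score" over PIQA, WinoGrande, OpenBookQA, ARC-Easy, and ARC-Challenge could be computed per training checkpoint, assuming a plain unweighted mean of per-task accuracies. The exact aggregation used for the figure is not stated in this commit, and the task names and numbers below are placeholders, not results from the Zyda evaluation.

```python
# Hypothetical sketch (not part of this commit): aggregate a checkpoint's
# per-task accuracies into a single score, assuming an unweighted mean.
from statistics import mean

TASKS = ["piqa", "winogrande", "openbookqa", "arc_easy", "arc_challenge"]


def aggregate_score(per_task_accuracy: dict[str, float]) -> float:
    """Unweighted mean accuracy over the evaluation suite."""
    return mean(per_task_accuracy[task] for task in TASKS)


# Example: accuracies for one checkpoint (placeholder values only).
checkpoint_results = {
    "piqa": 0.71,
    "winogrande": 0.58,
    "openbookqa": 0.34,
    "arc_easy": 0.61,
    "arc_challenge": 0.31,
}

print(f"aggregate score: {aggregate_score(checkpoint_results):.3f}")
```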