Commit b76dad7 by BerenMillidge (1 parent: c46ef38)

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED

@@ -67,7 +67,7 @@ According to our evaluations, Zyda is the most performant per-token open dataset
 <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/fXaQAOBDJpoaAr1clfTel.png) -->
 
 <center>
-<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/fXaQAOBDJpoaAr1clfTel.png" width="800" alt="Zyda performance across steps.">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/bQHcmodK-R8Ikb0UiI_QT.png" width="800" alt="Zyda performance across steps.">
 </center>
 
 These results are an aggregate scores of classic language modelling evaluations (piqa, winogrande, openbookqa, arc-easy, arc-challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
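For reference, the "aggregate score" in the README text presumably refers to an unweighted mean of per-task accuracies at each training checkpoint. The sketch below illustrates that aggregation under this assumption; the task list is taken from the README, but the step keys and accuracy values are hypothetical placeholders, not actual Zyda results.

```python
# Minimal sketch (assumption: the aggregate is an unweighted mean of
# per-task accuracies). All numbers below are placeholders, not real results.
TASKS = ["piqa", "winogrande", "openbookqa", "arc-easy", "arc-challenge"]

# Hypothetical per-task accuracies keyed by training step (checkpoint).
scores_at_step = {
    10_000: {"piqa": 0.61, "winogrande": 0.52, "openbookqa": 0.28,
             "arc-easy": 0.45, "arc-challenge": 0.24},
    50_000: {"piqa": 0.66, "winogrande": 0.55, "openbookqa": 0.31,
             "arc-easy": 0.51, "arc-challenge": 0.27},
}

def aggregate(task_scores: dict) -> float:
    """Unweighted mean accuracy across the five evaluation tasks."""
    return sum(task_scores[t] for t in TASKS) / len(TASKS)

# One aggregate score per checkpoint, as plotted in the figure above.
for step in sorted(scores_at_step):
    print(f"step {step:>6}: aggregate = {aggregate(scores_at_step[step]):.4f}")
```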