Datasets comparison
Hi, do you have a comparison of the performance of filtered Dolma-CC and Zyda-1 against DCLM and FineWeb-Edu on the same setup?
"However, we found that removing Zyda-1 and Dolma-CC entirely worsened performance, demonstrating that although their total token count is small, adding these datasets brings much-needed diversity of sources to the Zyda-2 dataset."
Or could you share the per-benchmark evaluations from this experiment, to see which metrics these datasets impact?
Thanks!
Hi,
You can find this info in our blog post: https://www.zyphra.com/post/building-zyda-2. Figure 1 shows the comparison on the aggregate score, while Table 1 shows the individual eval tasks.
Best,
Yury
Ah, I just realized we haven't actually released those. In our blog, the DCLM+FWE mix has equal weights for its components (the same weighting we used for Zyda-2), but DCLM in that mix is not cross-deduplicated against FWE-2 (in contrast to Zyda-2). So it's only a proxy for what you're asking.
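For illustration, here's a rough sketch of how such an equal-weight mix can be assembled with the HF datasets library; the dataset paths and parameters below are just stand-ins for the example, not our actual pipeline:

```python
# Rough sketch of an equal-weight two-component mix, assuming the
# Hugging Face `datasets` library. Dataset paths are illustrative
# stand-ins, not the exact sources behind the blog-post numbers.
from datasets import load_dataset, interleave_datasets

# Stream both components so nothing has to fit in memory; keep only
# the shared "text" column so the schemas are compatible.
dclm = load_dataset(
    "mlfoundations/dclm-baseline-1.0", split="train", streaming=True
).select_columns(["text"])
fwe = load_dataset(
    "HuggingFaceFW/fineweb-edu", split="train", streaming=True
).select_columns(["text"])

# Equal sampling weights for the two components. Note this mix is not
# cross-deduplicated (unlike Zyda-2), so it's only a proxy comparison.
mix = interleave_datasets([dclm, fwe], probabilities=[0.5, 0.5], seed=42)

for example in mix.take(3):  # peek at a few mixed samples
    print(example["text"][:80])
```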
Sorry, I didn't notice you were asking about the filtered Zyda-1 and Dolma-CC (I focused on Zyda-2 without the Zyda-1 and Dolma-CC components). In the table above they are indeed the original datasets.
The table contains results from the annealing experiments. We didn't do annealing on filtered Zyda-1 and Dolma-CC separately. For those we trained a 1.4B-parameter transformer for 50B tokens from scratch (the setup we used initially, before switching to annealing). We do have comparisons with DCLM and FWE on this setup, and both filtered Zyda-1 and Dolma-CC perform worse than DCLM and FWE.
The numbers from the "from scratch" experiments wouldn't be comparable with the numbers from the annealing experiments, though. For example, a 1.4B transformer trained for 50B tokens is not enough to lift MMLU above the random baseline, so we skipped that eval for this setup; this was one of the reasons we switched to annealing. I could dig up the results of the "from scratch" experiments if you're interested.
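If it helps, this is roughly what an annealing-style run looks like in spirit: continue training a pretrained checkpoint on the candidate data while decaying the learning rate to zero. A minimal generic-PyTorch sketch with illustrative placeholders, not our actual training configuration:

```python
# Minimal sketch of annealing a pretrained checkpoint on a candidate
# dataset: continue training while linearly decaying the learning rate
# to zero. Model, LR, and step count are illustrative placeholders.
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(8, 8)  # stand-in for the pretrained checkpoint
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

total_steps = 1000
# Linear decay from the checkpoint's LR down to zero over the anneal.
scheduler = LambdaLR(optimizer, lambda step: max(0.0, 1.0 - step / total_steps))

for step in range(total_steps):
    # ...forward/backward on a batch from the candidate dataset here...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```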