Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. Paper, arXiv:2402.00159, published Jan 31.
Paloma Collection. Dataset and baseline models for Paloma, a benchmark of language model fit to 585 textual domains. Updated Feb 1.