---
license: apache-2.0
datasets:
- gair-prox/RedPajama-pro
language:
- en
tags:
- llama
---
# RedPJ-ProX-0.3B

RedPJ-ProX-0.3B is a tiny language model trained on RedPajama-V2-pro for 25B tokens.
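If the model is published on the Hugging Face Hub, it can be loaded with the standard `transformers` API. A minimal sketch (the repo id `gair-prox/RedPJ-ProX-0.3B` below is an assumption based on this card's name):

```python
# Minimal text-generation sketch using the standard transformers API.
# MODEL_ID is an assumed Hub repo id, inferred from this model card's name.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gair-prox/RedPJ-ProX-0.3B"  # assumption, not confirmed by this card

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and tokenizer, then greedily decode a continuation."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The quick brown fox"))
```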
## Evaluations

ProX models are evaluated on 10 language model benchmarks in a zero-shot setting.
| | ARC-c | ARC-e | CSQA | HellaS | MMLU | OBQA | PiQA | SIQA | WinoG | SciQ | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| raw | 22.6 | 41.9 | 29.7 | 32.8 | 26.2 | 26.4 | 62.2 | 39.3 | 51.3 | 63.3 | 39.6 |
| ours | 25.9 | 47.5 | 29.2 | 36.7 | 28.1 | 30.2 | 64.6 | 38.0 | 51.7 | 71.4 | 42.3 |
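The AVG column appears to be the unweighted mean of the ten benchmark scores, which can be checked directly:

```python
# Check that the AVG column matches the unweighted mean of the ten scores.
raw = [22.6, 41.9, 29.7, 32.8, 26.2, 26.4, 62.2, 39.3, 51.3, 63.3]
ours = [25.9, 47.5, 29.2, 36.7, 28.1, 30.2, 64.6, 38.0, 51.7, 71.4]

raw_avg = round(sum(raw) / len(raw), 1)     # 39.6, as reported
ours_avg = round(sum(ours) / len(ours), 1)  # 42.3, as reported
print(raw_avg, ours_avg)
```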
## Citation

```bibtex
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
```