---
license: apache-2.0
datasets:
- gair-prox/RedPajama-pro
language:
- en
tags:
- llama
---
# RedPJ-ProX-0.3B
<p align="center">
<img src="prox-teaser.png">
</p>
[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/gair-prox/RedPJ-ProX-0.3B) | [Data](https://huggingface.co/datasets/gair-prox/RedPajama-pro) | [Code](https://github.com/GAIR-NLP/program-every-example)
**RedPJ-ProX-0.3B** is a tiny language model trained on [RedPajama-V2-pro](https://huggingface.co/datasets/gair-prox/RedPajama-pro) for 25B tokens.
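The model can be loaded with the standard `transformers` API (a minimal sketch; the prompt and generation settings below are illustrative, not part of the original release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/RedPJ-ProX-0.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for an example prompt
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```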
## Evaluations
ProX models are evaluated on 10 language model benchmarks in a zero-shot setting.
| | ARC-c | ARC-e | CSQA | HellaS | MMLU | OBQA | PiQA | SIQA | WinoG | SciQ | AVG |
|-----------------------|-------|-------|-------|-----------|-------|-------|-------|-------|-------|-------|------|
| raw | 22.6 | 41.9 | 29.7 | 32.8 | 26.2 | 26.4 | 62.2 | 39.3 | 51.3 | 63.3 | 39.6 |
| ours | 25.9 | 47.5 | 29.2 | 36.7 | 28.1 | 30.2 | 64.6 | 38.0 | 51.7 | 71.4 | 42.3 |
## Citation
```
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
journal={arXiv preprint arXiv:2409.17115},
year={2024}
}
```