Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


rho-math-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-7b-v0.1/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rho-math-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [rho-math-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [rho-math-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [rho-math-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [rho-math-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [rho-math-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [rho-math-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [rho-math-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [rho-math-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [rho-math-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [rho-math-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [rho-math-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [rho-math-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [rho-math-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [rho-math-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [rho-math-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [rho-math-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [rho-math-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [rho-math-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [rho-math-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [rho-math-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf/blob/main/rho-math-7b-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
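
As a usage sketch (not part of the original upload), any file from the table above can be fetched and run locally; the example below uses `huggingface_hub` and `llama-cpp-python`, and the Q4_K_M file, prompt, and context size are illustrative choices:

```python
# Sketch: download one quant from this repo and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is an illustrative pick; any file in the table works the same way.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/microsoft_-_rho-math-7b-v0.1-gguf",
    filename="rho-math-7b-v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Question: Compute 12 * 17.\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```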


Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---


<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>


<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 arXiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>

<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving baseline performance 5-10x faster.</em>
</p>


## 🔥 News

- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released on 🤗 Hugging Face!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
  - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM to achieve over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.



## 💡 Introduction

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean, useful tokens that align with the desired distribution.


### Selective Language Modeling (SLM)

<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies the loss to useful, clean tokens.</em>
</p>

<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
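
To make the three steps concrete, here is a minimal sketch of the token-selection objective. This is an illustrative reimplementation, not the authors' released code; the `slm_loss` name, the `keep_ratio` value, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Selective LM loss: backpropagate only through tokens whose excess
    loss (current CE minus reference CE) is highest. keep_ratio is an
    assumed hyperparameter, not a value taken from the paper."""
    vocab = logits.size(-1)
    # Per-token cross-entropy under the model being trained.
    ce = F.cross_entropy(logits.reshape(-1, vocab),
                         labels.reshape(-1), reduction="none")
    # Per-token cross-entropy under the frozen reference model (Step 2).
    with torch.no_grad():
        ref_ce = F.cross_entropy(ref_logits.reshape(-1, vocab),
                                 labels.reshape(-1), reduction="none")
    # Excess loss: highest where the reference finds the token easy but
    # the current model does not, i.e. the useful, learnable tokens.
    excess = ce - ref_ce
    k = max(1, int(keep_ratio * excess.numel()))
    # Step 3: apply the loss only to the top-k tokens by excess loss.
    _, keep = torch.topk(excess, k)
    return ce[keep].mean()
```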

<!-- results: -->

### Evaluation Results

Base models (few-shot CoT):

| **Model** | **Size** | **Data** | **Uniq. Tokens** | **Train Tokens** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:----------------:|:----------------:|:---------:|:--------:|:-------------:|:-------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | - | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |


[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):

| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|---------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |


## 🚀 Quick Start


### Evaluation

```sh
# Clone into rho-1/ so the paths below resolve.
git clone git@github.com:microsoft/rho.git rho-1
cd rho-1/math-evaluation-harness
```

Base model few-shot evaluation:

```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```

SFT model (code-interpreter) evaluation:

```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```

Our reproduced outputs are provided in `rho-1/outputs.zip`.
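
For plain generation outside the evaluation harness, a minimal sketch with the standard `transformers` API; the prompt format and generation settings are illustrative assumptions, not taken from the original README:

```python
# Sketch: few-shot-style generation with the base model via transformers.
# Assumes `pip install transformers torch accelerate`; prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/rho-math-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```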



## ☕️ Citation

If you find this repository helpful, please consider citing our paper:

```bibtex
@misc{lin2024rho1,
      title={Rho-1: Not All Tokens Are What You Need},
      author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
      year={2024},
      eprint={2404.07965},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```