Update README.md
README.md
CHANGED
@@ -7,15 +7,15 @@ license: apache-2.0

- 📖 [Blog]()
- 🤗 [PRIME Collection](https://huggingface.co/PRIME-RL)
- 🤗 [RL Data](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data)

## Introduction

Eurus-2-7B-PRIME is trained using the **PRIME** (**P**rocess **R**einforcement through **IM**plicit r**E**ward) method, an open-source solution for online reinforcement learning (RL) with process rewards, to advance the reasoning abilities of language models beyond imitation or distillation. It starts from [Eurus-2-7B-SFT](https://huggingface.co/PRIME-RL/Eurus-2-7B-SFT) and trains on [Eurus-2-RL-Data](https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data).

<img src="./figures/prm.gif" alt="prm" style="zoom: 33%;" />

As shown in the animation above, in PRIME, the policy model and the PRM are both initialized with the SFT model. For each RL iteration, the policy model first generates rollouts. Then, the [implicit PRM](https://arxiv.org/abs/2412.01981) and an outcome verifier score the rollouts, and the implicit PRM is updated on the rollouts with the outcome reward. Finally, the outcome reward \\(r_o\\) and the process reward \\(r_p\\) are combined and used to update the policy model.
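
At a high level, one iteration can be sketched as follows. This is a minimal illustration only, not the actual PRIME code: the `policy`, `implicit_prm`, and `verifier` interfaces and the simple additive reward combination are assumptions made for clarity.

```python
# Minimal sketch of a single PRIME iteration (illustrative only; the
# policy / implicit_prm / verifier objects are hypothetical placeholders).

def prime_iteration(policy, implicit_prm, verifier, prompts, num_samples=4):
    scored_rollouts = []
    for prompt in prompts:
        # 1. The policy model generates several rollouts per prompt.
        responses = policy.generate(prompt, n=num_samples)
        for response in responses:
            # 2. The outcome verifier checks the final answer, and the
            #    implicit PRM scores the intermediate steps.
            r_o = verifier.check(prompt, response)             # outcome reward
            r_p = implicit_prm.score_steps(prompt, response)   # per-step rewards

            # 3. The implicit PRM is updated online on the rollout,
            #    supervised only by the outcome reward (no step-level labels).
            implicit_prm.update(prompt, response, label=r_o)

            # 4. Outcome and process rewards are combined into the signal
            #    used for the policy update (a simple sum is assumed here).
            scored_rollouts.append((prompt, response, r_o + sum(r_p)))

    # 5. The policy model is updated with an RL objective on the scored rollouts.
    policy.update(scored_rollouts)
```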

The PRIME implementation pseudocode is as follows:

@@ -60,18 +60,18 @@ We apply tailored prompts for coding and math task:

## Evaluation

Through PRIME, we achieve substantial improvements on key reasoning benchmarks over the SFT version of the model: a **16.7%** improvement on average, and over **20%** on the AMC and AIME competitions. Our final model, Eurus-2-7B-PRIME, based on Qwen-2.5-Math-7B-Base, surpasses its instruct version on 5 key reasoning benchmarks.

The final results are presented below:

|               | **Eurus-2-7B-PRIME** | **Eurus-2-7B-SFT** | **Qwen-2.5-Math-7B-Instruct** | **Llama-3.1-70B-Instruct** | **GPT-4o** |
| ------------- | -------------------- | ------------------ | ----------------------------- | -------------------------- | ---------- |
| AIME 2024     | **26.7 (+23.3)**     | 3.3                | 13.3                          | 16.7                       | 9.3        |
| MATH-500      | 79.2 (+14.1)         | 65.1               | **79.8**                      | 64.6                       | 76.4       |
| AMC           | **57.8 (+27.7)**     | 30.1               | 50.6                          | 30.1                       | 45.8       |
| Minerva Math  | **38.6 (+5.9)**      | 32.7               | 34.6                          | 35.3                       | 36.8       |
| OlympiadBench | 42.1 (+12.3)         | 29.8               | 40.7                          | 31.9                       | **43.3**   |
| Avg.          | **48.9 (+16.7)**     | 32.2               | 43.8                          | 36.4                       | 43.3       |

![image-20241230162026156](./figures/performance.jpg)

@@ -83,7 +83,7 @@ We achieved this with only 1/10 data and model resources compared with Qwen-Math

| SFT Data | **230K (open-source)**                  | 2.5M (open-source and in-house)     |
| RM Data  | **0**                                   | 618K (in-house)                     |
| RM       | **Eurus-2-7B-SFT**                      | Qwen2.5-Math-RM (72B)               |
| RL Data  | **150K queries \\(\times\\) 4 samples** | 66K queries \\(\times\\) 32 samples |