---
library_name: transformers
tags: []
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama3.1-8B-PRM-Deepseek-Data-GGUF

This is a quantized version of [RLHFlow/Llama3.1-8B-PRM-Deepseek-Data](https://huggingface.co/RLHFlow/Llama3.1-8B-PRM-Deepseek-Data) created using llama.cpp.
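
A minimal sketch of downloading one of the GGUF files and loading it with `llama-cpp-python`. The quantization filename below is a placeholder, so check this repository's file list for the variants that were actually uploaded.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename: substitute one of the GGUF files actually present in this repo.
model_path = hf_hub_download(
    repo_id="QuantFactory/Llama3.1-8B-PRM-Deepseek-Data-GGUF",
    filename="Llama3.1-8B-PRM-Deepseek-Data.Q4_K_M.gguf",
)

# logits_all=True keeps per-token logits, which is what a PRM-style reward model
# needs for scoring candidate reasoning steps rather than free-form generation.
llm = Llama(model_path=model_path, n_ctx=8192, logits_all=True)
```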

# Original Model Card

This is a process-supervised reward model (PRM) trained on Deepseek-generated data from the project [RLHFlow/RLHF-Reward-Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling).

The model is trained from [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on [RLHFlow/Deepseek-PRM-Data](https://huggingface.co/datasets/RLHFlow/Deepseek-PRM-Data) for 1 epoch. We use a global batch size of 32 and a learning rate of 2e-6, and we pack the samples and split them into chunks of 8192 tokens. See more training details at https://github.com/RLHFlow/Online-RLHF/blob/main/math/llama-3.1-prm.yaml.

## BoN evaluation results for the Mistral generator

| Model | Method | GSM8K | MATH |
| ------------- | ------------- | ------------- | -------- |
| Mistral-7B | Pass@1 | 77.9 | 28.4 |
| Mistral-7B | Majority Voting@1024 | 84.2 | 36.8 |
| Mistral-7B | Mistral-ORM@1024 | 90.1 | 43.6 |
| Mistral-7B | Mistral-PRM@1024 | 92.4 | 46.3 |

## Scaling the inference sampling to N=1024 for the Deepseek generator

| Model | Method | GSM8K | MATH |
| ------------- | ------------- | ------------- | -------- |
| Deepseek-7B | Pass@1 | 83.9 | 38.4 |
| Deepseek-7B | Majority Voting@1024 | 89.7 | 57.4 |
| Deepseek-7B | Deepseek-ORM@1024 | 93.4 | 52.4 |
| Deepseek-7B | Deepseek-PRM@1024 | 93.0 | 58.1 |
| Deepseek-7B | Mistral-ORM@1024 (OOD) | 90.3 | 54.9 |
| Deepseek-7B | Mistral-PRM@1024 (OOD) | 91.9 | 56.9 |
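
The @1024 rows above are best-of-N (BoN) results: sample N candidate solutions from the generator, score each candidate with the reward model (for a PRM, the per-step rewards are typically aggregated, e.g. by taking the minimum), and keep the highest-scoring answer. A minimal sketch of that selection loop, with the generator and the scorer left as placeholder callables:

```python
from typing import Callable, List


def best_of_n(
    question: str,
    generate: Callable[[str, int], List[str]],  # placeholder: samples n candidate solutions
    score: Callable[[str, str], float],         # placeholder: reward-model score for (question, solution)
    n: int = 1024,
) -> str:
    """Sample n candidates and return the one the reward model ranks highest."""
    candidates = generate(question, n)
    return max(candidates, key=lambda solution: score(question, solution))
```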

## Visualization

![image/png](https://cdn-uploads.huggingface.co/production/uploads/643e59806db6ba8c5ee123f3/i622m76fvKv8drLmwl8Q3.png)

## Usage

See https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/main/math-rm for detailed examples.
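
For a rough idea of how a PRM like this is queried, here is a minimal `transformers` sketch (not the official example from the repository above). It assumes the Math-Shepherd-style convention used by the RLHFlow PRMs: each reasoning step is sent as a user turn, the model judges it with a `+`/`-` token, and the probability of `+` is read off as the step reward. The toy question and steps are made up; see the linked repository for the canonical prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/Llama3.1-8B-PRM-Deepseek-Data"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()

# Token ids of the "+" / "-" judgement tokens (assumed labelling convention).
plus_id = tokenizer.encode("+", add_special_tokens=False)[-1]
minus_id = tokenizer.encode("-", add_special_tokens=False)[-1]

question = "Janet has 3 apples and buys 2 more. How many apples does she have?"
steps = ["Janet starts with 3 apples.", "3 + 2 = 5, so she has 5 apples."]

conversation = []
for i, step in enumerate(steps):
    # The first user turn carries the question together with the first step.
    content = f"{question} {step}" if i == 0 else step
    conversation.append({"role": "user", "content": content})
    input_ids = tokenizer.apply_chat_template(
        conversation, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    # Probability mass on "+" vs "-" at the assistant position is the step reward.
    step_reward = torch.softmax(logits[[plus_id, minus_id]], dim=-1)[0].item()
    print(f"step {i + 1}: reward = {step_reward:.3f}")
    # Record the judgement turn so later steps are scored in context.
    conversation.append({"role": "assistant", "content": "+"})
```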

## Citation

The automatic annotation was proposed in the Math-Shepherd paper:

```
@inproceedings{wang2024math,
  title={Math-shepherd: Verify and reinforce llms step-by-step without human annotations},
  author={Wang, Peiyi and Li, Lei and Shao, Zhihong and Xu, Runxin and Dai, Damai and Li, Yifei and Chen, Deli and Wu, Yu and Sui, Zhifang},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={9426--9439},
  year={2024}
}
```

If you find the training recipe useful, please consider citing it as follows:

```
@misc{xiong2024rlhflowmath,
  author={Wei Xiong and Hanning Zhang and Nan Jiang and Tong Zhang},
  title={An Implementation of Generative PRM},
  year={2024},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/RLHFlow/RLHF-Reward-Modeling}}
}
```