Ablustrund committed
Commit • fd760fd
1 Parent(s): 4de6c88
Update README.md

README.md CHANGED
@@ -1,17 +1,37 @@
 ---
 license: agpl-3.0
 language:
--
 tags:
-- moss
 - llm
 - reward model
 ---
 
 # MOSS-RLHF
 
-### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br> <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://github.
 
 
 ## 🌠 Introduction
 
@@ -24,34 +44,24 @@ Contributions are summarized as follows:
 3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
 
 
-##
-- A 7B Chinese reward model based on openChineseLlama.
-- A 7B English reward model based on Llama-7B.
-- Open source code for RL training in large language models.
-- ...
-
-## ✨ Start training your own model!
-
-Run code in a few steps.
-
-### 🔩 Requirements & Setup
 
 This repository works on Python 3.8 and PyTorch 1.13.1.
 
 We recommend using the **conda** virtual environment to run the code.
 
-#### Step 1:
 ```bash
 conda update conda -n base -c defaults
 conda create -n rlhf python=3.8
 conda activate rlhf
 ```
-#### Step 2:
 ```bash
 conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
 ```
 
-#### Step 3:
 ```bash
 conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
 pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
@@ -60,16 +70,57 @@ apt install libaio-dev
 DS_BUILD_OPS=1 pip install deepspeed
 ```
 
-
 
-
 
 ## Citation
 
 ```bibtex
 @article{zheng2023secrets,
-
-
-
 }
-```
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---

# MOSS-RLHF

### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br> <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]</a>*


## 🌟 News
### Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>

### Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)

[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>

+
## π§Ύ Open-source List
|
29 |
+
- [x] Open source code for RL training in large language models.
|
30 |
+
- [x] A 7B Chinese reward model based on openChineseLlama.
|
31 |
+
- [x] A 7B English reward model based on Llama-7B.
|
32 |
+
- [x] SFT model for English.
|
33 |
+
- [ ] Policy model for English after RLHF.
|
34 |
+
- ...
|
35 |
|
36 |
## π Introduction
|
37 |
|
|
|
44 |
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
|
45 |
|
46 |
|
47 |
+
## π© Requirements & Setup
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
48 |
|
49 |
This repository works on Python 3.8 and PyTorch 1.13.1.
|
50 |
|
51 |
We recommend using the **conda** virtual environment to run the code.
|
52 |
|
53 |
+
#### Step 1: Create a new Python virtual environment
|
54 |
```bash
|
55 |
conda update conda -n base -c defaults
|
56 |
conda create -n rlhf python=3.8
|
57 |
conda activate rlhf
|
58 |
```
|
59 |
+
#### Step 2: Install PyTorch and TensorBoard
|
60 |
```bash
|
61 |
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
|
62 |
```
|
63 |
|
64 |
+
#### Step 3: Install the remaining dependencies
|
65 |
```bash
|
66 |
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
|
67 |
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
|
|
|
70 |
DS_BUILD_OPS=1 pip install deepspeed
|
71 |
```
|
72 |
|
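Once the three steps above finish, a quick sanity check (optional, not part of the official setup) can confirm that the key packages import cleanly and that CUDA is visible before you move on to training:

```python
# Optional sanity check for the "rlhf" conda environment described above.
import torch
import transformers
import deepspeed

print("torch:", torch.__version__)                 # expected: 1.13.1
print("CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("deepspeed:", deepspeed.__version__)
```
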
## ✨ Start training your own model!
Run the code in a few steps.

### Step 1: Recover the reward model weights
We cannot directly release the full weights of the reward model because of license restrictions.
You can merge the weight diff with the original Llama-7B to recover the reward model we used.

We have uploaded the weight diffs; thanks to tatsu-lab, you can recover the reward model by following these steps:
```bash
# 1) Download the weight diff to your local machine. The weight diff is located at:
# For English:
# TODO
# For Chinese:
# https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main

# 2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
# TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
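
For intuition, recovering a checkpoint from a weight diff is just an element-wise addition of the diff tensors onto the original Llama-7B weights (the tatsu-lab recipe). The sketch below only illustrates that idea, with placeholder paths; the `merge_weight_en.py` / `merge_weight_zh.py` scripts above also handle tokenizer files and consistency checks, so use them for the actual recovery.

```python
# Illustration only: "recovered = base + diff", tensor by tensor.
# Paths are placeholders; use merge_weight_en.py / merge_weight_zh.py in practice.
import torch
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
diff = LlamaForCausalLM.from_pretrained("./models/example-weight-diff")   # placeholder

with torch.no_grad():
    for p_base, p_diff in zip(base.parameters(), diff.parameters()):
        p_base.add_(p_diff)        # add the diff onto the base weights

base.save_pretrained("./models/example-recovered")                        # placeholder
```
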
### Step 2: Select your own SFT model
Because of some limitations, we cannot release the **Chinese** SFT model currently.
You can use your own SFT model, or a strong base model, instead of our SFT model.

### Step 3: Start training
Run the command below.
```bash
# For Chinese:
# You need to use your own SFT model currently.
bash run_zh.sh

# For English:
# We have uploaded the SFT model and reward model to Hugging Face.
bash run_en.sh
```

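Before digging into `run_zh.sh` / `run_en.sh`, it may help to see what the PPO stage optimizes at its core: a clipped surrogate objective on advantages derived from reward-model scores, usually combined with a KL penalty toward the SFT model. The snippet below is a generic, self-contained illustration on dummy tensors; it is not the PPO-max implementation released here (see the training code and the technical report for the real details).

```python
# Generic illustration of the clipped PPO objective used in RLHF fine-tuning.
# Dummy tensors stand in for per-token quantities from a real rollout;
# this is NOT the PPO-max code released in this repository.
import torch

torch.manual_seed(0)
logprobs_new = torch.randn(4, 16)                       # log pi_new(a|s), 4 rollouts x 16 tokens
logprobs_old = logprobs_new + 0.1 * torch.randn(4, 16)  # behavior policy at rollout time
logprobs_sft = logprobs_new + 0.1 * torch.randn(4, 16)  # frozen SFT reference
advantages   = torch.randn(4, 16)                       # e.g. from GAE over reward-model scores

clip_eps, kl_coef = 0.2, 0.05

ratio     = torch.exp(logprobs_new - logprobs_old)              # importance ratio
unclipped = ratio * advantages
clipped   = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
policy_loss = -torch.min(unclipped, clipped).mean()             # clipped surrogate loss

kl_penalty = kl_coef * (logprobs_new - logprobs_sft).mean()     # crude KL-to-SFT estimate
loss = policy_loss + kl_penalty
print(f"policy_loss={policy_loss.item():.4f}  kl_penalty={kl_penalty.item():.4f}  total={loss.item():.4f}")
```
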
## Citation

```bibtex
@article{zheng2023secrets,
  title={Secrets of RLHF in Large Language Models Part I: PPO},
  author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
  year={2023},
  eprint={2307.04964},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```