Ablustrund committed
Commit 2db4b2f • 1 parent: 5fc35aa

Update README.md

Files changed (1): README.md (+68 -0)
---
license: agpl-3.0
language:
- zh
---

# MOSS-RLHF

### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://github.com/OpenLMLab/MOSS-RLHF" target="_blank">[Open-source code]</a>*


## 🌠 Introduction

Reward design, environment interaction, and agent training are all challenging, and the trial-and-error cost of large language models is huge; together these pose a significant barrier to research on the technical alignment and safe deployment of LLMs. Stable RLHF training, in particular, has remained a puzzle.
In this technical report, we aim to help researchers train their models stably with human feedback.

Our contributions are summarized as follows:
1) We release competitive Chinese and English reward models with good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct an in-depth analysis of the inner workings of the PPO algorithm and propose the PPO-max algorithm to ensure stable model training (a minimal sketch of the underlying PPO objective follows this list);
3) We release the complete PPO-max code so that LLMs at the current SFT stage can be better aligned with humans.
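
The PPO-max modifications themselves are described in the technical report and the released code. For orientation only, below is a minimal sketch of the standard clipped PPO policy objective that PPO-max builds on; it is plain PyTorch with illustrative variable names and is not taken from the MOSS-RLHF codebase.

```python
# Minimal sketch of the standard clipped PPO policy loss (not the PPO-max variant).
# Plain PyTorch; tensor names are illustrative and not taken from the MOSS-RLHF code.
import torch


def ppo_clip_loss(logprobs: torch.Tensor,
                  old_logprobs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """logprobs / old_logprobs: per-token log-probs of the sampled responses under the
    current policy and the rollout policy; advantages: per-token advantage estimates (e.g. GAE)."""
    ratio = torch.exp(logprobs - old_logprobs)                        # importance ratio pi_theta / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the clipped surrogate, so the training loss is its negative.
    return -torch.min(unclipped, clipped).mean()
```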


## 🧾 Open-source List
- A 7B Chinese reward model based on openChineseLlama.
- A 7B English reward model based on Llama-7B (see the illustrative scoring sketch after this list).
- Open-source code for RL training of large language models.
- ...
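
The exact loading and scoring code for the released reward models lives in the open-source repository. Purely as an illustration of how a Llama-style reward model with a scalar value head can score a response, here is a minimal sketch; the `RewardModelSketch` class and the model paths are hypothetical placeholders, not the repo's actual API.

```python
# Illustration only: scoring one response with a Llama-style reward model.
# `RewardModelSketch` and the paths below are hypothetical placeholders,
# not the actual MOSS-RLHF loading code (see the open-source repo for that).
import torch
from torch import nn
from transformers import LlamaModel, LlamaTokenizer


class RewardModelSketch(nn.Module):
    """Llama backbone plus a linear value head mapping the last token's hidden state to a scalar reward."""

    def __init__(self, backbone_path: str):
        super().__init__()
        self.backbone = LlamaModel.from_pretrained(backbone_path)
        self.value_head = nn.Linear(self.backbone.config.hidden_size, 1, bias=False)

    @torch.no_grad()
    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        last_idx = attention_mask.sum(dim=1) - 1                      # index of the final non-padded token
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.value_head(last_hidden).squeeze(-1)               # one scalar reward per sequence


tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b")        # placeholder path
reward_model = RewardModelSketch("path/to/llama-7b")                  # placeholder path
batch = tokenizer(["Human: How do I stay safe online?\nAssistant: Use strong passwords."],
                  return_tensors="pt")
print(reward_model(batch["input_ids"], batch["attention_mask"]))
```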

## ✨ Start training your own model!

You can get the code running in just a few steps.

### 🔩 Requirements & Setup

This repository works with Python 3.8 and PyTorch 1.13.1.

We recommend using a **conda** virtual environment to run the code.

#### Step 1: create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```

#### Step 3: install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels

apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
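
As an optional sanity check (assuming the commands above completed inside the activated `rlhf` environment), the core packages should import cleanly and CUDA should be visible:

```python
# Optional sanity check for the environment set up above.
import torch
import transformers
import deepspeed

print("torch:", torch.__version__)                    # expected: 1.13.1
print("cuda available:", torch.cuda.is_available())   # should be True with CUDA 11.7 drivers
print("transformers:", transformers.__version__)
print("deepspeed:", deepspeed.__version__)
```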

### 👉 Start Training

TODO: to be finalized before July 15, 2023.

## Citation

```bibtex
@article{zheng2023secrets,
  title={Secrets of RLHF in Large Language Models Part I: PPO},
  author={Rui Zheng and Shihan Dou and Songyang Gao and Yuan Hua and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Yuhao Zhou and Limao Xiong and Lu Chen and Zhiheng Xi and Nuo Xu and Wenbin Lai and Minghao Zhu and Cheng Chang and Zhangyue Yin and Rongxiang Weng and Wensen Cheng and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
  journal={arXiv preprint arXiv:2307.04964},
  year={2023}
}
```