Adam committed • Commit ab4a005 • Parent(s): 2149817

feat: updated links

README.md CHANGED
@@ -23,15 +23,21 @@ This model follows the blog of ChatGPT and the paper of InstructGPT and especial

## Our Training Methodology and Speedup Recipes

-
+The training process involves a single Python run of DeepSpeed-Chat, which launches the whole three-step pipeline and saves all of the models along the way:
+
+```bash
+python train.py --actor-model facebook/opt-1.3b --reward-model facebook/opt-350m --deployment-type single_node
+```
+
+This pipeline can be broken up into three key steps:

1. **Supervised fine-tuning (SFT):** See [here](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-sft-deepspeed)

2. **Reward Model (RM) fine-tuning:** See [here](https://huggingface.co/FSALab/fsalab-chat-opt-350m-reward-deepspeed)

-3. **Reinforcement learning from human feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can begin. It takes the *fine-tuned model* from step 1 and the *reward model* from step 2 and trains them on the comparison dataset.
+3. **Reinforcement learning from human feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can begin. It takes the *fine-tuned model* from step 1 and the *reward model* from step 2 and trains them on the comparison dataset. This generates both an [actor](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed) and a [critic](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed).

-To view the details behind each step, head to its respective link and read the model card there.
+To view the details behind each step, head to its respective link and read the model card there.

### Reinforcement learning from human feedback
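
To quickly try the result of the pipeline, a minimal sketch for loading the RLHF actor checkpoint linked in step 3 and generating a reply might look like the following. It assumes the published repository contains standard OPT causal-LM weights plus a tokenizer, and the `Human:`/`Assistant:` prompt format is illustrative rather than taken from the model card:

```python
# Minimal sketch: load the RLHF actor checkpoint and generate a reply.
# Assumes the repo ships ordinary OPT causal-LM weights and a tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt format, not prescribed by the model card.
prompt = "Human: What does supervised fine-tuning do?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The step-1 SFT checkpoint should load the same way, since both are OPT-1.3b causal language models.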
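
For intuition about what the step-2 reward model produces, here is a small illustrative sketch: a transformer backbone with a scalar value head that scores a prompt-and-response pair. This is a toy approximation of the idea, not the actual reward-model class used by DeepSpeed-Chat; the head below is freshly initialised (so its scores are meaningless until trained), and the linked 350M reward checkpoint may require DeepSpeed-Chat's own loading code rather than this snippet:

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class ToyRewardModel(nn.Module):
    """Transformer backbone plus a scalar head: score = v_head(last token's hidden state)."""

    def __init__(self, base_id: str = "facebook/opt-350m"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(base_id)
        # OPT-350m projects its final hidden states down to word_embed_proj_dim.
        dim = getattr(self.backbone.config, "word_embed_proj_dim", self.backbone.config.hidden_size)
        self.v_head = nn.Linear(dim, 1, bias=False)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        last_idx = attention_mask.sum(dim=1) - 1          # index of last non-padding token
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.v_head(last_hidden).squeeze(-1)       # one scalar score per sequence

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
reward_model = ToyRewardModel()
batch = tokenizer(
    ["Human: What is RLHF?\nAssistant: It aligns a language model with human preferences."],
    return_tensors="pt",
    padding=True,
)
print(reward_model(batch["input_ids"], batch["attention_mask"]))
```

In step 2 such a head is trained on pairs of preferred and rejected answers so that preferred answers receive higher scores; step 3 then uses those scores as the reward signal for the actor.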