Text Generation
Transformers
PyTorch
English
opt
deepspeed
chatgpt
sft
Inference Endpoints
text-generation-inference
AdamG012 committed on
Commit 8b33723 (1 Parent(s): e79fa60)

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -18,9 +18,9 @@ datasets:
 
 # ChatGPT OPT 1.3B DeepSpeed Supervised fine tuning
 
- *fsalab-chat-opt-1.3b-sft-deepspeed*
+ *chat-opt-1.3b-sft-deepspeed*
 
- This model is the first step of a modified version of the traditional training pipeline for ChatGPT models, which comprises a three-step procedure: **supervised fine tuning**, a [reward model](https://huggingface.co/FSALab/fsalab-chat-opt-350m-reward-deepspeed) and [RLHF](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-deepspeed).
+ This model is the first step of a modified version of the traditional training pipeline for ChatGPT models, which comprises a three-step procedure: **supervised fine tuning**, a [reward model](https://huggingface.co/AdamG012/chat-opt-350m-reward-deepspeed) and reinforcement learning from human feedback, which produces the [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed), [actor EMA](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed) and [critic](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed) models.
 
 This project's main goal was to make proper use of existing frameworks that minimise training costs and thereby improve both the feasibility and usability of ChatGPT-like models. The framework selected here is DeepSpeed, which has been instrumental in the development of this model: through it, the ChatGPT-like model could be trained on much larger data-sets with a reasonable number of GPUs and consequently achieve significantly better performance.
 
@@ -38,9 +38,9 @@ This pipeline can be broken up into three key steps:
 
 1. **Supervised fine-tuning (SFT):** In the first step we perform supervised fine tuning by taking the pretrained models, configuring them to use smaller learning rates and then training them on a labelled data-set.
 
- 2. **Reward Model (RM) fine-tuning:** See [here](https://huggingface.co/FSALab/fsalab-chat-opt-350m-reward-deepspeed)
+ 2. **Reward Model (RM) fine-tuning:** See [here](https://huggingface.co/AdamG012/chat-opt-350m-reward-deepspeed).
 
- 3. **Reinforcement learning from human feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can begin. This involves taking both the *fine-tuned model* from step 1 and the *reward model* from step 2 and training them on a data-set of comparisons. This generates both an [actor](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed) and a [critic](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed) model. I also generate an actor model with an exponential moving average (EMA), which is known to improve conversational response quality.
+ 3. **Reinforcement learning from human feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can begin. This involves taking both the *fine-tuned model* from step 1 and the *reward model* from step 2 and training them on a data-set of comparisons. This generates both an [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed) and a [critic](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed) model. I also generate an [actor model with an exponential moving average (EMA)](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed), which is known to improve conversational response quality.
 
 To view the details behind each step, follow the respective links and view the model card there.
 
@@ -79,9 +79,9 @@ If using through the HuggingFace transformers library:
 ``` python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
- tokenizer = AutoTokenizer.from_pretrained("FSALab/deepspeed-chatgpt-opt1.3b-sft")
+ tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-1.3b-sft-deepspeed")
 
- model = AutoModelForCausalLM.from_pretrained("FSALab/deepspeed-chatgpt-opt1.3b-sft")
+ model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-1.3b-sft-deepspeed")
 ```
 
 
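The snippet in the updated README only loads the tokenizer and model. As a minimal sketch of how the loaded SFT checkpoint could then be used for generation: the `Human:`/`Assistant:` prompt format and the sampling parameters below are illustrative assumptions, not values taken from the model card.

``` python
# Minimal usage sketch (not from the model card): load the SFT checkpoint
# referenced in the README and generate a reply. The prompt format and
# sampling settings are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-1.3b-sft-deepspeed")
model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-1.3b-sft-deepspeed")

prompt = "Human: What does supervised fine-tuning do?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        top_p=0.9,
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```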
 
 
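For a concrete picture of step 1 of the pipeline described above (supervised fine tuning of a pretrained OPT model with a small learning rate on a labelled data-set), here is a minimal sketch. It is not the DeepSpeed recipe used to produce this checkpoint: the base model, the tiny in-memory demonstration data, the hyperparameters and the commented-out DeepSpeed config path are all placeholder assumptions.

``` python
# Illustrative SFT sketch only; NOT the training recipe used for this
# checkpoint. Base model, data and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "facebook/opt-1.3b"  # pretrained model that SFT starts from
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Tiny in-memory stand-in for a labelled prompt/response data-set.
demo = Dataset.from_dict({
    "prompt": ["Human: What is DeepSpeed?\n\nAssistant:"],
    "chosen": [" DeepSpeed is a library for efficient large-model training."],
})

def tokenize(batch):
    # Concatenate each prompt and its labelled response into one sequence.
    texts = [p + c for p, c in zip(batch["prompt"], batch["chosen"])]
    return tokenizer(texts, truncation=True, max_length=512)

tokenized = demo.map(tokenize, batched=True, remove_columns=demo.column_names)

args = TrainingArguments(
    output_dir="opt-1.3b-sft-demo",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    learning_rate=1e-5,  # deliberately small, as described for the SFT step
    # deepspeed="ds_config.json",  # DeepSpeed could be enabled via a JSON config
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```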