AdamG012 committed da9bab4 (parent: bad289b): Update README.md

Files changed: README.md (+23, −0)
| Attention Dropout | 0.0 |
| Prescale gradients | False |
## Installation

To use the model through the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed")

model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed")
```
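Once loaded, the model and tokenizer can be used for text generation in the usual `transformers` way. A minimal sketch (the prompt format and decoding parameters below are illustrative assumptions, not settings documented by this repository):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hub (repeated here so the snippet is self-contained).
tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed")
model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed")

# Encode a prompt and sample a continuation; the Human/Assistant framing
# is a common convention for chat-style RLHF models, assumed here.
prompt = "Human: What is DeepSpeed?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is also an option if you prefer deterministic outputs.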

If you would like to clone from source:

```bash
# Make sure you have git-lfs installed (https://git-lfs.github.com)
git lfs install
git clone https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed

# To clone without the large files (just their pointers),
# prepend the clone command with the GIT_LFS_SKIP_SMUDGE env var:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed
```

## **Acknowledgements**