---
datasets:
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- sahil2801/CodeAlpaca-20k
inference: false
language:
- en
license: other
model_type: llama
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Allen AI's Tulu 7B GPTQ

These files are GPTQ model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/tulu-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
* [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)

## Prompt template: Tulu

```
<|user|>
{prompt}
<|assistant|>
```
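
For convenience, here is an illustrative Python helper for filling in this template. The function name is my own, not part of this repo; the trailing newline after `<|assistant|>` matters, as the original model card notes below:

```python
def build_tulu_prompt(user_message: str) -> str:
    """Fill the Tulu chat template for a single user message.

    The trailing newline after <|assistant|> is deliberate: the original
    model card notes that omitting it can noticeably hurt generation quality.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

print(build_tulu_prompt("Tell me about AI"))
```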

## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality, and without Act Order to improve AutoGPTQ speed. |
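
Each branch records its quantisation parameters in its `quantize_config.json` (which is also how they are picked up automatically at load time). A quick way to inspect them from Python, assuming `huggingface_hub` is installed:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the quantize_config.json recorded for a given branch;
# revision selects the branch from the table above.
path = hf_hub_download(repo_id="TheBloke/tulu-7B-GPTQ",
                       filename="quantize_config.json",
                       revision="gptq-4bit-32g-actorder_True")
print(json.load(open(path)))
```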

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/tulu-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/tulu-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
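
You can also fetch a single branch from Python with the `huggingface_hub` library. A minimal sketch, assuming a recent `huggingface_hub` (the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Download only the gptq-4bit-32g-actorder_True branch of this repo;
# revision selects the branch, local_dir is an example destination.
snapshot_download(
    repo_id="TheBloke/tulu-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="tulu-7B-GPTQ-4bit-32g",
)
```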

## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/tulu-7B-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/tulu-7B-GPTQ:gptq-4bit-32g-actorder_True`
    - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `tulu-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/tulu-7B-GPTQ"
model_basename = "gptq_model-4bit-128g"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template = f'''<|user|>
{prompt}
<|assistant|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donators!

<!-- footer end -->
# Original model card: Allen AI's Tulu 7B


# Tulu 7B

This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff; see below for usage instructions.*

This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).

This model is licensed under the AI model license given in LICENSE.txt, along with the original Llama license (llama_license.txt).

## Usage

We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)

Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.

Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```

And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
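
Conceptually, the diff stores the difference between the tuned and base weights, so recovery is elementwise addition. Below is a minimal sketch of that idea only, with hypothetical paths; use the official `scripts/weight_diff.py` in practice, since it also handles dtype, tokenizer, and normalisation details this omits:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths: the base LLaMa in HF format, and this model diff.
base = AutoModelForCausalLM.from_pretrained("path/to/hf-llama-7b", torch_dtype=torch.float32)
diff = AutoModelForCausalLM.from_pretrained("path/to/tulu-7b-diff", torch_dtype=torch.float32)

diff_sd = diff.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        param.add_(diff_sd[name])  # recovered = base + diff

base.save_pretrained("path/to/recovered-tulu-7b")
```

This also illustrates the RAM warning above: both the base model and the diff are held in memory at once.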

## Input Format

The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**

## Performance

Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):

| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|:-------:|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |

If you use this model, please cite our work, the LLaMA paper, and the original datasets:

```
@misc{wang2023far,
      title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
      author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
      year={2023},
      eprint={2306.04751},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{touvron2023llama,
      title={LLaMA: Open and Efficient Foundation Language Models},
      author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
      year={2023},
      eprint={2302.13971},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{dolly,
      author = {Databricks},
      title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
      year = {2023},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {Blog post},
      url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```

```
@article{longpre2023flan,
      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
      author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
      journal={arXiv preprint arXiv:2301.13688},
      year={2023}
}
```

```
@misc{köpf2023openassistant,
      title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
      author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
      year={2023},
      eprint={2304.07327},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@article{peng2023instruction,
      title={Instruction Tuning with GPT-4},
      author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
      journal={arXiv preprint arXiv:2304.03277},
      year={2023}
}
```

```
@misc{codealpaca,
      author = {Sahil Chaudhary},
      title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
      year = {2023},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/sahil280114/codealpaca}}
}
```