TheBloke committed
Commit 1b67149
1 Parent(s): a12650e

Initial GPTQ model commit

Files changed (1)
  1. README.md +290 -0
README.md ADDED

---
inference: false
language:
- en
license: other
model_creator: NousResearch
model_link: https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b
model_name: Nous Hermes Llama 2 7B
model_type: llama
quantized_by: TheBloke
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Nous Hermes Llama 2 7B - GPTQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes Llama 2 7B](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)

## Description

This repo contains GPTQ model files for [NousResearch's Nous Hermes Llama 2 7B](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GGML)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
```

## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| [main](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/main) | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | True | 7.31 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/Nous-Hermes-Llama-2-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below, and the sketch that follows this list.
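
If you prefer to script the download, here is a minimal sketch using the `huggingface_hub` library (an addition to the instructions above; it assumes a recent `huggingface_hub`, and the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Fetch the gptq-4bit-32g-actorder_True branch; `revision` selects the branch.
snapshot_download(
    repo_id="TheBloke/Nous-Hermes-Llama-2-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Nous-Hermes-Llama-2-7B-GPTQ"  # example destination folder
)
```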

## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-Llama-2-7B-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Nous-Hermes-Llama-2-7B-GPTQ:gptq-4bit-32g-actorder_True` (a command-line alternative is sketched after this list).
  - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-Llama-2-7B-GPTQ`.
7. The model will load automatically, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  - Note that you no longer need to set GPTQ parameters manually; they are read automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
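
If you would rather fetch the model from the command line, text-generation-webui also ships a `download-model.py` script. A minimal sketch, assuming the script's `--branch` flag (check `python download-model.py --help` if your version differs):

```
python download-model.py TheBloke/Nous-Hermes-Llama-2-7B-GPTQ --branch gptq-4bit-32g-actorder_True
```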

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Nous-Hermes-Llama-2-7B-GPTQ"
model_basename = "gptq_model-4bit-128g"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: NousResearch's Nous Hermes Llama 2 7B


# Model Card: Nous-Hermes-Llama2-7b

Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.

## Model Description

Nous-Hermes-Llama2-7b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.

This Hermes model uses the exact same dataset as Hermes on Llama-1, to ensure consistency between the old Hermes and the new, for anyone who wants to keep the new Hermes as similar to the old one as possible, just more capable.

This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x A100 80GB DGX machine.

## Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. Curating high-quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.

This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, and code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below.

## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.

Special mention goes to @winglian for assisting with some of the training issues.

A huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.

Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801

If anyone was left out, please open a thread in the community tab.

## Prompt Format

The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>

```

or

```
### Instruction:
<prompt>

### Input:
<additional context>

### Response:
<leave a newline blank for model to respond>

```
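
As a worked illustration, here is a minimal Python sketch of a helper that builds either variant of this prompt (the `build_prompt` function is illustrative, not part of the model's tooling):

```python
from typing import Optional


def build_prompt(instruction: str, input_text: Optional[str] = None) -> str:
    """Build an Alpaca-style prompt, with or without the optional Input block."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


# Example usage:
print(build_prompt("Summarize the following text.", "Llamas are members of the camelid family."))
```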

## Benchmark Results
Coming soon

## Resources for Applied Use Cases:
For an example of a back-and-forth chatbot using Hugging Face Transformers and Discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying Discord chatbot, check out: https://github.com/teknium1/alpaca-roleplay-discordbot

LM Studio is a good choice for a chat interface that supports GGML versions (to come).

## Future Plans
We plan to continue iterating on both more high-quality data and new data-filtering techniques to eliminate lower-quality data going forward.

## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.