TheBloke committed on
Commit 2ad104a
1 Parent(s): 006e1df

Initial GPTQ model commit

Files changed (1):
  1. README.md +36 -86

README.md CHANGED
@@ -35,9 +35,20 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for

  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!

- ## Required: latest version of Transformers

- Before trying these GPTQs, please update Transformers to the latest Github code:

  ```
  pip3 install git+https://github.com/huggingface/transformers
@@ -45,7 +56,6 @@ pip3 install git+https://github.com/huggingface/transformers

  If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.

- Note that at the time of writing, ExLlama is not yet compatible with the Llama 2 70B models, but support is coming soon.

  ## Repositories available

@@ -86,27 +96,41 @@ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/L
  ```
  - In Python Transformers code, the branch is the `revision` parameter; see below.

- ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

- Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

  It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

- Before trying the model, first update Transformers to the latest Github code:

  ```
- pip3 install git+https://github.com/huggingface/transformers
  ```

- ExLlama is not currently compatible with Llama 2 70B but support is expected soon.

  1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter `%%REPO_GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
  - see Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
- 5. Set Loader to AutoGPTQ or GPTQ-for-LLaMA
  - If you use AutoGPTQ, make sure "No inject fused attention" is ticked
  6. In the top left, click the refresh icon next to **Model**.
  7. In the **Model** dropdown, choose the model you just downloaded: `TheBloke/Llama-2-70B-chat-GPTQ`
@@ -200,83 +224,9 @@ print(pipe(prompt_template)[0]['generated_text'])

  The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

- ExLlama is not currently compatible with Llama 2 70B models, but support is coming soon. Please see the Provided Files table above for per-file compatibility.
-
- <!-- footer start -->
- ## Discord
-
- For further support, and discussions on these models and AI in general, join us at:
-
- [TheBloke AI's Discord server](https://discord.gg/theblokeai)
-
- ## Thanks, and how to contribute.
-
- Thanks to the [chirper.ai](https://chirper.ai) team!
-
- I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
-
- If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
- Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
- * Patreon: https://patreon.com/TheBlokeAI
- * Ko-Fi: https://ko-fi.com/TheBlokeAI
-
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
-
- **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
-
- Thank you to all my generous patrons and donaters!
-
- <!-- footer end -->
-
- # Original model card: Meta's Llama 2 70B Chat
-
-
- <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
- </div>
- <div style="display: flex; justify-content: space-between; width: 100%;">
- <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
- </div>
- <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
- </div>
- </div>
- <!-- header end -->
-
- # Meta's Llama 2 70B Chat fp16
-
- These files are fp16 pytorch model files for [Meta's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
-
- They were produced by downloading the PTH files from Meta, and then converting to HF format using the latest Transformers 4.32.0.dev0, from Git, with the Llama 2 PR included: https://github.com/huggingface/transformers/pull/24891.
-
- Command to convert was:
- ```
- python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 70B --output_dir /workspace/process/llama-2-70b-chat/source --safe_serialization true
- ```
-
- The files were saved in Safetensors format.
-
- I am uploading this repo because I initially tried to create GPTQs using the [Meta Llama 2 70B Chat HF repo](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf), but got strange errors that suggested the weights were not correct. But converting from the PTH files using the latest `convert_llama_weights_to_hf.py` script worked fine.
-
- Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!
-
- ## Repositories available
-
- * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
- * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
- * [My fp16 conversion of the unquantised PTH model files](https://huggingface.co/TheBloke/Llama-2-70B-chat-fp16)
-
- ## Prompt template: Llama-2-Chat
-
- ```
- System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
- User: {prompt}
- Assistant:
- ```

  <!-- footer start -->
  ## Discord
 

  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!

+ ## ExLlama support for 70B is here!

+ As of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee), ExLlama has support for Llama 2 70B models.
+
+ Please make sure you update ExLlama to the latest version. If you are a text-generation-webui one-click user, you must first uninstall the ExLlama wheel, then clone ExLlama into `text-generation-webui/repositories`; full instructions are below.
+
+ Now that we have ExLlama, it is the recommended loader to use for these models, as performance should be better than with AutoGPTQ and GPTQ-for-LLaMa, and you will be able to use the higher-accuracy models, e.g. 128g + Act-Order.
+
+ Reminder: ExLlama does not support 3-bit models, so if you wish to try those quants, you will need to use AutoGPTQ or GPTQ-for-LLaMa.
+
+
+ ## AutoGPTQ and GPTQ-for-LLaMa require the latest version of Transformers
+
+ If you plan to use any of these quants with AutoGPTQ or GPTQ-for-LLaMa, you will need to update Transformers to the latest GitHub code:

  ```
  pip3 install git+https://github.com/huggingface/transformers
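  # Optional check (illustrative addition, not part of the original command): confirm the
  # Git build of Transformers is active; it should report a dev version (4.32.0.dev0 at the time of writing).
  python3 -c "import transformers; print(transformers.__version__)"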
 

  If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.


  ## Repositories available

  ```
  - In Python Transformers code, the branch is the `revision` parameter; see below.
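
For example, here is a minimal sketch of selecting a branch from Python. This is illustrative only and not part of this commit: it assumes the AutoGPTQ loading pattern used later in this README, and that your AutoGPTQ version accepts `revision` in `from_quantized`.

```
# Illustrative sketch: the branch name is passed as the Hugging Face Hub `revision`.
# Assumes `auto-gptq` is installed alongside the Transformers Git build mentioned above.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/Llama-2-70B-chat-GPTQ"
branch = "gptq-4bit-32g-actorder_True"   # any branch from the Provided Files table

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=branch, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    revision=branch,                  # the branch selects which quantisation to download
    use_safetensors=True,
    inject_fused_attention=False,     # keep this off for 70B, per the AutoGPTQ note below
    device="cuda:0",
)
```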
 
+ ### How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui), which includes support for Llama 2 models.

  It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

+ ### Use ExLlama (4-bit models only) - recommended option if you have enough VRAM for 4-bit
+
+ ExLlama has now been updated to support Llama 2 70B, but you will need to update ExLlama to the latest version.
+
+ By default text-generation-webui installs a pre-compiled wheel for ExLlama. Until text-generation-webui updates to reflect the ExLlama changes - which hopefully won't be long - you must uninstall that and then clone ExLlama into the `text-generation-webui/repositories` directory. ExLlama will then compile its kernel on model load.
+
+ Note that this requires that your system is capable of compiling CUDA extensions, which may be an issue on Windows.
+
+ Instructions for the Linux One Click Installer:

+ 1. Change directory into the text-generation-webui main folder: `cd /path/to/text-generation-webui`
+ 2. Activate the conda env of text-generation-webui:
  ```
+ source "installer_files/conda/etc/profile.d/conda.sh"
+ conda activate installer_files/env
  ```
+ 3. Run: `pip3 uninstall exllama`
+ 4. Run: `cd repositories/exllama` followed by `git pull` to update exllama.
+ 5. Now launch text-generation-webui and follow the instructions below for downloading and running the model. ExLlama should build its kernel when the model first loads.

+ ### Downloading and running the model in text-generation-webui

  1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
  - see Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
+ 5. Set Loader to ExLlama if you plan to use a 4-bit file, or else choose AutoGPTQ or GPTQ-for-LLaMa.
  - If you use AutoGPTQ, make sure "No inject fused attention" is ticked
  6. In the top left, click the refresh icon next to **Model**.
  7. In the **Model** dropdown, choose the model you just downloaded: `TheBloke/Llama-2-70B-chat-GPTQ`

  The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

+ ExLlama is now compatible with Llama 2 70B models, as of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee).

+ Please see the Provided Files table above for per-file compatibility.

  <!-- footer start -->
  ## Discord