TheBloke committed on
Commit
fd7ec57
1 Parent(s): 36ff5be

Initial GPTQ model commit

Files changed (1)
  1. README.md +70 -35
README.md CHANGED
@@ -17,55 +17,85 @@ license: other
17
  </div>
18
  <!-- header end -->
19
 
20
- # LmSys' Vicuna 33B (final) GPTQ
21
 
22
- These files are GPTQ 4bit model files for [LmSys' Vicuna 33B (final)](https://huggingface.co/lmsys/vicuna-33b-v1.3).
23
 
24
- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
25
 
26
- This is the final version of Vicuna 33B, replacing the preview version previously released.
27
 
28
  ## Repositories available
29
 
30
- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-33B-GPTQ)
31
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-33B-GGML)
32
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3)
33
 
34
- ## Prompt template
35
 
36
  ```
37
- A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
38
- USER: prompt
 
39
  ASSISTANT:
40
  ```
 
41
 
42
- ## How to easily download and use this model in text-generation-webui
43
 
44
- Please make sure you're using the latest version of text-generation-webui
 
 
45
 
46
  1. Click the **Model tab**.
47
  2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-33B-GPTQ`.
 
 
48
  3. Click **Download**.
49
  4. The model will start downloading. Once it's finished it will say "Done"
50
  5. In the top left, click the refresh icon next to **Model**.
51
  6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-33B-GPTQ`
52
  7. The model will automatically load, and is now ready for use!
53
  8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
54
- * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
55
  9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
56
 
57
  ## How to use this GPTQ model from Python code
58
 
59
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
60
 
61
- `pip install auto-gptq`
62
 
63
  Then try the following example code:
64
 
65
  ```python
66
  from transformers import AutoTokenizer, pipeline, logging
67
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
68
- import argparse
69
 
70
  model_name_or_path = "TheBloke/vicuna-33B-GPTQ"
71
  model_basename = "vicuna-33b-GPTQ-4bit--1g.act.order"
@@ -75,17 +105,32 @@ use_triton = False
75
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
76
 
77
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
78
- model_basename=model_basename,
79
  use_safetensors=True,
80
- trust_remote_code=False,
81
  device="cuda:0",
82
  use_triton=use_triton,
83
  quantize_config=None)
84
 
85
- # Note: check the prompt template is correct for this model.
86
  prompt = "Tell me about AI"
87
- prompt_template=f'''USER: {prompt}
88
- ASSISTANT:'''
89
 
90
  print("\n\n*** Generate:")
91
 
@@ -112,20 +157,11 @@ pipe = pipeline(
112
  print(pipe(prompt_template)[0]['generated_text'])
113
  ```
114
 
115
- ## Provided files
116
-
117
- **vicuna-33b-GPTQ-4bit--1g.act.order.safetensors**
118
-
119
- This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
120
 
121
- It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
122
 
123
- * `vicuna-33b-GPTQ-4bit--1g.act.order.safetensors`
124
- * Works with AutoGPTQ in CUDA or Triton modes.
125
- * LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
126
- * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
127
- * Works with text-generation-webui, including one-click-installers.
128
- * Parameters: Groupsize = -1. Act Order / desc_act = True.
129
 
130
  <!-- footer start -->
131
  ## Discord
@@ -147,15 +183,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
147
  * Patreon: https://patreon.com/TheBlokeAI
148
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
149
 
150
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
151
 
152
- **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
153
 
154
  Thank you to all my generous patrons and donaters!
155
 
156
  <!-- footer end -->
157
 
158
- # Original model card: LmSys' Vicuna 33B (final)
159
 
160
 
161
  # Vicuna Model Card
@@ -194,8 +230,7 @@ See more details in the "Training Details of Vicuna Models" section in the appen
194
 
195
  ## Evaluation
196
 
197
- Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
198
 
199
  ## Difference between different versions of Vicuna
200
  See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
201
-
 
17
  </div>
18
  <!-- header end -->
19
 
20
+ # LmSys' Vicuna 33B 1.3 GPTQ
21
 
22
+ These files are GPTQ model files for [LmSys' Vicuna 33B 1.3](https://huggingface.co/lmsys/vicuna-33b-v1.3).
23
 
24
+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
25
 
26
+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
27
 
28
  ## Repositories available
29
 
30
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/vicuna-33B-GPTQ)
31
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-33B-GGML)
32
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3)
33
 
34
+ ## Prompt template: Vicuna
35
 
36
  ```
37
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
38
+
39
+ USER: {prompt}
40
  ASSISTANT:
41
+
42
+ ```
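+
+ As a small illustration (not part of the original template), the full prompt string can be assembled in Python like this; the `user_message` variable is a placeholder, not a name used elsewhere in this repo:
+
+ ```python
+ # Minimal sketch: fill the Vicuna prompt template above with a user message.
+ user_message = "Tell me about AI"
+ prompt = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"
+     f"USER: {user_message}\nASSISTANT:"
+ )
+ ```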
43
+
44
+ ## Provided files
45
+
46
+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
47
+
48
+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
49
+
50
+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
51
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
52
+ | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
53
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
54
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
55
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
56
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
57
+ | gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
58
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
59
+
60
+ ## How to download from branches
61
+
62
+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/vicuna-33B-GPTQ:gptq-4bit-32g-actorder_True`
63
+ - With Git, you can clone a branch with:
64
+ ```
65
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/vicuna-33B-GPTQ
66
  ```
67
+ - In Python Transformers code, the branch is the `revision` parameter; see below.
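+
+ For reference, a minimal sketch of downloading a specific branch from Python with `huggingface_hub` (assumed to be installed separately) might look like this; the branch name comes from the Provided Files table above:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the chosen branch into the local Hugging Face cache and
+ # return the local directory containing the files.
+ local_dir = snapshot_download(
+     repo_id="TheBloke/vicuna-33B-GPTQ",
+     revision="gptq-4bit-32g-actorder_True",
+ )
+ print(local_dir)
+ ```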
68
 
69
+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
70
 
71
+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
72
+
73
+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to do a manual install.
74
 
75
  1. Click the **Model tab**.
76
  2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-33B-GPTQ`.
77
+ - To download from a specific branch, enter for example `TheBloke/vicuna-33B-GPTQ:gptq-4bit-32g-actorder_True`
78
+ - See Provided Files above for the list of branches for each option.
79
  3. Click **Download**.
80
  4. The model will start downloading. Once it's finished it will say "Done"
81
  5. In the top left, click the refresh icon next to **Model**.
82
  6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-33B-GPTQ`
83
  7. The model will automatically load, and is now ready for use!
84
  8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
85
+ * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
86
  9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
87
 
88
  ## How to use this GPTQ model from Python code
89
 
90
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
91
 
92
+ `GITHUB_ACTIONS=true pip install auto-gptq`
93
 
94
  Then try the following example code:
95
 
96
  ```python
97
  from transformers import AutoTokenizer, pipeline, logging
98
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
99
 
100
  model_name_or_path = "TheBloke/vicuna-33B-GPTQ"
101
  model_basename = "vicuna-33b-GPTQ-4bit--1g.act.order"
 
105
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
106
 
107
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
108
+ model_basename=model_basename,
109
  use_safetensors=True,
110
+ trust_remote_code=True,
111
  device="cuda:0",
112
  use_triton=use_triton,
113
  quantize_config=None)
114
 
115
+ """
116
+ To download from a specific branch, use the revision parameter, as in this example:
117
+
118
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
119
+ revision="gptq-4bit-32g-actorder_True",
120
+ model_basename=model_basename,
121
+ use_safetensors=True,
122
+ trust_remote_code=True,
123
+ device="cuda:0",
124
+ quantize_config=None)
125
+ """
126
+
127
  prompt = "Tell me about AI"
128
+ prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
129
+
130
+ USER: {prompt}
131
+ ASSISTANT:
132
+
133
+ '''
134
 
135
  print("\n\n*** Generate:")
136
 
 
157
  print(pipe(prompt_template)[0]['generated_text'])
158
  ```
159
 
160
+ ## Compatibility
161
 
162
+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
163
 
164
+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
165
 
166
  <!-- footer start -->
167
  ## Discord
 
183
  * Patreon: https://patreon.com/TheBlokeAI
184
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
185
 
186
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
187
 
188
+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
189
 
190
  Thank you to all my generous patrons and donaters!
191
 
192
  <!-- footer end -->
193
 
194
+ # Original model card: LmSys' Vicuna 33B 1.3
195
 
196
 
197
  # Vicuna Model Card
 
230
 
231
  ## Evaluation
232
 
233
+ Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
234
 
235
  ## Difference between different versions of Vicuna
236
  See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)