TheBloke committed on
Commit 093d275
1 Parent(s): d167062

Upload new GPTQs with varied parameters

Files changed (1)
  1. README.md +70 -32
README.md CHANGED
@@ -1,6 +1,8 @@
  ---
  inference: false
  license: other
  ---
 
  <!-- header start -->
@@ -9,7 +11,7 @@ license: other
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -19,52 +21,82 @@ license: other
 
  # LmSys' Vicuna 7B v1.3 GPTQ
 
- These files are GPTQ 4bit model files for [LmSys' Vicuna 7B v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3).
 
- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
  ## Repositories available
 
- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-7B-v1.3-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-7b-v1.3)
 
- ## Prompt template
 
  ```
  A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
 
- USER: prompt
  ASSISTANT:
  ```
 
- ## How to easily download and use this model in text-generation-webui
 
- Please make sure you're using the latest version of text-generation-webui
 
  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-7B-v1.3-GPTQ`.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done".
  5. In the top left, click the refresh icon next to **Model**.
  6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-7B-v1.3-GPTQ`
  7. The model will automatically load, and is now ready for use!
  8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
-   * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
  9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
 
  ## How to use this GPTQ model from Python code
 
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
 
- `pip install auto-gptq`
 
  Then try the following example code:
 
  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse
 
  model_name_or_path = "TheBloke/vicuna-7B-v1.3-GPTQ"
  model_basename = "vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order"
@@ -74,17 +106,32 @@ use_triton = False
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         model_basename=model_basename,
          use_safetensors=True,
-         trust_remote_code=False,
          device="cuda:0",
          use_triton=use_triton,
          quantize_config=None)
 
- # Note: check the prompt template is correct for this model.
  prompt = "Tell me about AI"
- prompt_template=f'''USER: {prompt}
- ASSISTANT:'''
 
  print("\n\n*** Generate:")
 
@@ -111,27 +158,18 @@ pipe = pipeline(
  print(pipe(prompt_template)[0]['generated_text'])
  ```
 
- ## Provided files
-
- **vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order.safetensors**
-
- This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
 
- It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.
 
- * `vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order.safetensors`
-   * Works with AutoGPTQ in CUDA or Triton modes.
-   * Works with ExLlama.
-   * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
-   * Works with text-generation-webui, including one-click-installers.
-   * Parameters: Groupsize = 128. Act Order / desc_act = False.
 
  <!-- footer start -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
 
- [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
 
  ## Thanks, and how to contribute.
 
@@ -146,9 +184,9 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
 
- **Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
 
  Thank you to all my generous patrons and donaters!
 
@@ -193,7 +231,7 @@ See more details in the "Training Details of Vicuna Models" section in the appen
 
  ## Evaluation
 
- Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
 
  ## Difference between different versions of Vicuna
  See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
 
  ---
  inference: false
  license: other
+ model_type: llama
+
  ---
 
  <!-- header start -->
 
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 
  # LmSys' Vicuna 7B v1.3 GPTQ
 
+ These files are GPTQ model files for [LmSys' Vicuna 7B v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3).
+
+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options, their parameters, and the software used to create them.
 
+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
 
  ## Repositories available
 
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/vicuna-7B-v1.3-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-7b-v1.3)
 
+ ## Prompt template: Vicuna
 
  ```
  A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
 
+ USER: {prompt}
  ASSISTANT:
+
+ ```
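As an illustrative aside (not part of the original README), a minimal Python sketch of filling this template; `build_prompt` is a hypothetical helper name:

```python
# Hypothetical helper: substitute the user's message into the Vicuna template above.
def build_prompt(user_message: str) -> str:
    system = ("A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions.")
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Tell me about AI"))
```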
+
+ ## Provided files
+
+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
+
+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
+
+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality, and without Act Order to improve AutoGPTQ speed. |
+
+ ## How to download from branches
+
+ - In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/vicuna-7B-v1.3-GPTQ:gptq-4bit-32g-actorder_True`
+ - With Git, you can clone a branch with:
+ ```
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/vicuna-7B-v1.3-GPTQ
+ ```
+ - In Python Transformers code, the branch is the `revision` parameter; see the example code below, and the download sketch right after this list.
+
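For reference, a minimal Python sketch of the same branch download using the `huggingface_hub` library (an illustrative addition, assuming `huggingface_hub` is installed; it is not part of the original README):

```python
# Fetch one quantisation branch into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/vicuna-7B-v1.3-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch name from the table above
)
print(local_path)  # directory containing the .safetensors and config files
```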
+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
 
  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-7B-v1.3-GPTQ`.
+     - To download from a specific branch, enter for example `TheBloke/vicuna-7B-v1.3-GPTQ:gptq-4bit-32g-actorder_True`
+     - See Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done".
  5. In the top left, click the refresh icon next to **Model**.
  6. In the **Model** dropdown, choose the model you just downloaded: `vicuna-7B-v1.3-GPTQ`
  7. The model will automatically load, and is now ready for use!
  8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
+   * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
  9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
 
  ## How to use this GPTQ model from Python code
 
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
 
+ `GITHUB_ACTIONS=true pip install auto-gptq`
 
  Then try the following example code:
 
  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
  model_name_or_path = "TheBloke/vicuna-7B-v1.3-GPTQ"
  model_basename = "vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order"
 
  use_triton = False
 
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         model_basename=model_basename,
          use_safetensors=True,
+         trust_remote_code=True,
          device="cuda:0",
          use_triton=use_triton,
          quantize_config=None)
 
+ """
+ To download from a specific branch, use the revision parameter, as in this example:
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         revision="gptq-4bit-32g-actorder_True",
+         model_basename=model_basename,
+         use_safetensors=True,
+         trust_remote_code=True,
+         device="cuda:0",
+         quantize_config=None)
+ """
 
  prompt = "Tell me about AI"
+ prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
+
+ USER: {prompt}
+ ASSISTANT:
+
+ '''
 
  print("\n\n*** Generate:")
 
  print(pipe(prompt_template)[0]['generated_text'])
  ```
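Note that the `pipe = pipeline(...)` setup is unchanged context elided from the hunk above. As an illustrative alternative (not the README's elided code), here is a minimal sketch of calling `generate()` directly on the `model` and `tokenizer` created earlier, with assumed sampling settings:

```python
# Tokenize the filled prompt, generate on the GPU, and decode the completion.
input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(
    inputs=input_ids,
    do_sample=True,
    temperature=0.7,      # assumed sampling settings, for illustration only
    max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```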
 
+ ## Compatibility
 
+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
 
+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
  <!-- footer start -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
 
+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
  ## Thanks, and how to contribute.
 
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
 
  Thank you to all my generous patrons and donaters!
 
 
 
  ## Evaluation
 
+ Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and on the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
 
  ## Difference between different versions of Vicuna
  See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)