TheBloke committed
Commit 6f4079d
1 Parent(s): 86d2d19

Update README.md

Files changed (1)
  1. README.md +10 -131
README.md CHANGED
@@ -38,6 +38,14 @@ quantized_by: TheBloke
 
 This repo contains GPTQ model files for [Charles Goddard's Mixtralnt 4X7B Test](https://huggingface.co/chargoddard/mixtralnt-4x7b-test).
 
+ ## Requires AutoGPTQ PR + transformers 4.36.0
+
+ These files were made with, and will currently only work with, this AutoGPTQ PR: https://github.com/LaaZa/AutoGPTQ/tree/Mixtral-fix
+
+ To test, please build AutoGPTQ from source using that PR. You also need Transformers version 4.36.0, released December 11th.
+
+ Transformers support has also just arrived via two PRs, and is expected in main Transformers + Optimum tomorrow (Dec 12th).
+
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
 <!-- description end -->
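The new section above tells readers to build AutoGPTQ from source from that PR. As a rough illustration (not part of this commit), the steps might look like the following; it assumes the LaaZa fork installs the same way as upstream AutoGPTQ (a plain pip install from a source checkout, as in the README's existing from-source instructions) and that the branch name matches the PR URL.

```shell
# Sketch only: build AutoGPTQ from the Mixtral-fix branch of the LaaZa fork,
# assuming it installs like upstream AutoGPTQ (pip install from a source checkout).
pip3 uninstall -y auto-gptq
git clone --branch Mixtral-fix https://github.com/LaaZa/AutoGPTQ
cd AutoGPTQ
pip3 install .

# Transformers 4.36.0 (released December 11th) is also required.
pip3 install "transformers>=4.36.0" optimum
```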
@@ -60,22 +68,6 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- prompt-template end -->
 
 
-
- <!-- README_GPTQ.md-compatible clients start -->
- ## Known compatible clients / servers
-
- GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
-
- These GPTQ models are known to work in the following inference servers/webuis.
-
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
-
- This may not be a complete list; if you know of others, please let me know!
- <!-- README_GPTQ.md-compatible clients end -->
-
 <!-- README_GPTQ.md-provided-files start -->
 ## Provided files, and GPTQ parameters
 
@@ -181,6 +173,8 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
 <!-- README_GPTQ.md-text-generation-webui start -->
 ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
+ **WILL ONLY WORK WITH TRANSFORMERS 4.36.0 PLUS AUTOGPTQ FROM FORK LISTED IN DESCRIPTION**
+
 Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
 It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
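Because the warning added above hinges on exact library versions, a quick way to confirm what is installed in the environment text-generation-webui uses might be (a hypothetical check, not part of the README):

```shell
# Sketch only: print the installed Transformers and AutoGPTQ versions.
pip3 show transformers auto-gptq | grep -E "^(Name|Version):"
```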
@@ -204,122 +198,7 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 
 <!-- README_GPTQ.md-text-generation-webui end -->
 
- <!-- README_GPTQ.md-use-from-tgi start -->
- ## Serving this model from Text Generation Inference (TGI)
-
- It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
-
- Example Docker parameters:
-
- ```shell
- --model-id TheBloke/mixtralnt-4x7b-test-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
- ```
-
- Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- ```python
- from huggingface_hub import InferenceClient
-
- endpoint_url = "https://your-endpoint-url-here"
-
- prompt = "Tell me about AI"
- prompt_template=f'''{prompt}
- '''
-
- client = InferenceClient(endpoint_url)
- response = client.text_generation(prompt,
-     max_new_tokens=128,
-     do_sample=True,
-     temperature=0.7,
-     top_p=0.95,
-     top_k=40,
-     repetition_penalty=1.1)
-
- print(f"Model output: {response}")
- ```
- <!-- README_GPTQ.md-use-from-tgi end -->
- <!-- README_GPTQ.md-use-from-python start -->
- ## Python code example: inference from this GPTQ model
-
- ### Install the necessary packages
-
- Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
-
- ```shell
- pip3 install --upgrade transformers optimum
- # If using PyTorch 2.1 + CUDA 12.x:
- pip3 install --upgrade auto-gptq
- # or, if using PyTorch 2.1 + CUDA 11.x:
- pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
- ```
-
- If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
-
- ```shell
- pip3 uninstall -y auto-gptq
- git clone https://github.com/PanQiWei/AutoGPTQ
- cd AutoGPTQ
- git checkout v0.5.1
- pip3 install .
- ```
-
- ### Example Python code
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
-
- model_name_or_path = "TheBloke/mixtralnt-4x7b-test-GPTQ"
- # To use a different branch, change revision
- # For example: revision="gptq-4bit-128g-actorder_True"
- model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
-     device_map="auto",
-     trust_remote_code=False,
-     revision="main")
-
- tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
-
- prompt = "Tell me about AI"
- prompt_template=f'''{prompt}
- '''
-
- print("\n\n*** Generate:")
-
- input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
- output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
- print(tokenizer.decode(output[0]))
-
- # Inference can also be done using transformers' pipeline
-
- print("*** Pipeline:")
- pipe = pipeline(
-     "text-generation",
-     model=model,
-     tokenizer=tokenizer,
-     max_new_tokens=512,
-     do_sample=True,
-     temperature=0.7,
-     top_p=0.95,
-     top_k=40,
-     repetition_penalty=1.1
- )
-
- print(pipe(prompt_template)[0]['generated_text'])
- ```
- <!-- README_GPTQ.md-use-from-python end -->
-
- <!-- README_GPTQ.md-compatibility start -->
- ## Compatibility
-
- The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
-
- [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
- For a list of clients/servers, please see "Known compatible clients / servers", above.
- <!-- README_GPTQ.md-compatibility end -->
 
 <!-- footer start -->
 <!-- 200823 -->
 