TheBloke committed
Commit 9dc8df9
1 Parent(s): 21c79ed

Upload README.md

Files changed (1): README.md (+19, -53)
README.md CHANGED
@@ -57,6 +57,11 @@ quantized_by: TheBloke
 
 This repo contains GPTQ model files for [Eric Hartford's Dolphin 2.5 Mixtral 8X7B](https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b).
 
+ Mixtral GPTQs currently require:
+ * Transformers 4.36.0 or later, and
+ * either AutoGPTQ 0.6 compiled from source, or
+ * Transformers 4.37.0.dev0 installed from Github with: `pip3 install git+https://github.com/huggingface/transformers`
+
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
 <!-- description end -->
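Given the version floors above, it is easy to mis-install one of the three packages. A small check like the following can confirm an environment matches them (an illustrative sketch: the version floors come from the description and the install section below, the check itself is not part of the README):

```python
# Sanity-check installed versions against the Mixtral GPTQ requirements above.
# The floors are from the README; this script is the editor's illustration.
from importlib.metadata import version, PackageNotFoundError

requirements = [
    ("transformers", "4.36.0"),  # or 4.37.0.dev0 from Github for the Transformers loader
    ("optimum", "1.16.0"),
    ("auto-gptq", "0.6.0"),      # 0.6 currently has to be compiled from source
]
for package, floor in requirements:
    try:
        print(f"{package}: {version(package)} (need >= {floor})")
    except PackageNotFoundError:
        print(f"{package}: not installed (need >= {floor})")
```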
@@ -89,14 +94,8 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 
 GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
 
- These GPTQ models are known to work in the following inference servers/webuis.
-
- - [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- - [KoboldAI United](https://github.com/henk717/koboldai)
- - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ Mixtral GPTQs currently have special requirements - see Description above.
 
- This may not be a complete list; if you know of others, please let me know!
 <!-- README_GPTQ.md-compatible clients end -->
 
 <!-- README_GPTQ.md-provided-files start -->
@@ -204,6 +203,12 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
 <!-- README_GPTQ.md-text-generation-webui start -->
 ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
 
+ **NOTE**: Requires:
+
+ * Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github
+ * either AutoGPTQ 0.6 compiled from source, with `Loader: AutoGPTQ`, or
+ * `Loader: Transformers`, if you installed Transformers from Github (`pip3 install git+https://github.com/huggingface/transformers`)
+
 Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
 It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
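The hunk header above quotes the README's warning that Git clones of HF repos are slow. For orientation, the non-Git route TheBloke's READMEs of this period use is the huggingface-hub CLI; a minimal sketch (the exact flags are this editor's assumption and do not appear in the diff):

```shell
# Download the repo without Git, via the huggingface-hub CLI (illustrative).
pip3 install huggingface-hub
huggingface-cli download TheBloke/dolphin-2.5-mixtral-8x7b-GPTQ --local-dir dolphin-2.5-mixtral-8x7b-GPTQ
```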
@@ -230,54 +235,18 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 <!-- README_GPTQ.md-use-from-tgi start -->
 ## Serving this model from Text Generation Inference (TGI)
 
- It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
-
- Example Docker parameters:
-
- ```shell
- --model-id TheBloke/dolphin-2.5-mixtral-8x7b-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
- ```
-
- Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- ```python
- from huggingface_hub import InferenceClient
-
- endpoint_url = "https://your-endpoint-url-here"
+ Not currently supported for Mixtral models.
 
- prompt = "Tell me about AI"
- prompt_template=f'''<|im_start|>system
- {system_message}<|im_end|>
- <|im_start|>user
- {prompt}<|im_end|>
- <|im_start|>assistant
- '''
-
- client = InferenceClient(endpoint_url)
- response = client.text_generation(prompt,
-                                   max_new_tokens=128,
-                                   do_sample=True,
-                                   temperature=0.7,
-                                   top_p=0.95,
-                                   top_k=40,
-                                   repetition_penalty=1.1)
-
- print(f"Model output: {response}")
- ```
 <!-- README_GPTQ.md-use-from-tgi end -->
 <!-- README_GPTQ.md-use-from-python start -->
 ## Python code example: inference from this GPTQ model
 
 ### Install the necessary packages
 
- Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
+ Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later.
 
 ```shell
- pip3 install --upgrade transformers optimum
+ pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum
 # If using PyTorch 2.1 + CUDA 12.x:
 pip3 install --upgrade auto-gptq
 # or, if using PyTorch 2.1 + CUDA 11.x:
@@ -290,8 +259,7 @@ If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Lik
 pip3 uninstall -y auto-gptq
 git clone https://github.com/PanQiWei/AutoGPTQ
 cd AutoGPTQ
- git checkout v0.5.1
- pip3 install .
+ DISABLE_QIGEN=1 pip3 install .
 ```
 
 ### Example Python code
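Applied by hand, the from-source build in the hunk above reads as follows; the comment on `DISABLE_QIGEN` is this editor's reading of AutoGPTQ's build script and is not stated in the diff:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
# DISABLE_QIGEN=1 skips building the optional QiGen CPU kernels,
# which commonly fail to compile (editor's assumption).
DISABLE_QIGEN=1 pip3 install .
```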
@@ -309,7 +277,8 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
 
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 
- prompt = "Tell me about AI"
+ prompt = "Write a story about llamas"
+ system_message = "You are a story writing assistant"
 prompt_template=f'''<|im_start|>system
 {system_message}<|im_end|>
 <|im_start|>user
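Only fragments of the README's Python example appear in the hunks. Assembled, the post-patch example reads roughly as follows; the `device_map` and `revision` arguments complete the truncated `from_pretrained` call in the hunk header and are assumptions, and the sampling parameters are borrowed from the TGI example removed earlier in this diff:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/dolphin-2.5-mixtral-8x7b-GPTQ"

# The hunk header shows this call truncated; device_map/revision are assumed.
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

# Sampling parameters borrowed from the removed TGI example (an assumption here).
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)
print(pipe(prompt_template)[0]['generated_text'])
```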
@@ -345,11 +314,8 @@ print(pipe(prompt_template)[0]['generated_text'])
 <!-- README_GPTQ.md-compatibility start -->
 ## Compatibility
 
- The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
-
- [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
+ The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github).
 
- For a list of clients/servers, please see "Known compatible clients / servers", above.
 <!-- README_GPTQ.md-compatibility end -->
 
 <!-- footer start -->
 