TheBloke committed on
Commit fccadc6
1 Parent(s): bbf1043

Upload README.md

Files changed (1):
  README.md +46 -52

README.md CHANGED
@@ -11,10 +11,34 @@ license: apache-2.0
  model_creator: Mistral AI_
  model_name: Mixtral 8X7B Instruct v0.1
  model_type: mixtral
- prompt_template: '<s>[INST] {prompt} [/INST]

  '
  quantized_by: TheBloke
  ---
  <!-- markdownlint-disable MD041 -->

@@ -44,6 +68,11 @@ quantized_by: TheBloke

  This repo contains GPTQ model files for [Mistral AI_'s Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

  Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

  <!-- description end -->
@@ -60,7 +89,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
  ## Prompt template: Mistral

  ```
- <s>[INST] {prompt} [/INST]

  ```

@@ -73,14 +102,8 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for

  GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.

- These GPTQ models are known to work in the following inference servers/webuis.
-
- - [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- - [KoboldAI United](https://github.com/henk717/koboldai)
- - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

- This may not be a complete list; if you know of others, please let me know!
  <!-- README_GPTQ.md-compatible clients end -->

  <!-- README_GPTQ.md-provided-files start -->
@@ -187,6 +210,12 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
  <!-- README_GPTQ.md-text-generation-webui start -->
  ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

  Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

  It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
@@ -213,50 +242,18 @@ It is strongly recommended to use the text-generation-webui one-click-installers
  <!-- README_GPTQ.md-use-from-tgi start -->
  ## Serving this model from Text Generation Inference (TGI)

- It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
-
- Example Docker parameters:
-
- ```shell
- --model-id TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
- ```
-
- Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- ```python
- from huggingface_hub import InferenceClient
-
- endpoint_url = "https://your-endpoint-url-here"
-
- prompt = "Tell me about AI"
- prompt_template=f'''<s>[INST] {prompt} [/INST]
- '''
-
- client = InferenceClient(endpoint_url)
- response = client.text_generation(prompt,
-                                   max_new_tokens=128,
-                                   do_sample=True,
-                                   temperature=0.7,
-                                   top_p=0.95,
-                                   top_k=40,
-                                   repetition_penalty=1.1)
-
- print(f"Model output: {response}")
- ```
  <!-- README_GPTQ.md-use-from-tgi end -->
  <!-- README_GPTQ.md-use-from-python start -->
  ## Python code example: inference from this GPTQ model

  ### Install the necessary packages

- Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

  ```shell
- pip3 install --upgrade transformers optimum
  # If using PyTorch 2.1 + CUDA 12.x:
  pip3 install --upgrade auto-gptq
  # or, if using PyTorch 2.1 + CUDA 11.x:
@@ -269,8 +266,7 @@ If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Lik
  pip3 uninstall -y auto-gptq
  git clone https://github.com/PanQiWei/AutoGPTQ
  cd AutoGPTQ
- git checkout v0.5.1
- pip3 install .
  ```

  ### Example Python code
@@ -288,8 +284,9 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,

  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

- prompt = "Tell me about AI"
- prompt_template=f'''<s>[INST] {prompt} [/INST]
  '''

  print("\n\n*** Generate:")
@@ -320,11 +317,8 @@ print(pipe(prompt_template)[0]['generated_text'])
  <!-- README_GPTQ.md-compatibility start -->
  ## Compatibility

- The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
-
- [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

- For a list of clients/servers, please see "Known compatible clients / servers", above.
  <!-- README_GPTQ.md-compatibility end -->

  <!-- footer start -->

  model_creator: Mistral AI_
  model_name: Mixtral 8X7B Instruct v0.1
  model_type: mixtral
+ prompt_template: '[INST] {prompt} [/INST]

  '
  quantized_by: TheBloke
+ widget:
+ - output:
+     text: 'Arr, shiver me timbers! Ye have a llama on yer lawn, ye say? Well, that
+       be a new one for me! Here''s what I''d suggest, arr:
+
+
+       1. Firstly, ensure yer safety. Llamas may look gentle, but they can be protective
+       if they feel threatened.
+
+       2. Try to make the area less appealing to the llama. Remove any food sources
+       or water that might be attracting it.
+
+       3. Contact local animal control or a wildlife rescue organization. They be the
+       experts and can provide humane ways to remove the llama from yer property.
+
+       4. If ye have any experience with animals, you could try to gently herd the
+       llama towards a nearby field or open space. But be careful, arr!
+
+
+       Remember, arr, it be important to treat the llama with respect and care. It
+       be a creature just trying to survive, like the rest of us.'
+   text: '[INST] You are a pirate chatbot who always responds with Arr and pirate speak!
+
+     There''s a llama on my lawn, how can I get rid of him? [/INST]'
  ---
  <!-- markdownlint-disable MD041 -->


  This repo contains GPTQ model files for [Mistral AI_'s Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

+ Mixtral GPTQs currently require:
+ * Transformers 4.36.0 or later
+ * either AutoGPTQ 0.6 compiled from source, or
+ * Transformers 4.37.0.dev0, installed from Github with: `pip3 install git+https://github.com/huggingface/transformers`
+
  Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

  <!-- description end -->
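
The minimum-version rules added above are easy to get wrong in a mixed environment. A minimal sketch, not part of the README itself, for checking the installed Transformers against the 4.36.0 floor (the comparison helper is illustrative):

```python
# Minimal sketch (not from the README): check whether the installed
# Transformers meets the 4.36.0 minimum required for Mixtral GPTQs.
import importlib.metadata as md

def at_least(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically; dev/rc suffixes are ignored."""
    norm = lambda v: [int(p) for p in v.split(".") if p.isdigit()]
    return norm(installed) >= norm(minimum)

try:
    ver = md.version("transformers")
    print("transformers", ver, "-", "OK" if at_least(ver, "4.36.0") else "too old")
except md.PackageNotFoundError:
    print("transformers is not installed")
```

Note that `4.37.0.dev0` compares as `4.37.0` here, which is the intended behaviour for a Github install.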
 
  ## Prompt template: Mistral

  ```
+ [INST] {prompt} [/INST]

  ```
 
 

  GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.

+ Mixtral GPTQs currently have special requirements - see Description above.

  <!-- README_GPTQ.md-compatible clients end -->

  <!-- README_GPTQ.md-provided-files start -->
 
  <!-- README_GPTQ.md-text-generation-webui start -->
  ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

+ **NOTE**: Requires:
+
+ * Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github
+ * either AutoGPTQ 0.6 compiled from source, with `Loader: AutoGPTQ`,
+ * or `Loader: Transformers`, if you installed Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers`
+
  Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

  It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
 
  <!-- README_GPTQ.md-use-from-tgi start -->
  ## Serving this model from Text Generation Inference (TGI)

+ Not currently supported for Mixtral models.

  <!-- README_GPTQ.md-use-from-tgi end -->
  <!-- README_GPTQ.md-use-from-python start -->
  ## Python code example: inference from this GPTQ model

  ### Install the necessary packages

+ Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later.

  ```shell
+ pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum
  # If using PyTorch 2.1 + CUDA 12.x:
  pip3 install --upgrade auto-gptq
  # or, if using PyTorch 2.1 + CUDA 11.x:

  pip3 uninstall -y auto-gptq
  git clone https://github.com/PanQiWei/AutoGPTQ
  cd AutoGPTQ
+ DISABLE_QIGEN=1 pip3 install .
  ```

  ### Example Python code
 

  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

+ prompt = "Write a story about llamas"
+ system_message = "You are a story writing assistant"
+ prompt_template=f'''[INST] {prompt} [/INST]
  '''

  print("\n\n*** Generate:")
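
As a side note on the added lines: `system_message` is assigned but never interpolated into the template. Mistral-Instruct has no dedicated system role, so a system message is conventionally prepended inside the first `[INST]` block. A hypothetical helper (the function name is illustrative, not from the README) sketching that convention:

```python
# Hypothetical helper, not part of the README: builds the same string as the
# f-string template above, optionally folding a system message into the
# first [INST] block (Mistral-Instruct has no separate system role).
def build_mistral_prompt(prompt: str, system_message: str = "") -> str:
    body = f"{system_message}\n\n{prompt}" if system_message else prompt
    return f"[INST] {body} [/INST]\n"

print(build_mistral_prompt("Write a story about llamas",
                           system_message="You are a story writing assistant"))
```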
 
  <!-- README_GPTQ.md-compatibility start -->
  ## Compatibility

+ The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github).

  <!-- README_GPTQ.md-compatibility end -->

  <!-- footer start -->