TheBloke committed on
Commit a3a0dac
1 Parent(s): f1fed55

Update README.md

Files changed (1):
  1. README.md +20 -7
README.md CHANGED
@@ -42,6 +42,17 @@ This repo contains GPTQ model files for [Mistral AI's Mistral 7B Instruct v0.1](
 
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
+### GPTQs will work in Transformers only - and require Transformers from GitHub
+
+At the time of writing (September 28th), AutoGPTQ has not yet added support for the new Mistral models.
+
+These GPTQs were made directly with Transformers, and so can only be loaded via the Transformers interface. They can't be loaded directly from AutoGPTQ.
+
+In addition, you will need to install Transformers from GitHub, with:
+```
+pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
+```
+
 <!-- description end -->
 <!-- repositories-available start -->
 ## Repositories available
@@ -70,7 +81,7 @@ Multiple quantisation parameters are provided, to allow you to choose the best o
 
 Each separate quant is in a different branch. See below for instructions on fetching from different branches.
 
-All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
+These files were made with Transformers 4.34.0.dev0, from commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.
 
 <details>
 <summary>Explanation of GPTQ parameters</summary>
@@ -164,6 +175,10 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
 <!-- README_GPTQ.md-text-generation-webui start -->
 ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
+NOTE: These models haven't been tested in text-generation-webui, but I hope they will work.
+
+You will need to use **Loader: Transformers**. AutoGPTQ will not work. I don't know about ExLlama - it might work, as this model is so similar to Llama; let me know if it does!
+
 Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
 It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
@@ -187,10 +202,11 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 
 ### Install the necessary packages
 
-Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
+Requires: Transformers 4.34.0.dev0 from GitHub source, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
 
 ```shell
-pip3 install transformers optimum
+pip3 install optimum
+pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
 ```
 
@@ -251,11 +267,8 @@ print(pipe(prompt_template)[0]['generated_text'])
 <!-- README_GPTQ.md-compatibility start -->
 ## Compatibility
 
-The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
-
-[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
+The files provided are only tested to work with Transformers 4.34.0.dev0 as of commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.
 
-[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
 <!-- README_GPTQ.md-compatibility end -->
 
 <!-- footer start -->
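For context on the changed compatibility section: since these GPTQs can only be loaded through the Transformers interface (not AutoGPTQ), loading them might be sketched roughly as below. This is a minimal sketch, not the repo's official instructions; the repo id is an assumption for illustration, the version check is a simplified pre-flight helper, and the guarded block downloads weights and needs a GPU, so it only runs when executed as a script.

```python
def transformers_supports_mistral(installed: str) -> bool:
    """Simplified pre-flight check: Mistral support is only in Transformers
    4.34.0.dev0 and later, which at the time of the commit meant installing
    from the pinned GitHub revision rather than a PyPI release."""
    major, minor = (int(x) for x in installed.split(".")[:2])
    return (major, minor) >= (4, 34)


if __name__ == "__main__":
    # Assumed repo id, for illustration only; use the actual model page's id.
    model_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"

    import transformers
    from transformers import AutoModelForCausalLM, AutoTokenizer

    if not transformers_supports_mistral(transformers.__version__):
        raise RuntimeError(
            "Install Transformers from the pinned GitHub commit first."
        )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # Loader must be Transformers, not AutoGPTQ. `revision` selects a branch;
    # each quant lives in its own branch, with the default quant on `main`.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        revision="main",
    )
```

The helper only compares major/minor numbers, which is enough to distinguish the 4.33 releases (no Mistral support) from the pinned 4.34.0.dev0 build.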