TheBloke committed on
Commit 9d2f75e
1 Parent(s): cfd3a3f

Update README.md

Files changed (1):
  1. README.md +15 -34
README.md CHANGED
@@ -29,58 +29,39 @@ Join me at: https://discord.gg/UBgz4VXf
 
 ## EXPERIMENTAL
 
-Please note this is an experimental first model. Support for it is currently quite limited.
 
 To use it you will require:
 
 1. AutoGPTQ, from the latest `main` branch and compiled with `pip install .`
 2. `pip install einops`
 
-You can then use it immediately from Python code - see example code below
-
-## text-generation-webui
-
-There is also provisional AutoGPTQ support in text-generation-webui.
-
-However at the time I'm writing this, a commit is needed to text-generation-webui to enable it to load this model.
 
-A [PR has been opened](https://github.com/oobabooga/text-generation-webui/pull/2367) which will provide support for this model.
 
-To get it working before the PR is merged, you will need to:
-1. Edit `text-generation-webui/modules/AutoGPTQ_loader.py`
-2. Make the following change:
-
-Find the line that says:
-```
-'use_safetensors': use_safetensors,
-```
-
-And after it, add:
 ```
-'trust_remote_code': shared.args.trust_remote_code,
 ```
 
-[Once you are done the file should look like this](https://github.com/oobabooga/text-generation-webui/blob/473a57e35219c063d2fc230cfc7b5a118b448b38/modules/AutoGPTQ_loader.py#L33-L39)
 
-3. Install `einops` if you don't already have it:
-
-```
-pip install einops
-```
 
-4. Install the latest AutoGPTQ and compile from source - note that this requires compiling the CUDA kernel, which requires CUDA toolkit. This may be an issue for Windows users.
 
-```
-git clone https://github.com/PanQiWei/AutoGPTQ
-cd AutoGPTQ
-pip install . # This step requires CUDA toolkit installed
-```
 
-5. Then launch text-generation-webui as described below
 
 ## How to download and use this model in text-generation-webui
 
-1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust_remote_code`
 2. Click the **Model tab**.
 3. Under **Download custom model or LoRA**, enter `TheBloke/falcon-7B-instruct-GPTQ`.
 4. Click **Download**.
 
 
 ## EXPERIMENTAL
 
+Please note this is an experimental GPTQ model. Support for it is currently quite limited.
+
+It is also expected to be **VERY SLOW**. This is unavoidable at the moment, but is being looked at.
 
 To use it you will require:
 
 1. AutoGPTQ, from the latest `main` branch and compiled with `pip install .`
 2. `pip install einops`
 
+You can then use it immediately from Python code - see example code below - or from text-generation-webui.
 
+## AutoGPTQ
 
+To install AutoGPTQ please follow these instructions:
 ```
+git clone https://github.com/PanQiWei/AutoGPTQ
+cd AutoGPTQ
+pip install .
 ```
 
+These steps will require that you have the [Nvidia CUDA toolkit](https://developer.nvidia.com/cuda-12-0-1-download-archive) installed.
 
+## text-generation-webui
 
+There is also provisional AutoGPTQ support in text-generation-webui.
 
+This requires text-generation-webui as of commit 204731952ae59d79ea3805a425c73dd171d943c3.
 
+So please first update text-generation-webui to the latest version.
 
 ## How to download and use this model in text-generation-webui
 
+1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust-remote-code`
 2. Click the **Model tab**.
 3. Under **Download custom model or LoRA**, enter `TheBloke/falcon-7B-instruct-GPTQ`.
 4. Click **Download**.
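
The updated README refers to "example code below" that is not part of this diff. As a rough sketch (not from the commit), loading the model from Python with AutoGPTQ might look like the following. The model ID comes from the README; the `use_safetensors` and `trust_remote_code` flags mirror the settings discussed in this change, but the helper function names here are hypothetical and the exact `from_quantized()` call is an assumption about the AutoGPTQ API.

```python
# Hypothetical usage sketch, assuming a CUDA GPU, AutoGPTQ compiled from
# source as described above, and `einops` + `transformers` installed.

MODEL_ID = "TheBloke/falcon-7B-instruct-GPTQ"


def quantized_load_kwargs(device: str = "cuda:0") -> dict:
    """Keyword arguments for AutoGPTQForCausalLM.from_quantized().

    trust_remote_code=True is needed because Falcon ships custom modelling
    code; use_safetensors=True matches the repo's file format.
    """
    return {
        "device": device,
        "use_safetensors": True,
        "trust_remote_code": True,
    }


def generate(prompt: str, max_new_tokens: int = 100) -> str:
    """Load the quantized model and generate a completion (requires a GPU)."""
    from transformers import AutoTokenizer        # pip install transformers
    from auto_gptq import AutoGPTQForCausalLM     # compiled from source, see above

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoGPTQForCausalLM.from_quantized(MODEL_ID, **quantized_load_kwargs())
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
    output = model.generate(input_ids=input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0])
```

Note that, per the README, generation with this experimental GPTQ model is expected to be very slow even on a GPU.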