Update README.md
README.md
@@ -12,7 +12,8 @@ LoRA credit to https://huggingface.co/baseten/alpaca-30b
 # Usage
 1. Run manually through GPTQ
 2. (More setup but better UI) - Use the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode)
-
+
+**Note that a recent code change in GPTQ broke functionality for GPTQ in general, so please follow [these instructions](https://huggingface.co/elinas/alpaca-30b-lora-int4/discussions/2#641a38d5f1ad1c1173d8f192) to fix the issue!**
 
 Since this is instruction tuned, for best results, use the following format for inference:
 ```
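
For step 1 ("Run manually through GPTQ"), the sketch below shows roughly how a 4-bit checkpoint like this one was typically run with the GPTQ-for-LLaMa scripts at the time. The script name, flags, and file names are assumptions about that repo's usual usage, not part of this README or commit; the discussion linked in the diff above remains the authoritative fix for the broken GPTQ change.

```sh
# Hypothetical invocation only: the script name, flags, and file names below are
# assumptions based on common GPTQ-for-LLaMa usage, not taken from this README.
python llama_inference.py ./alpaca-30b \
    --wbits 4 \
    --load ./alpaca-30b-4bit.safetensors \
    --text "Your instruction-formatted prompt here"
```

Option 2 (text-generation-webui) wraps the same 4-bit loading path behind its own `server.py` flags; the wiki page linked in the diff documents the exact arguments.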