TheBloke committed on
Commit
edf850c
1 Parent(s): af3f9cd

Update README.md

Files changed (1)
  1. README.md +94 -2
README.md CHANGED
@@ -1,7 +1,99 @@
  ---
  license: other
+ inference: false
  ---
 
- GPTQ 4bit of [changsung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b)
 
- More details coming soon.
+ # Alpaca LoRA 65B GPTQ 4bit
+
+ This is a 4-bit GPTQ quantisation of [chansung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b), created with [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
+
+ ## These files need a lot of VRAM!
+
+ I believe they will work on 2 x 24GB cards, and I hope that at least the 1024g file will work on an A100 40GB.
+
+ I can't guarantee that the two 128g files will work in only 40GB of VRAM.
+
+ I haven't specifically tested VRAM requirements yet, but will aim to do so at some point. If you have any experiences to share, please do so in the comments.
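+
+ If you are splitting the model across two cards, text-generation-webui's `--gpu-memory` flag caps how much each GPU is given. A minimal sketch with illustrative values (tune them for your hardware and leave headroom for activations; `<model-dir>` is a placeholder for whichever of the files below you install):
+ ```
+ # From the text-generation-webui directory: reserve ~20GB on each of two 24GB cards
+ python server.py --model <model-dir> --wbits 4 --groupsize 128 --gpu-memory 20 20
+ ```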
+
+ ## GIBBERISH OUTPUT IN `text-generation-webui`?
+
+ Please read the Provided Files section below. You should use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` unless you are able to use the latest Triton branch of GPTQ-for-LLaMa.
+
+ ## Provided files
+
+ Three files are provided. **The second and third files will not work unless you use a recent version of the Triton branch of GPTQ-for-LLaMa.**
+
+ Specifically, the last two files use `--act-order` for maximum quantisation quality, and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time they will also not work with the CUDA branch of GPTQ-for-LLaMa, or with the `text-generation-webui` one-click installers.
+
+ Unless you are able to use the latest Triton GPTQ-for-LLaMa code, please use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors`.
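+
+ If you have an existing GPTQ-for-LLaMa checkout on another branch, switching it to the Triton code looks roughly like this (the branch name `triton` is an assumption; check `git branch -r` for what the repository actually provides):
+ ```
+ cd GPTQ-for-LLaMa
+ git fetch origin
+ git checkout triton   # assumed branch name
+ git pull
+ ```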
+
+ * `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors`
+   * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
+   * Works with text-generation-webui one-click-installers
+   * Works on Windows
+   * Will require ~40GB of VRAM, meaning you'll need an A100 or 2 x 24GB cards
+   * I haven't yet tested exactly how much VRAM is required, so it's possible it won't run on an A100 40GB
+   * Parameters: Groupsize = 128. No act-order.
+   * Command used to create the GPTQ:
+     ```
+     CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors
+     ```
+ * `alpaca-lora-65B-GPTQ-4bit-128g.safetensors`
+   * Only works with the latest Triton branch of GPTQ-for-LLaMa
+   * **Does not** work with text-generation-webui one-click-installers
+   * **Does not** work on Windows
+   * Will require 40+GB of VRAM, meaning you'll need an A100 or 2 x 24GB cards
+   * I haven't yet tested exactly how much VRAM is required, so it's possible it won't run on an A100 40GB
+   * Parameters: Groupsize = 128. Act-order.
+   * Offers the highest quality quantisation, but requires recent Triton GPTQ-for-LLaMa code and more VRAM
+   * Command used to create the GPTQ:
+     ```
+     CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors alpaca-lora-65B-GPTQ-4bit-128g.safetensors
+     ```
+ * `alpaca-lora-65B-GPTQ-4bit-1024g.safetensors`
+   * Only works with the latest Triton branch of GPTQ-for-LLaMa
+   * **Does not** work with text-generation-webui one-click-installers
+   * **Does not** work on Windows
+   * Should require less VRAM than the 128g files, so will hopefully run on an A100 40GB
+   * I haven't yet tested exactly how much VRAM is required
+   * Parameters: Groupsize = 1024. Act-order.
+   * Offers the benefits of act-order, but at a larger groupsize to reduce VRAM requirements
+   * Command used to create the GPTQ:
+     ```
+     CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 1024 --save_safetensors alpaca-lora-65B-GPTQ-4bit-1024g.safetensors
+     ```
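+
+ For anyone wanting to reproduce these quantisations: the commands above are run from the root of a GPTQ-for-LLaMa checkout, and assume a merged, HF-format copy of the model in a local `alpaca-lora-65B-HF` directory. A rough sketch of that setup:
+ ```
+ git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
+ cd GPTQ-for-LLaMa
+ # Place the merged HF-format model at ./alpaca-lora-65B-HF, then run one of
+ # the CUDA_VISIBLE_DEVICES=... commands shown above
+ ```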
+
+ ## How to run in `text-generation-webui`
+
+ The file `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ [Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
+
+ The other two `safetensors` model files were created using `--act-order` to give the maximum possible quantisation quality, but this means they require the latest Triton GPTQ-for-LLaMa code to be used inside the UI.
+
+ If you want to use the act-order `safetensors` files, here are the commands I used to clone text-generation-webui, clone the Triton branch of GPTQ-for-LLaMa, and install GPTQ into the UI:
+ ```
+ # Clone text-generation-webui, if you don't already have it
+ git clone https://github.com/oobabooga/text-generation-webui
+ # Make a repositories directory
+ mkdir text-generation-webui/repositories
+ cd text-generation-webui/repositories
+ # Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
+ git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
+ ```
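+
+ Next, download this model's files into `text-generation-webui/models`. A hedged sketch using git-lfs, with `<repo-id>` as a placeholder for this model's Hugging Face path:
+ ```
+ # From the directory that contains text-generation-webui
+ cd text-generation-webui/models
+ git lfs install
+ git clone https://huggingface.co/<repo-id> alpaca-lora-65B-GPTQ-4bit
+ # Or fetch just the one file you need, e.g.:
+ # wget https://huggingface.co/<repo-id>/resolve/main/alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors
+ ```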
+
+ With the model files in `text-generation-webui/models`, launch the UI as follows:
+ ```
+ cd text-generation-webui
+ # Use --groupsize 1024 instead if you are loading the 1024g file
+ python server.py --model alpaca-lora-65B-GPTQ-4bit --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
+ ```
+
+ The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
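+
+ As a rough sketch of those installs (check each repo's README for the authoritative steps; it is an assumption that both branches ship a `requirements.txt`):
+ ```
+ cd text-generation-webui
+ pip install -r requirements.txt
+ cd repositories/GPTQ-for-LLaMa
+ pip install -r requirements.txt   # Triton branch; assumed to list triton etc.
+ ```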
+
+ If you can't update GPTQ-for-LLaMa to the latest Triton branch, or don't want to, you can use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
+
+ # Original model card not provided
+
+ No model card was provided in [chansung's original repository](https://huggingface.co/chansung/alpaca-lora-65b).
+
+ Based on the name, I assume this is the result of fine-tuning using the original GPT-3.5-generated Alpaca dataset. It is unknown whether the original Stanford data was used, or the [cleaned tloen/alpaca-lora variant](https://github.com/tloen/alpaca-lora).