TheBloke committed
Commit 1943183
1 Parent(s): b1e9b81

Create README.md

Files changed (1): README.md (+56 -0)
README.md ADDED
---
inference: false
license: other
---

# Tim Dettmers' Guanaco 13B GPTQ

These files are GPTQ 4bit model files for [Tim Dettmers' Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b).

They are the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Other repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-13B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-13B-GGML)
* [Original unquantised fp16 model in HF format](https://huggingface.co/timdettmers/guanaco-13b)

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal. (A scripted download alternative is sketched after these steps.)

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/guanaco-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model** drop-down, choose the model you just downloaded, `guanaco-13B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the **GPTQ parameters** on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
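
If you prefer to script the download rather than use the UI, here is a minimal sketch using the `huggingface_hub` library (assumptions: `huggingface_hub` is installed, e.g. via `pip install huggingface_hub`, and the local directory name is hypothetical):

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (weights, config, tokenizer)
# into a local folder; adjust local_dir to taste.
snapshot_download(
    repo_id="TheBloke/guanaco-13B-GPTQ",
    local_dir="models/guanaco-13B-GPTQ",
)
```

Point text-generation-webui's `models/` directory at the resulting folder and it will appear in the **Model** drop-down after a refresh.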

## Provided files

**Compatible file - Guanaco-13B-GPTQ-4bit-128g.no-act-order.safetensors**

In the `main` branch you will find `Guanaco-13B-GPTQ-4bit-128g.no-act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa and has maximum compatibility.

It was created with groupsize 128 to improve inference accuracy, and without the `--act-order` parameter to maximise compatibility.

* `Guanaco-13B-GPTQ-4bit-128g.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with AutoGPTQ (a loading sketch follows the command below)
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    python llama.py /workspace/process/TheBloke_guanaco-13B-GGML/HF wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/process/TheBloke_guanaco-13B-GGML/gptq/Guanaco-13B-GPTQ-4bit-128g.no-act-order.safetensors
    ```
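
Since the file works with AutoGPTQ, here is a minimal Python loading sketch (assumptions: `auto-gptq` and `transformers` are installed, a CUDA GPU is available, and the prompt format shown is hypothetical, since the original model card below does not specify one):

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "TheBloke/guanaco-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

# Load the 4-bit GPTQ weights from the .safetensors file in the repo.
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="Guanaco-13B-GPTQ-4bit-128g.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "### Human: Write a haiku about llamas.\n### Assistant:"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```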

# Original model card

Not provided by the original model creator.