---
license: other
inference: false
tags:
- GPTQ
- 3-bit
- quantized
---
# LLaMa 65B 3bit GPTQ
This is a GPTQ-format, 3-bit quantised model of LLaMa 65B.
It was quantised using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## How to easily download and use this model in text-generation-webui
Open text-generation-webui as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LLaMa-65B-GPTQ-3bit`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `LLaMa-65B-GPTQ-3bit`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 3`, `Groupsize = None`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
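If you'd rather fetch the files outside the UI (for example, on a headless server), below is a minimal sketch using the `huggingface_hub` Python library. The `local_dir` path is an assumption: point it at the `models` folder of your own text-generation-webui install. Note that the `local_dir` argument requires a reasonably recent `huggingface_hub` release.
```
from huggingface_hub import snapshot_download

# Fetch every file in the repo. The local_dir below is an assumed layout:
# adjust it to the models/ folder of your text-generation-webui checkout.
snapshot_download(
    repo_id="TheBloke/LLaMa-65B-GPTQ-3bit",
    local_dir="text-generation-webui/models/LLaMa-65B-GPTQ-3bit",
)
```
After downloading this way, continue from step 5 above to refresh and load the model.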
## Provided files
**Compatible file - LLaMa-65B-GPTQ-3bit.safetensors**
This will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility.
It was created with the `--act-order` parameter to maximise inference quality, and with group_size = None to minimise VRAM requirements.
* `LLaMa-65B-GPTQ-3bit.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ.
* Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = None, with act-order.
* Command used to create the GPTQ:
```
python llama.py /workspace/models/huggyllama_llama-65b wikitext2 \
  --wbits 3 \
  --true-sequential \
  --act-order \
  --save_safetensors /workspace/llama-3bit/LLaMa-65B-GPTQ-3bit.safetensors
```
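As noted above, the file also works with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). Below is a minimal, illustrative sketch of loading it from Python: the `model_basename` and `BaseQuantizeConfig` values simply mirror the file details above (3-bit, no groupsize, act-order), but keyword arguments differ between AutoGPTQ versions, so treat this as a starting point rather than a verified recipe.
```
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "TheBloke/LLaMa-65B-GPTQ-3bit"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# Describe the quantisation explicitly in case the repo does not ship a
# quantize_config.json: 3-bit, no groupsize (-1), act-order (desc_act) on.
quantize_config = BaseQuantizeConfig(bits=3, group_size=-1, desc_act=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="LLaMa-65B-GPTQ-3bit",  # matches the .safetensors filename
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config,
)

prompt = "Tell me about llamas."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Bear in mind that a 65B model is large even at 3-bit, so check you have sufficient VRAM before attempting to load it this way.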