---
license: llama2
train: false
inference: false
pipeline_tag: text-generation
---
## Llama-2-70b-hf-2bit_g16_s128-HQQ
This is a version of the Llama-2-70b-hf model quantized to 2-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq_blog/
At a comparable size of ~26GB, this model outperforms a full-precision (fp16) Llama-2-13B (perplexity 4.13 vs. 4.63).
To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
``` Python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = 'mobiuslabsgmbh/Llama-2-70b-hf-2bit_g16_s128-HQQ'

# Load the tokenizer and the pre-quantized 2-bit model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)
```
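Once loaded, the model can be used with the standard `transformers` generation API. The prompt and generation parameters below are illustrative, not part of the model card:
``` Python
# Minimal generation sketch (prompt and parameters are illustrative)
inputs  = tokenizer("Explain quantization in one sentence:", return_tensors='pt').to('cuda')
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```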
*Limitations*:
- Only supports a single-GPU runtime.
- Not compatible with Hugging Face's PEFT.