---
license: llama2
train: false
inference: false
pipeline_tag: text-generation
---

# Llama-2-13b-hf-4bit_g64-HQQ

This is a version of the Llama-2-13B model quantized to 4-bit (group size 64) via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq/

To run the model, install the HQQ library from https://github.com/mobiusml/hqq/tree/main/code and use it as follows:

```python
from hqq.models.llama import LlamaHQQ
import transformers

model_id = 'mobiuslabsgmbh/Llama-2-13b-hf-4bit_g64-HQQ'

# Load the tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Load the quantized model
model = LlamaHQQ.from_quantized(model_id)
```
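
Once loaded, the model can be used for standard text generation. The snippet below is a minimal sketch, assuming the quantized model exposes the usual `transformers` `generate()` interface and sits on a single CUDA device; the prompt and generation settings are only illustrative:

```python
# Minimal generation sketch (assumptions: standard transformers generate() API,
# model placed on a single CUDA device; prompt and max_new_tokens are illustrative).
prompt = "Explain half-quadratic quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')

# Generate a short completion and decode it back to text
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```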

Limitations:
- Only supports single-GPU runtime.
- Not compatible with HuggingFace's PEFT.