Exllama v2 Quantizations of HuginnV5.5-12.6B

Using turboderp's ExLlamaV2 v0.0.12 for quantization.
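
For reference, quants like these are produced with ExLlamaV2's convert.py script. A minimal sketch of a 6.5 bpw conversion with placeholder paths (not necessarily the exact invocation used for these files):

# -b sets bits per weight, -hb sets lm_head bits (8 for the 6_5 branch)
python convert.py -i /path/to/HuginnV5.5-12.6B -o /path/to/working_dir -cf /path/to/HuginnV5.5-12.6B-exl2-6_5 -b 6.5 -hb 8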

The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
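
For example, the measurement.json can be passed back to convert.py via -m to skip the slow measurement pass when converting to another bits per weight (again a sketch with placeholder paths):

# fetch just the measurement.json from the main branch
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 measurement.json --local-dir .
python convert.py -i /path/to/HuginnV5.5-12.6B -o /path/to/working_dir -cf /path/to/HuginnV5.5-12.6B-exl2-5_0 -b 5.0 -m measurement.json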

Original model: https://huggingface.co/The-Face-Of-Goonery/HuginnV5.5-12.6B

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| 6_5 | 6.5 | 8.0 | 12.0 GB | 14.7 GB | 18.4 GB | Near unquantized performance at vastly reduced size, recommended. |
| 5_0 | 5.0 | 6.0 | 9.8 GB | 12.4 GB | 16.1 GB | Slightly lower quality vs 6.5. |
| 4_25 | 4.25 | 6.0 | 8.7 GB | 11.3 GB | 15.0 GB | GPTQ equivalent bits per weight. |
| 3_5 | 3.5 | 6.0 | 7.6 GB | 10.1 GB | 13.8 GB | Lower quality, not recommended. |

Download instructions

With git:
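
Hugging Face stores the model weights with Git LFS, so make sure git-lfs is installed and initialized first:

git lfs install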

git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/HuginnV5.5-12.6B-exl2 HuginnV5.5-12.6B-exl2-6_5

With huggingface-hub (credit to TheBloke for the instructions):

pip3 install huggingface-hub
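
Optionally, on fast connections, downloads can be accelerated with the hf_transfer backend (enable it via an environment variable before running huggingface-cli):

pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1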

To download the main branch (only useful if you just need the measurement.json) to a folder called HuginnV5.5-12.6B-exl2:

mkdir HuginnV5.5-12.6B-exl2
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --local-dir HuginnV5.5-12.6B-exl2 --local-dir-use-symlinks False

To download from a different branch, add the --revision parameter:

Linux:

mkdir HuginnV5.5-12.6B-exl2-6_5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6_5 --local-dir-use-symlinks False

Windows (where underscores in folder names can apparently cause issues, so a period is used instead):

mkdir HuginnV5.5-12.6B-exl2-6.5
huggingface-cli download bartowski/HuginnV5.5-12.6B-exl2 --revision 6_5 --local-dir HuginnV5.5-12.6B-exl2-6.5 --local-dir-use-symlinks False
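
Once downloaded, a quick way to sanity-check a quant is ExLlamaV2's bundled test_inference.py (a sketch assuming the ExLlamaV2 repo is cloned and its requirements installed):

# -m points at the downloaded model folder, -p supplies a test prompt
python test_inference.py -m HuginnV5.5-12.6B-exl2-6_5 -p "Once upon a time,"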

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
