ExLlamaV2 Quantizations of Nous-Hermes-2-SOLAR-10.7B

Using turboderp's ExLlamaV2 v0.0.11 for quantization.

The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.

Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B

| Branch | Bits | lm_head bits | VRAM (4k context) | VRAM (16k context) | VRAM (32k context) | Description |
| ------ | ---- | ------------ | ----------------- | ------------------ | ------------------ | ----------- |
| 8_0 | 8.0 | 8.0 | 11.9 GB | 13.3 GB | 15.3 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | 8.0 | 10.3 GB | 11.7 GB | 13.7 GB | Very similar to 8.0, good tradeoff of size vs performance, recommended. |
| 5_0 | 5.0 | 6.0 | 8.3 GB | 9.7 GB | 11.7 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| 4_25 | 4.25 | 6.0 | 7.4 GB | 8.6 GB | 10.6 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | 6.0 | 6.4 GB | 7.8 GB | 9.8 GB | Lower quality, only use if you have to. |
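
As a rule of thumb, the weight files scale linearly with bits per weight (roughly parameters × bpw / 8 bytes); the remaining VRAM in the table goes to the lm_head, activations, and the KV cache at the stated context length. A quick back-of-the-envelope check (hypothetical helper, not part of this repo):

```python
def approx_weight_gb(params_billion: float, bpw: float) -> float:
    """Rough quantized weight size in GB: parameters * bits-per-weight / 8."""
    return params_billion * bpw / 8

# 10.7B parameters at 6.5 bpw -> ~8.7 GB of weights; the table's
# 10.3 GB at 4k context adds lm_head, KV cache, and runtime overhead.
print(f"{approx_weight_gb(10.7, 6.5):.1f} GB")
```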

Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Nous-Hermes-2-SOLAR-10.7B-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the `measurement.json`) to a folder called `Nous-Hermes-2-SOLAR-10.7B-exl2`:

```shell
mkdir Nous-Hermes-2-SOLAR-10.7B-exl2
huggingface-cli download bartowski/Nous-Hermes-2-SOLAR-10.7B-exl2 --local-dir Nous-Hermes-2-SOLAR-10.7B-exl2 --local-dir-use-symlinks False
```
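
If you prefer Python, `huggingface_hub`'s `snapshot_download` does the same thing; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download the main branch (measurement.json only) to a local folder.
snapshot_download(
    repo_id="bartowski/Nous-Hermes-2-SOLAR-10.7B-exl2",
    local_dir="Nous-Hermes-2-SOLAR-10.7B-exl2",
    local_dir_use_symlinks=False,
    # revision="6_5",  # uncomment to fetch a quantized branch instead
)
```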

To download from a different branch, add the --revision parameter:

```shell
mkdir Nous-Hermes-2-SOLAR-10.7B-exl2
huggingface-cli download bartowski/Nous-Hermes-2-SOLAR-10.7B-exl2 --revision 6_5 --local-dir Nous-Hermes-2-SOLAR-10.7B-exl2 --local-dir-use-symlinks False
```
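
Once a branch is downloaded, it loads like any other EXL2 model. A minimal generation sketch following the examples shipped with ExLlamaV2 (assumes the `exllamav2` package is installed and the 6_5 branch sits in the folder above; Nous Hermes 2 models are trained on ChatML, so the prompt uses that format):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the downloaded branch.
config = ExLlamaV2Config()
config.model_dir = "Nous-Hermes-2-SOLAR-10.7B-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Nous Hermes 2 uses the ChatML prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is SOLAR-10.7B?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

generator.warmup()
print(generator.generate_simple(prompt, settings, 200))
```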