Exllama v2 Quantizations of OpenHermes-2.5-neural-chat-7b-v3-1-7B

Using turboderp's ExLlamaV2 v0.0.9 for quantization.

Each branch contains a quantization at a different bits per weight; the main branch contains only the measurement.json, for use in further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
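For reference, a conversion with those settings might look like the following. This is a hedged sketch using ExLlamaV2's convert.py; the paths are placeholders, and the -hb 8 head-bits flag applies only to the conversions above 6.0 bits per weight:

```shell
# Sketch of an ExLlamaV2 conversion run; directory paths are placeholders.
python convert.py \
  -i /path/to/OpenHermes-2.5-neural-chat-7b-v3-1-7B \
  -o /path/to/working-dir \
  -c wikitext-103-raw-v1-test.parquet \
  -b 8.0 \
  -hb 8   # quantize lm_head at 8 bits; only used above 6.0 bpw
```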

Original model: https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B

Available quantizations (one per branch):

- 4.0 bits per weight (branch `4_0`)
- 5.0 bits per weight (branch `5_0`)
- 6.0 bits per weight (branch `6_0`)
- 8.0 bits per weight (branch `8_0`)

Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2
```
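Note that Hugging Face stores the model weights with Git LFS, so make sure it is set up before cloning:

```shell
# One-time Git LFS setup, required to pull the large weight files
git lfs install
```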

With huggingface-hub (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the main branch (only useful if you only care about measurement.json) to a folder called OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2:

```shell
mkdir OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2
huggingface-cli download bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --local-dir OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --local-dir-use-symlinks False
```
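If you grabbed main for its measurement.json, ExLlamaV2's convert.py can reuse it to skip the measurement pass when producing your own quantization. A sketch, with placeholder paths and an illustrative target bitrate:

```shell
# Reuse an existing measurement.json for a new conversion
python convert.py \
  -i /path/to/OpenHermes-2.5-neural-chat-7b-v3-1-7B \
  -o /path/to/working-dir \
  -c wikitext-103-raw-v1-test.parquet \
  -m /path/to/measurement.json \
  -b 4.5
```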

To download from a different branch, add the --revision parameter:

```shell
mkdir OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2
huggingface-cli download bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --revision 4_0 --local-dir OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --local-dir-use-symlinks False
```
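On fast connections, the download can often be accelerated with the optional hf_transfer backend; treat this as an assumption to verify against the huggingface-hub docs for your installed version:

```shell
# Optional: faster downloads via the hf_transfer backend
pip3 install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --revision 4_0 --local-dir OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2 --local-dir-use-symlinks False
```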