quantized_by: bartowski

ExLlamaV2 quantization of WhiteRabbitNeo-13B at 4.0 bits per weight

Using turboderp's ExLlamaV2 v0.0.11 for quantization.

Conversion was done using the default calibration dataset.

Original model: https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-13B
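For reference, a quant like this is typically produced with ExLlamaV2's convert.py script. The invocation below is a sketch, not the author's exact command; the input and working paths are placeholders, and leaving out the -c flag is what selects the default calibration dataset:

python convert.py -i /path/to/WhiteRabbitNeo-13B -o /path/to/working-dir -cf WhiteRabbitNeo-13B-exl2-4_0 -b 4.0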

Download instructions

With git:

git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/WhiteRabbitNeo-13B-exl2

With the huggingface-hub CLI (credit to TheBloke for instructions):

pip3 install huggingface-hub
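The basic form of the download command looks like this (a minimal sketch; without --revision it pulls whatever is on the repo's default branch, which for exl2 repos may be only the measurement file rather than the quantized weights):

huggingface-cli download bartowski/WhiteRabbitNeo-13B-exl2 --local-dir WhiteRabbitNeo-13B-exl2 --local-dir-use-symlinks False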

To download from a different branch, add the --revision parameter:

mkdir WhiteRabbitNeo-13B-exl2
huggingface-cli download bartowski/WhiteRabbitNeo-13B-exl2 --revision 4_0 --local-dir WhiteRabbitNeo-13B-exl2 --local-dir-use-symlinks False
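Once downloaded, a quick way to confirm the quant loads is ExLlamaV2's bundled test_inference.py script (a sketch assuming a local checkout of the exllamav2 repo; the prompt is arbitrary):

python test_inference.py -m WhiteRabbitNeo-13B-exl2 -p "Once upon a time,"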