---
license: apache-2.0
quantized_by: bartowski
---

# Exllama v2 Quantizations of neural-chat-7b-v3-3 at 8.0 bits per weight

Using turboderp's ExLlamaV2 v0.0.10 for quantization.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Original model: https://huggingface.co/Intel/neural-chat-7b-v3-3

## Download instructions

With git:

```shell
git clone --single-branch --branch 8_0 https://huggingface.co/bartowski/neural-chat-7b-v3-3-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download from a specific branch (such as the 8.0 bpw branch `8_0`), add the `--revision` parameter:

```shell
mkdir neural-chat-7b-v3-3-exl2
huggingface-cli download bartowski/neural-chat-7b-v3-3-exl2 --revision 8_0 --local-dir neural-chat-7b-v3-3-exl2 --local-dir-use-symlinks False
```
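
Once downloaded, the quantized weights can be loaded with the ExLlamaV2 Python API. Below is a minimal sketch, assuming the files were downloaded to a local `neural-chat-7b-v3-3-exl2` directory and that `exllamav2` with a CUDA-capable PyTorch build is installed; the prompt text and sampling settings are illustrative only, and the prompt template follows the format described in the original model card.

```python
# Minimal ExLlamaV2 loading/generation sketch (paths and settings are illustrative).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "neural-chat-7b-v3-3-exl2"  # assumed local download path

# Read the model config from the quantized directory.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

# Load the weights onto the GPU and allocate a KV cache.
model = ExLlamaV2(config)
model.load()
cache = ExLlamaV2Cache(model)
tokenizer = ExLlamaV2Tokenizer(config)

# Simple (non-streaming) generator with basic sampling settings.
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# neural-chat style prompt format (see the original model card for details).
prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nExplain what an exl2 quantization is in one sentence.\n"
    "### Assistant:\n"
)

output = generator.generate_simple(prompt, settings, 200)
print(output)
```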