---
quantized_by: bartowski
---
# Exllama v2 Quantizations of NeuralHermes-2.5-Mistral-7B at 4.0 bits per weight
Using turboderp's ExLlamaV2 v0.0.10 for quantization.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Original model: https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B
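
For reference, a minimal sketch of loading this quant from Python, following the standard ExLlamaV2 example flow. This is not part of the original card: it assumes the exllamav2 package (v0.0.x) is installed, a CUDA GPU is available, and the weights have been downloaded to `./NeuralHermes-2.5-Mistral-7B-exl2` (the local path and sampler settings are illustrative).

```python
# Sketch: load the exl2 quant and generate a few tokens.
# Assumes exllamav2 v0.0.x and weights in ./NeuralHermes-2.5-Mistral-7B-exl2.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./NeuralHermes-2.5-Mistral-7B-exl2"  # assumed local path
config.prepare()

model = ExLlamaV2(config)
model.load()  # optionally pass a gpu_split list for multi-GPU setups

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative values
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 64))
```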
## Download instructions
With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralHermes-2.5-Mistral-7B-exl2
```
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
To download from a different branch, add the `--revision` parameter:

```shell
mkdir NeuralHermes-2.5-Mistral-7B-exl2
huggingface-cli download bartowski/NeuralHermes-2.5-Mistral-7B-exl2 --revision 4_0 --local-dir NeuralHermes-2.5-Mistral-7B-exl2 --local-dir-use-symlinks False
```
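
The same download can also be scripted from Python with huggingface_hub's `snapshot_download`; this small sketch mirrors the CLI call above (the keyword arguments correspond to the flags shown):

```python
# Sketch: download the 4_0 branch via the huggingface_hub Python API.
# Assumes huggingface-hub is installed (see pip3 command above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/NeuralHermes-2.5-Mistral-7B-exl2",
    revision="4_0",                                 # branch holding the 4.0 bpw quant
    local_dir="NeuralHermes-2.5-Mistral-7B-exl2",
    local_dir_use_symlinks=False,                   # copy real files instead of symlinks
)
```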