---
quantized_by: bartowski
---

# Exllama v2 Quantizations of dolphin-2.5-mixtral-8x7b at 6.0 bits per weight

Using turboderp's ExLlamaV2 v0.0.11 for quantization. Conversion was done using the default calibration dataset.

Original model: https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_0 https://huggingface.co/bartowski/dolphin-2.5-mixtral-8x7b-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download from a specific branch, add the `--revision` parameter:

```shell
mkdir dolphin-2.5-mixtral-8x7b-exl2
huggingface-cli download bartowski/dolphin-2.5-mixtral-8x7b-exl2 --revision 6_0 --local-dir dolphin-2.5-mixtral-8x7b-exl2 --local-dir-use-symlinks False
```
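
## Loading the model

Once downloaded, the weights can be loaded with ExLlamaV2's Python API. The sketch below follows the library's bundled inference example from the v0.0.11 era; the model directory path, prompt, and sampler values are illustrative placeholders, and class or method names may differ in other releases.

```python
# Minimal inference sketch, assuming ExLlamaV2 ~v0.0.11 and that the weights
# were downloaded into "dolphin-2.5-mixtral-8x7b-exl2" as shown above.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "dolphin-2.5-mixtral-8x7b-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache tensors allocated as layers load
model.load_autosplit(cache)               # split layers automatically across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

output = generator.generate_simple("Mixture-of-experts models work by", settings, num_tokens=128)
print(output)
```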