---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference:
  parameters:
    temperature: 0.7
quantized_by: bartowski
---

# Exllama v2 Quantizations of Mistral-7B-Instruct-v0.2 at 4.0 bits per weight

Quantized using turboderp's ExLlamaV2 v0.0.10.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Original model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-exl2
```

With the huggingface-hub CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, pass the `--revision` parameter to `huggingface-cli download`:

```shell
mkdir Mistral-7B-Instruct-v0.2-exl2
huggingface-cli download bartowski/Mistral-7B-Instruct-v0.2-exl2 --revision 4_0 --local-dir Mistral-7B-Instruct-v0.2-exl2 --local-dir-use-symlinks False
```
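
The same download can also be scripted from Python. Below is a minimal sketch using `huggingface_hub.snapshot_download`; the `revision` and `local_dir` values simply mirror the CLI example above.

```python
from huggingface_hub import snapshot_download

# Fetch the 4_0 branch of the exl2 repo into a local folder,
# mirroring the huggingface-cli invocation above.
snapshot_download(
    repo_id="bartowski/Mistral-7B-Instruct-v0.2-exl2",
    revision="4_0",
    local_dir="Mistral-7B-Instruct-v0.2-exl2",
)
```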
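
Once downloaded, the quantized weights are loaded with ExLlamaV2 itself. The sketch below follows the example scripts shipped with ExLlamaV2 around v0.0.10; class and method names may differ in other releases, and the prompt string is only an illustration of Mistral's `[INST]` instruction format.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the directory created by the download step.
config = ExLlamaV2Config()
config.model_dir = "Mistral-7B-Instruct-v0.2-exl2"
config.prepare()

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Match the temperature advertised in the card's metadata.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Mistral-7B-Instruct expects prompts wrapped in [INST] ... [/INST].
output = generator.generate_simple("[INST] Write a haiku about autumn. [/INST]", settings, 128)
print(output)
```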