---
license: apache-2.0
tags:
- openchat
- mistral
- C-RLFT
datasets:
- openchat/openchat_sharegpt4_dataset
- imone/OpenOrca_FLAN
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
- tiedong/goat
- glaiveai/glaive-code-assistant
- meta-math/MetaMathQA
- OpenAssistant/oasst_top1_2023-08-25
- TIGER-Lab/MathInstruct
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
---

# Exllama v2 Quantizations of openchat-3.5-1210 at 8.0 bits per weight

Using turboderp's ExLlamaV2 v0.0.10 for quantization.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Original model: https://huggingface.co/openchat/openchat-3.5-1210

## Download instructions

With git:

```shell
git clone --single-branch --branch 8_0 https://huggingface.co/bartowski/openchat-3.5-1210-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download from a specific branch, add the `--revision` parameter:

```shell
mkdir openchat-3.5-1210-exl2
huggingface-cli download bartowski/openchat-3.5-1210-exl2 --revision 8_0 --local-dir openchat-3.5-1210-exl2 --local-dir-use-symlinks False
```
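
If you prefer to script the download, the same files can be fetched from Python with `huggingface_hub`'s `snapshot_download` function. A minimal sketch, assuming `huggingface-hub` is installed as shown above; the local directory name is only an example:

```python
# Minimal Python equivalent of the huggingface-cli command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/openchat-3.5-1210-exl2",
    revision="8_0",                      # branch holding the 8.0 bpw quantization
    local_dir="openchat-3.5-1210-exl2",  # example target folder
    local_dir_use_symlinks=False,        # copy real files instead of symlinking into the cache
)
```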