---
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-beta
  results: []
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
base_model: mistralai/Mistral-7B-v0.1
quantized_by: bartowski
---

# Exllama v2 Quantizations of zephyr-7b-beta at 8.0

Quantized using turboderp's ExLlamaV2 v0.0.7.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Original model: https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

## Download instructions

With git:

```shell
git clone --single-branch --branch 8.0 https://huggingface.co/bartowski/zephyr-7b-beta-exl2
```

With huggingface hub (credit to TheBloke for instructions), first install the CLI:

```shell
pip3 install huggingface-hub
```

To download from a different branch, add the `--revision` parameter with the branch name (for example, `8.0`):

```shell
mkdir zephyr-7b-beta-exl2
huggingface-cli download bartowski/zephyr-7b-beta-exl2 --revision 8.0 --local-dir zephyr-7b-beta-exl2 --local-dir-use-symlinks False
```