---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value:
      verified: false
quantized_by: bartowski
---

## Exllama v2 Quantizations of speechless-instruct-mistral-7b-v0.2

Using turboderp's [ExLlamaV2](https://github.com/turboderp/exllamav2) v0.0.21 for quantization.

Each branch contains an individual bits-per-weight quantization. The "main" branch contains only the measurement.json (kept for further conversions), so download one of the other branches for the model itself (see below).

Original model: https://huggingface.co/uukuguy/speechless-instruct-mistral-7b-v0.2

## Prompt format

No chat template is specified, so the default below is used. This may be incorrect; check the original model card for details.

```
[INST] <<SYS>>
{system_prompt}
<</SYS>>

{prompt} [/INST]
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| [6_5](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs. performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs. 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/speechless-instruct-mistral-7b-v0.2-exl2 speechless-instruct-mistral-7b-v0.2-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/speechless-instruct-mistral-7b-v0.2-exl2 --revision 6_5 --local-dir speechless-instruct-mistral-7b-v0.2-exl2-6_5
```

Windows (which apparently doesn't like `_` in folder names sometimes):

```shell
huggingface-cli download bartowski/speechless-instruct-mistral-7b-v0.2-exl2 --revision 6_5 --local-dir speechless-instruct-mistral-7b-v0.2-exl2-6.5
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
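
## Example usage

For reference, a minimal sketch of filling in the prompt template above from Python. The `build_prompt` helper and the example strings are hypothetical, and (as noted in the prompt format section) the template itself may not match what the model was actually trained on:

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Fill in the Llama-2-style template from the prompt format section."""
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{prompt} [/INST]"
    )


# Hypothetical example values:
text = build_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(text)
```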
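
And a minimal sketch of loading a downloaded branch with the ExLlamaV2 Python API and generating from it. This assumes the `exllamav2` package (v0.0.21 or similar) is installed and that the 6.5 bpw branch was downloaded to the folder used in the git example above; the sampler settings and prompt are arbitrary example values, not an official recipe:

```python
from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Cache,
    ExLlamaV2Config,
    ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Assumed local path from the download step above.
config = ExLlamaV2Config()
config.model_dir = "speechless-instruct-mistral-7b-v0.2-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache before loading
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # arbitrary example value

# Arbitrary example prompt using the template from the prompt format section.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"

output = generator.generate_simple(prompt, settings, num_tokens=256)
print(output)
```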