---
license: apache-2.0
tags:
  - rag
  - closed-qa
  - context
  - mistral
quantized_by: bartowski
---

## Exllama v2 Quantizations of docsgpt-7b-mistral at 6.0 bits per weight

Using [turboderp's ExLlamaV2](https://github.com/turboderp/exllamav2) v0.0.10 for quantization.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.

Original model: https://huggingface.co/Arc53/docsgpt-7b-mistral
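
Once downloaded (see the instructions below), the weights can be loaded through ExLlamaV2's Python API. The sketch below follows the patterns in the project's bundled example scripts; the local path, prompt, and sampling settings are placeholders, and the prompt template should follow the original model card:

```python
# Minimal ExLlamaV2 inference sketch, following exllamav2's example scripts.
# The model directory, prompt, and sampling settings are assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "docsgpt-7b-mistral-exl2"  # local dir from the download step
config.prepare()

model = ExLlamaV2(config)
model.load()

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # placeholder sampling settings

# Use the prompt template from the original model card for best results.
print(generator.generate_simple("What is DocsGPT?", settings, 200))
```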

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_0 https://huggingface.co/bartowski/docsgpt-7b-mistral-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir docsgpt-7b-mistral-exl2
huggingface-cli download bartowski/docsgpt-7b-mistral-exl2 --revision 6_0 --local-dir docsgpt-7b-mistral-exl2 --local-dir-use-symlinks False
```
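
The same download can also be scripted from Python with huggingface_hub's `snapshot_download`; a minimal sketch using the repo and branch names above (the target directory name is an assumption):

```python
from huggingface_hub import snapshot_download

# Fetch the 6.0 bpw branch into a local folder, without symlinks,
# mirroring the CLI command above. Target directory is an assumption.
snapshot_download(
    repo_id="bartowski/docsgpt-7b-mistral-exl2",
    revision="6_0",
    local_dir="docsgpt-7b-mistral-exl2",
    local_dir_use_symlinks=False,
)
```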