---
license: apache-2.0
language:
  - en
tags:
  - mistral
  - instruct
  - finetune
  - chatml
  - gpt4
quantized_by: bartowski
---

# Exllama v2 Quantizations of Autolycus-Mistral_7B

Using turboderp's ExLlamaV2 v0.0.7 for quantization.

Each branch contains a quantization at a single bits-per-weight value; the main branch contains only the measurement.json, which is used for further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Original model: https://huggingface.co/FPHam/Autolycus-Mistral_7B

Available branches:

- 5.0 bits per weight
- 6.0 bits per weight
- 8.0 bits per weight
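If you want to confirm which quantization branches actually exist before downloading, you can list the remote heads without cloning anything. A minimal sketch, assuming only that the repo behaves as a normal git remote (which Hugging Face repos do):

```shell
# List every branch of the repo; each quantization lives on its own branch
git ls-remote --heads https://huggingface.co/bartowski/Autolycus-Mistral_7B-exl2
```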

## Download instructions

With git:

```shell
git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/Autolycus-Mistral_7B-exl2
```
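To grab a different quantization, swap the branch name. For example, assuming the branches are named after the bits-per-weight values listed above (e.g. 6.0), you could clone into a folder that records the choice:

```shell
# Clone only the 6.0 bits-per-weight branch into a descriptively named folder
git clone --single-branch --branch 6.0 https://huggingface.co/bartowski/Autolycus-Mistral_7B-exl2 Autolycus-Mistral_7B-6.0bpw-exl2
```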

With huggingface-hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the main branch (only useful if you only care about measurement.json) to a folder called Autolycus-Mistral_7B-exl2:

```shell
mkdir Autolycus-Mistral_7B-exl2
huggingface-cli download bartowski/Autolycus-Mistral_7B-exl2 --local-dir Autolycus-Mistral_7B-exl2 --local-dir-use-symlinks False
```
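If the download worked, the folder should contain the measurement.json mentioned above; a quick check:

```shell
# Confirm the calibration measurements landed in the target folder
ls Autolycus-Mistral_7B-exl2/measurement.json
```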

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Autolycus-Mistral_7B-exl2
huggingface-cli download bartowski/Autolycus-Mistral_7B-exl2 --revision 4.0 --local-dir Autolycus-Mistral_7B-exl2 --local-dir-use-symlinks False
```
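The same pattern works for any branch in the list above. For example, to fetch the 8.0 bits-per-weight quantization into its own folder (branch name assumed to match the list above):

```shell
mkdir Autolycus-Mistral_7B-8.0bpw-exl2
huggingface-cli download bartowski/Autolycus-Mistral_7B-exl2 --revision 8.0 --local-dir Autolycus-Mistral_7B-8.0bpw-exl2 --local-dir-use-symlinks False
```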