
Exllama v2 Quantizations of opus-v0-7b

Using turboderp's ExLlamaV2 v0.0.7 for quantization.

Each branch contains a single bits-per-weight quantization; the main branch contains only the measurement.json needed for further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.

Original model: https://huggingface.co/dreamgen/opus-v0-7b
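The measurement.json in the main branch can be reused to produce additional bits-per-weight conversions from the original weights without redoing the measurement pass. A rough sketch, assuming ExLlamaV2's convert.py and its -i/-o/-c/-m/-b/-cf flags; paths and the 5.0 bpw target are placeholders, so check the ExLlamaV2 repo for the current options:

# Reuse the existing measurement to quantize to a new bits per weight
python convert.py \
    -i /path/to/opus-v0-7b \
    -o /path/to/working-dir \
    -c wikitext-103-raw-v1-test.parquet \
    -m measurement.json \
    -b 5.0 \
    -cf /path/to/opus-v0-7b-5.0bpw-exl2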

4.0 bits per weight

6.0 bits per weight

8.0 bits per weight

Download instructions

With git:

git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/opus-v0-7b-exl2
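Replace 4.0 with 6.0 or 8.0 to clone one of the other quantization branches.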

With the huggingface-hub CLI (credit to TheBloke for the instructions):

pip3 install huggingface-hub

To download the main branch (only useful if you only care about the measurement.json) to a folder called opus-v0-7b-exl2:

mkdir opus-v0-7b-exl2
huggingface-cli download bartowski/opus-v0-7b-exl2 --local-dir opus-v0-7b-exl2 --local-dir-use-symlinks False

To download from a different branch, add the --revision parameter:

mkdir opus-v0-7b-exl2
huggingface-cli download bartowski/opus-v0-7b-exl2 --revision 4.0 --local-dir opus-v0-7b-exl2 --local-dir-use-symlinks False
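The same download can be done from Python. A minimal sketch using huggingface_hub's snapshot_download, where local_dir_use_symlinks mirrors the CLI flag above:

from huggingface_hub import snapshot_download

# Download the 4.0 bpw branch into ./opus-v0-7b-exl2
snapshot_download(
    repo_id="bartowski/opus-v0-7b-exl2",
    revision="4.0",
    local_dir="opus-v0-7b-exl2",
    local_dir_use_symlinks=False,
)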
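Once downloaded, the quantization can be loaded with ExLlamaV2's Python API. A minimal sketch, assuming the class and method names from the ExLlamaV2 examples of this era (ExLlamaV2Config, ExLlamaV2BaseGenerator, generate_simple, and so on), which may have changed in later releases:

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the folder downloaded in the steps above
config = ExLlamaV2Config()
config.model_dir = "opus-v0-7b-exl2"
config.prepare()

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Basic sampling settings; tune to taste
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Generate up to 128 new tokens from the prompt
print(generator.generate_simple("Once upon a time,", settings, 128))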