Exllama v2 Quantizations of speechless-mistral-dolphin-orca-platypus-samantha-7b
Using turboderp's ExLlamaV2 v0.0.8 for quantization.
Each branch contains a different bits-per-weight quantization; the main branch contains only the measurement.json needed for further conversions.
Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
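The measurement.json from the main branch can be reused to produce a new bits-per-weight conversion with ExLlamaV2's convert.py. A minimal sketch, with placeholder paths, assuming a local clone of the exllamav2 repo and the original fp16 model on disk (flags match v0.0.8-era convert.py; check your version's --help):

# Reuse the existing measurement to skip the measurement pass.
# -i: original fp16 model dir, -o: scratch dir, -m: measurement.json,
# -c: calibration parquet, -cf: final output dir, -b: target bits per weight
python convert.py \
    -i /path/to/original-fp16-model \
    -o /path/to/working-dir \
    -m /path/to/measurement.json \
    -c /path/to/wikitext-103-raw-v1-test.parquet \
    -cf /path/to/output-5.0bpw \
    -b 5.0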
Original model: https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
Download instructions
With git:
git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
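Each quantization lives on its own branch named after its bits per weight (4.0 in the example above). To list the available branches before cloning, plain git works against the Hugging Face repo, no extra tooling needed:

# Lists one ref per bits-per-weight branch, plus main
git ls-remote --heads https://huggingface.co/bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2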
With huggingface hub (credit to TheBloke for instructions):
pip3 install huggingface-hub
To download the main branch (only useful if you only care about measurement.json) to a folder called speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2:
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
To download from a different branch, add the --revision parameter:
mkdir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2
huggingface-cli download bartowski/speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --revision 4.0 --local-dir speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 --local-dir-use-symlinks False
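As a quick smoke test after downloading, the test_inference.py script bundled with the exllamav2 repo can load the quantized folder directly. A minimal sketch, assuming exllamav2 and its dependencies are installed and run from the repo root; the prompt is arbitrary:

# -m points at the downloaded model folder, -p is a test prompt
python test_inference.py -m speechless-mistral-dolphin-orca-platypus-samantha-7b-exl2 -p "Once upon a time,"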