
Exllama v2 Quantizations of Mixtral_7Bx2_MoE

Using turboderp's ExLlamaV2 v0.0.11 for quantization.

Each branch contains a different bits-per-weight quantization; the main branch contains only the measurement.json needed for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the target bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
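
For reference, a conversion along these lines is run with ExLlamaV2's convert.py script. The command below is a minimal sketch, not the author's exact invocation: the paths are placeholders, -m reuses the measurement.json published in the main branch, -b sets the target bits per weight, and -hb 8 would be added only for the branches above 6.0 bpw.

python convert.py -i /path/to/Mixtral_7Bx2_MoE -o /path/to/workdir -cf /path/to/Mixtral_7Bx2_MoE-exl2-4_0 -m measurement.json -b 4.0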

Original model: https://huggingface.co/cloudyu/Mixtral_7Bx2_MoE

3.5 bits per weight

3.75 bits per weight

4.0 bits per weight

5.0 bits per weight

6.0 bits per weight

8.0 bits per weight

Download instructions

With git:

git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Mixtral_7Bx2_MoE-exl2
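
The branch names follow the bits-per-weight figure with the dot replaced by an underscore (4_0 for 4.0 bits per weight above); assuming that pattern holds for the other sizes, the 6.0 bpw branch would be cloned like this:

git clone --single-branch --branch 6_0 https://huggingface.co/bartowski/Mixtral_7Bx2_MoE-exl2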

With huggingface-hub (credit to TheBloke for the instructions):

pip3 install huggingface-hub

To download the main branch (useful only if you want the measurement.json) to a folder called Mixtral_7Bx2_MoE-exl2:

mkdir Mixtral_7Bx2_MoE-exl2
huggingface-cli download bartowski/Mixtral_7Bx2_MoE-exl2 --local-dir Mixtral_7Bx2_MoE-exl2 --local-dir-use-symlinks False

To download from a different branch, add the --revision parameter:

mkdir Mixtral_7Bx2_MoE-exl2
huggingface-cli download bartowski/Mixtral_7Bx2_MoE-exl2 --revision 4_0 --local-dir Mixtral_7Bx2_MoE-exl2 --local-dir-use-symlinks False
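
Once downloaded, the folder can be loaded with any ExLlamaV2-compatible frontend. As a minimal sketch, assuming a local checkout of turboderp's exllamav2 repository (which ships a test_inference.py example script), something like the following runs a quick generation against the downloaded weights:

python test_inference.py -m ./Mixtral_7Bx2_MoE-exl2 -p "Once upon a time,"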