
Official AQLM quantization of mistralai/Mistral-7B-Instruct-v0.2.

For this quantization, we used 2 codebooks of 8 bits each.
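As a rough check (assuming AQLM's usual group size of 8 weights per code), this budget works out to roughly 2 × 8 / 8 = 2 bits per weight, matching the "2Bit" in the model name.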

Results:

| Model | Quantization | MMLU (5-shot) | Model size, GB |
|---|---|---|---|
| mistralai/Mistral-7B-Instruct-v0.2 | None | 0.5912 | 14.5 |
| | 2x8 | 0.4384 | 2.3 |
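
A minimal inference sketch using the standard transformers API (assumes the `aqlm` extra and `accelerate` are installed and a GPU is available; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8"

# Load the quantized checkpoint; device_map="auto" places weights on the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Mistral-Instruct prompt format.
prompt = "[INST] What is AQLM quantization? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```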