CorticalStack/mistral-7b-metamathqa-gptq

CorticalStack/mistral-7b-metamathqa-gptq is a GPTQ-quantised version of CorticalStack/mistral-7b-metamathqa-sft.
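
Below is a minimal usage sketch (not part of the original card) showing one way to load and run the quantised model with the Hugging Face transformers stack. It assumes `transformers`, `accelerate`, `optimum`, and `auto-gptq` are installed; the prompt and generation settings are illustrative only.

```python
# Illustrative example, assuming transformers + accelerate + optimum + auto-gptq
# are installed so the GPTQ weights can be loaded directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-metamathqa-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantised weights on the available GPU(s)
)

# Example math prompt; adjust generation settings to taste.
prompt = "What is the sum of the first 10 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```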

GPTQ models are currently supported on Linux (NVIDIA/AMD) and Windows (NVIDIA only). macOS users should use GGUF models instead.

This GPTQ model is known to work in the following inference servers and web UIs.

Safetensors model size: 1.2B params
Tensor types: I32, FP16