---
license: apache-2.0
---
# CorticalStack/mistral-7b-metamathqa-gptq
CorticalStack/mistral-7b-metamathqa-gptq is a GPTQ-quantised version of [CorticalStack/mistral-7b-metamathqa-sft](https://huggingface.co/CorticalStack/mistral-7b-metamathqa-sft).
GPTQ models are currently supported on Linux (NVIDIA/AMD) and Windows (NVIDIA only). macOS users: please use GGUF models.
These GPTQ models are known to work with the following inference servers/web UIs:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
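The model can also be loaded directly with the `transformers` library, which handles GPTQ checkpoints when the GPTQ backend packages are installed. The sketch below is a minimal, untested example: it assumes a CUDA GPU, and the plain question/answer prompt format is an assumption, not the template used for fine-tuning.

```python
MODEL_ID = "CorticalStack/mistral-7b-metamathqa-gptq"


def build_prompt(question: str) -> str:
    # Simple instruction-style prompt; the exact SFT template is an assumption.
    return f"Question: {question}\nAnswer:"


def generate(question: str, max_new_tokens: int = 256) -> str:
    # Imported here so the prompt helper stays usable without transformers installed.
    # Requires the GPTQ backend (e.g. `pip install optimum auto-gptq`) and a CUDA GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Natalia sold clips to 48 friends. How many clips did she sell if each friend bought 2?"))
```

`device_map="auto"` places the quantised weights on the available GPU(s); the dequantisation kernels run on device, which is why a CUDA-capable card is required.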