QuantFactory/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF

This is a quantized version of Lumpen1/MadWizard-SFT-v2-Mistral-7b-v0.3, created using llama.cpp.

Format: GGUF
Model size: 7.25B params
Architecture: llama

Quantized from: Lumpen1/MadWizard-SFT-v2-Mistral-7b-v0.3
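
A minimal sketch of running this GGUF checkpoint locally with llama-cpp-python and huggingface_hub. The quantization filename below (Q4_K_M) is an assumption for illustration; check the repository's file list for the variants actually published.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repo; the filename is a placeholder,
# pick an actual quantization variant from the repository's file list.
model_path = hf_hub_download(
    repo_id="QuantFactory/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF",
    filename="MadWizard-SFT-v2-Mistral-7b-v0.3.Q4_K_M.gguf",
)

# Load the model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short story about a mad wizard.", max_tokens=128)
print(out["choices"][0]["text"])
```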