Quantized version of Locutusque/Rhino-Mistral-7B-GGUF. Only 5-bit and 16-bit quantizations are available so far.
- Downloads last month: 46
- Model size: 7.24B params
- Architecture: llama