GGUFs Collection (36 items)
I take requests, feel free to drop me a line in the community posts.
Important Note: Inference is currently only supported on this fork of llama.cpp: https://github.com/iamlemec/llama.cpp/tree/mistral-nemo (all credit to iamlemec for his work on Mistral-Nemo support). Other front-ends, such as the main branch of llama.cpp, kobold.cpp, and text-generation-webui, may not work as intended.
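For reference, a minimal build sketch, assuming the fork follows the standard llama.cpp CMake workflow (the CUDA flag and binary names may differ between revisions):

```bash
# Clone the Mistral-Nemo branch of iamlemec's fork
git clone --branch mistral-nemo https://github.com/iamlemec/llama.cpp
cd llama.cpp

# Standard llama.cpp CMake build; drop -DGGML_CUDA=ON for a CPU-only build
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```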
Quantized from Mistral-Nemo-Instruct-2407 fp16
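These are not necessarily the exact commands used here, but the usual llama.cpp flow for producing such quants looks roughly like this (filenames are hypothetical):

```bash
# 1. Convert the fp16 HF checkpoint to GGUF
python convert_hf_to_gguf.py Mistral-Nemo-Instruct-2407 \
    --outtype f16 --outfile Mistral-Nemo-Instruct-2407-f16.gguf

# 2. Quantize, e.g. to Q6_K
./build/bin/llama-quantize Mistral-Nemo-Instruct-2407-f16.gguf \
    Mistral-Nemo-Instruct-2407-Q6_K.gguf Q6_K
```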
KL-Divergence Reference Chart
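For context, KL-divergence between a quant and the fp16 model can be measured with llama.cpp's perplexity tool; a sketch of one common approach (hypothetical filenames, and not necessarily how the chart above was produced):

```bash
# 1. Record fp16 logits over a test corpus
./build/bin/llama-perplexity -m Mistral-Nemo-Instruct-2407-f16.gguf \
    -f wiki.test.raw --kl-divergence-base nemo-f16.kld

# 2. Compare a quant against the recorded logits
./build/bin/llama-perplexity -m Mistral-Nemo-Instruct-2407-Q6_K.gguf \
    --kl-divergence-base nemo-f16.kld --kl-divergence
```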
Quant-specific Tips:
- If you are getting a `cudaMalloc failed: out of memory` error, try passing a lower context size to llama.cpp, e.g. `-c 8192` for 8k.
- If all of your cards are Ampere generation or newer, you can enable flash attention with `-fa`.
- Provided flash attention is enabled, you can also use a quantized KV cache to save VRAM, e.g. `-ctk q8_0 -ctv q8_0` for 8-bit.
- Mistral recommends a temperature of 0.3 for this model (all of the above flags are combined in the example below).
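Putting the tips together, an example invocation might look like this (the model filename is hypothetical, and the binary may be named differently depending on the llama.cpp revision):

```bash
# 8k context, flash attention, 8-bit KV cache, Mistral's recommended temperature
./build/bin/llama-cli -m Mistral-Nemo-Instruct-2407-Q6_K.gguf \
    -c 8192 -fa -ctk q8_0 -ctv q8_0 --temp 0.3 \
    -p "Write a haiku about quantization."
```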
The original model card can be found here.