
Llamacpp Quantizations of openbuddy-mistral-7b-v19.1-4k

Using llama.cpp commit fa97464 for quantization.

Original model: https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v19.1-4k
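For reference, the usual llama.cpp quantization flow around that commit was: convert the original Hugging Face checkpoint to an FP16 GGUF, then quantize it to the desired type. The sketch below is illustrative only; script and binary names vary between llama.cpp versions, and the paths and output names here are assumptions rather than the exact commands used to produce these files.

```python
# Illustrative sketch of the llama.cpp convert-then-quantize flow.
# Paths, script names, and output filenames are assumptions, not the exact
# commands used for the files listed in this card.
import subprocess

# 1. Convert the original HF checkpoint directory to an FP16 GGUF.
subprocess.run(
    ["python", "convert.py", "openbuddy-mistral-7b-v19.1-4k",
     "--outtype", "f16", "--outfile", "openbuddy-mistral-7b-v19.1-4k-f16.gguf"],
    check=True,
)

# 2. Quantize the FP16 GGUF down to a smaller type, e.g. Q4_K_M.
subprocess.run(
    ["./quantize", "openbuddy-mistral-7b-v19.1-4k-f16.gguf",
     "openbuddy-mistral-7b-v19.1-4k-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```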

Download a file (not the whole branch) from the table below; a scripted-download sketch follows the table.

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| openbuddy-mistral-7b-v19.1-4k-Q8_0.gguf | Q8_0 | 7.73GB | Extremely high quality, generally unneeded but max available quant. |
| openbuddy-mistral-7b-v19.1-4k-Q6_K.gguf | Q6_K | 5.97GB | Very high quality, near perfect, recommended. |
| openbuddy-mistral-7b-v19.1-4k-Q5_K_M.gguf | Q5_K_M | 5.15GB | High quality, very usable. |
| openbuddy-mistral-7b-v19.1-4k-Q5_K_S.gguf | Q5_K_S | 5.02GB | High quality, very usable. |
| openbuddy-mistral-7b-v19.1-4k-Q5_0.gguf | Q5_0 | 5.02GB | High quality, older format, generally not recommended. |
| openbuddy-mistral-7b-v19.1-4k-Q4_K_M.gguf | Q4_K_M | 4.39GB | Good quality, similar to 4.25 bpw. |
| openbuddy-mistral-7b-v19.1-4k-Q4_K_S.gguf | Q4_K_S | 4.16GB | Slightly lower quality with small space savings. |
| openbuddy-mistral-7b-v19.1-4k-Q4_0.gguf | Q4_0 | 4.13GB | Decent quality, older format, generally not recommended. |
| openbuddy-mistral-7b-v19.1-4k-Q3_K_L.gguf | Q3_K_L | 3.84GB | Lower quality but usable, good for low RAM availability. |
| openbuddy-mistral-7b-v19.1-4k-Q3_K_M.gguf | Q3_K_M | 3.54GB | Even lower quality. |
| openbuddy-mistral-7b-v19.1-4k-Q3_K_S.gguf | Q3_K_S | 3.18GB | Low quality, not recommended. |
| openbuddy-mistral-7b-v19.1-4k-Q2_K.gguf | Q2_K | 2.74GB | Extremely low quality, not recommended. |
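
To fetch a single file from a script, a minimal sketch using huggingface_hub is shown below. The repo_id is an assumption based on the usual "bartowski/&lt;model&gt;-GGUF" naming; replace it with this repository's actual id if it differs.

```python
# Minimal sketch: download one quant file instead of cloning the whole branch.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="bartowski/openbuddy-mistral-7b-v19.1-4k-GGUF",  # assumed repo id
    filename="openbuddy-mistral-7b-v19.1-4k-Q4_K_M.gguf",
)
print(gguf_path)  # local cache path of the downloaded GGUF
```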

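Once downloaded, the GGUF can be loaded with any llama.cpp-compatible runtime; a minimal llama-cpp-python sketch follows. The prompt is illustrative only, as this card does not document the OpenBuddy prompt template; see the original model card for the recommended format.

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The prompt below is a placeholder; see the original OpenBuddy model card for
# the recommended prompt format.
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-mistral-7b-v19.1-4k-Q4_K_M.gguf",
    n_ctx=4096,  # this is the 4k-context variant
)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```
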
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
