Llamacpp Quantizations of gemma-1.1-7b-it

Using llama.cpp release b2589 for quantization.

Original model: https://huggingface.co/google/gemma-1.1-7b-it

Download a single file (not the whole branch) from the table below; a programmatic download example follows the table.

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| gemma-1.1-7b-it-Q8_0.gguf | Q8_0 | 9.07GB | Extremely high quality, generally unneeded but max available quant. |
| gemma-1.1-7b-it-Q6_K.gguf | Q6_K | 7.01GB | Very high quality, near perfect, recommended. |
| gemma-1.1-7b-it-Q5_K_M.gguf | Q5_K_M | 6.14GB | High quality, very usable. |
| gemma-1.1-7b-it-Q5_K_S.gguf | Q5_K_S | 5.98GB | High quality, very usable. |
| gemma-1.1-7b-it-Q5_0.gguf | Q5_0 | 5.98GB | High quality, older format, generally not recommended. |
| gemma-1.1-7b-it-Q4_K_M.gguf | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight. |
| gemma-1.1-7b-it-Q4_K_S.gguf | Q4_K_S | 5.04GB | Slightly lower quality with small space savings. |
| gemma-1.1-7b-it-IQ4_NL.gguf | IQ4_NL | 5.04GB | Decent quality, similar to Q4_K_S, new method of quanting. |
| gemma-1.1-7b-it-IQ4_XS.gguf | IQ4_XS | 4.80GB | Decent quality, new method with similar performance to Q4. |
| gemma-1.1-7b-it-Q4_0.gguf | Q4_0 | 5.01GB | Decent quality, older format, generally not recommended. |
| gemma-1.1-7b-it-Q3_K_L.gguf | Q3_K_L | 4.70GB | Lower quality but usable, good for low RAM availability. |
| gemma-1.1-7b-it-Q3_K_M.gguf | Q3_K_M | 4.36GB | Even lower quality. |
| gemma-1.1-7b-it-IQ3_M.gguf | IQ3_M | 4.10GB | Medium-low quality, new method with decent performance. |
| gemma-1.1-7b-it-IQ3_S.gguf | IQ3_S | 3.98GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| gemma-1.1-7b-it-Q3_K_S.gguf | Q3_K_S | 3.98GB | Low quality, not recommended. |
| gemma-1.1-7b-it-Q2_K.gguf | Q2_K | 3.48GB | Extremely low quality, not recommended. |
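
If you prefer to fetch a single file programmatically rather than through the web UI, the sketch below uses the `huggingface_hub` Python library. The `repo_id` is an assumption about where these quants are hosted, and the filename is just one example from the table; substitute whichever quant fits your hardware.

```python
# Minimal sketch: download one GGUF file (not the whole repo) with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/gemma-1.1-7b-it-GGUF",  # assumed repo id for this card
    filename="gemma-1.1-7b-it-Q4_K_M.gguf",    # pick any filename from the table above
    local_dir=".",                             # save into the current directory
)
print(f"Downloaded to {model_path}")
```

The resulting .gguf file can then be loaded by llama.cpp or any tooling compatible with the GGUF format.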

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
