---
library_name: transformers
widget:
- messages:
  - role: user
    content: How does the brain work?
inference:
  parameters:
    max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of gemma-1.1-2b-it

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2589">b2589</a> for quantization.

Original model: https://huggingface.co/google/gemma-1.1-2b-it

Download a single file (not the whole branch) from the table below. If you prefer to script the download, a sketch follows.
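A minimal download sketch using the official `huggingface_hub` Python client; the repo ID and filenames match the table below, and the choice of Q4_K_M is just an example:

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant file (not the whole repo) into the current directory.
# Q4_K_M is only an example; substitute any filename from the table below.
path = hf_hub_download(
    repo_id="bartowski/gemma-1.1-2b-it-GGUF",
    filename="gemma-1.1-2b-it-Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded .gguf file
```
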
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-1.1-2b-it-Q8_0.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q8_0.gguf) | Q8_0 | 2.66GB | Extremely high quality, generally unneeded but max available quant. |
| [gemma-1.1-2b-it-Q6_K.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q6_K.gguf) | Q6_K | 2.06GB | Very high quality, near perfect, *recommended*. |
| [gemma-1.1-2b-it-Q5_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q5_K_M.gguf) | Q5_K_M | 1.83GB | High quality, *recommended*. |
| [gemma-1.1-2b-it-Q5_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q5_K_S.gguf) | Q5_K_S | 1.79GB | High quality, *recommended*. |
| [gemma-1.1-2b-it-Q5_0.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q5_0.gguf) | Q5_0 | 1.79GB | High quality, older format, generally not recommended. |
| [gemma-1.1-2b-it-Q4_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q4_K_M.gguf) | Q4_K_M | 1.63GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [gemma-1.1-2b-it-Q4_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q4_K_S.gguf) | Q4_K_S | 1.55GB | Slightly lower quality with small space savings. |
| [gemma-1.1-2b-it-IQ4_NL.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-IQ4_NL.gguf) | IQ4_NL | 1.56GB | Decent quality, similar to Q4_K_S, uses a newer quantization method, *recommended*. |
| [gemma-1.1-2b-it-IQ4_XS.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-IQ4_XS.gguf) | IQ4_XS | 1.50GB | Decent quality, newer method with performance similar to the Q4_K quants at a slightly smaller size. |
| [gemma-1.1-2b-it-Q4_0.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q4_0.gguf) | Q4_0 | 1.55GB | Decent quality, older format, generally not recommended. |
| [gemma-1.1-2b-it-Q3_K_L.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q3_K_L.gguf) | Q3_K_L | 1.46GB | Lower quality but usable, good for low RAM availability. |
| [gemma-1.1-2b-it-Q3_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q3_K_M.gguf) | Q3_K_M | 1.38GB | Even lower quality. |
| [gemma-1.1-2b-it-IQ3_M.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-IQ3_M.gguf) | IQ3_M | 1.30GB | Medium-low quality, new method with decent performance. |
| [gemma-1.1-2b-it-IQ3_S.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-IQ3_S.gguf) | IQ3_S | 1.28GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [gemma-1.1-2b-it-Q3_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q3_K_S.gguf) | Q3_K_S | 1.28GB | Low quality, not recommended. |
| [gemma-1.1-2b-it-Q2_K.gguf](https://huggingface.co/bartowski/gemma-1.1-2b-it-GGUF/blob/main/gemma-1.1-2b-it-Q2_K.gguf) | Q2_K | 1.15GB | Extremely low quality, *not* recommended. |
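
Once you have a file, here is a minimal sketch of running it locally. It assumes the llama-cpp-python bindings (an assumption on my part; any llama.cpp-based runtime at or newer than the b2589 release used for quantization should also load these files). Recent versions of the bindings pick up the Gemma chat template from the GGUF metadata; on older versions you may need to pass `chat_format="gemma"` explicitly.

```python
from llama_cpp import Llama

# Load the quantized model; the filename and context size are illustrative.
llm = Llama(model_path="gemma-1.1-2b-it-Q4_K_M.gguf", n_ctx=2048)

# The chat template embedded in the GGUF metadata is applied automatically.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How does the brain work?"}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```
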
Want to support my work? Visit my ko-fi page: https://ko-fi.com/bartowski