GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2b - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/gemma-2b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [gemma-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [gemma-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [gemma-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2b-gguf/blob/main/gemma-2b.Q6_K.gguf) | Q6_K | 1.92GB |
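To try one of these files locally, the sketch below shows one possible route using the `huggingface_hub` and `llama-cpp-python` packages (both assumed to be installed separately, e.g. via `pip install huggingface_hub llama-cpp-python`). The Q4_K_M quant is picked here only as an illustrative middle ground between size and quality; swap in any filename from the table above.

```python
# Minimal sketch: download a single quant from this repo and run a completion.
# Assumes `huggingface_hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one GGUF file from the repo (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="RichardErkhov/google_-_gemma-2b-gguf",
    filename="gemma-2b.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```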
Original model description:

The google/gemma-2b repo is gated, so its model card is not reproduced here; you must be authenticated on Hugging Face to access it.