---
base_model: google/gemma-2-9b-it
extra_gated_button_content: Acknowledge license
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
inference: false
library_name: gguf
license: gemma
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- conversational
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# gemma-2-9b-it-IMat-GGUF
_Llama.cpp imatrix quantization of google/gemma-2-9b-it_

- Original Model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Original dtype: `BF16` (`bfloat16`)
- Quantized by: llama.cpp ([PR #8156](https://github.com/ggerganov/llama.cpp/pull/8156))
- IMatrix dataset: here
## Files

### IMatrix
- Status: ✅ Available
- Link: here
### Common Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
gemma-2-9b-it.Q8_0.gguf | Q8_0 | 9.83GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q6_K.gguf | Q6_K | 7.59GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q4_K.gguf | Q4_K | 5.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
gemma-2-9b-it.Q3_K.gguf | Q3_K | 4.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
gemma-2-9b-it.Q2_K.gguf | Q2_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants

Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
---|---|---|---|---|---|
gemma-2-9b-it.BF16.gguf | BF16 | 18.49GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.FP16.gguf | F16 | 18.49GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q8_0.gguf | Q8_0 | 9.83GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q6_K.gguf | Q6_K | 7.59GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q5_K.gguf | Q5_K | 6.65GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q5_K_S.gguf | Q5_K_S | 6.48GB | ✅ Available | ⚪ Static | 📦 No |
gemma-2-9b-it.Q4_K.gguf | Q4_K | 5.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
gemma-2-9b-it.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.Q3_K.gguf | Q3_K | 4.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
gemma-2-9b-it.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.Q2_K.gguf | Q2_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No |
gemma-2-9b-it.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
gemma-2-9b-it.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |
## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
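If you prefer to script the download, the same `huggingface_hub` package installed above also exposes a Python API. A minimal sketch (the chosen filename is just an example; pick any available quant from the tables above):

```python
# Minimal sketch: fetch a single quant via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/gemma-2-9b-it-IMat-GGUF",
    filename="gemma-2-9b-it.Q8_0.gguf",  # example file; any quant listed above works
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF
```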
## Inference

### Simple chat template
```
<bos><start_of_turn>user
{user_prompt}<end_of_turn>
<start_of_turn>model
{assistant_response}<end_of_turn>
<start_of_turn>user
{next_user_prompt}<end_of_turn>
```
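If you assemble prompts by hand (for example when passing `-p` to llama.cpp below), a small helper that renders a message list into the template above may be convenient. This is only a sketch of the format shown here, with a `format_gemma_chat` helper name chosen for illustration:

```python
# Sketch: render {"role", "content"} messages into the Gemma chat template above.
# Assumes roles "user" and "assistant" (the latter mapped to the "model" turn).
def format_gemma_chat(messages):
    rendered = "<bos>"  # some loaders (including llama.cpp) may add <bos> themselves
    for msg in messages:
        turn = "model" if msg["role"] == "assistant" else "user"
        rendered += f"<start_of_turn>{turn}\n{msg['content']}<end_of_turn>\n"
    # Open the next model turn so the model continues as the assistant.
    rendered += "<start_of_turn>model\n"
    return rendered

print(format_gemma_chat([{"role": "user", "content": "Hello!"}]))
```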
### Llama.cpp
```
llama.cpp/main -m gemma-2-9b-it.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
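Alternatively, the same GGUF files can be loaded from Python with `llama-cpp-python` (a separate package, not part of this repo), which can apply the model's chat template for you. A minimal sketch, assuming the Q8_0 file sits in the current directory:

```python
# Sketch: chat with the quantized model via llama-cpp-python
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-9b-it.Q8_0.gguf", n_ctx=8192)  # n_ctx is an example value
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```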
## FAQ

### Why is the IMatrix not applied everywhere?
According to this investigation, it appears that lower-bit quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `gemma-2-9b-it.Q8_0`)
3. Run `gguf-split --merge gemma-2-9b-it.Q8_0/gemma-2-9b-it.Q8_0-00001-of-XXXXX.gguf gemma-2-9b-it.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.
Got a suggestion? Ping me @legraphista!