---
base_model: google/gemma-2-9b-it
inference: false
license: apache-2.0
model_name: Gemma-2-9B-Instruct-4Bit-GPTQ
pipeline_tag: text-generation
quantized_by: Granther
tags:
- gptq
---

# Gemma-2-9B-Instruct-4Bit-GPTQ

- Original Model: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Model Creator: [google](https://huggingface.co/google)

## Quantization

- This model was quantized to 4-bit with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library

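GPTQ-style quantizers such as AutoGPTQ store each weight matrix as low-bit integers plus per-group scale and zero-point values, which is what makes the 4-bit checkpoint so much smaller than the fp16 original. The sketch below illustrates that storage scheme in plain Python; it is an assumption-laden toy (asymmetric min/max quantization over a single group), not AutoGPTQ's actual API or its error-compensating algorithm.

```python
# Illustrative 4-bit group quantization: each group of weights shares one
# scale and zero-point, and each weight is stored as an integer in [0, 15].
# This is a simplified sketch, NOT the AutoGPTQ implementation.

def quantize_group(weights, n_bits=4):
    """Quantize one group of float weights to n_bits unsigned integers."""
    qmax = (1 << n_bits) - 1                 # 15 for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0          # guard against a flat group
    zero = round(-lo / scale)                # zero-point: maps lo near 0
    q = [max(0, min(qmax, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Reconstruct approximate float weights from the stored integers."""
    return [(qi - zero) * scale for qi in q]

weights = [0.12, -0.53, 0.91, 0.04, -0.27, 0.66]
q, scale, zero = quantize_group(weights)
recon = dequantize_group(q, scale, zero)
max_err = max(abs(w - r) for w, r in zip(weights, recon))
```

Because every weight is reconstructed as `(q - zero) * scale`, the per-weight error is bounded by roughly one quantization step, which is why the benchmark scores below stay close to the original model's.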
## Metrics

| Benchmark | Metric | Gemma 2 9B it (GPTQ 4-bit) | Gemma 2 9B it (original) |
| --------- | ------ | -------------------------- | ------------------------ |
| PIQA      | 0-shot | 80.52                      | 80.79                    |
| MMLU      | 5-shot | 52.00                      | 50.00                    |