---
base_model: google/gemma-2-9b-it
inference: false
license: apache-2.0
model_name: Gemma-2-9B-Instruct-4Bit-GPTQ
pipeline_tag: text-generation
quantized_by: Granther
tags:
- gptq
---

# Gemma-2-9B-Instruct-4Bit-GPTQ
- Original Model: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Model Creator: [google](https://huggingface.co/google)

## Quantization
- This model was quantized to 4-bit with the AutoGPTQ library (a sketch of a typical quantization setup is shown below)
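
The exact quantization settings are not listed on this card, so the snippet below is only a minimal sketch of how a 4-bit GPTQ export of `google/gemma-2-9b-it` with AutoGPTQ typically looks. The group size, activation ordering, and calibration text are placeholder assumptions, not the settings actually used for this checkpoint.

```python
# Minimal 4-bit GPTQ quantization sketch with AutoGPTQ.
# group_size, desc_act, and the calibration data are assumptions;
# the card only states that AutoGPTQ was used for 4-bit quantization.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "google/gemma-2-9b-it"
out_dir = "Gemma-2-9B-Instruct-4Bit-GPTQ"

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights, matching the model name
    group_size=128,  # assumed group size
    desc_act=False,  # assumed activation ordering
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ calibrates on a small set of text samples; a real run would use
# a few hundred examples rather than the single sentence used here.
calibration = [
    tokenizer(
        "GPTQ calibrates the quantized weights on a handful of text samples like this one."
    )
]

model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(calibration)

model.save_quantized(out_dir)
tokenizer.save_pretrained(out_dir)
```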


## Metrics

| Benchmark | Setting | Gemma-2-9B-Instruct-4Bit-GPTQ | gemma-2-9b-it (original) |
| --------- | ------- | ----------------------------- | ------------------------ |
| PIQA      | 0-shot  | 80.52                         | 80.79                    |
| MMLU      | 5-shot  | 52.00                         | 50.00                    |
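
## Usage

The snippet below is a hedged sketch of running the quantized checkpoint with Hugging Face Transformers, which dispatches GPTQ checkpoints to the AutoGPTQ kernels when the `optimum` and `auto-gptq` packages are installed. The repository id is assumed from the card metadata (`quantized_by: Granther`); adjust it to wherever the quantized weights are actually hosted.

```python
# Inference sketch; the repo id below is an assumption based on the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Granther/Gemma-2-9B-Instruct-4Bit-GPTQ"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# With optimum and auto-gptq installed, Transformers loads the GPTQ weights directly.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```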