Instructions for using google/gemma-4-E4B-it with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use google/gemma-4-E4B-it with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/gemma-4-E4B-it")
model = AutoModelForImageTextToText.from_pretrained("google/gemma-4-E4B-it")
```
- Notebooks
- Google Colab
- Kaggle
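The loading snippet above stops short of inference. A minimal sketch of how a prompt is typically assembled for an image-text-to-text model follows; the chat-message structure is the standard Transformers format, but the model ID is taken from this page as-is and the generation calls are left commented out because they require downloading the weights.

```python
def build_messages(image_url, question):
    """Build a chat-format message list pairing one image with one question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

# The steps below need network access and enough memory for the weights,
# so they are shown but not executed here (a sketch, not a verified run):
#
# from transformers import AutoProcessor, AutoModelForImageTextToText
# processor = AutoProcessor.from_pretrained("google/gemma-4-E4B-it")
# model = AutoModelForImageTextToText.from_pretrained("google/gemma-4-E4B-it")
# inputs = processor.apply_chat_template(
#     build_messages("https://example.com/cat.png", "What is in this image?"),
#     add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# out = model.generate(**inputs, max_new_tokens=64)
# print(processor.batch_decode(out, skip_special_tokens=True)[0])
```

The image URL and question here are placeholders; any local path or PIL image accepted by the processor would work the same way.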
Fix model ID: add -it suffix
.eval_results/gpqa_diamond.yaml
CHANGED
```diff
@@ -4,6 +4,6 @@
   value: 58.6
   date: '2026-04-02'
   source:
-    url: https://huggingface.co/google/gemma-4-E4B
+    url: https://huggingface.co/google/gemma-4-E4B-it
   name: Model Card
   user: merve
```
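For clarity, after the change the affected entry of `.eval_results/gpqa_diamond.yaml` would read roughly as follows (a sketch: the keys come from the diff, but exact indentation and any surrounding fields are assumed):

```yaml
value: 58.6
date: '2026-04-02'
source:
  url: https://huggingface.co/google/gemma-4-E4B-it
  name: Model Card
user: merve
```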