This is a GGUF model file of gemma-3-12b-it, developed by Google.

It is well suited for local deployment on PCs, laptops, and mobile devices.
gemma-3-12b-it-q4_0.gguf is the quantization-aware trained (QAT) checkpoint of Gemma 3: it needs roughly 3x less VRAM while retaining almost the same quality. Recommended.
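As a minimal sketch of running the quantized file locally, here is one way to load it with the llama-cpp-python bindings (an assumed dependency, not something this card prescribes) together with a small helper that wraps a message in Gemma's chat-turn markers; the model path and parameter values are illustrative:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    # Assumed dependency: `pip install llama-cpp-python`.
    from llama_cpp import Llama

    llm = Llama(
        model_path="gemma-3-12b-it-q4_0.gguf",  # path to the downloaded file
        n_ctx=4096,        # context window (illustrative value)
        n_gpu_layers=-1,   # offload all layers to the GPU if available
    )
    out = llm(
        format_gemma_prompt("Summarize this PDF in one sentence."),
        max_tokens=128,
        stop=["<end_of_turn>"],  # stop at the end of the model's turn
    )
    print(out["choices"][0]["text"])
```

The stop token keeps generation from running into a new turn; apps such as the PDF readers below handle this formatting for you.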

If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively:

  • If you use Zotero to manage and read your PDFs, PapersGPT is a free plugin that lets you chat with PDFs using your local gemma-3-12b-it.
  • You can download the ChatPDFLocal macOS app from here, load one PDF or a batch of them, and quickly try out the model through chat-based reading.
Downloads last month: 917

Model details:

  • Format: GGUF
  • Model size: 11.8B params
  • Architecture: gemma3