---
license: apache-2.0
---

# Model Summary

This repository hosts quantized versions of the IBM Granite 3.3 8B Instruct model.

- **Format:** GGUF
- **Converter:** llama.cpp `12b17501e6015ffe568ac54fdf08e6580833bf1b`
- **Quantizer:** LM-Kit.NET 2025.4.9

For more detailed information on the base model, please visit the following link:
https://huggingface.co/ibm-granite/granite-3.3-8b-instruct

- **Model size:** 8.17B params
- **Architecture:** granite
- **Available quantizations:** 4-bit, 16-bit
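
Because the weights ship as GGUF files, any llama.cpp-compatible runtime can load them. The sketch below uses the llama-cpp-python bindings; the quantized filename, context size, and GPU-offload settings are illustrative assumptions rather than values taken from this repository.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python). The GGUF filename is hypothetical --
# substitute the actual quantized file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3.3-8b-instruct-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

# Chat-style inference; the bindings apply the chat template stored in the
# GGUF metadata.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF file is in one sentence."}
    ],
    max_tokens=128,
)

print(response["choices"][0]["message"]["content"])
```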
