Quantized to i1-GGUF using SpongeQuant, the Oobabooga of LLM quantization.


[Image: X-ray of hand]
[Audio: Pixies – Where Is My Mind (USA, 1988)]

What is a GGUF?

GGUF is a file format for running large language models (LLMs) on a wide range of hardware. It supports both regular processors (CPUs) and graphics cards (GPUs), and it is designed so that runtimes can load and execute models efficiently. Many LLMs require powerful, expensive GPUs, but with GGUF a runtime such as llama.cpp can split the work: if a GPU doesn't have enough memory, some layers stay on the GPU while the rest run on the CPU, so the model still works when GPU resources are limited. GGUF pairs especially well with quantized models, which use less memory and run faster, making them ideal for lower-end hardware, though it can also store full-precision weights when needed. Thanks to these optimizations, GGUF allows LLMs to run efficiently on everything from high-end GPUs to laptops and even CPU-only systems.
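As an illustration of GPU offloading, here is a minimal sketch using the llama-cpp-python bindings, one common way to run GGUF files. The file name and layer count below are placeholders rather than instructions from this repo; `n_gpu_layers` controls how many layers are offloaded to the GPU while the rest run on the CPU.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name and n_gpu_layers value are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="patricide-12B-Unslop-Mell.i1-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=20,  # layers to offload to the GPU; 0 = CPU-only, -1 = offload all
    n_ctx=4096,       # context window
)

out = llm("Explain GGUF in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

If the model does not fit in VRAM, lowering `n_gpu_layers` keeps the remaining layers on the CPU at the cost of some speed.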

What is an i1-GGUF?

i1-GGUF is a GGUF model produced with imatrix (importance matrix) quantization, a smarter way of reducing model size while preserving key details. Instead of shrinking every weight equally, it measures how much each part of the model contributes to the output on calibration data and keeps the most important weights more accurate. Like standard GGUF, i1-GGUF allows LLMs to run on various hardware, including CPUs and lower-end GPUs. However, because it prioritizes important weights, i1-GGUF models typically deliver better responses than static GGUF quantizations of the same size while maintaining efficiency.
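For context, a typical imatrix pipeline in llama.cpp looks roughly like the sketch below: an importance matrix is first computed from calibration text, then handed to the quantizer so it can spend precision where it matters. File names and the calibration text are assumptions, and the tool names follow recent llama.cpp builds (older builds ship them as `imatrix` and `quantize`).

```python
# Rough sketch of llama.cpp's imatrix workflow, driven from Python.
# Paths, file names, and calibration text are placeholders.
import subprocess

# Step 1: measure weight importance on representative calibration text.
subprocess.run([
    "llama-imatrix",
    "-m", "patricide-12B-Unslop-Mell-f16.gguf",  # full-precision source model
    "-f", "calibration.txt",                     # sample text run through the model
    "-o", "imatrix.dat",                         # resulting importance matrix
], check=True)

# Step 2: quantize, letting the importance matrix guide precision allocation.
subprocess.run([
    "llama-quantize",
    "--imatrix", "imatrix.dat",
    "patricide-12B-Unslop-Mell-f16.gguf",
    "patricide-12B-Unslop-Mell.i1-IQ2_M.gguf",
    "IQ2_M",                                     # target quantization type
], check=True)
```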

Downloads last month: 386
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit

Inference Providers
This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because it has no library tag.

Model tree for SpongeEngine/patricide-12B-Unslop-Mell-i1-GGUF

Quantized (5 models, including this one)