GPTQ version of meta-llama/Llama-3.2-1B: 8-bit, group size 128
✅ Tested on vLLM
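Since the card notes vLLM compatibility, here is a minimal serving sketch, assuming a recent vLLM release that ships the `vllm serve` CLI; vLLM normally detects the GPTQ config from the checkpoint, and the explicit flag is optional:

```shell
# Hedged sketch: serve this checkpoint via vLLM's OpenAI-compatible server.
# Requires a CUDA GPU and the model download; --quantization gptq is
# usually auto-detected from the repo's quantization config.
vllm serve adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g --quantization gptq
```

Once running, the server exposes the standard OpenAI-compatible `/v1/chat/completions` endpoint.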

Format: Safetensors
Model size: 516M params
Tensor types: I32, FP16
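The 516M figure is the raw tensor-element count of the safetensors files, not the model's true ~1.24B parameters: GPTQ packs four 8-bit weights into each I32 element, while embeddings and norms stay in FP16. A rough back-of-the-envelope check, assuming the published Llama-3.2-1B shapes (16 layers, hidden 2048, intermediate 8192, vocab 128256, GQA KV dim 512) and one FP16 scale plus packed zeros per 128-weight group — none of these numbers come from this card:

```python
# Rough estimate of the tensor-element count HF reports for this checkpoint.
# Assumed Llama-3.2-1B shapes (not stated on this card).
layers, hidden, inter, vocab, kv_dim = 16, 2048, 8192, 128256, 512

embed = vocab * hidden                      # FP16 embedding, tied with lm_head
per_layer_linear = (2 * hidden * hidden     # q_proj, o_proj
                    + 2 * hidden * kv_dim   # k_proj, v_proj
                    + 3 * hidden * inter)   # gate, up, down projections
linear = layers * per_layer_linear          # weights that get quantized

packed = linear // 4                        # four 8-bit weights per I32 element
scales = linear // 128                      # one FP16 scale per group of 128
qzeros = scales // 4                        # zero points packed into I32 as well
norms = layers * 2 * hidden + hidden        # FP16 RMSNorm weights

total = embed + packed + scales + qzeros + norms
print(f"{total / 1e6:.0f}M counted params")  # close to the 516M on the card
```

The estimate lands within a fraction of a percent of the listed 516M, which is consistent with the I32/FP16 tensor-type split shown above.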

Model: adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g (quantized derivative of the base model named above)