Transformers documentation
The EETQ library supports int8 per-channel weight-only quantization for NVIDIA GPUs. The high-performance GEMM and GEMV kernels are from FasterTransformer and TensorRT-LLM. It requires neither a calibration dataset nor a pre-quantized model, and accuracy degradation is negligible thanks to the per-channel quantization.
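To make the idea concrete, here is a minimal NumPy sketch of int8 per-channel weight-only quantization (illustrative only, not EETQ's actual CUDA implementation): each output channel of the weight matrix gets its own scale, which keeps the per-channel rounding error small and explains why no calibration data is needed.

```python
import numpy as np

# Illustrative sketch of int8 per-channel weight-only quantization
# (not EETQ's actual kernels): each output channel (row) of the
# weight matrix gets its own scale, so a large outlier in one
# channel does not inflate the rounding error of the others.
def quantize_per_channel(w):
    # w: (out_features, in_features) float32 weights
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.round(w / scales).astype(np.int8)  # values lie in [-127, 127]
    return q, scales

def dequantize(q, scales):
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scales = quantize_per_channel(w)
w_hat = dequantize(q, scales)
# Rounding error is bounded by half a quantization step per channel.
assert np.all(np.abs(w - w_hat) <= scales / 2 + 1e-6)
```

Because the scale is derived directly from each channel's own maximum, quantization is a purely local transform of the weights, with no dataset or pre-quantization pass required.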

Make sure you have eetq installed from the release page:

pip install --no-cache-dir

or from source. EETQ requires CUDA compute capability >= 7.0 and <= 8.9:

git clone
cd EETQ/
git submodule update --init --recursive
pip install .

An unquantized model can be quantized on the fly by passing an EetqConfig to from_pretrained().

from transformers import AutoModelForCausalLM, EetqConfig
path = "/path/to/model"
quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config)
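At inference time, the weight-only scheme stores the weights in int8 (a 4x reduction versus float32) and dequantizes them inside the matmul while activations stay in floating point. The NumPy sketch below only illustrates that arithmetic; EETQ's fused FasterTransformer/TensorRT-LLM kernels perform it on the GPU.

```python
import numpy as np

# Hedged sketch of a weight-only GEMV: weights are stored as int8
# and dequantized on the fly inside the matrix-vector product,
# while activations remain floating point. This is illustrative
# only; EETQ fuses the dequantization into its CUDA GEMM/GEMV.
def int8_weight_only_gemv(q, scales, x):
    # q: (out, in) int8, scales: (out, 1) float32, x: (in,) float32
    return (q.astype(np.float32) * scales) @ x

rng = np.random.default_rng(1)
w = rng.standard_normal((16, 32)).astype(np.float32)
scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
q = np.round(w / scales).astype(np.int8)
x = rng.standard_normal(32).astype(np.float32)

y_ref = w @ x                              # full-precision reference
y_q = int8_weight_only_gemv(q, scales, x)  # weight-only int8 path
assert q.nbytes * 4 == w.nbytes            # 4x weight memory saving
```

The output of the int8 path stays close to the full-precision result, since the per-channel rounding error is bounded by half a quantization step per weight.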

A quantized model can be saved with save_pretrained() and reloaded later with from_pretrained().

quant_path = "/path/to/save/quantized/model"
model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")