Quantization
Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they’re quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
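To make the savings concrete, here is a quick back-of-the-envelope sketch. The 12B parameter count is only an illustrative figure, not a measurement of any specific model.

```py
# Rough memory math: model size ≈ number of parameters x bytes per weight.
num_params = 12_000_000_000  # illustrative ~12B-parameter transformer

for name, bytes_per_weight in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    size_gb = num_params * bytes_per_weight / 1024**3
    print(f"{name}: ~{size_gb:.1f} GB")

# fp32: ~44.7 GB, fp16/bf16: ~22.4 GB, int8: ~11.2 GB, 4-bit: ~5.6 GB
```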
Diffusers supports multiple quantization backends to make large diffusion models like Flux more accessible. This guide shows how to use the PipelineQuantizationConfig class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.
Pipeline-level quantization
There are two ways you can use PipelineQuantizationConfig depending on the level of control you want over the quantization specifications of each model in the pipeline.
- for basic and simple use cases, you only need to define the `quant_backend`, `quant_kwargs`, and `components_to_quantize` arguments
- for more granular quantization control, provide a `quant_mapping` that specifies the quantization options for each individual model component
Simple quantization
Initialize PipelineQuantizationConfig with the following parameters.
- `quant_backend` specifies which quantization backend to use. Currently supported backends include `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
- `quant_kwargs` contains the specific quantization arguments to use.
- `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute-intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one, such as FluxPipeline. The example below quantizes the T5 text encoder in FluxPipeline while keeping the CLIP model intact.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
```
Pass the `pipeline_quant_config` to from_pretrained() to quantize the pipeline.
```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```
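To verify the savings, you can compare each component’s weight memory before and after quantization. Below is a minimal sketch using plain PyTorch; summing parameter storage is an approximation, since quantized layers may carry extra state such as scales.

```py
# Approximate a module's weight memory by summing parameter storage.
def weight_memory_gb(module):
    return sum(p.numel() * p.element_size() for p in module.parameters()) / 1024**3

print(f"transformer: ~{weight_memory_gb(pipe.transformer):.2f} GB")
print(f"text_encoder_2: ~{weight_memory_gb(pipe.text_encoder_2):.2f} GB")
```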
quant_mapping
The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.

Initialize PipelineQuantizationConfig and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline, such as the transformer and text encoder.
The example below uses two quantization backends, QuantoConfig and transformers.BitsAndBytesConfig, for the transformer and text encoder.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.quantizers.quantization_config import QuantoConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": QuantoConfig(weights_dtype="int8"),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)
```
There is a separate bitsandbytes backend in Transformers. You need to import and use transformers.BitsAndBytesConfig for components that come from Transformers. For example, `text_encoder_2` in FluxPipeline is a T5EncoderModel from Transformers, so you need to use transformers.BitsAndBytesConfig instead of diffusers.BitsAndBytesConfig.

Use the simple quantization method above if you don’t want to manage these distinct imports or aren’t sure where each pipeline component comes from.
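If you do want to check where a component comes from without downloading any weights, one option is to read the pipeline’s model_index.json, which maps each component to its (library, class) pair. This is a minimal sketch; the exact layout of model_index.json entries may vary across versions.

```py
# Sketch: list each component's source library from the pipeline config.
from diffusers import DiffusionPipeline

config = DiffusionPipeline.load_config("black-forest-labs/FLUX.1-dev")
for name, value in config.items():
    if isinstance(value, (list, tuple)) and len(value) == 2:
        library, class_name = value
        print(f"{name}: {class_name} (from {library})")
```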
Alternatively, you can use the bitsandbytes backend for both components by combining the Diffusers and Transformers configs.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)
```
Pass the `pipeline_quant_config` to from_pretrained() to quantize the pipeline.
```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```
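To double-check which components were actually quantized, a minimal sketch is to inspect the parameter dtypes of each model in the pipeline; quantized components typically show low-bit or packed integer dtypes.

```py
# Print the parameter dtypes found in each pipeline component.
for name, component in pipe.components.items():
    if component is not None and hasattr(component, "parameters"):
        dtypes = {str(p.dtype) for p in component.parameters()}
        print(f"{name}: {dtypes}")
```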
Resources
Check out the resources below to learn more about quantization.
- If you are new to quantization, we recommend checking out the beginner-friendly quantization courses created in collaboration with DeepLearning.AI.
- Refer to the Contribute new quantization method guide if you’re interested in adding a new quantization method.
- The Transformers quantization overview compares the pros and cons of the different quantization backends.
- Read the Exploring Quantization Backends in Diffusers blog post for a brief introduction to each quantization backend, how to choose a backend, and how to combine quantization with other memory optimizations.