Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory, and to speed up inference. Diffusers supports 8-bit and 4-bit quantization with bitsandbytes.
Quantization techniques that aren't supported in Diffusers can be added with the DiffusersQuantizer class.
Learn how to quantize models in the Quantization guide.
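As a minimal sketch, quantizing a single model component works by passing a quantization config to `from_pretrained` (the checkpoint name and `subfolder` below are illustrative; substitute the model you actually want to load):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# Enable 8-bit (LLM.int8()) quantization for the weights.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Load only the transformer component with quantized weights.
transformer_8bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```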
BitsAndBytesConfig
class diffusers.BitsAndBytesConfig
< source >( load_in_8bit = False, load_in_4bit = False, llm_int8_threshold = 6.0, llm_int8_skip_modules = None, llm_int8_enable_fp32_cpu_offload = False, llm_int8_has_fp16_weight = False, bnb_4bit_compute_dtype = None, bnb_4bit_quant_type = 'fp4', bnb_4bit_use_double_quant = False, bnb_4bit_quant_storage = None, **kwargs )
Parameters
- load_in_8bit (`bool`, *optional*, defaults to `False`) — Enables 8-bit quantization with LLM.int8().
- load_in_4bit (`bool`, *optional*, defaults to `False`) — Enables 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from `bitsandbytes`.
- llm_int8_threshold (`float`, *optional*, defaults to 6.0) — The outlier threshold for outlier detection as described in the paper "LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale" (https://arxiv.org/abs/2208.07339). Any hidden state value above this threshold is considered an outlier, and the operation on those values is done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are distributed very differently for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
- llm_int8_skip_modules (`List[str]`, *optional*) — An explicit list of modules that we do not want to convert to 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for `CausalLM` models, the last `lm_head` is typically kept in its original `dtype`.
- llm_int8_enable_fp32_cpu_offload (`bool`, *optional*, defaults to `False`) — This flag is for advanced use cases and users who are aware of this feature. If you want to split your model into different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as `google/flan-t5-xxl`. Note that the int8 operations are not run on CPU.
- llm_int8_has_fp16_weight (`bool`, *optional*, defaults to `False`) — Runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning, as the weights do not have to be converted back and forth for the backward pass.
- bnb_4bit_compute_dtype (`torch.dtype` or `str`, *optional*, defaults to `torch.float32`) — Sets the computational type, which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
- bnb_4bit_quant_type (`str`, *optional*, defaults to `"fp4"`) — Sets the quantization data type in the `bnb.nn.Linear4Bit` layers. Options are the FP4 and NF4 data types, specified by `fp4` or `nf4`.
- bnb_4bit_use_double_quant (`bool`, *optional*, defaults to `False`) — Enables nested quantization, where the quantization constants from the first quantization are quantized again.
- bnb_4bit_quant_storage (`torch.dtype` or `str`, *optional*, defaults to `torch.uint8`) — Sets the storage type used to pack the quantized 4-bit params.
- kwargs (`Dict[str, Any]`, *optional*) — Additional parameters from which to initialize the configuration object.
This is a wrapper class covering all the possible attributes and features you can tune for a model that has been loaded using `bitsandbytes`.
This replaces `load_in_8bit` or `load_in_4bit`; therefore, both options are mutually exclusive.
Currently it only supports `LLM.int8()`, `FP4`, and `NF4` quantization. If more methods are added to `bitsandbytes`, then more arguments will be added to this class.
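As a hedged illustration of how these options combine, the sketch below builds a 4-bit NF4 configuration with bf16 compute and nested quantization (checkpoint and `subfolder` are again illustrative):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 instead of the default FP4
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speedups
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants again
)

transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```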
is_quantizable
Returns `True` if the model is quantizable, `False` otherwise.
post_init
Safety checker that arguments are correct; also replaces some `NoneType` arguments with their default values.
quantization_method
This method returns the quantization method used for the model. If the model is not quantizable, it returns `None`.
to_diff_dict
< source >( ) → Dict[str, Any]
Returns
Dict[str, Any]
Dictionary of all the attributes that make up this configuration instance.
Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.
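A small usage sketch of the config helpers described above (the printed values are indicative, not exact output):

```python
from diffusers import BitsAndBytesConfig

config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

print(config.is_quantizable())       # True, since a quantization mode is enabled
print(config.quantization_method())  # the bitsandbytes method in use
print(config.to_diff_dict())         # only the attributes that differ from the defaults
```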
DiffusersQuantizer
class diffusers.DiffusersQuantizer
< source >( quantization_config: QuantizationConfigMixin **kwargs )
Abstract class of the Hugging Face quantizer. For now, it supports quantizing HF Diffusers models for inference. This class is used only in diffusers.models.modeling_utils.ModelMixin.from_pretrained and cannot easily be used outside the scope of that method yet.
Attributes
- quantization_config (`diffusers.quantizers.quantization_config.QuantizationConfigMixin`) — The quantization config that defines the quantization parameters of the model you want to quantize.
- modules_to_not_convert (`List[str]`, *optional*) — The list of module names not to convert when quantizing the model.
- required_packages (`List[str]`, *optional*) — The list of required pip packages to install before using the quantizer.
- requires_calibration (`bool`) — Whether the quantization method requires calibrating the model before using it.
adjust_max_memory
Adjusts the max_memory argument for infer_auto_device_map() if extra memory is needed for quantization.
adjust_target_dtype
< source >( torch_dtype: torch.dtype )
Override this method if you want to adjust the `target_dtype` variable used in `from_pretrained` to compute the device_map, in case the device_map is a `str`. E.g. for bitsandbytes, we force-set `target_dtype` to `torch.int8`, and for 4-bit we pass a custom enum `accelerate.CustomDtype.int4`.
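As a rough sketch of what such an override can look like for a bitsandbytes-style backend (illustrative only, not the library's actual implementation; it assumes `self.quantization_config` is a BitsAndBytesConfig):

```python
import torch
from accelerate.utils import CustomDtype


def adjust_target_dtype(self, torch_dtype: torch.dtype) -> torch.dtype:
    # 4-bit weights need a custom accelerate dtype so device-map size estimates are correct.
    if getattr(self.quantization_config, "load_in_4bit", False):
        return CustomDtype.INT4
    # LLM.int8() stores weights as int8.
    return torch.int8
```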
check_if_quantized_param
< source >( model: ModelMixin param_value: torch.Tensor param_name: str state_dict: typing.Dict[str, typing.Any] **kwargs )
Checks if a loaded state_dict component is part of a quantized param, plus some validation; this is only defined for quantization methods that require creating new parameters for quantization.
check_quantized_param_shape
Checks if the quantized param has the expected shape.
create_quantized_param
Takes the needed components from the state_dict and creates a quantized param.
dequantize
Potentially dequantizes the model to retrieve the original model, with some loss in accuracy / performance. Note that not all quantization schemes support this.
get_special_dtypes_update
< source >( model torch_dtype: torch.dtype )
Returns the dtypes for modules that are not quantized; used for the computation of the device_map in case one passes a str as a device_map. The method will use the `modules_to_not_convert` that is modified in `_process_model_before_weight_loading`. `diffusers` models don't have any `modules_to_not_convert` attributes yet, but this can change soon in the future.
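Roughly, the mapping this returns amounts to the following (an illustrative sketch, not the exact implementation): every parameter belonging to a module listed in `modules_to_not_convert` keeps the requested `torch_dtype`.

```python
import torch


def get_special_dtypes_update(self, model, torch_dtype: torch.dtype):
    # Keep non-converted modules at the requested dtype when building a device_map.
    return {
        name: torch_dtype
        for name, _ in model.named_parameters()
        if any(module_name in name for module_name in (self.modules_to_not_convert or []))
    }
```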
postprocess_model
< source >( model: ModelMixin **kwargs )
Post-processes the model after the weights are loaded. Make sure to override the abstract method `_process_model_after_weight_loading`.
preprocess_model
< source >( model: ModelMixin **kwargs )
Sets model attributes and/or converts the model before the weights are loaded. At this point the model should be initialized on the meta device, so you can freely manipulate the skeleton of the model in order to replace modules in place. Make sure to override the abstract method `_process_model_before_weight_loading`.
update_device_map
< source >( device_map: typing.Optional[typing.Dict[str, typing.Any]] )
Override this method if you want to override the existing device map with a new one. E.g. for bitsandbytes, since `accelerate` is a hard requirement, if no device_map is passed, the device_map is set to `"auto"`.
update_missing_keys
< source >( model missing_keys: typing.List[str] prefix: str )
Override this method if you want to adjust the `missing_keys`.
update_torch_dtype
< source >( torch_dtype: torch.dtype )
Some quantization methods require explicitly setting the dtype of the model to a target dtype. You need to override this method if you want to make sure that behavior is preserved.
validate_environment
This method is used to check for potential conflicts with the arguments that are passed in `from_pretrained`. You need to define it for all future quantizers that are integrated with diffusers. If no explicit checks are needed, simply return nothing.
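To tie the hooks above together, here is a minimal sketch of a custom quantizer subclass. The backend name (`my_backend`), its package, and the hook bodies are hypothetical; the hook names come from the documentation in this section, while the `is_serializable`/`is_trainable` properties and the import path are assumptions about the base class that may differ across diffusers versions.

```python
import torch

# Import path assumed; it may differ across diffusers versions.
from diffusers.quantizers.base import DiffusersQuantizer


class MyBackendQuantizer(DiffusersQuantizer):
    """Sketch of a quantizer for a hypothetical `my_backend` package."""

    requires_calibration = False        # no calibration pass before inference
    required_packages = ["my_backend"]  # hypothetical pip dependency

    def validate_environment(self, *args, **kwargs):
        # Check for conflicts with arguments passed to `from_pretrained`.
        if kwargs.get("device_map") == "cpu":
            raise ValueError("my_backend quantization only runs on an accelerator.")

    def update_torch_dtype(self, torch_dtype):
        # Force a compute dtype if the user did not set one explicitly.
        return torch_dtype or torch.float16

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model is still on the meta device here, so modules could be swapped
        # in place before any weights are loaded (hypothetical no-op).
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Final touches once the (quantized) weights are in place.
        return model

    @property
    def is_serializable(self):
        return False

    @property
    def is_trainable(self):
        return False
```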