Transformers documentation

Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This enables loading larger models that you normally wouldn’t be able to fit into memory, and speeds up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.

Learn how to quantize models in the Quantization guide.

AwqConfig

class transformers.AwqConfig

( bits: int = 4 group_size: int = 128 zero_point: bool = True version: AWQLinearVersion = <AWQLinearVersion.GEMM: 'gemm'> backend: AwqBackendPackingMethod = <AwqBackendPackingMethod.AUTOAWQ: 'autoawq'> do_fuse: typing.Optional[bool] = None fuse_max_seq_len: typing.Optional[int] = None modules_to_fuse: typing.Optional[dict] = None **kwargs )

Parameters

  • bits (int, optional, defaults to 4) — The number of bits to quantize to.
  • group_size (int, optional, defaults to 128) — The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
  • zero_point (bool, optional, defaults to True) — Whether to use zero point quantization.
  • version (AWQLinearVersion, optional, defaults to AWQLinearVersion.GEMM) — The version of the quantization algorithm to use. GEMM is better for large batch sizes (e.g. >= 8); otherwise, GEMV is better (e.g. < 8).
  • backend (AwqBackendPackingMethod, optional, defaults to AwqBackendPackingMethod.AUTOAWQ) — The quantization backend. Some models might be quantized using the llm-awq backend. This is useful for users that quantize their own models using the llm-awq library.
  • do_fuse (bool, optional, defaults to False) — Whether to fuse attention and MLP layers together for faster inference.
  • fuse_max_seq_len (int, optional) — The maximum sequence length to generate when using fusing.
  • modules_to_fuse (dict, optional, defaults to None) — Overwrite the natively supported fusing scheme with the one specified by the user.

This is a wrapper class for all the attributes and features that you can use with a model that has been loaded using AWQ quantization with the auto-awq library, relying on the auto_awq backend.
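
As a minimal sketch (the checkpoint id and fusing values below are only illustrative assumptions), an AwqConfig is typically passed to from_pretrained() when loading a model that was already quantized with AWQ:

```python
from transformers import AutoModelForCausalLM, AwqConfig

# Optional fusing of attention/MLP modules for faster inference;
# fuse_max_seq_len must be set when do_fuse=True (values are illustrative).
quantization_config = AwqConfig(
    bits=4,
    do_fuse=True,
    fuse_max_seq_len=512,
)

# Example checkpoint id; any AWQ-quantized model on the Hub can be used.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/zephyr-7B-alpha-AWQ",
    quantization_config=quantization_config,
    device_map="auto",
)
```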

post_init

( )

Safety checker to make sure the arguments are correct.

GPTQConfig

class transformers.GPTQConfig

( bits: int tokenizer: typing.Any = None dataset: typing.Union[str, typing.List[str], NoneType] = None group_size: int = 128 damp_percent: float = 0.1 desc_act: bool = False sym: bool = True true_sequential: bool = True use_cuda_fp16: bool = False model_seqlen: typing.Optional[int] = None block_name_to_quantize: typing.Optional[str] = None module_name_preceding_first_block: typing.Optional[typing.List[str]] = None batch_size: int = 1 pad_token_id: typing.Optional[int] = None use_exllama: typing.Optional[bool] = None max_input_length: typing.Optional[int] = None exllama_config: typing.Union[typing.Dict[str, typing.Any], NoneType] = None cache_block_outputs: bool = True **kwargs )

Parameters

  • bits (int) — The number of bits to quantize to, supported numbers are (2, 3, 4, 8).
  • tokenizer (str or PreTrainedTokenizerBase, optional) — The tokenizer used to process the dataset. You can pass either:
    • A custom tokenizer object.
    • A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    • A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
  • dataset (str or List[str], optional) — The dataset used for quantization. You can provide your own dataset as a list of strings, or use one of the original datasets used in the GPTQ paper: ['wikitext2', 'c4', 'c4-new', 'ptb', 'ptb-new'].
  • group_size (int, optional, defaults to 128) — The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
  • damp_percent (float, optional, defaults to 0.1) — The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1.
  • desc_act (bool, optional, defaults to False) — Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference but the perplexity may become slightly worse. Also known as act-order.
  • sym (bool, optional, defaults to True) — Whether to use symmetric quantization.
  • true_sequential (bool, optional, defaults to True) — Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes quantization using inputs that have passed through the previously quantized layers.
  • use_cuda_fp16 (bool, optional, defaults to False) — Whether or not to use the optimized CUDA kernel for fp16 models. Requires the model to be in fp16.
  • model_seqlen (int, optional) — The maximum sequence length that the model can take.
  • block_name_to_quantize (str, optional) — The transformers block name to quantize.
  • module_name_preceding_first_block (List[str], optional) — The layers that are preceding the first Transformer block.
  • batch_size (int, optional, defaults to 1) — The batch size used when processing the dataset.
  • pad_token_id (int, optional) — The pad token id. Needed to prepare the dataset when batch_size > 1.
  • use_exllama (bool, optional) — Whether to use exllama backend. Defaults to True if unset. Only works with bits = 4.
  • max_input_length (int, optional) — The maximum input length. This is needed to initialize a buffer that depends on the maximum expected input length. It is specific to the exllama backend with act-order.
  • exllama_config (Dict[str, Any], optional) — The exllama config. You can specify the version of the exllama kernel through the version key. Defaults to {"version": 1} if unset.
  • cache_block_outputs (bool, optional, defaults to True) — Whether to cache block outputs to reuse as inputs for the succeeding block.

This is a wrapper class for all the attributes and features that you can use with a model that has been loaded using the optimum API for GPTQ quantization, relying on the auto_gptq backend.
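
As a minimal sketch (the model id is only an example), a GPTQConfig can be passed to from_pretrained() to quantize a model on the fly with one of the calibration datasets listed above; this requires the optimum and auto-gptq packages to be installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # example model id; any causal LM can be used
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quantize to 4 bits using the "c4" calibration dataset from the GPTQ paper.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Passing the config triggers GPTQ quantization while the model is loaded.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)
```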

from_dict_optimum

( config_dict )

Instantiate a GPTQConfig that is compatible with the given optimum GPTQ config dict.

post_init

( )

Safety checker to make sure the arguments are correct.

to_dict_optimum

( )

Get a config dict that is compatible with the optimum GPTQ config format.

BitsAndBytesConfig

class transformers.BitsAndBytesConfig

( load_in_8bit = False load_in_4bit = False llm_int8_threshold = 6.0 llm_int8_skip_modules = None llm_int8_enable_fp32_cpu_offload = False llm_int8_has_fp16_weight = False bnb_4bit_compute_dtype = None bnb_4bit_quant_type = 'fp4' bnb_4bit_use_double_quant = False **kwargs )

Parameters

  • load_in_8bit (bool, optional, defaults to False) — This flag is used to enable 8-bit quantization with LLM.int8().
  • load_in_4bit (bool, optional, defaults to False) — This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.
  • llm_int8_threshold (float, optional, defaults to 6.0) — This corresponds to the outlier threshold for outlier detection as described in LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale paper: https://arxiv.org/abs/2208.07339 Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
  • llm_int8_skip_modules (List[str], optional) — An explicit list of the modules that we do not want to convert in 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is kept in its original dtype.
  • llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) — This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model in different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
  • llm_int8_has_fp16_weight (bool, optional, defaults to False) — This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.
  • bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) — This sets the computational type, which might be different from the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
  • bnb_4bit_quant_type (str, optional, defaults to "fp4") — This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are FP4 and NF4 data types which are specified by fp4 or nf4.
  • bnb_4bit_use_double_quant (bool, optional, defaults to False) — This flag is used for nested quantization where the quantization constants from the first quantization are quantized again.
  • kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.

This is a wrapper class for all the attributes and features that you can use with a model that has been loaded using bitsandbytes.

This replaces load_in_8bit or load_in_4bit; therefore, both options are mutually exclusive.

Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, then more arguments will be added to this class.
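
As a minimal sketch (the model id is only an example), a BitsAndBytesConfig for 4-bit NF4 quantization with nested quantization and bfloat16 compute can be passed to from_pretrained() like this:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bf16 compute.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example model id
    quantization_config=nf4_config,
    device_map="auto",
)
```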

is_quantizable

( )

Returns True if the model is quantizable, False otherwise.

post_init

( )

Safety checker to make sure the arguments are correct; also replaces some NoneType arguments with their default values.

quantization_method

( )

This method returns the quantization method used for the model. If the model is not quantizable, it returns None.

to_diff_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.
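
For illustration, here is a short sketch of how to_diff_dict behaves; the exact keys in the output depend on the transformers version, so the printed dict below is only indicative:

```python
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# Only attributes that differ from the default configuration are serialized.
print(config.to_diff_dict())
# Indicative output: {'load_in_4bit': True, 'bnb_4bit_quant_type': 'nf4'}
```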