Quantize 🤗 Transformers models
bitsandbytes Integration
🤗 Transformers is closely integrated with the most commonly used modules of bitsandbytes. You can load your model in 8-bit precision with just a few lines of code. This has been supported by most GPU hardware since the 0.37.0 release of bitsandbytes.
Learn more about the quantization method in the LLM.int8() paper, or in the blog post about the collaboration.
Here are the things you can do using the bitsandbytes integration:
Load a large model in 8bit
You can load a model and roughly halve its memory requirements by passing the load_in_8bit=True argument when calling the .from_pretrained method:
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
Then, use your model as you would usually use a PreTrainedModel.
You can check the memory footprint of your model with the get_memory_footprint method.
print(model.get_memory_footprint())
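For example, here is a minimal text-generation sketch (the prompt text is only an illustration) showing that the 8-bit model is used exactly like a regular PreTrainedModel:
# Illustrative prompt; generate as you would with any PreTrainedModel
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))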
With this integration we were able to load large models on smaller devices and run them without any issue.
Note that once a model has been loaded in 8-bit, it is currently not possible to push the quantized weights to the Hub. Note also that you cannot train 8-bit weights, as this is not supported yet. However, you can use 8-bit models to train extra parameters; this will be covered in the next section.
Advanced use cases
This section is intended for advanced users who want to explore what is possible beyond loading and running 8-bit models.
Offload between CPU and GPU
One advanced use case is loading a model and dispatching the weights between the CPU and the GPU. Note that the weights dispatched to the CPU will not be converted to 8-bit, but kept in float32. This feature is intended for users who want to fit a very large model and dispatch it between the GPU and the CPU.
First, load a BitsAndBytesConfig from transformers and set the attribute llm_int8_enable_fp32_cpu_offload to True:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
Let’s say you want to load the bigscience/bloom-1b7 model, and you have just enough GPU RAM to fit the entire model except the lm_head. You can therefore write a custom device_map as follows:
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
And load your model as follows:
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map=device_map,
    quantization_config=quantization_config,
)
And that’s it! Enjoy your model!
Play with llm_int8_threshold
You can play with the llm_int8_threshold argument to change the threshold for outliers. An “outlier” is a hidden state value that is greater than a certain threshold.
This corresponds to the outlier threshold for outlier detection as described in the LLM.int8() paper. Any hidden state value above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
This argument can impact the inference speed of the model. We suggest playing with this parameter to find the value that works best for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Skip the conversion of some modules
Some models have several modules that should not be converted to 8-bit to ensure stability. For example, the Jukebox model has several lm_head modules that should be skipped. You can control this with the llm_int8_skip_modules argument:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Fine-tune a model that has been loaded in 8-bit
With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded in 8-bit. This enables fine-tuning large models such as flan-t5-large or facebook/opt-6.7b in a single Google Colab. Please have a look at the peft library for more details.
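As a hedged sketch (the LoRA hyperparameters and target module names below are illustrative assumptions, not an official recipe), attaching trainable adapters to an 8-bit model with peft typically looks like this:
# pip install peft
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7", device_map="auto", load_in_8bit=True
)

# Illustrative LoRA settings; target_modules depends on the model architecture
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
The 8-bit base weights stay frozen; only the small adapter weights are updated during training, which is what makes fine-tuning such large models fit on a single GPU.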
BitsAndBytesConfig
class transformers.BitsAndBytesConfig
( load_in_8bit = False, llm_int8_threshold = 6.0, llm_int8_skip_modules = None, llm_int8_enable_fp32_cpu_offload = False )
Parameters
- load_in_8bit (bool, optional, defaults to False) — This flag is used to enable 8-bit quantization with LLM.int8().
- llm_int8_threshold (float, optional, defaults to 6.0) — This corresponds to the outlier threshold for outlier detection as described in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper (https://arxiv.org/abs/2208.07339). Any hidden state value above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
- llm_int8_skip_modules (List[str], optional) — An explicit list of modules that we do not want to convert to 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is kept in its original dtype.
- llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) — This flag is used for advanced use cases and users who are aware of this feature. If you want to split your model into different parts and run some parts in int8 on the GPU and some parts in fp32 on the CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
This is a wrapper class for all the possible attributes and features that you can play with for a model that has been loaded using bitsandbytes.
This replaces load_in_8bit, therefore both options are mutually exclusive.
For now, only the arguments that are relevant to LLM.int8() are supported, so they are all prefixed with llm_int8_*. If more methods are added to bitsandbytes, more arguments will be added to this class.
from_dict
( config_dict, return_unused_kwargs, **kwargs ) → PretrainedConfig
Parameters
- config_dict (Dict[str, Any]) — Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the get_config_dict() method.
- kwargs (Dict[str, Any]) — Additional parameters from which to initialize the configuration object.
Returns
The configuration object instantiated from those parameters.
Instantiates a PretrainedConfig from a Python dictionary of parameters.
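As a brief sketch (the dictionary contents below are illustrative), a configuration can be rebuilt from a plain dictionary like this:
from transformers import BitsAndBytesConfig

# Illustrative dictionary; such a dict could also come from get_config_dict()
config_dict = {"load_in_8bit": True, "llm_int8_threshold": 6.0}
quantization_config = BitsAndBytesConfig.from_dict(config_dict, return_unused_kwargs=False)
print(quantization_config.load_in_8bit)  # True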
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
Quantization with 🤗 optimum
Please have a look at the Optimum documentation to learn more about the quantization methods supported by optimum and to see whether they are applicable to your use case.