With Transformers it’s very easy to load any model in 4-bit or 8-bit precision, quantizing it on the fly with bitsandbytes primitives.
Please review the bitsandbytes section in the Transformers docs.
Details about the BitsAndBytesConfig can be found here.
If your hardware supports it, bf16 is the optimal compute dtype. The default is float32 for backward compatibility and numerical stability, while float16 often leads to numerical instabilities. bfloat16 combines the best of both worlds: numerical stability and a significant computation speedup. Therefore, be sure to check if your hardware supports bf16 and, if it does, configure it via the bnb_4bit_compute_dtype parameter in BitsAndBytesConfig:
import torch
from transformers import BitsAndBytesConfig

# Quantize weights to 4-bit and run compute in bfloat16 for stability and speed.
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
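The resulting config can then be passed to from_pretrained so the weights are quantized while loading. A minimal sketch continuing from the quantization_config above; the checkpoint name is only illustrative:
from transformers import AutoModelForCausalLM

# "facebook/opt-350m" is just an example checkpoint; any supported model works the same way.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,  # the BitsAndBytesConfig defined above
    device_map="auto",
)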
With PEFT, you can use QLoRA out of the box with LoraConfig and a 4-bit base model.
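A minimal QLoRA sketch along these lines; the checkpoint, LoRA rank, and target modules below are illustrative choices, not prescriptions:
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit; "facebook/opt-350m" is only an example checkpoint.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=bnb_config, device_map="auto")

# Prepare the quantized model for k-bit training, then attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                                 # illustrative LoRA rank
    lora_alpha=32,                        # illustrative scaling factor
    target_modules=["q_proj", "v_proj"],  # depends on the base model's architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)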
Please review the bitsandbytes section in the PEFT docs.
Bitsandbytes is also easily usable from within Accelerate.
Please review the bitsandbytes section in the Accelerate docs.
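For reference, a minimal sketch assuming the BnbQuantizationConfig and load_and_quantize_model utilities from accelerate.utils (available at the time of writing; names may differ across versions). The checkpoint name and weights path are placeholders:
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True)

# Build an empty (meta-device) model skeleton, then load and quantize the real weights into it.
config = AutoConfig.from_pretrained("facebook/opt-350m")  # example checkpoint
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)

quantized_model = load_and_quantize_model(
    empty_model,
    weights_location="path/to/opt-350m-weights",  # placeholder path to the checkpoint weights
    bnb_quantization_config=bnb_quantization_config,
    device_map="auto",
)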
You can use any of the 8-bit and/or paged optimizers by simply passing them to the transformers.Trainer class on initialization. All bnb optimizers are supported by passing the correct string to the optim argument of TrainingArguments, e.g. paged_adamw_32bit.
See the official API docs for reference.
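A minimal sketch; the output directory and the surrounding Trainer setup are placeholders:
from transformers import TrainingArguments

# "paged_adamw_32bit" selects the paged 32-bit AdamW from bitsandbytes;
# other bnb optimizers (e.g. "adamw_bnb_8bit") are selected the same way.
training_args = TrainingArguments(
    output_dir="outputs",  # illustrative output directory
    optim="paged_adamw_32bit",
)

# training_args is then passed to transformers.Trainer as usual, e.g.
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)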