bitsandbytes

bitsandbytes makes large language models accessible via k-bit quantization for PyTorch. It provides three main features that dramatically reduce memory consumption for inference and training:

  • 8-bit optimizers use block-wise quantization to maintain 32-bit optimizer performance at a small fraction of the memory cost (see the optimizer sketch after this list).
  • LLM.int8(), or 8-bit quantization, enables large language model inference with roughly half the memory and without performance degradation. It uses vector-wise quantization to quantize most features to 8 bits and treats outlier features separately with 16-bit matrix multiplication.
  • QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. It quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training (see the loading sketch below).
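
A minimal sketch of the 8-bit optimizer API, assuming a CUDA device is available. bnb.optim.Adam8bit is a drop-in replacement for torch.optim.Adam; the toy linear model and random batch here are illustrative only:

```py
import torch
import bitsandbytes as bnb

# Toy model and data, for illustration only.
model = torch.nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.Adam: optimizer state is stored
# in 8-bit via block-wise quantization instead of 32-bit.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

batch = torch.randn(16, 1024, device="cuda")
loss = model(batch).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```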

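Both quantization modes are most commonly used through the Hugging Face Transformers integration. A minimal sketch, assuming transformers and accelerate are installed; facebook/opt-350m is a stand-in for any causal language model on the Hub:

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# LLM.int8(): load weights in 8-bit; outlier features are handled in 16-bit.
model_8bit = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example model, swap in your own
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# QLoRA-style 4-bit loading: NF4 quantization with bfloat16 compute.
model_4bit = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
```

For full QLoRA training, the trainable LoRA weights are typically attached on top of the 4-bit model with a library such as peft.
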
License

bitsandbytes is MIT licensed.

We thank Fabio Cannizzo for his work on FastBinarySearch, which we use for CPU quantization.
