
Installation Guide

Welcome to the installation guide for the bitsandbytes library! This document provides step-by-step instructions to install bitsandbytes across various platforms and hardware configurations.

We provide official support for NVIDIA GPUs, CPUs, Intel XPUs, and Intel Gaudi platforms. We also have experimental support for additional platforms such as AMD ROCm.

System Requirements

These are the minimum requirements for bitsandbytes across all platforms. Please be aware that some compute platforms may impose stricter requirements.

  • Python >= 3.9
  • PyTorch >= 2.3

NVIDIA CUDA

bitsandbytes is currently supported on NVIDIA GPUs with Compute Capability 6.0+. The library can be built using CUDA Toolkit versions as old as 11.8.

Feature | CC Required | Example Hardware Requirement
LLM.int8() | 7.5+ | Turing (RTX 20 series, T4) or newer GPUs
8-bit optimizers/quantization | 6.0+ | Pascal (GTX 10X0 series, P100) or newer GPUs
NF4/FP4 quantization | 6.0+ | Pascal (GTX 10X0 series, P100) or newer GPUs

Support for Maxwell GPUs is deprecated and will be removed in a future release. Starting with v0.48.0, Maxwell support is not included in the PyPI distributions and requires building from source. For best results, a Turing generation device or newer is recommended.
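
If you are unsure which compute capability your GPU reports, and therefore which of the features above it supports, you can query it through PyTorch or the NVIDIA driver tooling. This is a minimal sketch, assuming a CUDA-enabled PyTorch build and a reasonably recent driver:

# Print the compute capability of the first visible GPU, e.g. (7, 5) for Turing
python -c "import torch; print(torch.cuda.get_device_capability(0))"

# Recent drivers can also report it directly via nvidia-smi
nvidia-smi --query-gpu=name,compute_cap --format=csv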

Installation via PyPI

This is the most straightforward and recommended installation option.

The currently distributed bitsandbytes packages are built with the following configurations:

OS | CUDA Toolkit | Host Compiler | Targets
Linux x86-64 | 11.8 - 12.6 | GCC 11.2 | sm60, sm70, sm75, sm80, sm86, sm89, sm90
Linux x86-64 | 12.8 - 12.9 | GCC 11.2 | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
Linux x86-64 | 13.0 | GCC 11.2 | sm75, sm80, sm86, sm89, sm90, sm100, sm110, sm120
Linux aarch64 | 11.8 - 12.6 | GCC 11.2 | sm75, sm80, sm90
Linux aarch64 | 12.8 - 13.0 | GCC 11.2 | sm75, sm80, sm90, sm100, sm120
Windows x86-64 | 11.8 - 12.6 | MSVC 19.43+ (VS2022) | sm50, sm60, sm75, sm80, sm86, sm89, sm90
Windows x86-64 | 12.8 - 12.9 | MSVC 19.43+ (VS2022) | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
Windows x86-64 | 13.0 | MSVC 19.43+ (VS2022) | sm75, sm80, sm86, sm89, sm90, sm100, sm120

The Linux build has a minimum glibc version of 2.24.
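
If you are not sure whether your distribution meets this, the installed glibc version can usually be checked from the command line. A quick sketch for glibc-based systems:

# Print the installed glibc version, e.g. "ldd (GNU libc) 2.35"
ldd --version | head -n 1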

Use pip or uv to install the latest release:

pip install bitsandbytes
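
After installing, a quick import can confirm that the package and its native binary load correctly. The following is a minimal sketch; the second command runs the diagnostic entry point shipped with recent releases and prints details about the detected compute setup:

# Verify that the package imports and report its version
python -c "import bitsandbytes as bnb; print(bnb.__version__)"

# Optionally run the built-in diagnostics
python -m bitsandbytes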

Compile from Source

Don’t hesitate to compile from source! The process is pretty straightforward and resilient. This might be needed for older CUDA Toolkit versions, older Linux distributions, or other less common configurations.

For Linux and Windows systems, compiling from source allows you to customize the build configuration. Detailed platform-specific instructions follow (check CMakeLists.txt if you want to inspect the specifics and explore some additional options):

Linux

To compile from source, you need CMake >= 3.22.1 and Python >= 3.9 installed. Make sure you have a C++ compiler and the usual build tools installed (gcc, make, headers, etc.). GCC 9 or newer is recommended.

For example, to install a compiler and CMake on Ubuntu:

apt-get install -y build-essential cmake

You should also install the CUDA Toolkit by following the NVIDIA CUDA Installation Guide for Linux. The current minimum supported CUDA Toolkit version is 11.8.

git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)

If you have multiple versions of the CUDA Toolkit installed or it is in a non-standard location, please refer to CMake CUDA documentation for how to configure the CUDA compiler.
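
For example, to build against a specific toolkit you can pass the compiler location explicitly when configuring. The path below is illustrative; adjust it to your installation:

# Point CMake at a specific nvcc (example path; adjust as needed)
cmake -DCOMPUTE_BACKEND=cuda -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.6/bin/nvcc -S .
make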

Intel XPU

  • A compatible PyTorch version with Intel XPU support is required. The current minimum is PyTorch 2.6.0. It is recommended to use the latest stable release. See Getting Started on Intel GPU for guidance.
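
To confirm that your PyTorch build includes the XPU backend and can see the device, a quick check along these lines should work (a sketch, assuming PyTorch >= 2.6.0 built with XPU support):

# Check that PyTorch detects an Intel XPU device
python -c "import torch; print(torch.xpu.is_available(), torch.xpu.device_count())"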

Installation via PyPI

This is the most straightforward and recommended installation option.

The currently distributed bitsandbytes packages are built with the following configurations:

OS | oneAPI Toolkit | Kernel Implementation
Linux x86-64 | 2025.1.3 | SYCL + Triton
Windows x86-64 | N/A | SYCL

The Linux build has a minimum glibc version of 2.34.

Use pip or uv to install the latest release:

pip install bitsandbytes

Intel Gaudi

  • A compatible PyTorch version with Intel Gaudi support is required. The current minimum is Gaudi v1.21 with PyTorch 2.6.0. It is recommended to use the latest stable release. See the Gaudi software installation guide for guidance.
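
To verify that the Gaudi driver stack and the PyTorch integration are in place before installing, the tooling that ships with the Gaudi software can be used. A sketch, assuming the habana_frameworks packages are installed:

# List Gaudi accelerators visible to the driver (analogous to nvidia-smi)
hl-smi

# Check that the PyTorch integration can see an HPU device
python -c "import habana_frameworks.torch.hpu as hthpu; print(hthpu.is_available())"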

Installation from PyPI

Use pip or uv to install the latest release:

pip install bitsandbytes

CPU

Installation from PyPI

This is the most straightforward and recommended installation option.

The currently distributed bitsandbytes packages are built with the following configurations:

OS | Host Compiler | Hardware Minimum
Linux x86-64 | GCC 11.4 | AVX2
Linux aarch64 | GCC 11.4 | —
Windows x86-64 | MSVC 19.43+ (VS2022) | AVX2

The Linux build has a minimum glibc version of 2.24.
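
On Linux x86-64 you can quickly confirm that your CPU advertises AVX2 before installing. A minimal sketch:

# Prints "avx2" if the CPU exposes the AVX2 instruction set
grep -o -m 1 avx2 /proc/cpuinfo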

Use pip or uv to install the latest release:

pip install bitsandbytes

Compile from Source

To compile from source, install the package from the cloned repository using pip. At this time, the package will be built for CPU only.

git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
pip install -e .

AMD ROCm (Preview)

  • A compatible PyTorch version with AMD ROCm support is required. It is recommended to use the latest stable release. See PyTorch on ROCm for guidance.
  • ROCm support is currently only available in our preview wheels or when building from source.

Preview Wheels from main

The currently distributed preview bitsandbytes wheels are built with the following configurations:

OS | ROCm | Targets
Linux x86-64 | 6.1.2 | gfx90a / gfx942 / gfx1100
Linux x86-64 | 6.2.4 | gfx90a / gfx942 / gfx1100
Linux x86-64 | 6.3.4 | gfx90a / gfx942 / gfx1100
Linux x86-64 | 6.4.4 | gfx90a / gfx942 / gfx1100
Linux x86-64 | 7.0.0 | gfx90a / gfx942 / gfx1100

Windows is not currently supported.

Please see Preview Wheels for installation instructions.

Compile from Source

bitsandbytes can be compiled with ROCm versions 6.1 through 7.0.
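
Before building, it can help to confirm which gfx architecture your GPU reports so you can pass it to -DBNB_ROCM_ARCH. A quick sketch, assuming the ROCm tools are on your PATH:

# Print the gfx target(s) of the installed AMD GPUs, e.g. gfx90a or gfx942
rocminfo | grep -o "gfx[0-9a-f]*" | sort -u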

# Install bitsandbytes from source
# Clone bitsandbytes repo
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/

# Compile & install
apt-get install -y build-essential cmake  # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=hip -S .  # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
pip install -e .   # `-e` for "editable" install, when developing BNB (otherwise leave that out)

Preview Wheels

If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (the wheel links will remain stable!):

Linux
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!

# x86_64 (most users)
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl

# ARM/aarch64
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl