Installation
Diffusers is tested on Python 3.8+, PyTorch 1.4+, and Flax 0.4.1+. Follow the installation instructions for the deep learning library you’re using, PyTorch or Flax.
Create a virtual environment for easier management of separate projects and to avoid compatibility issues between dependencies. Use uv, a Rust-based Python package and project manager, to create a virtual environment and install Diffusers.
uv venv my-env
source my-env/bin/activate
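On Windows, the activation script lives under Scripts rather than bin, so the equivalent step (a platform-specific variant, assuming the same environment name) is:
my-env\Scripts\activate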
Install Diffusers with one of the following methods.
PyTorch only supports Python 3.8 - 3.11 on Windows.
uv pip install "diffusers[torch]" transformers
Use the command below for Flax.
uv pip install "diffusers[flax]" transformers
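To confirm the installation succeeded, you can print the installed version from Python (a quick sanity check, not part of the official instructions):
python -c "import diffusers; print(diffusers.__version__)"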
Editable install
An editable install is recommended for development workflows or if you're using the main version of the source code. A special link is created between the cloned repository and the Python library paths, so the package doesn't need to be reinstalled after every change.
Clone the repository and install Diffusers with the following commands.
git clone https://github.com/huggingface/diffusers.git
cd diffusers
uv pip install -e ".[torch]"
You must keep the diffusers folder if you want to keep using the library with the editable install.
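To confirm that the editable install points at your clone rather than a copied package, you can print the module's location (a quick check; the exact path depends on where you cloned the repository):
python -c "import diffusers; print(diffusers.__file__)"
The printed path should sit inside your cloned diffusers folder.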
Update your cloned repository to the latest version of Diffusers with the command below.
cd ~/diffusers/
git pull
Cache
Model weights and files are downloaded from the Hub to a cache, which is usually located in your home directory. Change the cache location with the HF_HOME or HF_HUB_CACHE environment variables, or by configuring the cache_dir parameter in methods like from_pretrained().
export HF_HOME="/path/to/your/cache"
export HF_HUB_CACHE="/path/to/your/hub/cache"
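To redirect the cache for a single download instead, pass cache_dir when loading. A minimal sketch is shown below; the model id and cache path are placeholders:
from diffusers import DiffusionPipeline

# files are downloaded to (and reused from) the given directory instead of the default cache
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder model id
    cache_dir="/path/to/your/cache",
)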
Cached files allow you to use Diffusers offline. Set the HF_HUB_OFFLINE environment variable to 1 to prevent Diffusers from connecting to the internet.
export HF_HUB_OFFLINE=1
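As an alternative to the global environment variable, from_pretrained() also accepts a local_files_only argument that restricts a single call to already-cached files. A minimal sketch, assuming the model below was downloaded earlier:
from diffusers import DiffusionPipeline

# raises an error instead of downloading if the files are not already in the local cache
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder model id
    local_files_only=True,
)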
For more details about managing and cleaning the cache, take a look at the Understand caching guide.
Telemetry logging
Diffusers gathers telemetry information during from_pretrained() requests. The data gathered includes the Diffusers and PyTorch/Flax versions, the requested model or pipeline class, and the path to a pretrained checkpoint if it is hosted on the Hub.
This usage data helps us debug issues and prioritize new features. Telemetry is only sent when loading models and pipelines from the Hub, and it is not collected if you’re loading local files.
Opt out of telemetry collection by setting the HF_HUB_DISABLE_TELEMETRY environment variable.
export HF_HUB_DISABLE_TELEMETRY=1