The DiffusionPipeline is the quickest way to load any pretrained diffusion pipeline from the Hub for inference.
You shouldn’t use the DiffusionPipeline class for training or finetuning a diffusion model. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead.
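For example, a single component can be loaded on its own for training (a minimal sketch; the subfolder argument follows the standard Stable Diffusion repository layout):

>>> from diffusers import UNet2DConditionModel

>>> # Components are regular PyTorch modules that can be trained directly
>>> unet = UNet2DConditionModel.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="unet"
... )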
The pipeline type (for example StableDiffusionPipeline) of any diffusion pipeline loaded with from_pretrained() is automatically
detected and pipeline components are loaded and passed to the __init__
function of the pipeline.
Any pipeline object can be saved locally with save_pretrained().
class diffusers.DiffusionPipeline

Base class for all pipelines.
DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and provides methods for loading, downloading and saving models. It also includes methods to:

- move all PyTorch modules to the device of your choice
- enable/disable the progress bar for the denoising iteration
Class attributes:

- config_name (str) — The configuration filename that stores the class and module names of all the diffusion pipeline's components.
- _optional_components (List[str]) — List of all optional components that don't have to be passed to the pipeline to function (should be overridden by subclasses).

device

( ) → torch.device

Returns: torch.device — The torch device on which the pipeline is located.
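For example (a minimal sketch; the checkpoint name matches the examples later on this page):

>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> pipe.device  # device(type='cpu') until the pipeline is moved elsewhere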
to

( *args **kwargs ) → DiffusionPipeline

Parameters

- dtype (torch.dtype, optional) — Returns a pipeline with the specified dtype.
- device (torch.device, optional) — Returns a pipeline with the specified device.
- silence_dtype_warnings (bool, optional, defaults to False) — Whether to omit warnings if the target dtype is not compatible with the target device.

Returns: DiffusionPipeline — The pipeline converted to the specified dtype and/or device.
Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
arguments of self.to(*args, **kwargs).
If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call to:

- to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified dtype
- to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified device
- to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified device and dtype
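For example (a minimal sketch of the call forms above, using the same checkpoint as the other examples on this page):

>>> import torch
>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> pipe = pipe.to(torch.float16)  # dtype only
>>> pipe = pipe.to("cuda")  # device only
>>> pipe = pipe.to(device="cuda", dtype=torch.float16)  # both at once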
components

The self.components property can be useful to run different pipelines with the same weights and configurations without reallocating additional memory.

Returns (dict): A dictionary containing all the modules needed to initialize the pipeline.
Examples:
>>> from diffusers import (
... StableDiffusionPipeline,
... StableDiffusionImg2ImgPipeline,
... StableDiffusionInpaintPipeline,
... )
>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
disable_attention_slicing

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.
disable_xformers_memory_efficient_attention

Disable memory efficient attention from xFormers.
download

( pretrained_model_name **kwargs ) → os.PathLike

Parameters

- pretrained_model_name (str or os.PathLike, optional) — A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub.
- custom_pipeline (str, optional) — Can be either:
  - A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines the custom pipeline.
  - A string, the file name of a community pipeline hosted on GitHub under Community. Valid file names must match the file name and not the pipeline script (clip_guided_stable_diffusion instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the current main branch of GitHub.
  - A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory must contain a file called pipeline.py that defines the custom pipeline.
  🧪 This is an experimental feature and may change in the future.
  For more information on how to load and create custom pipelines, take a look at How to contribute a community pipeline.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- custom_revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id similar to revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub.
- mirror (str, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
- variant (str, optional) — Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when loading from_flax.
- use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
- use_onnx (bool, optional, defaults to False) — If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending with .onnx and .pb.

Returns: os.PathLike — A path to the downloaded pipeline.
Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights.

To use private or gated models, log in with huggingface-cli login.
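For example (a minimal sketch; download only fetches and caches the files, it does not instantiate the models):

>>> from diffusers import DiffusionPipeline

>>> local_dir = DiffusionPipeline.download("runwayml/stable-diffusion-v1-5")
>>> pipe = DiffusionPipeline.from_pretrained(local_dir)  # load later, offline if needed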
enable_attention_slicing

( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.
⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5",
... torch_dtype=torch.float16,
... use_safetensors=True,
... )
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
enable_model_cpu_offload

( gpu_id: typing.Optional[int] = None device: typing.Union[torch.device, str] = 'cuda' )

Parameters

- gpu_id (int, optional) — The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will default to "cuda".

Offloads all models to CPU using 🤗 Accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
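For example (a minimal sketch mirroring the attention-slicing example above):

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
... )
>>> pipe.enable_model_cpu_offload()  # no need to call pipe.to("cuda") first
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]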
enable_sequential_cpu_offload

( gpu_id: typing.Optional[int] = None device: typing.Union[torch.device, str] = 'cuda' )

Parameters

- gpu_id (int, optional) — The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will default to "cuda".

Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU and then moved to torch.device('meta'), and loaded to GPU only when their specific submodule has its forward method called. Offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
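Usage is the same as enable_model_cpu_offload (a minimal sketch; expect noticeably slower inference in exchange for the larger memory savings):

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
... )
>>> pipe.enable_sequential_cpu_offload()
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]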
enable_xformers_memory_efficient_attention

( attention_op: typing.Optional[typing.Callable] = None )

Parameters

- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
from_pretrained

( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs )

Parameters

- pretrained_model_name_or_path (str or os.PathLike, optional) — Can be either:
  - A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub.
  - A path to a directory (./my_pipeline_directory/) containing pipeline weights saved using save_pretrained().
- torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights.
- custom_pipeline (str, optional) — 🧪 This is an experimental feature and may change in the future.
  Can be either:
  - A string, the repository id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines the custom pipeline.
  - A string, the file name of a community pipeline hosted on GitHub (clip_guided_stable_diffusion instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the current main branch of GitHub.
  - A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory must contain a file called pipeline.py that defines the custom pipeline.
  For more information on how to load and create custom pipelines, please have a look at Loading and Adding Custom Pipelines.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- custom_revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id similar to revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub.
- mirror (str, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
- device_map (str or Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn't need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device.
  Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For more information about each option see designing a device map.
- max_memory (Dict, optional) — A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset.
- offload_folder (str or os.PathLike, optional) — The path to offload weights if device_map contains the value "disk".
- offload_state_dict (bool, optional) — If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.
- low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
- use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.
- use_onnx (bool, optional, defaults to None) — If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending with .onnx and .pb.
- kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipeline's __init__ method. See example below for more information.
- variant (str, optional) — Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when loading from_flax.

Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights.
The pipeline is set in evaluation mode (model.eval()
) by default.
If you get the error message below, you need to finetune the weights for your downstream task:
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
To use private or gated models, log in with huggingface-cli login.
Examples:
>>> from diffusers import DiffusionPipeline
>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
>>> # Download pipeline that requires an authorization token
>>> # For more information on access tokens, please refer to this section
>>> # of the documentation: https://huggingface.co/docs/hub/security-tokens
>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # Use a different scheduler
>>> from diffusers import LMSDiscreteScheduler
>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.scheduler = scheduler
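The kwargs override mentioned above lets you pass replacement components directly to from_pretrained; a minimal sketch (the scheduler here is just an illustration, any compatible component works the same way):

>>> from diffusers import DiffusionPipeline, LMSDiscreteScheduler

>>> scheduler = LMSDiscreteScheduler.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
... )
>>> # The scheduler kwarg is passed straight to the pipeline's __init__
>>> pipeline = DiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", scheduler=scheduler
... )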
maybe_free_model_hooks

Function that offloads all components, removes all model hooks that were added when using enable_model_cpu_offload, and then applies them again. In case the model has not been offloaded, this function is a no-op. Make sure to call this function at the end of the __call__ function of your pipeline so that it functions correctly when applying enable_model_cpu_offload.
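It can also be called manually after inference (a minimal sketch, reusing the pipe from the offloading examples above):

>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]
>>> pipe.maybe_free_model_hooks()  # no-op if offloading was never enabled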
numpy_to_pil

Convert a NumPy image or a batch of images to a PIL image.
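For example (a minimal sketch, reusing the pipe from the examples above; output_type="np" makes the pipeline return NumPy arrays instead of PIL images):

>>> np_images = pipe("a photo of an astronaut riding a horse on mars", output_type="np").images
>>> pil_images = pipe.numpy_to_pil(np_images)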
save_pretrained

( save_directory: typing.Union[str, os.PathLike] safe_serialization: bool = True variant: typing.Optional[str] = None push_to_hub: bool = False **kwargs )

Parameters

- save_directory (str or os.PathLike) — Directory to save a pipeline to. Will be created if it doesn't exist.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
- variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin.
- push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
- kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method.

Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading method. The pipeline is easily reloaded using the from_pretrained() class method.
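For example (a minimal sketch; the local directory name is illustrative):

>>> from diffusers import DiffusionPipeline

>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> pipeline.save_pretrained("./stable-diffusion-v1-5")
>>> # The saved pipeline can be reloaded from the local directory
>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")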