```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# First-time "warmup" pass (see explanation above)
_ = pipe(prompt, num_inference_steps=1)

# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]
```
Performance Recommendations
M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does.
We recommend you use attention slicing to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we have observed better performance on most Apple Silicon computers unless you have 64 GB of RAM or more.
```python
pipe.enable_attention_slicing()
```
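If you also generate at resolutions larger than the default 512 × 512, enable attention slicing before the call. A minimal sketch reusing the `pipe` object from above (the 768 × 768 values are only illustrative):

```python
# Attention slicing is especially helpful at non-standard resolutions
# larger than 512 x 512, where memory pressure is higher.
pipe.enable_attention_slicing()
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    height=768,  # illustrative non-default resolution
    width=768,
).images[0]
```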
Known Issues
As mentioned above, we are investigating a strange first-time inference issue.
Generating multiple prompts in a batch crashes or doesn’t work reliably. We believe this is related to the mps backend in PyTorch. This is being resolved, but for now we recommend iterating over prompts instead of batching, as in the sketch below.
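For example, rather than passing a list of prompts in a single call, you can generate the images one at a time. A minimal sketch, reusing the `pipe` object from the snippet above (the prompts are only illustrative):

```python
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a photo of an astronaut riding a camel on mars",
]

images = []
for prompt in prompts:
    # One prompt per call sidesteps the unreliable batched generation on mps.
    images.append(pipe(prompt).images[0])
```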
Tiny AutoEncoder

Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny, distilled version of Stable Diffusion’s VAE that can decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly.

To use with Stable Diffusion v2.1:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

To use with Stable Diffusion XL 1.0:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

AutoencoderTiny

class diffusers.AutoencoderTiny(in_channels: int = 3, out_channels: int = 3, encoder_block_out_channels: Tuple = (64, 64, 64, 64), decoder_block_out_channels: Tuple = (64, 64, 64, 64), act_fn: str = 'relu', latent_channels: int = 4, upsampling_scaling_factor: int = 2, num_encoder_blocks: Tuple = (1, 3, 3, 3), num_decoder_blocks: Tuple = (3, 3, 3, 1), latent_magnitude: int = 3, latent_shift: float = 0.5, force_upcast: bool = False, scaling_factor: float = 1.0)

Parameters:
- in_channels (int, optional, defaults to 3) — Number of channels in the input image.
- out_channels (int, optional, defaults to 3) — Number of channels in the output.
- encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each encoder block. The length of the tuple should be equal to the number of encoder blocks.
- decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — Tuple of integers representing the number of output channels for each decoder block. The length of the tuple should be equal to the number of decoder blocks.
- act_fn (str, optional, defaults to "relu") — Activation function used throughout the model.
- latent_channels (int, optional, defaults to 4) — Number of channels in the latent representation. The latent space acts as a compressed representation of the input image.
- upsampling_scaling_factor (int, optional, defaults to 2) — Scaling factor for upsampling in the decoder. It determines the size of the output image during the upsampling process.
- num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The length of the tuple should be equal to the number of stages in the encoder. Each stage has a different number of encoder blocks.
- num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The length of the tuple should be equal to the number of stages in the decoder. Each stage has a different number of decoder blocks.
- latent_magnitude (float, optional, defaults to 3.0) — Magnitude of the latent representation. This parameter scales the latent representation values to control the extent of information preservation.
- latent_shift (float, optional, defaults to 0.5) — Shift applied to the latent representation. This parameter controls the center of the latent space.
- scaling_factor (float, optional, defaults to 1.0) — The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula z = z * scaling_factor before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper. For this autoencoder, however, no such scaling factor was used, hence the default value of 1.0.
- force_upcast (bool, optional, defaults to False) — If enabled, forces the VAE to run in float32 for high-resolution image pipelines, such as SD-XL. The VAE can be fine-tuned/trained to a lower range without losing too much precision, in which case force_upcast can be set to False (see this fp16-friendly AutoEncoder).

A tiny distilled VAE model for encoding images into latents and decoding latent representations into images.

AutoencoderTiny is a wrapper around the original implementation of TAESD.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

disable_slicing()
Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling()
Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing()
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_tiling(use_tiling: bool = True)
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.

forward(sample: FloatTensor, return_dict: bool = True)

Parameters:
- sample (torch.FloatTensor) — Input sample.
- return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.

scale_latents(x: FloatTensor)
Scales raw latents into the [0, 1] range.

unscale_latents(x: FloatTensor)
Maps values in [0, 1] back to raw latents.

AutoencoderTinyOutput

class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput(latents: Tensor)

Parameters:
- latents (torch.Tensor) — Encoded outputs of the Encoder.

Output of AutoencoderTiny’s encoding method.
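To make the memory-saving methods above concrete, here is a minimal sketch that attaches the TAESD VAE to the SDXL pipeline from the earlier example and turns on sliced and tiled decoding to reduce peak memory. The prompt is illustrative, and the snippet assumes a CUDA device is available:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Decode in slices (per batch item) and tiles (per image region)
# to lower peak memory when decoding large latents.
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

image = pipe(
    "slice of delicious New York-style berry cheesecake",
    num_inference_steps=25,
).images[0]

# Revert to single-step decoding when memory is not a concern.
pipe.vae.disable_slicing()
pipe.vae.disable_tiling()
```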
Load pipelines, models, and schedulers

Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system.

Everything you need for inference or training is accessible with the from_pretrained() method.

This guide will show you how to load:
- pipelines from the Hub and locally
- different components into a pipeline
- checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights
- models and schedulers

Diffusion Pipeline

💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works.

The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads and caches all the required configuration and weight files, and returns a pipeline instance ready for inference.

```python
from diffusers import DiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```

You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class:

```python
from diffusers import StableDiffusionPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
```

A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class:

```python
from diffusers import StableDiffusionImg2ImgPipeline
repo_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id)
```

Local pipeline

To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk:

```bash
git-lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```

Then pass the local path to from_pretrained():

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True)
```