Distilled Stable Diffusion inference

Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. Their distilled model removes some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%.

Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model.

Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model:

from diffusers import StableDiffusionPipeline
import torch

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

original = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
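
To sanity-check the claimed size reduction, you can compare the UNet parameter counts of the two pipelines. This is an optional check, not part of the original example, and the exact numbers depend on the checkpoints:

original_params = sum(p.numel() for p in original.unet.parameters())
distilled_params = sum(p.numel() for p in distilled.unet.parameters())
print(f"original UNet parameters:  {original_params:,}")
print(f"distilled UNet parameters: {distilled_params:,}")
print(f"UNet size reduction: {100 * (1 - distilled_params / original_params):.1f}%")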

Given a prompt, get the inference time for the original model:

import time

seed = 2023
generator = torch.manual_seed(seed)

NUM_ITERS_TO_RUN = 3
NUM_INFERENCE_STEPS = 25
NUM_IMAGES_PER_PROMPT = 4

prompt = "a golden vase with different flowers"

start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = original(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()
original_sd = f"{(end - start) / 1e6:.1f}"  # convert nanoseconds to milliseconds

print(f"Execution time -- {original_sd} ms\n")
"Execution time -- 45781.5 ms"

Time the distilled model inference:

start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()

distilled_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_sd} ms\n")
"Execution time -- 29884.2 ms"
original Stable Diffusion (45781.5 ms)
distilled Stable Diffusion (29884.2 ms)
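
To build a similar side-by-side view of the generated images yourself, you can tile the PIL images returned by the pipeline into a grid. This small helper is not part of the original snippet and uses the images from the last distilled run:

from PIL import Image

def make_grid(images, rows, cols):
    # tile a list of PIL images into a single grid image
    w, h = images[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

make_grid(images, rows=1, cols=NUM_IMAGES_PER_PROMPT).save("distilled_sd.png")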

Tiny AutoEncoder

To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to decode the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE:

from diffusers import AutoencoderTiny

distilled.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
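
If you want to confirm the swap and see how much smaller the tiny VAE is than the default AutoencoderKL, you can compare parameter counts. This is an optional sanity check, not part of the original example:

print(distilled.vae.__class__.__name__)
print(f"tiny VAE parameters:     {sum(p.numel() for p in distilled.vae.parameters()):,}")
print(f"original VAE parameters: {sum(p.numel() for p in original.vae.parameters()):,}")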

Time the distilled model and distilled VAE inference:

start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()

distilled_tiny_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_tiny_sd} ms\n")
"Execution time -- 27165.7 ms"
distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms)
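
To put the three runs in perspective, you can compute the relative speedups from the recorded timings. This small snippet assumes the original_sd, distilled_sd, and distilled_tiny_sd strings from the runs above are still in scope:

baseline = float(original_sd)
print(f"distilled:            {baseline / float(distilled_sd):.2f}x faster than original")
print(f"distilled + tiny VAE: {baseline / float(distilled_tiny_sd):.2f}x faster than original")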