Diffusers documentation

How to use Stable Diffusion on Habana Gaudi



🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum Habana.

Requirements

  • Optimum Habana 1.3 or later; install it with pip install optimum[habana].
  • SynapseAI 1.7.

Inference Pipeline

To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances:

  • A pipeline with GaudiStableDiffusionPipeline.
  • A scheduler with GaudiDDIMScheduler.

When initializing the pipeline, you have to specify use_habana=True to deploy it on HPUs. Furthermore, to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, you will need to specify a Gaudi configuration, which can be downloaded from the Hugging Face Hub.

from optimum.habana import GaudiConfig
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "stabilityai/stable-diffusion-2-base"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
)

You can then call the pipeline to generate images in batches from one or several prompts:

outputs = pipeline(
    prompt=[
        "High quality photo of an astronaut riding a horse in space",
        "Face of a yellow cat, high resolution, sitting on a park bench",
    ],
    num_images_per_prompt=10,
    batch_size=4,
)
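With two prompts and num_images_per_prompt=10, the call above requests 20 images in total, produced in batches of 4. The batching itself is handled internally by the pipeline; the following standalone sketch only illustrates the arithmetic behind those arguments:

```python
import math

# Mirror the arguments of the pipeline call above.
prompts = [
    "High quality photo of an astronaut riding a horse in space",
    "Face of a yellow cat, high resolution, sitting on a park bench",
]
num_images_per_prompt = 10
batch_size = 4

# Total number of images requested across all prompts.
total_images = len(prompts) * num_images_per_prompt  # 2 * 10 = 20

# Number of forward passes needed to produce them in batches of 4.
num_batches = math.ceil(total_images / batch_size)   # ceil(20 / 4) = 5

print(total_images, num_batches)  # 20 5
```

The generated PIL images can then be retrieved from the images attribute of the returned object, e.g. outputs.images.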

For more information, check out Optimum Habana’s documentation and the example provided in the official GitHub repository.

Benchmark

Here are the latencies for Habana Gaudi 1 and Gaudi 2 with the Habana/stable-diffusion Gaudi configuration (mixed precision bf16/fp32):

            Latency   Batch size
  Gaudi 1   4.37s     4/8
  Gaudi 2   1.19s     4/8