Diffusers documentation

How to use the ONNX Runtime for inference


🤗 Diffusers provides a Stable Diffusion pipeline compatible with ONNX Runtime. This lets you run Stable Diffusion on any hardware that supports ONNX, including CPUs and machines where an accelerated version of PyTorch is not available.

Installation

  • Install the ONNX Runtime package alongside 🤗 Diffusers, as sketched below.
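
A minimal setup, assuming a standard pip environment. If you want the CUDAExecutionProvider used in the snippet below, install onnxruntime-gpu instead of the CPU-only onnxruntime package:

pip install diffusers onnxruntime
# for GPU inference with the CUDAExecutionProvider:
pip install onnxruntime-gpu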

Stable Diffusion Inference

The snippet below demonstrates how to run Stable Diffusion with ONNX Runtime. Use StableDiffusionOnnxPipeline instead of StableDiffusionPipeline, load the weights from the onnx branch of the repository, and specify the execution provider you want to use.

# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",  # load the ONNX export of the weights
    provider="CUDAExecutionProvider",  # or "CPUExecutionProvider" to run on CPU
    use_auth_token=True,  # the model license must be accepted and you must be logged in
)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
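
The pipeline returns standard PIL images, so you can save the result with the usual PIL call (the filename here is just an example):

image.save("astronaut_rides_horse.png")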

Known Issues

  • Generating images from a batch of multiple prompts seems to take too much memory. While we look into it, you may need to iterate over prompts one at a time instead of batching them, as sketched below.
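
A minimal sketch of that workaround, reusing the pipe object from the snippet above and generating one image per prompt (the second prompt is just an example):

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a photo of a corgi in a spacesuit",
]
# run the pipeline once per prompt to keep memory usage low
images = [pipe(prompt).images[0] for prompt in prompts]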