
Stable Diffusion 1.5 ONNX models optimized for the ONNX Runtime CUDA Execution Provider.


```
pip install "onnxruntime-gpu>=1.14" "diffusers>=0.13.0" transformers accelerate
```

Note that the version specifiers are quoted so the shell does not interpret `>=` as a redirection.

onnxruntime-gpu requires CUDA and cuDNN. See https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html for more information.

Stable Diffusion Inference

The snippet below demonstrates how to use ONNX Runtime to run Stable Diffusion 1.5 inference on an NVIDIA GPU:

```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import OnnxStableDiffusionPipeline

# Load the ONNX pipeline onto the CUDA Execution Provider.
# The model id below is illustrative; replace it with this repository's id.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="onnx",
    provider="CUDAExecutionProvider",
)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

See this doc for more information about how the models were generated and for benchmark results.
