Stable Diffusion 1.5 ONNX models, optimized for the ONNX Runtime CUDA Execution Provider.
## Installation
```shell
pip install "onnxruntime-gpu>=1.14" "diffusers>=0.13.0" transformers accelerate
```

Note that the version specifiers must be quoted so the shell does not interpret `>=` as a redirection.
onnxruntime-gpu requires CUDA and cuDNN. See https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html for more information.
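To confirm that the GPU build is installed and usable, you can check which execution providers the installed `onnxruntime` package exposes. A minimal sketch; the helper name `has_cuda_ep` is illustrative:

```python
import importlib.util

def has_cuda_ep() -> bool:
    """Return True if the installed onnxruntime build exposes the CUDA EP."""
    # onnxruntime-gpu installs the same "onnxruntime" module name as the
    # CPU-only wheel, so inspect the available providers at runtime.
    if importlib.util.find_spec("onnxruntime") is None:
        return False
    import onnxruntime as ort
    return "CUDAExecutionProvider" in ort.get_available_providers()

print("CUDA Execution Provider available:", has_cuda_ep())
```

If this prints `False`, the CPU-only wheel is likely installed, or CUDA/cuDNN are missing from the environment.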
## Stable Diffusion Inference
The snippet below demonstrates how to run Stable Diffusion 1.5 inference with ONNX Runtime on an NVIDIA GPU:
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "tlwu/stable-diffusion-v1-5",
    revision="fp16",
    provider="CUDAExecutionProvider",
)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
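For reproducible results, the ONNX pipelines in `diffusers` accept a NumPy `RandomState` as the `generator` argument (unlike the PyTorch pipelines, which take a `torch.Generator`). A minimal sketch, assuming the same `pipe` as above:

```python
import numpy as np

# The ONNX pipeline draws its initial latents from a NumPy RandomState,
# so seeding one makes generations reproducible across runs.
def make_generator(seed: int) -> np.random.RandomState:
    return np.random.RandomState(seed)

# pipe(prompt, generator=make_generator(42))  # assumption: pipe from the snippet above

# The same seed yields identical latent noise:
a = make_generator(42).randn(1, 4, 64, 64)
b = make_generator(42).randn(1, 4, 64, 64)
print(np.array_equal(a, b))  # True
```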
See this doc for details on how the models were generated and for benchmark results.