Stable Diffusion 1.5 for ONNX Runtime CUDA provider

Introduction

This repository hosts optimized ONNX models of Stable Diffusion 1.5 to accelerate inference with the ONNX Runtime CUDA execution provider on NVIDIA GPUs. The models cannot run on other execution providers such as CPU or DirectML.

The models are generated by Olive with a command like the following:

python stable_diffusion.py --provider cuda --optimize
