Model Description

This repo contains ONNX model files for madebyollin's Tiny AutoEncoder for Stable Diffusion.

Using in 🧨 diffusers

To install the requirements for this demo, run: pip install "optimum[onnxruntime]" (quoting the extras keeps shells such as zsh from interpreting the square brackets).

from huggingface_hub import snapshot_download
from diffusers.pipelines import OnnxRuntimeModel
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"

taesd_dir = snapshot_download(repo_id="deinferno/taesd-onnx")

pipeline = ORTStableDiffusionPipeline.from_pretrained(
    model_id,
    vae_decoder_session=OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_decoder"),
    vae_encoder_session=OnnxRuntimeModel.from_pretrained(f"{taesd_dir}/vae_encoder"),
    revision="onnx",
)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
image.save("result.png")
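For context on what the swapped-in VAE sessions do (this note is not from the original card): Stable Diffusion's VAE, and TAESD as its drop-in replacement, works on latents with 4 channels at 1/8 of the image resolution, so the decoder turns a (1, 4, 64, 64) latent into a 512x512 RGB image. A quick sketch of that shape arithmetic, assuming a 512x512 generation:

```python
# Shape sanity check for SD/TAESD latents (assumes a 512x512 generation).
# The VAE/TAESD decoder upsamples by 8x spatially and maps the 4 latent
# channels back to 3 RGB channels.
height, width = 512, 512
latent_shape = (1, 4, height // 8, width // 8)  # what the decoder consumes
image_shape = (1, 3, height, width)             # what the decoder produces
print(latent_shape)  # (1, 4, 64, 64)
print(image_shape)   # (1, 3, 512, 512)
```

TAESD's appeal is that it reproduces this mapping with a far smaller network than the full VAE, which is why it can simply be substituted via the vae_decoder_session and vae_encoder_session arguments above.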