
# Juggernaut XL v7 for ONNX Runtime CUDA provider

## Introduction

This repository hosts optimized versions of Juggernaut XL v7 to accelerate inference with the ONNX Runtime CUDA execution provider on Nvidia GPUs. The models cannot run with other execution providers such as CPU or DirectML.

The models were generated by Olive with a command like the following:

```shell
python stable_diffusion_xl.py --provider cuda --optimize --model_id stablediffusionapi/juggernaut-xl-v7
```
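The optimized models can then be loaded through Optimum's ONNX Runtime integration. A usage sketch for an Nvidia GPU, assuming the `optimum` package with ONNX Runtime GPU support is installed (the prompt and output filename are just examples):

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Load the optimized ONNX models with the CUDA execution provider.
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "tlwu/juggernaut-xl-v7-onnxruntime",
    provider="CUDAExecutionProvider",
)

# Generate an image from a text prompt (example prompt).
image = pipeline(prompt="a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```

Running this requires an Nvidia GPU; with a CPU-only ONNX Runtime build the pipeline will fail to initialize, as noted above.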